UrbanMM'21: Proceedings of the 1st International Workshop on Multimedia Computing for Urban Data
SESSION: Session: Paper Presentations
Zurich Like New: Analyzing Open Urban Multimodal Data
- Marcel Granero Moya
- Thanh-Trung Phan
- Daniel Gatica-Perez
Citizen-driven platforms for enhancing local public services have been adopted in several countries, including the UK and Switzerland. Local governments use data collected from these platforms to solve reported issues. The data can also support data-driven decision-making and improve the operation of the platforms themselves. In particular, as citizen reports become increasingly popular, there is a need to handle them more efficiently. In this paper, we present an analysis of ZüriWieNeu, a map-based website that helps people in Zurich, Switzerland report urban issues related to waste, broken streetlamps, or graffiti, among others. Our contributions are twofold. First, we analyze what machine-extracted textual, visual, spatial, and temporal features reveal about the dynamics of reporting and the content of each report category. This analysis provides a snapshot of common patterns of urban issues in the Zurich area. Second, we perform classification to automatically infer the category of reports, achieving promising performance. Our work contributes towards developing machine learning-based systems to classify report categories, with the ultimate goal of supporting both users and platform operation.
MBDF-Net: Multi-Branch Deep Fusion Network for 3D Object Detection
- Xun Tan
- Xingyu Chen
- Guowei Zhang
- Jishiyu Ding
- Xuguang Lan
Point clouds and images can provide complementary information when representing 3D objects, and fusing the two kinds of data usually helps to improve detection results. However, fusing the two modalities is challenging, due to their different characteristics and interference from non-interest areas. To solve this problem, we propose a Multi-Branch Deep Fusion Network (MBDF-Net) for 3D object detection. The proposed detector has two stages. In the first stage, our multi-branch feature extraction network utilizes Adaptive Attention Fusion (AAF) modules to produce cross-modal fusion features from single-modal semantic features. In the second stage, we use a region-of-interest (RoI) pooled fusion module to generate enhanced local features for refinement. A novel attention-based hybrid sampling strategy is also proposed for selecting key points in the downsampling process. We evaluate our approach on two widely used benchmark datasets, KITTI and SUN-RGBD. The experimental results demonstrate the advantages of our method over state-of-the-art approaches.
UrbanAccess: Query Driven Urban Analytics Platform for Detecting Complex Accessibility Event Patterns using Tactile Surfaces
- Dhaval Salwala
- Piyush Yadav
- Venkatesh G. Munirathnam
- Suzanne Little
- Noel E. O'Connor
- Edward Curry
The smart city concept has become one of the key enablers of urban management. The adoption and permeation of ICT and AI-driven techniques have enabled authorities to resolve poor urban planning issues with improved delivery of citizen services. A major urban problem is ensuring accessibility at road crossings across cities and supporting visually impaired people through well-defined infrastructure. The research presented in this paper emphasizes urban analytics that studies road crossings and the challenges pedestrians face when accessing city footpaths using tactile surfaces. This work demonstrates a distributed event analytics platform, GNOSIS, to detect complex accessibility event patterns. GNOSIS ingests video data streams from city infrastructure such as CCTV and detects tactile surface event patterns using an ensemble of deep learning models driven by a declarative query language. The work mainly analyzes three types of tactile surface - Blister, Cycleway, and Directional - collected from different cities in Ireland using crowdsourcing techniques. GNOSIS makes decisions in real time based on the type, colour, and marking pattern of the tactile surface.
Urban Footpath Image Dataset to Assess Pedestrian Mobility
- Venkatesh G M
- Bianca Pereira
- Suzanne Little
This paper presents an urban footpath image dataset captured through crowdsourcing using the Mapillary service (mobile application) and demonstrates its use for data analytics applications by employing object detection and image segmentation. The study was motivated by the unique, individual mobility challenges that many people face in navigating public footpaths, in particular those who use mobility aids such as long canes, guide dogs, crutches, and wheelchairs, when faced with changes in pavement surface (tactile pavements) or obstacles such as bollards and other street furniture. Existing image datasets are generally captured from an instrumented vehicle and do not provide sufficient or adequate images of footpaths from the pedestrian perspective. A citizen science project (Crowd4Access) worked with user groups and volunteers to gather a sample image dataset, resulting in 39,642 images collected in a range of different conditions. Preliminary studies to detect tactile pavements and perform semantic segmentation using state-of-the-art computer vision models demonstrate the utility of this dataset for enabling a better understanding of urban mobility issues.