CVRSUAD 2019 : 7th Workshop on Computer Vision for Road Scene Understanding & Autonomous Driving
Link: https://sites.google.com/view/cvrsuad/

Call For Papers
Analyzing road scenes using cameras could have a crucial impact on many domains, such as autonomous driving, advanced driver assistance systems (ADAS), personal navigation, mapping of large-scale environments and road maintenance. For instance, vehicle infrastructure, signage, and rules of the road have been designed to be interpreted fully by visual inspection. As the field of computer vision becomes increasingly mature, practical solutions to many of these tasks are now within reach. Nonetheless, a wide gap still seems to exist between what is needed by the automotive industry and what is currently possible using computer vision techniques.
The goal of this workshop is to allow researchers in the fields of road scene understanding and autonomous driving to present their progress and discuss novel ideas that will shape the future of this area. In particular, we would like this workshop to bridge the gap between the community that develops novel theoretical approaches for road scene understanding and the community that builds working real-life systems that perform in real-world conditions. To this end, we plan to have a broad panel of invited speakers from both academia and industry.

We encourage submissions of original and unpublished work in the area of vision-based road scene understanding. The topics of interest include (but are not limited to):
- Road scene understanding in mature and emerging markets
- Deep learning for road scene understanding
- Prediction and modeling of road scenes and scenarios
- Semantic labeling, object detection and recognition in road scenes
- Dynamic 3D reconstruction, SLAM and ego-motion estimation
- Visual feature extraction, classification and tracking
- Design and development of robust and real-time architectures
- Use of emerging sensors (e.g., multispectral imagery, RGB-D, LIDAR and LADAR)
- Fusion of RGB imagery with other sensing modalities
- Interdisciplinary contributions across computer vision, robotics and other related fields

We encourage researchers to submit not only theoretical contributions, but also work focused on applications. Each paper will receive three double-blind reviews, which will be moderated by the workshop organizers.

The submission site is: https://cmt3.research.microsoft.com/CVRSUAD2018. More information regarding the submission process can be found at https://cvrsuad.data61.csiro.au.