
VARVAI 2016: 1st Workshop on Virtual/Augmented Reality for Visual Artificial Intelligence


Link: http://adas.cvc.uab.es/varvai2016
 
When: Oct 8, 2016 - Oct 16, 2016
Where: Amsterdam, The Netherlands
Submission Deadline: Jul 4, 2016
Notification Due: Jul 25, 2016
Categories: computer graphics, computer vision, machine learning, deep learning
 

Call For Papers

----------------------------------------------------------------------
Apologies if you received multiple copies
----------------------------------------------------------------------

*** First Call for Papers ***
We are pleased to invite you to submit your work to the 1st workshop on Virtual/Augmented Reality for Visual Artificial Intelligence (VARVAI), to be held in conjunction with the 14th European Conference on Computer Vision (ECCV 2016) in Amsterdam, The Netherlands (October 8-16, 2016). http://adas.cvc.uab.es/varvai2016



** Workshop Motivation **

We are currently observing a strong renewed interest in, and high hopes for, Artificial Intelligence (AI), fueled by scientific advances that make it possible to learn powerful statistical models efficiently from large data collections on modern hardware. Computer Vision is the prime example of this modern revolution. Its recent successes in many high-level visual recognition tasks, such as image classification, object detection, and semantic segmentation, are due in part to large labeled datasets such as ImageNet and to deep learning algorithms supported by new, better-suited hardware such as GPUs.

In fact, recent results indicate that the reliability of models might be limited not by the algorithms themselves but by the type and amount of data available. The release of new and more sophisticated datasets has indeed been the trump card for many recent achievements in computer vision and machine learning, e.g., deep convolutional networks trained on ImageNet.

Therefore, in order to tackle more challenging and general Visual AI (VAI) tasks, such as fine-grained global scene and video understanding, progress is needed not only on algorithms, but also on datasets, both for learning and for quantitatively evaluating the generalization performance of visual models. In particular, labeling every pixel of a large set of varied videos with ground-truth depth, optical flow, semantic category, or other visual properties is neither scalable nor cost-effective. This is hinted at by the small scale of existing datasets, such as the KITTI Vision Benchmark Suite, which was acquired through an enormous engineering effort. Such a labor-intensive ground-truth annotation process is, in addition, prone to errors.

The purpose of this workshop is to provide a forum to gather researchers around the nascent field of Virtual/Augmented Reality (VR/AR, or just VAR) used for data generation in order to learn and study VAI algorithms. VAR technologies have made impressive progress recently, in particular in computer graphics, physics engines, game engines, authoring tools, and hardware, thanks to a strong push from major industry players (including Facebook/Oculus, Google, Sony/PlayStation, Valve, and Unity Technologies). Although mostly designed for multimedia applications geared towards human entertainment, more and more researchers (cf. references below) have noticed the tremendous potential that VAR platforms hold as data generation tools for algorithm/AI consumption. In light of the long-standing history of synthetic data in computer vision and multimedia, VAR technologies represent the next step in multimedia data generation, vastly improving the quantity, variety, and realism of densely and accurately labeled fine-grained data that can be generated, which is needed to push the scientific boundaries of research on AI.



** Scope **

This half-day workshop will include invited talks from researchers at the forefront of modern synthetic data generation with VAR for VAI (cf. below) and will invite contributions from multimedia and computer vision researchers on the following non-exclusive topics:

· Learning Transferable Multimodal Representations in VAR, e.g., via deep learning

· Virtual World design for realistic training data generation

· Augmenting real-world training datasets with renderings of 3D virtual objects

· Active & reinforcement learning algorithms for effective training data generation and accelerated learning

· Studies on the gap between virtual/augmented and real worlds from the point of view of VAI algorithms

· Hybrid real/virtual data sets to train and benchmark VAI algorithms

· Large-scale virtual (pre-)training of scene and video understanding algorithms for which current data is scarce, including:

o Tracking, Re-identification

o Human Pose Estimation, Action Recognition, and Event Detection

o Object-, instance-, and scene-level segmentation

o Optical flow, Scene flow, depth estimation, and viewpoint estimation

o Visual Question Answering and spatio-temporal reasoning

o X-recognition: objects, text, faces, emotions, etc.

The main question underlying the workshop will be: when, how, and to what extent can realistic virtual/augmented worlds be used to train and evaluate artificial intelligence algorithms for real-world efficiency?



** Format and Submission Procedure **

Authors should take into account the following:

· The submission site is https://cmt3.research.microsoft.com/VARVAI2016/.

· The contributions will consist of Extended Abstracts (EA) of up to 6 pages (excluding references).

· We accept dual submissions to ECCV 2016 and VARVAI 2016. In that case, the submission to VARVAI 2016 should be a 6-page summary of the submission to ECCV 2016.

· The paper format is the same as for the ECCV main conference.

· Submissions will be rejected without review if they: contain more than 6 pages (excluding references) or violate the double-blind policy.

· Manuscript templates can be found at the main conference website:
http://www.eccv2016.org/submission/

· The accepted papers will be linked on the VARVAI webpage, but they will not be included in the ECCV proceedings.



** Important Dates **

Submission deadline: July 4th, 2016
Author notification: July 25th, 2016
Camera-ready: TBA
Workshop: TBA



** Organizers **

Antonio M. López – Computer Vision Center & U. Autònoma de Barcelona, Spain

Adrien Gaidon – Xerox Research Center Europe (XRCE)

German Ros – Computer Vision Center & U. Autònoma de Barcelona, Spain

Eleonora Vig – German Aerospace Center, Earth Observation Center, Remote Sensing Technology Institute

David Vázquez – Computer Vision Center & U. Autònoma de Barcelona, Spain

Hao Su – Geometric Computing Lab and Artificial Intelligence Lab, Dept. of Computer Science, Stanford Univ.

Florent Perronnin – Facebook AI Research (FAIR) Paris lab



** Contact information **

Antonio M. López antonio@cvc.uab.es

Adrien Gaidon adrien.gaidon@xrce.xerox.com

German Ros gros@cvc.uab.es
