Augmented Reality Meets Computer Vision: Efficient Data Generation for Urban Driving Scenes

Title: Augmented Reality Meets Computer Vision: Efficient Data Generation for Urban Driving Scenes
Publication Type: Journal Article
Year of Publication: 2018
Authors: Abu Alhaija, H., Mustikovela, S. Karthik, Mescheder, L., Geiger, A., Rother, C.
Journal: International Journal of Computer Vision
Volume: 126
Pagination: 961–972
Date Published: August 2018
ISSN: 1573-1405
Keywords: autonomous driving, data augmentation, instance segmentation, object detection, synthetic training data
Abstract

The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand-labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360-degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance and a large number of complex object arrangements. Through an extensive set of experiments, we determine the set of parameters that produces augmented data which maximally enhances the performance of instance segmentation models. Further, we demonstrate the utility of the proposed approach on training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios. We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that models trained on augmented imagery generalize better than models trained on fully synthetic data or on limited amounts of annotated real data.
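
The central technical step described in the abstract is compositing pre-rendered 3D objects onto real photographs. As a rough, self-contained illustration only (not the authors' actual pipeline), the Python sketch below alpha-blends a rendered RGBA object onto a background street image using Pillow; the file names, the library choice, and the paste position are assumptions for this example, whereas in the paper the placement comes from manual annotation or automatic semantic scene analysis and the object is rendered with environment-map lighting and reflections.

from PIL import Image

def composite(background_path: str, rendered_object_path: str,
              position: tuple) -> Image.Image:
    """Alpha-composite a pre-rendered RGBA object onto a real background photo.

    This is a simplified illustration of augmenting a real image with a
    virtual object; it does not model lighting, shadows, or occlusion.
    """
    background = Image.open(background_path).convert("RGBA")
    obj = Image.open(rendered_object_path).convert("RGBA")  # rendered object with alpha channel
    # Paste the rendered object at the given (x, y) position, respecting its alpha mask.
    background.alpha_composite(obj, dest=position)
    return background.convert("RGB")

if __name__ == "__main__":
    # Hypothetical file names and placement; real placements would be chosen
    # manually or via semantic scene analysis of the road surface.
    augmented = composite("street_scene.png", "rendered_car.png", (400, 250))
    augmented.save("augmented_scene.png")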

URL: http://arxiv.org/abs/1708.01566
DOI: 10.1007/s11263-018-1070-x
Citation Key: AbuAlhaija2018