Domain-adapted driving scene understanding with uncertainty-aware and diversified generative adversarial networks
Autonomous vehicles are required to operate in uncertain environments. Recent advances in computational intelligence (CI) techniques make it possible to understand driving scenes in various environments by using a semantic segmentation neural network, which assigns a class label to each pixel. Such a network requires massive amounts of pixel-level labelled data for optimisation. However, it is challenging to collect sufficient data and labels in the real world. An alternative solution is to obtain synthetic, dense pixel-level labelled data from a driving simulator.
Although the use of synthetic data is a promising way to alleviate the labelling problem, models trained on virtual data cannot generalise well to real data due to the domain shift. To bridge this gap, we propose a novel uncertainty-aware generative ensemble method. In particular, ensembles are obtained from different optimisation objectives, training iterations, and network initialisations so that they complement each other and produce reliable predictions. Moreover, an uncertainty-aware ensemble scheme is developed to derive a fused prediction by taking the uncertainty of the ensemble members into account. This design makes better use of the strengths of the ensemble to enhance adapted segmentation performance. Experimental results demonstrate the effectiveness of our method on three large-scale datasets.
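The record does not spell out the fusion rule itself, but the general idea of uncertainty-aware fusion of ensemble segmentation outputs can be sketched. The snippet below is a minimal, hypothetical illustration assuming PyTorch-style per-member logits; the function name `fuse_ensemble` and the use of pixel-wise predictive entropy as the uncertainty measure are assumptions for illustration, not the authors' exact scheme.

```python
# Hypothetical sketch: fuse segmentation predictions from ensemble members,
# weighting each member per pixel by its confidence (inverse predictive entropy).
# This illustrates the general idea only, not the paper's specific method.
import torch
import torch.nn.functional as F

def fuse_ensemble(logits_list, eps=1e-8):
    """Fuse per-member segmentation logits into one probability map.

    logits_list: list of tensors shaped (B, C, H, W), one per ensemble member.
    Returns fused class probabilities of shape (B, C, H, W).
    """
    probs = [F.softmax(l, dim=1) for l in logits_list]  # per-member class probabilities
    # Pixel-wise predictive entropy as an uncertainty estimate for each member.
    entropies = [-(p * torch.log(p + eps)).sum(dim=1, keepdim=True) for p in probs]
    # Lower entropy (more confident) -> larger fusion weight.
    weights = [1.0 / (h + eps) for h in entropies]
    weight_sum = torch.stack(weights).sum(dim=0)
    fused = sum(w * p for w, p in zip(weights, probs)) / weight_sum
    return fused

# Example usage with three hypothetical ensemble members.
if __name__ == "__main__":
    members = [torch.randn(1, 19, 64, 128) for _ in range(3)]  # e.g. 19 urban-scene classes
    fused = fuse_ensemble(members)
    prediction = fused.argmax(dim=1)  # per-pixel class labels
    print(prediction.shape)           # torch.Size([1, 64, 128])
```

In this sketch, a member that is uncertain at a given pixel contributes less to the fused prediction there, which mirrors the abstract's point that complementary ensemble members can be combined according to their uncertainty.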
Funding
Fisheries Innovation & Sustainability
U.K. Department for Environment, Food & Rural Affairs. Grant Numbers: FIS039, FIS045A
History
School
- Science
Department
- Computer Science
Published in
- CAAI Transactions on Intelligence Technology
Publisher
- Wiley
Version
- VoR (Version of Record)
Rights holder
- © The Authors
Publisher statement
- This is an Open Access Article. It is published by Wiley under the Creative Commons Attribution 4.0 International Licence (CC BY). Full details of this licence are available at: https://creativecommons.org/licenses/by/4.0/
Acceptance date
- 2023-05-23
Publication date
- 2023-07-08
Copyright date
- 2023
eISSN
- 2468-2322
Publisher version
Language
- en