A multimodal perception-driven self evolving autonomous ground vehicle
Journal contribution posted on 05.11.2021, 13:46 by Jamie Roche, Varuna De-Silva, Ahmet Kondoz
Increasingly complex automated driving functions, specifically those associated with Free Space Detection (FSD), are delegated to Convolutional Neural Networks (CNNs). If the dataset used to train the network lacks diversity, modality, or sufficient quantity, the driving policy that controls the vehicle may induce safety risks. Although most autonomous ground vehicles (AGVs) perform well in structured surroundings, the need for human intervention rises significantly in unstructured niche environments. To this end, we developed an AGV for seamless indoor and outdoor navigation to collect realistic multimodal data streams. We demonstrate one application of the AGV: a self-evolving FSD framework that leverages online active machine learning (ML) paradigms and sensor data fusion. In essence, the self-evolving AGV queries image data against a reliable data stream, ultrasound, before fusing the sensor data to improve robustness. We compare the proposed framework to one of the most prominent free-space segmentation methods, DeepLabV3+, a state-of-the-art semantic segmentation model built on a CNN encoder-decoder. The results show that the proposed framework outperforms DeepLabV3+. This performance is attributed to its ability to self-learn free space: the combination of online and active ML removes the need for the large datasets typically required to train a CNN, and it provides case-specific free-space classifications based on information gathered from the scenario at hand.
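The query-then-fuse loop described above can be sketched in miniature. This is an illustrative toy, not the authors' implementation: the logistic model, the `ultrasound_oracle` clearance threshold, and the uncertainty `margin` are all assumptions introduced here. It shows the core idea only, that the image-based classifier queries the more reliable ultrasound stream when it is uncertain, learns online from that answer, and otherwise trusts its own prediction.

```python
import numpy as np


def ultrasound_oracle(range_m, clearance_m=1.5):
    """Hypothetical oracle: treat a cell as free space when the
    ultrasound range reading exceeds a clearance threshold (assumed)."""
    return float(range_m > clearance_m)


class OnlineFreeSpaceClassifier:
    """Toy online learner over per-cell image features: logistic
    regression updated by single-sample SGD steps (illustrative
    stand-in for the framework's self-evolving image model)."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        # probability that the cell is free space
        return 1.0 / (1.0 + np.exp(-(x @ self.w + self.b)))

    def update(self, x, y):
        # one SGD step on the logistic loss
        g = self.predict_proba(x) - y
        self.w -= self.lr * g * x
        self.b -= self.lr * g


def active_fusion_step(clf, x, ultrasound_m, margin=0.2):
    """Active-learning fusion: query the ultrasound stream only when
    the image model is uncertain, use its answer both as the fused
    label and as a training signal; otherwise trust the image model."""
    p = clf.predict_proba(x)
    if abs(p - 0.5) < margin:              # uncertain -> query oracle
        y = ultrasound_oracle(ultrasound_m)
        clf.update(x, y)                   # self-evolve online
        return y
    return float(p > 0.5)


# Usage: an untrained model is maximally uncertain, so the first
# samples are resolved (and labelled) by the ultrasound stream.
clf = OnlineFreeSpaceClassifier(n_features=2)
label = active_fusion_step(clf, np.array([1.0, 0.5]), ultrasound_m=3.0)
```

Because the oracle is consulted only inside the uncertainty margin, the number of ultrasound queries shrinks as the image model's confidence grows, which is what lets this style of online active learning avoid a large pre-collected training set.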
Funding: MIMIc: Multimodal Imitation Learning in MultI-Agent Environments (Engineering and Physical Sciences Research Council)
- Loughborough University London