Unmanned ground operations using semantic image segmentation through a Bayesian network
Conference contribution, posted on 07.10.2016 by Matthew Coombes, Will Eaton, Wen-Hua Chen
This paper discusses the machine vision element of a system designed to allow automated taxiing of Unmanned Aerial Systems (UAS) around civil aerodromes. The purpose of the computer vision system is to provide direct sensor data that can be used to validate vehicle position and to detect potential collision risks. This is achieved using a single monocular sensor. Unsupervised clustering is used to segment the visual feed, and descriptors of each cluster (primarily colour and texture) are then used to estimate its class. As the reliability of each individual estimate varies with multiple factors (number of pixels, lighting conditions and even surface type), a Bayesian network is used to perform probabilistic data fusion and so improve the classification results. The system is shown to perform accurate image segmentation in real-world conditions, providing information suitable for map matching.
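The pipeline the abstract describes (unsupervised segmentation of the image, per-cluster descriptors, probabilistic fusion of cue-level class estimates) can be sketched in miniature as below. The abstract does not specify the clustering method, the descriptor set, or the Bayesian network structure, so the k-means segmentation, the two-class tarmac/grass setup, and the naive independence assumption in the fusion step are all illustrative assumptions, not the paper's actual implementation; a full Bayesian network would additionally condition the cue reliabilities on factors such as cluster size and lighting.

```python
import numpy as np

def kmeans(X, k, iters=20):
    # Minimal k-means: unsupervised clustering of pixel colours.
    # Deterministic init from the first and last samples (illustrative only).
    centres = X[[0, len(X) - 1]].copy() if k == 2 else X[:k].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Distance from every pixel to every centre, then nearest-centre label.
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels

def fuse_estimates(likelihoods, prior):
    # Naive Bayesian fusion: multiply independent per-cue class
    # likelihoods into the prior, then normalise to a posterior.
    post = prior.astype(float).copy()
    for lik in likelihoods:
        post *= lik
    return post / post.sum()

# Toy 8x8 RGB image: a grey "tarmac" half and a green "grass" half.
img = np.zeros((8, 8, 3))
img[:, :4] = [0.3, 0.3, 0.3]   # grey region
img[:, 4:] = [0.2, 0.8, 0.2]   # green region
pixels = img.reshape(-1, 3)
labels = kmeans(pixels, k=2)   # unsupervised segmentation into 2 clusters

# Hypothetical colour and texture cue estimates for one cluster,
# over the classes [tarmac, grass], fused with a uniform prior.
prior = np.array([0.5, 0.5])
colour_cue = np.array([0.7, 0.3])
texture_cue = np.array([0.6, 0.4])
posterior = fuse_estimates([colour_cue, texture_cue], prior)
print(posterior)  # ≈ [0.78, 0.22]: both cues favour tarmac
```

Fusing the two cues sharpens the decision relative to either cue alone (0.78 vs 0.7), which is the motivation for combining multiple weak per-cluster descriptors probabilistically.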
This work was supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC) Autonomous and Intelligent Systems programme under grant EP/J011525/1, with BAE Systems as the lead industrial partner.
- Aeronautical, Automotive, Chemical and Materials Engineering
- Aeronautical and Automotive Engineering