Adopting deep learning methods for airborne RGB fluvial scene classification
Journal contribution posted on 2020-09-18, 14:26, authored by Patrice Carbonneau, Stephen Dugdale, Toby Breckon, James Dietrich, Mark Fonstad, Hitoshi Miyamoto, Amy Woodget
Rivers are among the world’s most threatened ecosystems. Enabled by the rapid development of drone technology, hyperspatial resolution (<10 cm) images of fluvial environments are now a common data source used to better understand these sensitive habitats. However, the task of image classification remains challenging for this type of imagery, and the application of traditional classification algorithms, such as maximum likelihood, that are still in common use among the river remote sensing community yields unsatisfactory results. We explore the possibility that a classifier of river imagery based on deep learning methods can provide a significant improvement in our ability to classify fluvial scenes. We assemble a dataset composed of RGB images from 11 rivers in Canada, Italy, Japan, the United Kingdom, and Costa Rica. The images were labelled into 5 land-cover classes: water, dry exposed sediment, green vegetation, senescent vegetation and roads. In total, >5 billion pixels were labelled and partitioned for the tasks of training (1 billion pixels) and validation (4 billion pixels). We develop a novel supervised learning workflow based on the NASNet convolutional neural network (CNN) called ‘CNN-Supervised Classification’ (CSC). First, we compare the classification performance of maximum likelihood, a multilayer perceptron, a random forest, and CSC. Results show median F1 scores (a commonly used quality metric in machine learning) of 71%, 78%, 72% and 95%, respectively. Second, we train our classifier using data for 5 of the 11 rivers. We then predict the validation data for all 11 rivers. For the 5 rivers that were used in model training, median F1 scores reach 98%. For the 6 rivers not used in model training, median F1 scores are 90%. We reach two conclusions. First, in the traditional workflow where images are classified one at a time, CSC delivers an unprecedented mix of labour savings and classification F1 scores above 95%.
Second, deep learning can predict land-cover classifications (F1 = 90%) for rivers not used in training. This demonstrates the potential to train a generalised open-source deep learning model for airborne river surveys suitable for most rivers ‘out of the box’. Research efforts should now focus on further development of a new generation of deep learning classification tools that will encode human image interpretation abilities and allow for fully automated, potentially real-time, interpretation of riverine landscape images.
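The F1 score cited throughout is the per-class harmonic mean of precision and recall, with the median taken across classes. A minimal sketch of that calculation is below; the pixel labels are made-up illustrative values, not data from the paper:

```python
from statistics import median

# Hypothetical pixel labels for a tiny patch; the five land-cover
# classes from the abstract: 0=water, 1=dry sediment, 2=green
# vegetation, 3=senescent vegetation, 4=road. Values are illustrative.
y_true = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
y_pred = [0, 0, 1, 2, 2, 2, 3, 3, 4, 0]

def per_class_f1(y_true, y_pred, classes):
    """F1 per class: harmonic mean of precision and recall."""
    scores = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

scores = per_class_f1(y_true, y_pred, classes=range(5))
print(scores)                    # per-class F1
print(median(scores.values()))   # median F1, the summary statistic reported
```

In practice a library routine such as scikit-learn's `f1_score` with `average=None` gives the same per-class values; the explicit loop is shown only to make the metric's definition concrete.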
The authors would like to thank several funding bodies who have supported image acquisition. Images of the Ste-Marguerite and Dartmouth rivers were funded by the GEOSALAR project, part of the GEOIDE network of centres of excellence. The authors thank Professor Normand Bergeron for use of the Ouelle and Kananaskis river imagery; these data were collected as part of the NSERC/CRSNG Collaborative Research and Development Grant CRDJ 379745-08 in partnership with the Ouranos consortium on regional climatology and adaptation to climate change, and also as part of the NSERC/CRSNG HydroNet Strategic Network Grant. Images of the Eamont, Sesia and Kingie rivers were funded by the AMBER project, grant number 689682, part of the EU Horizon 2020 programme. The images of the Dora di Veny river were acquired thanks to support from Dr Catriona Fyffe, the University of Worcester and the British Society for Geomorphology. The images of the Kinu and Kurobe rivers were funded by the KAKENHI program of the Japan Society for the Promotion of Science, grant number JP16H04422. The authors would like to thank Dr Pollyanna Lind for the images of the Pacuare river, funded by the National Science Foundation, the Tokyo Foundation and the University of Oregon.
- Social Sciences and Humanities
- Geography and Environment
Published in: Remote Sensing of Environment
- AM (Accepted Manuscript)
Rights holder: © Elsevier Inc.
Publisher statement: This paper was accepted for publication in the journal Remote Sensing of Environment and the definitive published version is available at https://doi.org/10.1016/j.rse.2020.112107.