A model selection algorithm for complex CNN systems based on feature-weights relation
In object recognition using machine learning, a single model cannot practically be trained to identify every object it may encounter, so an ensemble of models may be needed to cover a broader range of objects. Building a mathematical understanding of the relationship between objects that share comparable outlined features is envisaged as an effective way to improve the model ensemble through a pre-processing stage in which those objects' features are grouped under a broader classification umbrella. This paper proposes a mechanism for training an ensemble of recognition models, coupled with a model selection scheme, to scale up object recognition in a multi-model system. The individual models are built with a CNN structure, while the image features are extracted using a CNN/VGG16 architecture. Based on the models' excitation weights, a neural-network model selection algorithm, which decides how close an object's features are to each trained model in order to select a particular model for recognition, is tested on a multi-model neural-network platform. The experimental results show that the proposed model selection scheme is highly effective and accurate in selecting an appropriate model within a network of multiple models.
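The abstract describes routing an input to one of several specialist CNN models via a selector network operating on VGG16 features. The sketch below is a minimal illustration of that general idea, not the authors' implementation: the number of models, the selector architecture, and names such as `selector`, `make_specialist`, and `recognise` are assumptions for demonstration only.

```python
# Illustrative sketch (assumed, not the paper's code): a selector network over
# VGG16 features chooses which specialist CNN classifies a given image.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_MODELS = 3          # number of specialist recognition models (assumed)
NUM_CLASSES_EACH = 10   # classes handled by each specialist (assumed)

# Shared feature extractor: VGG16 convolutional base with average pooling,
# producing a 512-dimensional feature vector per image.
feature_extractor = VGG16(weights="imagenet", include_top=False,
                          pooling="avg", input_shape=(224, 224, 3))

# Selector network: maps pooled VGG16 features to a distribution over models,
# i.e. an estimate of how close the input is to each model's training domain.
selector = models.Sequential([
    layers.Input(shape=(512,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_MODELS, activation="softmax"),
])

# Specialist recognition models: small CNNs, one per object group (assumed).
def make_specialist():
    return models.Sequential([
        layers.Input(shape=(224, 224, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(NUM_CLASSES_EACH, activation="softmax"),
    ])

specialists = [make_specialist() for _ in range(NUM_MODELS)]

def recognise(image_batch):
    """Pick a specialist per image via the selector, then classify with it."""
    feats = feature_extractor(image_batch, training=False)       # (B, 512)
    model_idx = tf.argmax(selector(feats, training=False), axis=1)
    preds = []
    for i, img in enumerate(image_batch):
        chosen = specialists[int(model_idx[i])]
        preds.append(chosen(img[tf.newaxis, ...], training=False)[0])
    return model_idx, tf.stack(preds)

# Shape-check with random data only; real use requires trained weights.
dummy = tf.random.uniform((2, 224, 224, 3))
idx, probs = recognise(dummy)
```

In practice the selector and specialists would each need to be trained on their own labelled data; the sketch only shows how a feature-based selection stage could sit in front of a bank of recognition models.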
History
School
- Loughborough University London
Published in
- 2023 IEEE IAS Global Conference on Emerging Technologies (GlobConET)
Source
- 2023 IEEE IAS Global Conference on Emerging Technologies
Publisher
- IEEE
Version
- AM (Accepted Manuscript)
Rights holder
- © IEEE
Publisher statement
- © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Acceptance date
- 2023-03-19
Publication date
- 2023-06-16
Copyright date
- 2023
ISBN
- 9798350331790
Publisher version
Language
- en