Posted on 2020-07-29, 09:43, authored by Yanis Bahroun
Similarity matching (SM) is a framework introduced recently for deriving biologically plausible neural networks from objective functions. Three key biological properties associated with these networks are 1) Hebbian learning rules, 2) unsupervised learning, and 3) online implementations. In particular, previous work has demonstrated that unconstrained-in-sign SM (USM) and nonnegative SM (NSM) lead to neural networks (NNs) performing linear principal subspace projection (PSP) and clustering. Starting from USM and NSM, the work undertaken in this thesis explores the capabilities and performance of SM and extends SM to novel families of NNs and unsupervised learning tasks.
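To make the setting concrete, SM minimizes the mismatch between input and output similarity matrices, roughly $\min_{Y}\lVert X^{\top}X - Y^{\top}Y\rVert_F^2$, with the additional constraint $Y \ge 0$ in NSM. Below is a minimal NumPy sketch of the kind of Hebbian/anti-Hebbian online network such objectives give rise to; the fixed learning rates and the simple fixed-point solver are illustrative assumptions, not the exact rules derived in the thesis.

    import numpy as np

    def nsm_online(X, k, n_steps=50, eta_w=1e-2, eta_m=1e-2, seed=0):
        """Sketch of an online nonnegative similarity matching (NSM) network.

        X : (n_samples, n_features) data, presented one sample at a time.
        k : number of output neurons.
        Returns the feedforward weights W and the output codes Y.
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        W = rng.normal(scale=1.0 / np.sqrt(d), size=(k, d))  # feedforward (Hebbian) weights
        M = np.zeros((k, k))                                  # lateral (anti-Hebbian) weights
        Y = np.zeros((n, k))
        for t in range(n):
            x, y = X[t], np.zeros(k)
            # Neural dynamics: iterate towards a fixed point of y = relu(W x - M y).
            for _ in range(n_steps):
                y = np.maximum(0.0, W @ x - M @ y)
            # Local plasticity (simplified fixed-rate versions of the derived rules):
            W += eta_w * (np.outer(y, x) - y[:, None] ** 2 * W)   # Hebbian
            M += eta_m * (np.outer(y, y) - y[:, None] ** 2 * M)   # anti-Hebbian
            np.fill_diagonal(M, 0.0)                              # no self-inhibition
            Y[t] = y
        return W, Y

Only quantities local to each synapse (pre- and post-synaptic activities) enter these updates, which is what makes the resulting networks biologically plausible in the sense used above.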
The first objective of this work is to explore the capabilities of existing SM NNs for feature learning. Representations learned by different SM NNs are fed to a linear classifier to measure their classification accuracy on established image datasets. The NN derived from NSM is employed to learn features from images with single- and dual-layer architectures. The simulations show that the features learned by NSM are comparable to Gabor filters and that a simple single-layer Hebbian network can outperform more complex models. The NN derived from USM is used to learn features in combination with block-wise histograms and binary hashing. The proposed set of architectures (USMNet), when evaluated in terms of accuracy, is competitive with unsupervised learning algorithms and multi-layer networks. Finally, Deep Hebbian Networks (DHNs) are proposed, which combine stages of NSM and USM within a single architecture. DHNs are evaluated on image classification tasks, where they outperform the aforementioned models.
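The evaluation protocol described above is essentially a linear probe: freeze the unsupervised features and train only a linear classifier on top. The sketch below illustrates this with scikit-learn's LogisticRegression; the particular classifier and the feedforward encoding of test images are illustrative choices here, not necessarily those used in the thesis.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def linear_probe(Z_train, y_train, Z_test, y_test):
        """Fit a linear classifier on fixed unsupervised features; return test accuracy."""
        clf = LogisticRegression(max_iter=1000)
        clf.fit(Z_train, y_train)
        return clf.score(Z_test, y_test)

    # Hypothetical usage with the NSM sketch above:
    # W, Z_train = nsm_online(X_train, k=256)
    # Z_test = np.maximum(0.0, X_test @ W.T)   # feedforward pass with frozen weights
    # accuracy = linear_probe(Z_train, y_train, Z_test, y_test)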
The second objective of this work is to extend SM beyond linear methods and static images. To incorporate nonlinearity, kernel-based versions of SM, K-USM and K-NSM, are proposed; these map onto NNs performing nonlinear online clustering and PSP and outperform traditional methods. To incorporate temporal information, a new SM cost function is applied to pairs of consecutive images, yielding the TNSM algorithm. TNSM maps onto an NN that performs motion detection and recapitulates several salient features of the fly visual system. The proposed approach is also applicable to the general problem of transformation learning.
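Schematically, the kernel extensions can be read as replacing the input Gram matrix of USM/NSM with a kernel Gram matrix; the display below is only a sketch of this substitution, with the normalization and constraints as assumed here rather than taken verbatim from the thesis:

\[
\min_{Y}\;\frac{1}{T^{2}}\bigl\lVert K - Y^{\top} Y \bigr\rVert_F^{2},
\qquad K_{tt'} = k(x_t, x_{t'}),
\]

with the additional constraint $Y \ge 0$ for K-NSM; choosing the linear kernel $k(x_t, x_{t'}) = x_t^{\top} x_{t'}$ recovers the original USM/NSM objectives.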