Deep neural networks fail to preserve learned data representations (i.e., they suffer from catastrophic forgetting) in domains where the input data distribution is non-stationary and changes during training. Various selective synaptic
plasticity approaches have recently been proposed to preserve the network parameters that are crucial for previously learned tasks while new tasks are being learned.
We explore such selective synaptic plasticity approaches through a unifying lens
of memory replay and show the close relationship between methods like Elastic
Weight Consolidation (EWC) and Memory Aware Synapses (MAS). We then propose a fundamentally different class of preservation methods that aim to preserve the distribution of the network’s output at an arbitrary layer for previous tasks
while learning a new one. We propose the sliced Cramér distance as a suitable choice for such preservation and evaluate our Sliced Cramér Preservation (SCP) algorithm through extensive empirical investigations on various network architectures in both supervised and unsupervised learning settings. We show that SCP
consistently utilizes the learning capacity of the network better than the online-EWC and MAS methods on various incremental learning tasks.
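To make the preservation criterion concrete, the sketch below estimates the sliced Cramér-2 distance between two batches of layer activations (for example, a layer's outputs on old-task data before and after a parameter update). This is only an illustration of the distance named in the abstract, not the paper's implementation; the function names, the number of random projections, and the example data are illustrative assumptions.

```python
import numpy as np

def cramer_1d(x, y):
    # Cramér-2 distance between two 1-D samples: the integral of
    # (F_x(t) - F_y(t))^2 over t. Both empirical CDFs are step functions
    # that are constant between pooled sample points, so the integral is exact.
    pooled = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), pooled, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), pooled, side="right") / len(y)
    widths = np.diff(pooled)                   # lengths of the constant intervals
    return float(np.sum((Fx[:-1] - Fy[:-1]) ** 2 * widths))

def sliced_cramer(A, B, n_slices=50, seed=0):
    # Sliced Cramér-2 distance: project both activation batches (n x d arrays)
    # onto random unit directions and average the 1-D distances over the slices.
    rng = np.random.default_rng(seed)
    d = A.shape[1]
    total = 0.0
    for _ in range(n_slices):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)         # random direction on the unit sphere
        total += cramer_1d(A @ theta, B @ theta)
    return total / n_slices

# Hypothetical usage: compare a layer's activations on old-task inputs
# before (A) and after (B) a small parameter update.
A = np.random.randn(256, 64)
B = A + 0.1 * np.random.randn(256, 64)
print(sliced_cramer(A, B))
```

In a preservation setting, such a quantity would act as a regularizer that penalizes shifts in the layer's output distribution on previous tasks while the new task is being learned.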
School: Science
Department: Computer Science
Source: International Conference on Learning Representations (ICLR 2020)