Reducing Catastrophic Forgetting With Associative Learning: A Lesson From Fruit Flies

Shen, Yang, Dasgupta, Sanjoy, Navlakha, Saket (September 2023) Reducing Catastrophic Forgetting With Associative Learning: A Lesson From Fruit Flies. Neural Computation. pp. 1-23. ISSN 0899-7667

PDF: 2023_Shen_Reducing_Catastrophic_Forgetting_with_Associative_Learning.pdf (Published Version, 2MB)
Available under License Creative Commons Attribution Non-commercial.

Abstract

Catastrophic forgetting remains an outstanding challenge in continual learning. Recently, methods inspired by the brain, such as continual representation learning and memory replay, have been used to combat catastrophic forgetting. Associative learning (retaining associations between inputs and outputs, even after good representations are learned) plays an important role in the brain; however, its role in continual learning has not been carefully studied. Here, we identified a two-layer neural circuit in the fruit fly olfactory system that performs continual associative learning between odors and their associated valences. In the first layer, inputs (odors) are encoded using sparse, high-dimensional representations, which reduces memory interference by activating nonoverlapping populations of neurons for different odors. In the second layer, only the synapses between odor-activated neurons and the odor's associated output neuron are modified during learning; the rest of the weights are frozen to prevent unrelated memories from being overwritten. We prove theoretically that these two perceptron-like layers help reduce catastrophic forgetting compared to the original perceptron algorithm under continual learning. We then show empirically on benchmark data sets that this simple and lightweight architecture outperforms other popular neurally inspired algorithms when also using a three-layer feedforward architecture. Overall, fruit flies evolved an efficient continual associative learning algorithm, and circuit mechanisms from neuroscience can be translated to improve machine computation.
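The two-layer scheme described in the abstract, sparse high-dimensional coding in the first layer followed by updates restricted to active synapses in the second, can be sketched as follows. This is a minimal illustration, not the paper's exact model: the dimensions, the random projection, the top-k sparsification rule, and the learning rate are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's experimental settings)
d_in, d_hidden, n_classes, k = 20, 400, 3, 20  # k = active neurons per input

# Layer 1: fixed random projection into a high-dimensional space, loosely
# analogous to the fly's projection-neuron -> Kenyon-cell expansion.
W_proj = rng.standard_normal((d_hidden, d_in))

def sparse_code(x):
    """Top-k winner-take-all: only the k most activated neurons fire,
    so different inputs activate largely nonoverlapping populations."""
    a = W_proj @ x
    code = np.zeros(d_hidden)
    code[np.argsort(a)[-k:]] = 1.0
    return code

# Layer 2: one output neuron per class (valence). Learning modifies ONLY
# the synapses from currently active hidden neurons to the target class's
# output neuron; every other weight stays frozen.
W_out = np.zeros((n_classes, d_hidden))

def learn(x, y, lr=0.1):
    active = sparse_code(x) > 0
    W_out[y, active] += lr  # strengthen active -> target synapses only

def predict(x):
    return int(np.argmax(W_out @ sparse_code(x)))
```

Because the sparse codes for different inputs barely overlap and frozen weights are never overwritten, learning a new association leaves earlier ones largely intact, which is the intuition behind the reduced forgetting result.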

Item Type: Paper
Subjects: bioinformatics
organism description > animal > insect > Drosophila
bioinformatics > computational biology > algorithms
organism description > animal
organism description > animal behavior
organs, tissues, organelles, cell types and functions > organs types and functions > brain
organs, tissues, organelles, cell types and functions > cell types and functions > cell types
organs, tissues, organelles, cell types and functions > cell types and functions
bioinformatics > computational biology
organism description > animal > insect
organism description > animal behavior > memory
organs, tissues, organelles, cell types and functions > cell types and functions > cell types > neurons
organs, tissues, organelles, cell types and functions > organs types and functions
organs, tissues, organelles, cell types and functions
CSHL Authors:
Communities: CSHL labs > Navlakha lab
SWORD Depositor: CSHL Elements
Depositing User: CSHL Elements
Date: 19 September 2023
Date Deposited: 12 Oct 2023 16:05
Last Modified: 10 Jan 2024 21:08
Related URLs:
URI: https://repository.cshl.edu/id/eprint/41202
