# Biologically Plausible Error-Driven Learning Using Local Activation Differences: The Generalized Recirculation Algorithm

@article{OReilly1996BiologicallyPE,
  title   = {Biologically Plausible Error-Driven Learning Using Local Activation Differences: The Generalized Recirculation Algorithm},
  author  = {Randall C. O'Reilly},
  journal = {Neural Computation},
  year    = {1996},
  volume  = {8},
  pages   = {895-938}
}

The error backpropagation learning algorithm (BP) is generally considered biologically implausible because it does not use locally available, activation-based variables. A version of BP that can be computed locally using bidirectional activation recirculation (Hinton and McClelland 1988) instead of backpropagated error derivatives is more biologically plausible. This paper presents a generalized version of the recirculation algorithm (GeneRec), which overcomes several limitations of the earlier…
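Although the full algorithm involves settling a bidirectional network in two phases, the resulting weight update is simple to state: after a minus phase (input only) and a plus phase (input plus target), each weight changes in proportion to the presynaptic minus-phase activation times the postsynaptic plus-minus activation difference. A minimal sketch in Python/NumPy (variable names are illustrative, not taken from the paper):

```python
import numpy as np

def generec_update(x_minus, y_minus, y_plus, lrate=0.1):
    """GeneRec-style weight update from local activation differences.

    x_minus : presynaptic activations from the minus (expectation) phase
    y_minus : postsynaptic activations from the minus phase
    y_plus  : postsynaptic activations from the plus (outcome) phase

    Each weight w[i, j] moves by lrate * x_minus[i] * (y_plus[j] - y_minus[j]),
    so the rule uses only activations locally available at the synapse.
    """
    return lrate * np.outer(x_minus, y_plus - y_minus)

# When the plus and minus phases agree, the error signal is zero
# and no weight changes:
dw = generec_update(np.array([1.0, 0.5]),
                    np.array([0.2, 0.8]),
                    np.array([0.2, 0.8]))
```

The phase difference plays the role of the backpropagated error derivative, which is the sense in which the rule approximates BP without a separate error-transport pathway.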

#### 323 Citations

Generalization in Interactive Networks: The Benefits of Inhibitory Competition and Hebbian Learning

- Computer Science, Medicine
- Neural Computation
- 2001

Simulations using the Leabra algorithm show that cognitive neuroscience models that incorporate the core mechanistic principles of interactivity, inhibitory competition, and error-driven and Hebbian learning satisfy a wider range of biological, psychological, and computational constraints than models employing a subset of these principles.

Contrastive Hebbian Feedforward Learning for Neural Networks

- Computer Science, Medicine
- IEEE Transactions on Neural Networks and Learning Systems
- 2020

CHL is a general learning algorithm that can be used to steer feedforward networks toward desirable outcomes and away from undesirable ones, without any need for the specialized feedback circuit of BP or the symmetric connections used by Boltzmann machines.
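The contrastive Hebbian rule at the core of this line of work combines a Hebbian term from the clamped (plus) phase with an anti-Hebbian term from the free (minus) phase. A hedged sketch, with illustrative names not drawn from the paper:

```python
import numpy as np

def chl_update(pre_plus, post_plus, pre_minus, post_minus, lrate=0.1):
    """Contrastive Hebbian learning update: strengthen co-activity seen
    in the clamped (plus) phase and weaken co-activity seen in the free
    (minus) phase, using only locally available activations."""
    return lrate * (np.outer(pre_plus, post_plus)
                    - np.outer(pre_minus, post_minus))
```

When the two phases agree the update vanishes, so learning stops once the network's free-running activity matches its clamped behavior.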

Can the Brain Do Backpropagation? - Exact Implementation of Backpropagation in Predictive Coding Networks

- Computer Science, Medicine
- NeurIPS
- 2020

A BL model is proposed that produces exactly the same weight updates as BP while employing only local plasticity, i.e., all neurons perform only local computations, carried out simultaneously; the model is then modified into an alternative BL model that works fully autonomously.

Dendritic cortical microcircuits approximate the backpropagation algorithm

- Biology, Computer Science
- NeurIPS
- 2018

A novel view of learning on dendritic cortical circuits and on how the brain may solve the long-standing synaptic credit assignment problem is introduced, in which error-driven synaptic plasticity adapts the network towards a global desired output.

An Alternative to Backpropagation in Deep Reinforcement Learning

- Computer Science
- ArXiv
- 2020

An algorithm called MAP propagation is proposed that can reduce this variance significantly while retaining the locality of the learning rule, and that can solve common reinforcement learning tasks at a speed similar to that of backpropagation when applied to an actor-critic network.

Generalization of Equilibrium Propagation to Vector Field Dynamics

- Computer Science, Mathematics
- ArXiv
- 2018

This work presents a simple two-phase learning procedure for fixed point recurrent networks that generalizes Equilibrium Propagation to vector field dynamics, relaxing the requirement of an energy function.

Bidirectional Backpropagation: Towards Biologically Plausible Error Signal Transmission in Neural Networks

- Computer Science
- ArXiv
- 2017

This work proposes a biologically plausible paradigm of neural architecture, based on related literature in neuroscience and asymmetric BP-like methods with trainable feedforward and feedback weights; results show that these models outperform other asymmetric BP-like methods on the MNIST and CIFAR-10 datasets.

An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity

- Medicine, Computer Science
- Neural Computation
- 2017

This work shows that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity.
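The local Hebbian plasticity referred to here pairs a prediction-error signal, computed locally at each layer, with the activity that generated the prediction. A sketch of one linear predictive-coding layer (illustrative, not code from the paper):

```python
import numpy as np

def pc_step(x, h, W, lrate=0.1):
    """One predictive-coding learning step for a single linear layer.

    The layer predicts its input as W @ h; the prediction error
    e = x - W @ h is available locally at the error units, and the
    weight update is the Hebbian product of that local error and the
    hidden activity h, so no error derivatives are transported.
    """
    e = x - W @ h                    # local prediction error
    W = W + lrate * np.outer(e, h)   # local Hebbian update
    return W, e
```

Repeated steps shrink the prediction error, which is how such networks come to approximate the error-correcting behavior of BP using only local quantities.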

Deep reinforcement learning in a time-continuous model

- 2019

Inspired by the recent success of deep learning [1], several models have emerged that try to explain how the brain might realize plasticity rules reaching performance similar to that of deep learning [2, 3, 4, 5]…

Continual Learning of Recurrent Neural Networks by Locally Aligning Distributed Representations

- Medicine, Computer Science
- IEEE Transactions on Neural Networks and Learning Systems
- 2020

Results are presented that show the P-TNCN's ability to conduct zero-shot adaptation and online continual sequence modeling, and that it can, in some instances, outperform full BPTT as well as variants such as sparse attentive backtracking.

#### References

Showing 1-10 of 76 references

Deterministic Boltzmann Learning in Networks with Asymmetric Connectivity

- Mathematics
- 1991

The simplicity and locality of the "contrastive Hebb synapse" (CHS) used in Boltzmann machine learning make it an attractive model for real biological synapses. The slow learning exhibited…

Deterministic Boltzmann Learning Performs Steepest Descent in Weight-Space

- Mathematics, Computer Science
- Neural Computation
- 1989

By using the appropriate interpretation for the way in which a DBM represents the probability of an output vector given an input vector, it is shown that the DBM performs steepest descent in the same function as the original SBM, except at rare discontinuities.

A more biologically plausible learning rule for neural networks.

- Medicine
- Proceedings of the National Academy of Sciences of the United States of America
- 1991

A more biologically plausible learning rule is described, using reinforcement learning, which is applied to the problem of how area 7a in the posterior parietal cortex of monkeys might represent visual space in head-centered coordinates, and shows that a neural network does not require backpropagation to acquire biologically interesting properties.

Generalization of Back-Propagation to Recurrent and Higher Order Neural Networks

- Computer Science
- NIPS
- 1987

A general method for deriving backpropagation algorithms for recurrent and higher-order networks is presented and applied to a constrained dynamical system for training a content-addressable memory.

Local Synaptic Learning Rules Suffice to Maximize Mutual Information in a Linear Network

- Computer Science
- Neural Computation
- 1992

A local synaptic learning rule is described that performs stochastic gradient ascent on this information-theoretic quantity, for the case in which the input-output mapping is linear and the input signal and noise are multivariate Gaussian.

Contrastive Hebbian Learning in the Continuous Hopfield Model

- Computer Science
- 1991

This paper shows that contrastive Hebbian learning, the algorithm used in mean field learning, can be applied to any continuous Hopfield model. This implies that non-logistic activation functions as…

Learning Representations by Recirculation

- Mathematics, Computer Science
- NIPS
- 1987

Simulations in simple networks show that the learning procedure usually converges rapidly on a good set of codes, and analysis shows that in certain restricted cases it performs gradient descent in the squared reconstruction error.

Neurons with graded response have collective computational properties like those of two-state neurons.

- Computer Science, Mathematics
- Proceedings of the National Academy of Sciences of the United States of America
- 1984

A model for a large network of "neurons" with a graded response (or sigmoid input-output relation) is studied and shown to have collective properties in very close correspondence with the earlier stochastic model based on McCulloch-Pitts neurons.

Mean Field Theory Neural Networks for Feature Recognition, Content Addressable Memory and Optimization

- Computer Science
- 1991

Using the mean field theory technique in the context of the Boltzmann machine gives rise to a fast deterministic learning algorithm with a performance comparable with that of the backpropagation algorithm in feature recognition applications.

Dynamics and architecture for neural computation

- Computer Science, Mathematics
- J. Complex.
- 1988

Backpropagation techniques for programming the architectural components are presented in a formalism appropriate for a collective nonlinear dynamical system, and it is shown that conventional recurrent backpropagation is not capable of storing multiple patterns in an associative memory that starts out with an insufficient number of point attractors.