LEABRA

Leabra stands for "Local, Error-driven and Associative, Biologically Realistic Algorithm". It is a model of learning that balances Hebbian and error-driven learning with other network-derived characteristics, and it is used to mathematically predict outcomes based on inputs and previous learning influences. The model both draws on and contributes to neural network designs and models. Leabra is the default algorithm in Emergent (the successor of PDP++) when creating a new project, and it is extensively used in various simulations.

Hebbian learning is performed using the conditional principal components analysis (CPCA) algorithm, with a correction factor for sparse expected activity levels.
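
As an illustration, the core CPCA rule can be sketched in a few lines of Python/NumPy. The function name and learning rate here are assumptions for the sketch, and the sparse-activity correction factor mentioned above is omitted:

    import numpy as np

    def cpca_hebbian_dw(x, y, w, lrate=0.01):
        """Sketch of the core CPCA Hebbian weight change (the correction
        factor for sparse activity is omitted).

        x : (n_in,)  sender activations (plus phase)
        y : (n_out,) receiver activations (plus phase)
        w : (n_out, n_in) current linear weight values
        """
        # dw_ij = lrate * y_j * (x_i - w_ij): each weight moves toward the
        # conditional expectation of x_i given that receiver j is active.
        return lrate * y[:, None] * (x[None, :] - w)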

Error-driven learning is performed using GeneRec, a generalization of the recirculation algorithm that approximates Almeida–Pineda recurrent backpropagation. The symmetric, midpoint version of GeneRec is used, which is equivalent to the contrastive Hebbian learning (CHL) algorithm. See O'Reilly (1996; Neural Computation) for more details.
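
A minimal sketch of the symmetric CHL weight change, assuming NumPy arrays of phase activations (the function name and learning rate are illustrative):

    import numpy as np

    def chl_dw(x_minus, y_minus, x_plus, y_plus, lrate=0.01):
        """Sketch of the contrastive Hebbian learning (CHL) rule."""
        # dw_ij = lrate * (x_i^+ y_j^+ - x_i^- y_j^-): the difference in
        # sender-receiver coproducts between the plus (outcome) and minus
        # (expectation) phases approximates the error gradient.
        return lrate * (np.outer(y_plus, x_plus) - np.outer(y_minus, x_minus))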

The activation function is a point-neuron approximation with both discrete spiking and continuous rate-code output.
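
A hedged sketch of one rate-code update step for such a point neuron; the conductance, reversal-potential, and gain values below are illustrative placeholders, not published Leabra defaults:

    import numpy as np

    def point_neuron_step(v_m, g_e, g_i, dt=0.3, g_l=0.1,
                          e_e=1.0, e_i=0.15, e_l=0.15,
                          theta=0.25, gain=100.0):
        """Integrate membrane potential and return a continuous rate code."""
        # v_m integrates excitatory, inhibitory, and leak currents, each
        # driving it toward that channel's reversal potential E_c.
        i_net = g_e * (e_e - v_m) + g_i * (e_i - v_m) + g_l * (e_l - v_m)
        v_m = v_m + dt * i_net
        # Rate code: a saturating function of [v_m - theta]_+, the amount
        # by which v_m exceeds the firing threshold.
        above = np.maximum(v_m - theta, 0.0)
        y = gain * above / (gain * above + 1.0)
        return v_m, y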

Layer or unit-group level inhibition can be computed directly using a k-winners-take-all (KWTA) function, producing sparse distributed representations.
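
A minimal sketch of the basic (non-average-based) kWTA computation, assuming each unit's threshold inhibition level g_i^Θ has already been computed; the placement fraction q is an illustrative parameter:

    import numpy as np

    def kwta_inhibition(g_theta, k, q=0.25):
        """Place a single layer-wide inhibitory conductance between the
        k-th and (k+1)-th highest threshold-inhibition values, so that at
        most k units remain above firing threshold."""
        srt = np.sort(g_theta)[::-1]              # descending order
        # g_i = g^Theta_{k+1} + q * (g^Theta_k - g^Theta_{k+1})
        return srt[k] + q * (srt[k - 1] - srt[k])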

The net input is computed as an average, not a sum, over connections, based on normalized, sigmoidally transformed weight values, which are subject to scaling on a connection-group level to alter relative contributions. Automatic scaling is performed to compensate for differences in expected activity level in the different projections.
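
A sketch of such an averaged, scaled net input. The exact logistic form of the weight contrast enhancement and the rel_scale parameter are assumptions for illustration, and the automatic activity-based compensation is omitted:

    import numpy as np

    def sig_weight(w, gain=6.0):
        """Assumed sigmoidal contrast enhancement of linear weights in
        [0, 1]; centered at 0.5 and reducing to identity when gain = 1."""
        w = np.clip(w, 1e-6, 1.0 - 1e-6)
        return 1.0 / (1.0 + ((1.0 - w) / w) ** gain)

    def net_input(x, w_lin, rel_scale=1.0):
        """Average (not sum) of contrast-enhanced weighted inputs over a
        projection, scaled by a per-projection relative contribution."""
        return rel_scale * (sig_weight(w_lin) @ x) / x.shape[0]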

Documentation for this algorithm can be found in the book "Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain", published by MIT Press,[1] and in the Emergent documentation.

 

Overview of the Leabra Algorithm

The pseudocode for Leabra is given here, showing exactly how the pieces of the algorithm described above fit together.

Iterate over minus and plus phases of settling for each event.
 o At start of settling, for all units:
   - Initialize all state variables (activation, v_m, etc).
   - Apply external patterns (clamp input in minus, input & output in plus).
   - Compute net input scaling terms (constants, computed here so network
     can be dynamically altered).
   - Optimization: compute net input once from all static activations
     (e.g., hard-clamped external inputs).
 o During each cycle of settling, for all non-clamped units:
   - Compute excitatory netinput (g_e(t), aka eta_j or net)
     -- sender-based optimization by ignoring inactives.
   - Compute kWTA inhibition for each layer, based on g_i^Θ:
     * Sort units into two groups based on g_i^Θ: top k and
       remaining k+1 -> n.
     * If basic, find k and k+1th highest;
       if avg-based, compute avg of 1 -> k & k+1 -> n.
     * Set inhibitory conductance g_i from g^Θ_k and g^Θ_k+1.
   - Compute point-neuron activation combining excitatory input and
     inhibition.
 o After settling, for all units, record final settling activations
   as either minus or plus phase (y^-_j or y^+_j).

After both phases, update the weights (based on linear current weight
values), for all connections:
 o Compute error-driven weight changes with CHL with soft weight bounding.
 o Compute Hebbian weight changes with CPCA from plus-phase activations.
 o Compute net weight change as weighted sum of error-driven and Hebbian.
 o Increment the weights according to net weight change.
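
Tying the learning steps above together, a hedged sketch of the net weight change for one projection, following the weighted-sum and soft-weight-bounding description in the pseudocode; the Hebbian mixing proportion k_hebb and learning rate are illustrative values:

    import numpy as np

    def leabra_dw(x_minus, y_minus, x_plus, y_plus, w,
                  lrate=0.01, k_hebb=0.01):
        """Net Leabra weight change: a weighted sum of soft-bounded CHL
        error-driven learning and CPCA Hebbian learning, applied to the
        linear weight values w in [0, 1]."""
        # Error-driven (CHL) term with soft weight bounding: positive
        # changes are scaled by (1 - w), negative ones by w.
        err = np.outer(y_plus, x_plus) - np.outer(y_minus, x_minus)
        err = np.where(err > 0, err * (1.0 - w), err * w)
        # Hebbian (CPCA) term from plus-phase activations.
        hebb = y_plus[:, None] * (x_plus[None, :] - w)
        return lrate * (k_hebb * hebb + (1.0 - k_hebb) * err)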

Special algorithms

References

  1. O'Reilly, R. C., & Munakata, Y. (2000). Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. Cambridge, MA: MIT Press. ISBN 0-19-510491-9.

External links
