Hopfield Network with State-Dependent Threshold

In a Hopfield network model, the two states of a neuron (firing or at rest) are denoted by the values +1 and −1. The future state of a neuron is determined by the present states of all the other neurons via the synaptic matrix, but is independent of its own present state. The overlap m between the original and the time-evolved state of the network is a measure of "memory recall"; it depends on the fractional memory load α (the number of stored patterns divided by the number of neurons) and a state-dependent neuron firing threshold θ. This Demonstration computes the average time evolution of patterns stored in a network of N neurons with a neuron-state-dependent firing threshold. The distribution of resulting overlaps m between the time-evolved and original states shifts from a peak at one to a broad distribution at lower values of m as the memory load increases, indicating the threshold memory capacity of the neural network.
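The dynamics described above can be sketched in a few lines of NumPy. The sizes N and p, the threshold strength θ, and the specific form of the state-dependent term (here an extra contribution θ·s that favors a neuron's current state, so a larger θ makes flips harder) are illustrative assumptions for this sketch, not taken from the Demonstration's source code.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 100, 5      # network size and number of stored patterns (illustrative)
theta = 0.0        # threshold strength (assumed form of the state dependence)

# Hebbian storage of random +/-1 patterns; the zero diagonal makes each
# neuron's synaptic input independent of its own present state.
patterns = rng.choice([-1, 1], size=(p, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def step(s, theta):
    """One synchronous update. The term theta*s raises the effective barrier
    for flipping a neuron out of its current state -- one plausible reading
    of a 'state-dependent threshold'."""
    return np.where(W @ s + theta * s >= 0, 1, -1)

def overlap(s, xi):
    """Normalized overlap m between state s and stored pattern xi."""
    return (s @ xi) / N

# Recall: start from a corrupted copy of pattern 0 and iterate.
s = patterns[0] * rng.choice([1, -1], size=N, p=[0.9, 0.1])
for _ in range(10):
    s = step(s, theta)
m = overlap(s, patterns[0])  # close to 1 at this low memory load
```

At this load (α = 0.05, well below the classical Hopfield capacity of roughly 0.14N), the noisy state is attracted back to the stored pattern and the overlap ends near one.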


  • [Snapshots 1–3: recall-overlap distributions for different memory loads and threshold strengths]


Snapshot 1: at the given memory load, the recall distribution is broad, showing a lack of near-perfect recall when the threshold is near zero
Snapshot 2: as the threshold strength is increased at the same memory load, the probability of recall, and thus the memory capacity of the network, increases
Snapshot 3: conversely, when the threshold is decreased, even at a smaller memory load the distribution becomes broad, showing a diminished memory capacity for the network
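The qualitative behavior in the snapshots can be reproduced with a small numerical experiment, again a sketch under the assumed update rule s → sign(Ws + θs), with illustrative sizes and threshold values: at low load the overlaps of the time-evolved stored patterns cluster near one, well above capacity they spread to lower values, and a larger θ restores recall at the same load.

```python
import numpy as np

N = 100  # illustrative network size

def final_overlaps(p, theta, steps=20, seed=1):
    """Store p random patterns, evolve each one synchronously under
    s -> sign(W s + theta s), and return its overlap with the original."""
    rng = np.random.default_rng(seed)
    xi = rng.choice([-1, 1], size=(p, N))
    W = (xi.T @ xi) / N
    np.fill_diagonal(W, 0.0)
    overlaps = []
    for mu in range(p):
        s = xi[mu].copy()
        for _ in range(steps):
            s = np.where(W @ s + theta * s >= 0, 1, -1)
        overlaps.append((s @ xi[mu]) / N)
    return np.array(overlaps)

low        = final_overlaps(p=5,  theta=0.0)  # load 0.05: overlaps peak near m = 1
high       = final_overlaps(p=40, theta=0.0)  # load 0.40: broad, lower overlaps
high_theta = final_overlaps(p=40, theta=0.5)  # same high load, stronger threshold
```

Comparing the mean overlaps of the three runs mirrors Snapshots 1–3: `low` stays near one, `high` degrades, and `high_theta` recovers much of the recall at the same load.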