Web Simulation

Hebbian Learning

This note provides an interactive, visual simulation of Hebbian Learning, demonstrating the fundamental principle of neural plasticity: "Neurons that fire together, wire together." The simulation visualizes how synaptic connections between neurons strengthen (Long-Term Potentiation - LTP) when neurons fire in close temporal proximity, and weaken (Long-Term Depression - LTD) when they fire out of sync.

The simulation consists of a network of neurons (circles) connected by synapses (lines). Each neuron has a voltage that decays over time and is color-coded with a blue-to-red gradient based on the weighted sum of its incoming connections (blue = low, red = high). When you click a neuron, it fires (spikes to bright yellow/white), sending signals through its outgoing synapses to connected neurons. The key learning mechanism is the Hebbian plasticity rule: if a source neuron fires and the target neuron fires shortly after (within a few frames), the connection between them strengthens. If they fire out of sync, the connection weakens.

You can interact with the simulation by clicking neurons to fire them manually, and adjusting learning parameters (LTP rate, LTD rate, weight decay, and signal strength) to observe how different settings affect the network's ability to learn and form pathways. Watch as frequently used pathways become stronger (thicker, brighter lines) while unused connections fade away.

NOTE: The simulation implements a simplified firing-window approach to Hebbian learning. When a source neuron fires, it marks itself as "fired recently" for 5 frames. If the target neuron fires during this window, the connection strengthens (LTP). If the target does not fire, the connection weakens (LTD). All connections also experience passive weight decay to prevent runaway growth. This creates a self-organizing network where frequently activated pathways become stronger over time.

Math behind the Simulation

Hebbian Learning is a fundamental principle of neural plasticity first proposed by Donald Hebb in 1949. The simulation implements a discrete, agent-based version of Hebbian learning where each neuron is an individual agent with voltage dynamics, and synapses (connections) have weights that change based on the temporal correlation of firing between connected neurons.

Neuron State Variables

Each neuron in the simulation has the following state variables:

  • V: Voltage (activation level), ranges from resting state (-0.1) to threshold (0.5)
  • θ: Threshold (0.5) - voltage level required to fire
  • F: Firing state (boolean) - true when neuron fires
  • tfire: Frames since last fire (used for temporal window)

Voltage Dynamics

Each frame, the neuron's voltage decays exponentially towards the resting state:

V(t+1) = V(t) · α + I(t)

where:

  • α: Decay factor (0.95 in simulation) - voltage decays by 5% each frame
  • I(t): Input signal from connected neurons at time t
  • V(t): Voltage at time t

The voltage is clamped to a minimum of the resting state:

V(t) ≥ Vrest = -0.1

When voltage exceeds the threshold, the neuron fires:

if V(t) ≥ θ then F(t) = true, V(t+1) = Vrest
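The voltage update and firing rule above can be sketched in a few lines of Python (the simulation itself presumably runs in JavaScript; the constant and function names here are illustrative, but the values match those stated in this note):

```python
# Illustrative sketch of the per-frame voltage update; names are assumed.
ALPHA = 0.95      # decay factor (voltage loses 5% per frame)
V_REST = -0.1     # resting voltage
THETA = 0.5       # firing threshold

def step_voltage(v, input_signal):
    """Return (new_voltage, fired) after one frame."""
    v = v * ALPHA + input_signal   # exponential decay toward rest, plus input
    v = max(v, V_REST)             # clamp at the resting state
    if v >= THETA:                 # threshold crossing -> fire
        return V_REST, True        # voltage resets to rest after firing
    return v, False

v, fired = step_voltage(0.4, 0.2)  # 0.4 * 0.95 + 0.2 = 0.58 >= 0.5, so it fires
```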

Signal Propagation

When a source neuron fires, it sends signals through all outgoing synapses to target neurons. The input to a target neuron is:

Itarget(t) = Σi wi · s · Fsource,i(t)

where:

  • wi: Weight of synapse i connecting source to target
  • s: Signal strength parameter (default 0.3)
  • Fsource,i(t): Firing state of source neuron i at time t (1 if firing, 0 otherwise)
  • The sum is over all synapses connecting to the target neuron
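A minimal Python sketch of this weighted sum follows; the synapse representation and function name are assumptions for illustration, not the simulation's actual data structures:

```python
# Hypothetical sketch: sum w_i * s over synapses into `target` whose source fired.
S = 0.3  # signal strength parameter (default in this note)

def input_to(target, synapses, firing):
    """Total input delivered to `target` this frame."""
    return sum(syn["w"] * S
               for syn in synapses
               if syn["dst"] == target and firing[syn["src"]])

synapses = [{"src": 0, "dst": 2, "w": 0.5},
            {"src": 1, "dst": 2, "w": 0.2}]
firing = {0: True, 1: False}   # only neuron 0 fired this frame
input_to(2, synapses, firing)  # 0.5 * 0.3 = 0.15
```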

Hebbian Learning Rule (Plasticity)

The core of Hebbian learning is the modification of synapse weights based on the temporal correlation of firing between source and target neurons. The weight update rule is:

Δw = ηLTP · H - ηLTD · D - δ

where:

  • ηLTP: Learning rate for Long-Term Potentiation (default 0.05)
  • ηLTD: Learning rate for Long-Term Depression (default 0.01)
  • δ: Weight decay rate (default 0.0001) - passive decay applied every frame
  • H: Indicator for correlated firing (the LTP condition)
  • D: Indicator for uncorrelated firing (the LTD condition)

The indicator functions are defined as:

H = 1 if (tsource ≤ 5 AND Ftarget(t) = true), 0 otherwise
D = 1 if (tsource ≤ 5 AND ttarget > 5), 0 otherwise

This implements the "firing window" approach: if the source neuron fired within the last 5 frames (tsource ≤ 5) AND the target neuron is firing now, then H = 1 and LTP occurs. If the source fired recently but the target has not fired within the window (ttarget > 5), then D = 1 and LTD occurs. In all other cases only the passive decay δ applies.
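The firing-window rule can be sketched in Python as a per-synapse delta, assuming frame counters for both neurons (names and structure are illustrative):

```python
# Sketch of the firing-window plasticity rule; parameter names are assumed.
ETA_LTP, ETA_LTD, DECAY, WINDOW = 0.05, 0.01, 0.0001, 5

def weight_delta(t_src, tgt_firing_now, t_tgt):
    """Weight change this frame, given frames since source/target last fired."""
    dw = -DECAY                               # passive decay, every frame
    if t_src <= WINDOW and tgt_firing_now:
        dw += ETA_LTP                         # LTP: fired together in the window
    elif t_src <= WINDOW and t_tgt > WINDOW:
        dw -= ETA_LTD                         # LTD: source recent, target silent
    return dw

weight_delta(3, True, 0)    # LTP frame: +0.05 - 0.0001 = +0.0499
weight_delta(3, False, 99)  # LTD frame: -0.01 - 0.0001 = -0.0101
```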

Long-Term Potentiation (LTP)

When neurons fire together (within the temporal window), the connection strengthens:

ΔwLTP = ηLTP if (source fired recently AND target firing now)

This implements Hebb's rule: "Neurons that fire together, wire together." The weight increases, making the connection stronger and more effective at transmitting signals.

Long-Term Depression (LTD)

When neurons fire out of sync, the connection weakens:

ΔwLTD = -ηLTD if (source fired recently AND target NOT firing recently)

This prevents formation of weak associations and ensures only frequently co-activated pathways remain strong.

Weight Constraints

Synapse weights are clamped to a valid range to prevent instability:

wmin ≤ w(t) ≤ wmax
wmin = 0.01 (minimum visible weight)
wmax = 1.0 (maximum weight)

After each update, the weight is clamped:

w(t+1) = max(wmin, min(wmax, w(t) + Δw))
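The clamped update is a one-liner; a Python sketch with the bounds stated above (names assumed):

```python
W_MIN, W_MAX = 0.01, 1.0   # minimum visible / maximum weight

def clamp_weight(w, dw):
    """Apply a weight change and clamp the result to [W_MIN, W_MAX]."""
    return max(W_MIN, min(W_MAX, w + dw))

clamp_weight(0.98, 0.05)   # would be 1.03, clamped to 1.0
clamp_weight(0.02, -0.05)  # would be -0.03, clamped to 0.01
```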

Classical Hebbian Rule (Reference)

The classical continuous-time Hebbian learning rule is:

dw/dt = η · xi(t) · yj(t)

where:

  • w: Synapse weight
  • η: Learning rate
  • xi(t): Activity of presynaptic (source) neuron i at time t
  • yj(t): Activity of postsynaptic (target) neuron j at time t

The simulation implements a discrete, spike-based version where:

  • Neuron activity is binary (firing or not firing)
  • The temporal window (5 frames) defines "together" - neurons must fire within this window
  • LTP occurs when both fire within the window, LTD when they don't
  • Passive decay prevents runaway growth
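Putting the pieces together, a self-contained Python sketch (with assumed frame timing and an assumed initial weight) shows the net effect of the discrete rule: repeatedly firing neuron A two frames before neuron B strengthens the A→B weight, because one LTP event per trial outweighs the passive decay and occasional LTD:

```python
# Minimal two-neuron sketch of the discrete rule; all timing is assumed.
ETA_LTP, ETA_LTD, DECAY, WINDOW = 0.05, 0.01, 0.0001, 5

w = 0.2                       # assumed initial A->B weight
t_a, t_b = 999, 999           # frames since A / B last fired
for trial in range(10):
    for frame in range(6):    # one short trial of 6 frames
        t_a = 0 if frame == 0 else t_a + 1   # A fires at frame 0 of each trial
        b_fires_now = (frame == 2)           # B fires 2 frames later (in window)
        t_b = 0 if b_fires_now else t_b + 1
        dw = -DECAY                          # passive decay every frame
        if t_a <= WINDOW and b_fires_now:
            dw += ETA_LTP                    # LTP: fired together in the window
        elif t_a <= WINDOW and t_b > WINDOW:
            dw -= ETA_LTD                    # LTD: source recent, target silent
        w = max(0.01, min(1.0, w + dw))      # clamp to the valid range
# After 10 trials, w has grown well above its initial 0.2.
```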

Key Insight: Temporal Correlation

The fundamental insight of Hebbian learning is that synaptic strength changes based on the temporal correlation of pre- and postsynaptic activity. Mathematically, this means:

  • If pre- and postsynaptic neurons fire together (within temporal window), the connection strengthens (LTP)
  • If they fire out of sync, the connection weakens (LTD)
  • This creates a self-organizing network where frequently activated pathways become stronger
  • Unused connections fade away due to passive decay and LTD

This is why the simulation shows connections becoming thicker and brighter when you repeatedly fire neurons in sequence - the network is learning the pattern through Hebbian plasticity. The 5-frame temporal window is crucial: it defines what "together" means in "fire together, wire together."

Usage Example

Follow these steps to explore Hebbian Learning and neural plasticity:

  1. Initial State: When you first load the simulation, 18 neurons are randomly distributed across the canvas, connected by synapses with random initial weights. Click the "Play" button to start the simulation. The network is ready for you to interact with.
  2. Click to Fire Neurons: Click on any neuron (circle) to manually trigger it to fire. When a neuron fires, it:
    • Flashes bright yellow/white for one frame
    • Sends signals through all its outgoing synapses to connected neurons
    • Marks itself as "fired recently" for 5 frames (the temporal window)
  3. Observe Signal Propagation: When a neuron fires, signals propagate through synapses to connected neurons. Watch how:
    • Stronger connections (thicker, brighter lines) transmit stronger signals
    • Target neurons receive input and their voltage increases
    • If a target neuron's voltage exceeds the threshold (0.5), it fires automatically
    • This creates a chain reaction of firing through the network
  4. Watch Connections Strengthen (LTP): This is the key learning moment! Try this:
    • Click Neuron A to fire it
    • Quickly click Neuron B (within a few frames) to fire it
    • Watch the connection between A and B strengthen - the line becomes thicker and brighter!
    This demonstrates "fire together, wire together" - when neurons fire in close temporal proximity, their connection strengthens (Long-Term Potentiation).
  5. Observe Connections Weaken (LTD): Try firing a neuron but NOT firing its connected targets. Over time, you'll see:
    • Unused connections become thinner and fade
    • This is Long-Term Depression - connections weaken when neurons fire out of sync
    • Passive weight decay also slowly reduces all connection strengths
  6. Experiment with Learning Rates: Adjust the parameter sliders to see how they affect learning:
    • Learning Rate (LTP): Higher values make connections strengthen faster when neurons fire together
    • Learning Rate (LTD): Higher values make connections weaken faster when neurons fire out of sync
    • Weight Decay: Higher values cause faster passive decay of all connections
    • Signal Strength: Higher values make signals more effective at activating target neurons
  7. Create Pathways: Try creating a pathway through the network:
    • Click neurons in sequence (A → B → C → D) repeatedly
    • Watch as the pathway strengthens - the connections become thicker and brighter
    • After several repetitions, the pathway may become strong enough that firing A automatically triggers B, C, and D!
    This demonstrates how Hebbian learning creates functional pathways in neural networks.
  8. Observe Network Statistics: The info display shows:
    • Active Connections: Number of synapses with weight > 0.1
    • Average Weight: Mean strength of all connections
    • Firing Neurons: Number of neurons currently firing
    Watch how these values change as you interact with the network and connections strengthen or weaken.
  9. Reset and Experiment: Click "Reset" to start with a fresh network. Try different firing patterns and observe how the network self-organizes based on your input patterns. The network learns from experience!

Tip: The key insight of Hebbian learning is that connections strengthen when neurons fire together (within a short time window) and weaken when they fire out of sync. This creates a self-organizing network where frequently activated pathways become stronger over time, while unused connections fade away. Try clicking neurons in different patterns and watch how the network adapts to your input!

Parameters

The following are short descriptions of each parameter:
  • Learning Rate (LTP): Controls how quickly connections strengthen when neurons fire together (range: 0-0.1, default: 0.05). Higher values mean connections strengthen faster when a source neuron fires recently and the target neuron fires. This implements Long-Term Potentiation - the "fire together, wire together" rule. If set too high, connections may strengthen too quickly and the network becomes unstable. If set too low, learning happens very slowly.
  • Learning Rate (LTD): Controls how quickly connections weaken when neurons fire out of sync (range: 0-0.05, default: 0.01). Higher values mean connections weaken faster when a source neuron fires but the target does not fire shortly after. This implements Long-Term Depression - connections weaken when neurons don't fire together. Typically set lower than LTP rate to allow net strengthening of frequently used pathways.
  • Weight Decay: Controls the passive decay rate of all connection weights (range: 0-0.01, default: 0.0001). Every frame, all synapse weights decrease by this amount. This prevents runaway growth and ensures only actively reinforced connections remain strong. Higher values cause faster decay, making the network more dynamic but potentially unstable. Lower values allow connections to persist longer but may lead to saturation.
  • Signal Strength: Controls the strength of signals transmitted through synapses (range: 0-1.0, default: 0.3). When a neuron fires, it sends a signal to connected neurons. The signal strength multiplied by the synapse weight determines how much the target neuron's voltage increases. Higher values make signals more effective at activating target neurons, potentially creating more chain reactions. Lower values make signals weaker, requiring stronger connections or multiple inputs to fire target neurons.

Buttons and Controls

The following are short descriptions of each control:
  • Play/Pause: Located in the control panel, this button starts or pauses the simulation. When you click Play, the simulation loop begins running, updating neurons, propagating signals, and applying Hebbian learning rules every frame. The button text changes to "Pause" during simulation, allowing you to pause and resume at any time. When paused, you can still click neurons to fire them and adjust parameter sliders.
  • Reset: Located next to the Play/Pause button, this button resets the simulation to its initial state. All neurons are reset to their resting state (voltage = -0.1), all synapses are reset to random initial weights (0.1 to 0.3), and the network topology is regenerated. This is useful for testing different parameter combinations and observing how they affect learning dynamics.

Interaction and Visualization

  • Neuron Visualization: Each neuron is drawn as a circle (12-pixel radius) with its ID number displayed:
    • Color Gradient (Blue to Red): The color of each neuron represents the weighted sum of its incoming connections. Neurons with low weighted sums appear blue, while neurons with high weighted sums appear red. The gradient automatically updates as connection weights change during Hebbian learning, providing a visual indicator of how well-connected each neuron is in the network.
    • Bright Yellow/White with Glow: Firing state - neuron has fired, flashes bright yellow for one frame (overrides the gradient color)
    • The weighted sum is calculated as the sum of weights of all synapses that target the neuron. Neurons with many strong incoming connections will appear red, while neurons with few or weak incoming connections will appear blue.
    Neurons are randomly positioned on the canvas. The network contains 18 neurons with approximately 30% connection probability between any two neurons.
  • Click to Fire: You can click on any neuron to manually trigger it to fire. This is the primary way to interact with the simulation. When you click a neuron:
    • The neuron's voltage spikes to 1.0 (above the 0.5 threshold)
    • The neuron fires immediately (flashes yellow/white)
    • Signals propagate through all outgoing synapses
    • The neuron marks itself as "fired recently" for 5 frames
    The cursor changes to a crosshair when hovering over the canvas to indicate that neurons can be clicked.
  • Synapse Visualization: Each synapse (connection) is drawn as a line with an arrow indicating direction:
    • Line Thickness: Proportional to weight (1-5 pixels, weight 0.01-1.0)
    • Line Opacity: Proportional to weight (0.1-1.0 opacity)
    • Color: Light blue/cyan (rgba(100, 200, 255, opacity))
    • Arrow: Points from source neuron to target neuron, indicating signal direction
    Stronger connections are thicker and brighter, making it easy to see which pathways have been strengthened through learning.
  • Signal Propagation: When a neuron fires, signals propagate through synapses:
    • Each outgoing synapse transmits a signal to its target neuron
    • Signal strength = synapse weight × signal strength parameter
    • Target neuron's voltage increases by the signal strength
    • If target voltage exceeds threshold (0.5), it fires automatically
    • This can create chain reactions through the network
  • Hebbian Learning Rules: The core learning mechanism runs every frame:
    • LTP (Long-Term Potentiation): If source fired recently (within 5 frames) AND target is firing now, increase weight by LTP rate
    • LTD (Long-Term Depression): If source fired recently but target did NOT fire recently, decrease weight by LTD rate
    • Weight Decay: All weights decrease by decay rate every frame (passive decay)
    • Weight Clamping: Weights are clamped between 0.01 (minimum visible) and 1.0 (maximum strength)
    The 5-frame temporal window defines "together" - neurons must fire within 5 frames of each other for LTP to occur.
  • Voltage Dynamics: Each neuron's voltage changes over time:
    • Voltage decays exponentially (multiplied by 0.95 each frame)
    • Voltage cannot go below resting state (-0.1)
    • When voltage exceeds threshold (0.5), neuron fires
    • After firing, voltage resets to resting state (refractory period)
    • Input signals from synapses add to voltage
  • Network Statistics: The info display shows real-time statistics:
    • Active Connections: Number of synapses with weight > 0.1 (visible connections)
    • Average Weight: Mean strength of all synapses (indicates overall network connectivity)
    • Firing Neurons: Number of neurons currently firing (shows network activity)
    These statistics update every frame, allowing you to monitor how the network changes as learning occurs.
  • Performance: The simulation runs at 60 FPS using requestAnimationFrame for smooth animation. With 18 neurons and approximately 30% connection probability, there are typically 50-100 synapses, making the simulation very efficient. The O(n²) synapse updates are fast enough for real-time interaction.