Web Simulation 
Perceptron I 

This note provides an interactive, visual simulation of a single perceptron. It is meant to help you build intuition about how a neural network works at the simplest level.

Using a simple example that mimics the operation of a logic gate, this note shows how a perceptron produces an output from two binary inputs by calculating a weighted sum and applying an activation function. It also helps you see how changing the weights and bias shifts the classification behavior in real time.

The simulation lets you select an activation function such as step, sigmoid, tanh, ReLU, or linear. It shows the transfer function curve and marks the current weighted-sum position on that curve. It displays the input values, the computed sum, and the output value so you can connect the equation Σ = Σᵢ(xᵢwᵢ) + b, and the output f(Σ), to the final decision.
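
The forward pass described above can be sketched in a few lines of Python. This is an illustrative sketch, not the simulator's actual code; the function names and example values are assumptions chosen for clarity:

```python
import math

# Common transfer (activation) functions offered by the simulation.
def step(s):    return 1.0 if s >= 0 else 0.0   # fires when the sum crosses 0
def sigmoid(s): return 1.0 / (1.0 + math.exp(-s))
def tanh(s):    return math.tanh(s)
def relu(s):    return max(0.0, s)
def linear(s):  return s

def perceptron(x1, x2, w1, w2, b, f):
    """Compute the weighted sum Σ = x1*w1 + x2*w2 + b, then apply f(Σ)."""
    s = x1 * w1 + x2 * w2 + b
    return f(s)

# Hypothetical parameter values, matching the examples used later in this note:
print(perceptron(1, 1, 0.3, 0.3, 0.1, step))  # Σ = 0.7, step output = 1.0
```

Swapping `step` for `sigmoid` or any of the other functions changes only the final mapping from Σ to the output, which is exactly what the transfer-curve display in the simulation visualizes.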

This note also demonstrates how a single perceptron can learn simple logic gates. It provides presets for AND, OR, NAND, and NOR, and uses each gate's truth table as training data. When you run training, it updates the weights and bias step by step, using the perceptron learning rule for the step activation or gradient-based updates for differentiable activations. It shows the total error and stops automatically when all truth-table cases are classified correctly.
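
For the step activation, the training loop can be sketched as follows. This is a minimal illustration of the perceptron learning rule, assuming zero-initialized parameters and the note's default learning rate of 0.1; it is not the simulator's own implementation:

```python
# Truth table for the AND gate as training data: ((x1, x2), target)
AND_TABLE = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def step(s):
    return 1 if s >= 0 else 0

def train_perceptron(table, lr=0.1, max_epochs=100):
    w1 = w2 = b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for (x1, x2), target in table:
            out = step(x1 * w1 + x2 * w2 + b)
            err = target - out
            if err != 0:
                # Perceptron learning rule: nudge each parameter
                # in the direction that reduces the error.
                w1 += lr * err * x1
                w2 += lr * err * x2
                b  += lr * err
                errors += 1
        if errors == 0:   # all truth-table cases classified correctly
            break
    return w1, w2, b

w1, w2, b = train_perceptron(AND_TABLE)
print(w1, w2, b)
```

Because AND is linearly separable, the loop converges and stops early, mirroring the simulation's automatic stop when the total error reaches zero.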

NOTE: Refer to this note for the theoretical details.

 

Parameters

The following are short descriptions of each parameter.
  • Activation Function: Selects the transfer function f(Σ) used to convert the weighted sum into an output. For example, when it is set to Step, the output becomes 0 or 1 depending on whether Σ ≥ 0.
  • Gate Preset: Loads a predefined set of weights and bias that represents a logic gate's behavior. For example, it is set to AND Gate.
  • Bias (b): Adds a constant offset to the weighted sum. It shifts the decision boundary without changing the input values. For example, b = 0.1.
  • Weight w1: Multiplier applied to input x1. It controls how strongly x1 affects the sum. For example, w1 = 0.3.
  • Weight w2: Multiplier applied to input x2. It controls how strongly x2 affects the sum. For example, w2 = 0.3.
  • Input x1: Binary input value for the first input node. The checkbox sets it to 0 (unchecked) or 1 (checked). For example, x1 = 0.
  • Input x2: Binary input value for the second input node. The checkbox sets it to 0 (unchecked) or 1 (checked). For example, x2 = 0.
  • Learning Rate: Step size used when updating the weights and bias during training. Larger values change the parameters faster. For example, it is 0.10.
  • Train Update Speed (sec): Controls how often one training step runs. Smaller values make training update more frequently. For example, it is 0.5 sec.
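
The role of the bias is worth seeing concretely: with the step activation and fixed weights, changing b alone can move the decision boundary enough to turn one gate into another. The weight and bias values below are an assumed illustration, not the simulator's presets:

```python
def step(s):
    return 1 if s >= 0 else 0

def out(x1, x2, b, w1=1.0, w2=1.0):
    """Step-activated perceptron with fixed weights; only the bias varies."""
    return step(x1 * w1 + x2 * w2 + b)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# b = -1.5: only (1, 1) clears the threshold -> AND behavior
print([out(x1, x2, -1.5) for x1, x2 in inputs])  # [0, 0, 0, 1]

# b = -0.5: any single active input clears the threshold -> OR behavior
print([out(x1, x2, -0.5) for x1, x2 in inputs])  # [0, 1, 1, 1]
```

This is the effect you can reproduce in the simulation by dragging the bias slider while watching the outputs update.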

Buttons

The following are short descriptions of each button.
  • Reset: Stops training (if running) and restores the simulation to a known starting state. If a gate preset is selected, it reloads that gate's default weights, bias, activation, and the initial input pattern. If no preset is selected (manual mode), it resets to the default manual values.
  • Randomize: Stops training (if running) and assigns random values to the weights and bias. It also randomizes the binary inputs (0 or 1). It keeps the currently selected activation function and it keeps the selected gate preset as-is.
  • Train: Starts the training loop using the selected gate's truth table as training data. It updates weights and bias repeatedly based on the learning rate and the update speed. It stops automatically when all truth-table cases are classified correctly, or it can be stopped manually by pressing the button again.
  • Test: Stops training (if running) and tries all the possible input combinations. If all the input combinations lead to Decision 1 (pass), it pops up 'Test PASS'.