Neural Network Playground

Interactive visualization of neural network architectures and concepts

Network Components

Drag components to the canvas to build your neural network

Input Layer
Hidden Layer
Output Layer
Convolutional
Pooling
LSTM
RNN
GRU

Network Settings

Build Your Neural Network

Drag components from the left panel and drop them here. Connect them by dragging from output (right) to input (left) ports.

Layer Properties

Hover over a node to see its properties

Activation Function

Layer Weights

Training Progress

Loss: -
Accuracy: -

Backpropagation Explained

Backpropagation is the key algorithm that allows neural networks to learn from data. It works by calculating how much each weight in the network contributes to the overall error and adjusting the weights to minimize this error.

The Steps of Backpropagation:

  1. Forward Pass

    Input data flows through the network to produce a prediction.

  2. Calculate Error

    Compare the prediction with the expected output to compute the error.

  3. Backward Pass

    Propagate the error backward through the network.

  4. Update Weights

    Adjust each weight based on its contribution to the error.
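The four steps above can be sketched for a single neuron in plain Python. This is a minimal illustration of the idea (one weight, one bias, squared loss, sigmoid activation), not the playground's actual code:

```python
import math

# Minimal backpropagation for a one-neuron network: y_hat = sigmoid(w*x + b).
# Illustrates the four steps: forward pass, error, backward pass, weight update.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.5, 0.0          # initial weight and bias
x, y = 1.0, 1.0          # one training example: input and target
eta = 0.1                # learning rate

for _ in range(100):
    # 1. Forward pass: input flows through the network to produce a prediction.
    z = w * x + b
    y_hat = sigmoid(z)
    # 2. Calculate error: squared loss L = (y_hat - y)^2 / 2.
    loss = 0.5 * (y_hat - y) ** 2
    # 3. Backward pass: chain rule gives dL/dw = (y_hat - y) * sigmoid'(z) * x,
    #    where sigmoid'(z) = y_hat * (1 - y_hat).
    dL_dz = (y_hat - y) * y_hat * (1.0 - y_hat)
    dL_dw = dL_dz * x
    dL_db = dL_dz
    # 4. Update weights in the direction that reduces the loss.
    w -= eta * dL_dw
    b -= eta * dL_db
```

After repeating these four steps, the loss shrinks and the prediction moves toward the target.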

Step: Forward Pass

Data is flowing through the network...

Mathematical Insight

Gradient Descent Update: w = w - η ∇L(w)
Chain Rule: ∂L/∂w = (∂L/∂y) × (∂y/∂w)

The gradient (∇L) shows the direction of steepest increase in error. By moving in the opposite direction, we minimize the error. The learning rate (η) controls the step size.
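The update rule can be seen in isolation on a toy loss. Here is a sketch using L(w) = (w − 3)², whose gradient is 2(w − 3) and whose minimum is at w = 3 (an illustrative example, not tied to the playground's network):

```python
# Gradient descent on a simple loss L(w) = (w - 3)^2.
# The update w <- w - eta * grad(w) moves w toward the minimum at w = 3.

def grad(w):
    return 2.0 * (w - 3.0)   # gradient of (w - 3)^2

w = 0.0
eta = 0.1   # learning rate: controls the step size
for _ in range(50):
    w -= eta * grad(w)       # step against the gradient
```

Each step shrinks the distance to the minimum by the factor (1 − 2η), so after 50 steps w is very close to 3. A learning rate that is too large would overshoot; one that is too small would converge slowly.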

Understanding the Animation

Forward Signal Flow
Error Calculation
Backward Error Propagation
Weight Updates

Current Variables

Forward Propagation Explained

Forward propagation is the process by which input data flows through a neural network to generate predictions. It is the same computation a trained network performs when making inferences on new data.

How Forward Propagation Works:

  1. Input Layer

    The network receives data through its input neurons.

  2. Hidden Layer Computation

    Each hidden neuron computes a weighted sum of inputs and applies an activation function.

  3. Output Generation

    The final layer produces the network's prediction or classification.
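The three stages above can be sketched end to end. This example uses the 3-4-2 architecture described further down this page (3 inputs, 4 ReLU hidden neurons, 2 sigmoid outputs); the weights are arbitrary random values, chosen only for illustration:

```python
import math
import random

random.seed(0)

def relu(z):
    return max(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dense(inputs, weights, biases, activation):
    # Each neuron: weighted sum of its inputs plus a bias, then the activation.
    return [activation(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# 1. Input layer: 3 values enter the network.
x = [0.5, -1.0, 2.0]

# 2. Hidden layer computation: 4 neurons with ReLU activation.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
h = dense(x, W1, b1, relu)

# 3. Output generation: 2 neurons with sigmoid activation.
W2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
b2 = [0.0] * 2
y = dense(h, W2, b2, sigmoid)
```

Because the output layer uses sigmoid, each output lands strictly between 0 and 1 and can be read as a score or probability.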

Current Layer: Input

Data enters the network through the input layer.

Computations in Detail

z = w₁x₁ + w₂x₂ + ... + wₙxₙ + b
a = σ(z)
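These two equations translate directly into code for a single neuron (example input and weight values are arbitrary):

```python
import math

def neuron(xs, ws, b):
    # z = w1*x1 + w2*x2 + ... + wn*xn + b
    z = sum(w * x for w, x in zip(ws, xs)) + b
    # a = sigma(z), the sigmoid of the weighted sum
    a = 1.0 / (1.0 + math.exp(-z))
    return a

a = neuron([1.0, 2.0], [0.5, -0.25], 0.1)  # z = 0.5 - 0.5 + 0.1 = 0.1
```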

Network Details

Architecture

Input layer: 3 neurons

Hidden layer: 4 neurons (ReLU activation)

Output layer: 2 neurons (Sigmoid activation)

Activation Functions

ReLU:
f(x) = max(0, x)
Sigmoid:
f(x) = 1 / (1 + e⁻ˣ)
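Both activation functions are one-liners in code:

```python
import math

# ReLU: passes positive values through, clamps negatives to zero.
def relu(x):
    return max(0.0, x)

# Sigmoid: squashes any real number into the open interval (0, 1).
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))
```

ReLU is cheap and keeps gradients alive for positive inputs, which is why it is the common choice for hidden layers; sigmoid's bounded output makes it natural for the output layer when a probability-like score is wanted.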

Neural Network Visualization

This visualization represents neurons firing in a neural network. Watch as activation patterns form and spread across the network, simulating how information flows through neural pathways.

Visualization Controls


About This Visualization

This animation represents a simplified view of neural activity. Each dot represents a neuron, and the lines represent connections between neurons. When a neuron "fires," it activates connected neurons based on the strength of their connections.

In real neural networks, neurons only fire when their activation exceeds a threshold, and the pattern of connections is learned during training.
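The threshold behavior described above can be sketched in a few lines. This is a toy model of the idea (the threshold value and weights are made up for illustration), not the animation's actual code:

```python
# A neuron "fires" only when the weighted input from its connected
# neighbors exceeds a fixed threshold.
THRESHOLD = 1.0

def fires(inputs, weights, threshold=THRESHOLD):
    # inputs: 1 if an upstream neuron fired, 0 if it stayed silent.
    # weights: the strength of each connection.
    activation = sum(w * x for w, x in zip(weights, inputs))
    return activation > threshold

both_fired = fires([1, 1], [0.7, 0.6])   # combined input 1.3 exceeds 1.0
one_fired = fires([1, 0], [0.7, 0.6])    # combined input 0.7 does not
```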

Statistics

Active Neurons:
0
Connections:
0
Firing Rate:
0 Hz