This interactive tutorial demonstrates Markov Chains, a fundamental mathematical model for systems that transition between discrete states according to probabilistic rules. The key insight is that the next state depends only on the current state, not on the history of how we got there; this is called the Markov Property (or "memoryless" property). The simulation visualizes the connection between the abstract mathematics (transition matrices) and a concrete visual (state diagrams). Watch as the system "rolls dice" to determine which state to visit next, and observe how the long-run statistics converge to the steady-state distribution!

What is a Markov Chain?

A Markov Chain is a stochastic process that transitions between a finite set of states according to fixed probabilities. Formally:
P(X_{n+1} = j | X_n = i, X_{n-1}, ..., X_0) = P(X_{n+1} = j | X_n = i) = p_{ij}
This says: "The probability of going to state j depends ONLY on being in state i right now, not on the entire history."

The Transition Matrix

All transition probabilities are organized in a transition matrix P:
P = [p_{ij}], where p_{ij} = P(X_{n+1} = j | X_n = i)
Key properties of the transition matrix:
- Every entry is a probability: 0 ≤ p_{ij} ≤ 1.
- Every row sums to 1 (Σ_j p_{ij} = 1): from state i, the chain must go somewhere.
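These row properties are easy to check in code. A minimal sketch (the matrix values and the name isRowStochastic are illustrative, not from the simulation):

```javascript
// A hypothetical 3-state transition matrix: rows are "from" states,
// columns are "to" states.
const P3 = [
  [0.5, 0.3, 0.2],
  [0.1, 0.6, 0.3],
  [0.4, 0.4, 0.2],
];

// Verify the two defining properties: every entry lies in [0, 1],
// and every row sums to 1 (up to floating-point tolerance).
function isRowStochastic(M, tol = 1e-9) {
  return M.every(row =>
    row.every(p => p >= 0 && p <= 1) &&
    Math.abs(row.reduce((sum, p) => sum + p, 0) - 1) < tol
  );
}
```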
Steady-State Distribution

After many transitions, a Markov Chain often converges to a steady-state (stationary) distribution π:
π = πP (or equivalently: π_j = Σ_i π_i p_{ij})
The steady-state π is an eigenvector of P^T with eigenvalue 1 (equivalently, a left eigenvector of P). It represents the long-run fraction of time spent in each state!

Mathematical Foundation

Multi-Step Transitions: The probability of going from state i to state j in exactly n steps is given by:
P(X_n = j | X_0 = i) = (P^n)_{ij}
This means: "Raise the matrix to the n-th power to find n-step transition probabilities."

Convergence Theorem: For a finite, irreducible, aperiodic Markov Chain:
lim_{n→∞} P^n = Π, where each row of Π equals π
Regardless of the starting state, the chain converges to the same steady-state distribution.
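The convergence of P^n can be observed directly with a naive matrix power. A sketch under assumed values (the 2-state matrix and function names are illustrative):

```javascript
// Plain nested-loop matrix product for square matrices.
function matMul(A, B) {
  return A.map(row =>
    B[0].map((_, j) => row.reduce((sum, a, k) => sum + a * B[k][j], 0))
  );
}

// Raise P to the n-th power by repeated multiplication.
function matPow(P, n) {
  let R = P;
  for (let i = 1; i < n; i++) R = matMul(R, P);
  return R;
}

// A hypothetical irreducible, aperiodic 2-state chain:
const T = [
  [0.9, 0.1],
  [0.5, 0.5],
];
// Its steady state is pi = [5/6, 1/6], and by the convergence theorem
// every row of T^50 should already be very close to pi.
const T50 = matPow(T, 50);
```

Because the chain's second eigenvalue is 0.4, the rows of T^50 agree with π to far better than floating-point display precision.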
[Interactive simulation panels: Markov Chain Controls (step delay, e.g. 800 ms), State Diagram, Transition Matrix P, Visit Statistics, theoretical Steady State π, Current State and Total Steps readouts, a "How Next State Is Determined" decision panel ("Click Step ▶ or Run to see the decision process"), State Transition History (last 50), and a Visit Statistics Trajectory graph whose dashed lines show the theoretical steady state π. Legend: current state (glowing), transition probability, traveling packet.]
Usage Guide

Follow these steps to explore Markov Chain dynamics:
Understanding the Visualization

The State Diagram

Each circle represents a state in the Markov Chain. Arrows show possible transitions:
The Transition Matrix

The matrix P contains all transition probabilities:
The Simulation Algorithm

Here's how the "Step" function determines the next state, using weighted random choice:

Step 1: Get Probabilities for Current State

Look up the row in the transition matrix corresponding to the current state.

Step 2: Roll Random Number & Accumulate

Generate a random number r between 0 and 1, then find which "region" of accumulated probability it falls into.

The Algorithm in Code

// Weighted random choice: pick the next state from row P[currentState]
function nextState(P, currentState) {
  const row = P[currentState];           // 1. probabilities for the current state
  const r = Math.random();               // 2. roll a random number in [0, 1)
  let accumulated = 0;
  for (let j = 0; j < row.length; j++) { // 3. accumulate until r is covered
    accumulated += row[j];
    if (r <= accumulated) return j;      //    r fell in state j's region
  }
  return row.length - 1;                 // guard against floating-point round-off
}
// 4. Transition: currentState = nextState(P, currentState)
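Wrapping the weighted-choice step in a loop reproduces the visit statistics the simulation tracks. A self-contained sketch (sampleNext re-implements the weighted random choice described above; all names are illustrative):

```javascript
// One weighted random transition out of state i.
function sampleNext(P, i) {
  const r = Math.random();
  let accumulated = 0;
  for (let j = 0; j < P[i].length; j++) {
    accumulated += P[i][j];
    if (r <= accumulated) return j;
  }
  return P[i].length - 1; // guard against floating-point round-off
}

// Run the chain for many steps and tally visit frequencies; by the Law
// of Large Numbers these approach the steady-state distribution.
function visitFrequencies(P, start, steps) {
  const counts = new Array(P.length).fill(0);
  let state = start;
  for (let n = 0; n < steps; n++) {
    state = sampleNext(P, state);
    counts[state] += 1;
  }
  return counts.map(c => c / steps);
}
```

For a chain with steady state [5/6, 1/6], a few hundred thousand steps typically bring the empirical frequencies within about a percent of π.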
The Decision Panel

The simulation includes a "How Next State is Determined" panel that shows this algorithm in action for each step:
This panel updates in real time with each step, making the weighted random choice algorithm completely transparent!

The Trajectory Graph

The bottom panel shows a line graph of visit statistics over time, demonstrating convergence:
As you run more steps, watch the solid lines approach the dashed lines: this visually demonstrates the Law of Large Numbers and convergence to the stationary distribution!

Key Concepts

The Markov Property (Memorylessness)

The defining feature: the future depends only on the present, not the past. No matter how you arrived at state B, the probabilities for leaving B are always the same. This makes Markov Chains mathematically tractable and computationally efficient.

Steady-State Distribution

The vector π such that πP = π. It represents:
Watch the simulation: after many steps, the visit statistics approach the theoretical steady state!

Types of States
Chain Properties
Applications of Markov Chains
Calculating Steady State

For a 2-state chain, the steady state can be solved analytically:
P = [1-a, a; b, 1-b] → π = [b/(a+b), a/(a+b)]
For larger chains, the steady state is found numerically, e.g. by solving the linear system πP = π together with Σ_i π_i = 1, or by power iteration.
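Power iteration is the simplest of these numerical approaches: repeatedly apply π ← πP until the vector stops changing. A minimal sketch that assumes the chain is irreducible and aperiodic, so the iteration converges to the unique steady state (function and variable names are illustrative):

```javascript
// Power iteration on the distribution vector: pi <- pi * P, repeated.
function steadyState(P, iters = 1000) {
  let pi = new Array(P.length).fill(1 / P.length); // start uniform
  for (let t = 0; t < iters; t++) {
    pi = P[0].map((_, j) =>
      pi.reduce((sum, piI, i) => sum + piI * P[i][j], 0) // (pi P)_j
    );
  }
  return pi;
}

// Check against the 2-state closed form with a = 0.3, b = 0.6:
// pi = [b/(a+b), a/(a+b)] = [2/3, 1/3].
const piExample = steadyState([[0.7, 0.3], [0.6, 0.4]]);
```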
Presets Explained

Weather Model

A classic example: tomorrow's weather depends only on today's:
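With hypothetical numbers (the preset's actual probabilities may differ), the 2-state closed form gives the long-run weather mix directly:

```javascript
// Assumed weather chain: P(sun -> sun) = 0.8, so a = P(sun -> rain) = 0.2;
// P(rain -> rain) = 0.5, so b = P(rain -> sun) = 0.5.
// Closed form for a 2-state chain: pi = [b/(a+b), a/(a+b)].
const aSunToRain = 0.2;
const bRainToSun = 0.5;
const piWeather = [
  bRainToSun / (aSunToRain + bRainToSun), // long-run fraction of sunny days (5/7)
  aSunToRain / (aSunToRain + bRainToSun), // long-run fraction of rainy days (2/7)
];
```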
Gambler's Ruin

A classic absorbing chain: a gambler with $1-$2 bets $1 per round:
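The absorption probabilities of such a chain can be computed by a short iteration. A sketch assuming a fair bet on states $0 through $3 (the preset's exact odds may differ; names are illustrative):

```javascript
// Gambler's ruin on states $0..$3, where $0 and $3 are absorbing.
// For a fair bet (win or lose $1 with probability 1/2 each),
// h[i] = P(reach $3 before $0 | start at $i) satisfies
// h[i] = 0.5*h[i-1] + 0.5*h[i+1], solved here by repeated sweeps
// (Gauss-Seidel iteration) until the values settle.
function reachTopProbs(states = 4, iters = 200) {
  const h = new Array(states).fill(0); // h[0] = 0: ruin is absorbing
  h[states - 1] = 1;                   // h[$3] = 1: winning is absorbing
  for (let t = 0; t < iters; t++) {
    for (let i = 1; i < states - 1; i++) {
      h[i] = 0.5 * (h[i - 1] + h[i + 1]);
    }
  }
  return h; // converges to [0, 1/3, 2/3, 1] for a fair bet
}
```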
Question: Starting with $1, what's the probability of reaching $3 vs $0?

PageRank

Simplified web graph: pages link to each other, creating transition probabilities:
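The link-to-probability construction can be sketched on a toy graph. This assumes a hypothetical 3-page web and the standard damping factor d = 0.85 (random-jump probability 1 - d); the preset's graph will differ:

```javascript
const d = 0.85;                      // damping factor
const links = [[1, 2], [2], [0]];    // page 0 links to 1 and 2; 1 to 2; 2 to 0
const n = links.length;

// Each page's outgoing links become equal transition probabilities,
// blended with a uniform random jump of weight (1 - d).
const G = links.map(out =>
  Array.from({ length: n }, (_, j) =>
    (1 - d) / n + (out.includes(j) ? d / out.length : 0)
  )
);

// Ranks are the steady state of G, found by power iteration.
let rank = new Array(n).fill(1 / n);
for (let t = 0; t < 200; t++) {
  rank = G[0].map((_, j) => rank.reduce((s, r, i) => s + r * G[i][j], 0));
}
```

Page 2, which receives links from both other pages, ends up with the highest rank.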
Controls Reference
Visual Panels
Limitations and Extensions