Markov Chains
Markov Chains are a fundamental tool for modeling systems that transition between a finite number of states, where the future state depends only on the current state, not on the sequence of events that preceded it.
I. Core Definitions and Properties
The Markov Property
A sequence of random variables $\{X_n\}$ is a Markov Chain if it satisfies the Markov Property (or memoryless property): the conditional probability distribution of the next state, given the present state and all the past states, depends only on the present state:

$P(X_{n+1} = j \mid X_n = i, X_{n-1}, \dots, X_0) = P(X_{n+1} = j \mid X_n = i)$
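A minimal sketch of the memoryless property, using a hypothetical two-state market chain (the state names and probabilities are illustrative, not from the notes). The key point is that `step` sees only the current state, never the earlier path:

```python
import random

# Hypothetical two-state chain: transition probabilities out of each state.
P = {
    "bull": {"bull": 0.8, "bear": 0.2},
    "bear": {"bull": 0.3, "bear": 0.7},
}

def step(state, rng):
    """Sample the next state given only the current one (Markov property)."""
    states = list(P[state])
    weights = [P[state][s] for s in states]
    return rng.choices(states, weights=weights)[0]

rng = random.Random(0)   # seeded for reproducibility
path = ["bull"]
for _ in range(5):
    path.append(step(path[-1], rng))
print(path)
```

Because each call to `step` conditions only on `path[-1]`, the simulated sequence satisfies the Markov property by construction.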
Transition Matrix
For a discrete state space $S = \{1, \dots, N\}$, the dynamics of the chain are governed by the Transition Matrix $P$.
- Entry $P_{ij}$: The probability of transitioning from state $i$ to state $j$ in one step.
- Properties: Each entry $P_{ij} \ge 0$, and the sum of entries for each row must total 1 (i.e., $\sum_j P_{ij} = 1$). This makes $P$ a stochastic matrix.
- $n$-step Transition: The probability of moving from state $i$ to state $j$ in $n$ steps is given by the $(i, j)$-th entry of the matrix power $P^n$.
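The $n$-step rule can be checked numerically. A minimal sketch with a hypothetical two-state weather chain (the matrix values are illustrative assumptions):

```python
import numpy as np

# Hypothetical 2-state chain: state 0 = sunny, state 1 = rainy.
P = np.array([
    [0.9, 0.1],   # from sunny: stay sunny 0.9, turn rainy 0.1
    [0.5, 0.5],   # from rainy: turn sunny 0.5, stay rainy 0.5
])

# Each row of a stochastic matrix must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# n-step transition probabilities are the entries of the matrix power P^n.
n = 3
Pn = np.linalg.matrix_power(P, n)
print(Pn[0, 1])   # probability of rain 3 steps after a sunny day -> 0.156
```

Note that $P^n$ is again a stochastic matrix, so its rows also sum to 1.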
II. Classification of States and Chains
The long-term behavior of a Markov Chain is determined by the properties of its states.
| Property | Definition | Relevance |
|---|---|---|
| Irreducible | Every state is reachable from every other state. | For a finite chain, guarantees that the stationary distribution exists and is unique. |
| Aperiodic | The chain does not return to a state in a fixed, regular cycle. | Necessary for the chain to converge to the stationary distribution regardless of the starting state. |
| Ergodic | A chain that is both irreducible and aperiodic. | Crucial: An ergodic chain has a unique stationary distribution, and the chain will converge to it over time. |
| Recurrent | The chain is guaranteed to return to the state it left. | All states in a finite, irreducible chain are recurrent. |
| Transient | The chain has a non-zero probability of never returning to the state it left. | The chain will eventually leave transient states forever. |
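Two of these properties can be tested mechanically for a finite chain. A sketch, assuming the standard facts that a chain is irreducible iff $(I + P)^{N-1}$ has no zero entry, and that a chain with some strictly positive power $P^k$ (a "regular" chain) is ergodic:

```python
import numpy as np

def is_irreducible(P):
    """Every state reachable from every other: (I + P)^(N-1) has no zero entry."""
    N = P.shape[0]
    reach = np.linalg.matrix_power(np.eye(N) + P, N - 1)
    return bool(np.all(reach > 0))

def is_regular(P, max_power=50):
    """Regular (hence ergodic) if some power P^k is strictly positive."""
    Pk = P.copy()
    for _ in range(max_power):
        if np.all(Pk > 0):
            return True
        Pk = Pk @ P
    return False

# A deterministic 2-cycle: irreducible, but periodic (period 2), so not ergodic.
P_cycle = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
print(is_irreducible(P_cycle), is_regular(P_cycle))   # True False
```

The 2-cycle example shows why both conditions in the "Ergodic" row are needed: every state is reachable, yet the chain never converges because it alternates forever.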
III. Stationary Distribution and Long-Term Behavior
The Stationary Distribution $\pi$ is a probability vector that, once reached, remains unchanged by further transitions.
- Defining Equation: $\pi P = \pi$, with $\sum_i \pi_i = 1$ and $\pi_i \ge 0$.
- Interpretation: $\pi_i$ is the long-run proportion of time the chain spends in state $i$. In finance, this can represent the long-run probability of a market being in a certain regime (e.g., high volatility).
- Existence and Uniqueness: A stationary distribution exists for any finite-state Markov Chain. It is unique if and only if the chain is irreducible.
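The defining equation $\pi P = \pi$ together with $\sum_i \pi_i = 1$ is a linear system and can be solved directly. A sketch using a hypothetical two-regime market chain (the transition probabilities are illustrative assumptions):

```python
import numpy as np

# Hypothetical two-regime chain: state 0 = low volatility, state 1 = high volatility.
P = np.array([[0.95, 0.05],
              [0.20, 0.80]])

# pi P = pi  <=>  (P^T - I) pi = 0; stack the normalization sum(pi) = 1 on top
# and solve the overdetermined system by least squares.
N = P.shape[0]
A = np.vstack([P.T - np.eye(N), np.ones(N)])
b = np.concatenate([np.zeros(N), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)                        # long-run regime probabilities
assert np.allclose(pi @ P, pi)   # unchanged by a further transition
```

Here the chain spends 80% of the long run in the low-volatility regime, because transitions out of it (0.05) are four times rarer than transitions back into it (0.20).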
IV. Absorbing Chains and Expected Hitting Time
An Absorbing State is a state from which the chain cannot leave (i.e., $P_{ii} = 1$). A chain is Absorbing if it has at least one absorbing state and every non-absorbing state can reach an absorbing state.
Expected Time to State (Expected Hitting Time)
To find the expected number of steps $h_i$ to reach a target state $j$ (often an absorbing state) starting from state $i$, we solve a system of linear equations.

For a target state $j$ (where $h_j = 0$):

$h_i = 1 + \sum_k P_{ik} h_k \quad \text{for } i \ne j$

- Example: to find the expected time to reach an absorbing state, write one such equation for every non-target state and solve the resulting linear system.
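The system above can be solved in one matrix operation: collect the non-target states, let $Q$ be $P$ restricted to them, and solve $(I - Q)h = \mathbf{1}$. A sketch with a hypothetical 3-state chain chosen so the answer is easy to verify by hand:

```python
import numpy as np

def expected_hitting_times(P, target):
    """Solve h_target = 0 and h_i = 1 + sum_k P_ik h_k for i != target."""
    N = P.shape[0]
    others = [i for i in range(N) if i != target]
    Q = P[np.ix_(others, others)]   # transition probabilities among non-target states
    h_sub = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    h = np.zeros(N)
    h[others] = h_sub
    return h

# Hypothetical chain: 0 -> 1 surely; 1 -> 0 or 2 with prob 1/2 each; 2 absorbing.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0]])
h = expected_hitting_times(P, target=2)
print(h)   # [4. 3. 0.]
```

Hand check: $h_1 = 1 + \tfrac{1}{2}h_0$ and $h_0 = 1 + h_1$ give $h_1 = 3$, $h_0 = 4$, matching the solver.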
Gambler's Ruin Problem (A Classic Absorbing Chain)
This is a classic example of an absorbing Markov Chain where the states are the player's current capital, and the absorbing states are $0$ (ruin) and $N$ (opponent's ruin).
- Fair Coin ($p = 1/2$): The probability of ruin (reaching 0) starting with capital $i$ against an opponent with capital $N - i$ is $\frac{N - i}{N}$; the probability of winning (reaching $N$) is $\frac{i}{N}$.
- Unfair Coin ($p \ne 1/2$): Let $r = q/p$, the odds ratio of losing to winning, with $q = 1 - p$. The probability of ruin is $\frac{r^i - r^N}{1 - r^N}$.
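Both cases can be wrapped in a single function. A sketch using exact rational arithmetic so the fair-coin case can be checked against $\frac{N-i}{N}$ directly:

```python
from fractions import Fraction

def ruin_probability(i, N, p):
    """Probability of hitting 0 before N, starting from capital i, win prob p."""
    if p == Fraction(1, 2):
        return Fraction(N - i, N)          # fair-coin formula
    q = 1 - p
    r = q / p                              # odds ratio of losing to winning
    return (r**i - r**N) / (1 - r**N)      # unfair-coin formula

# Fair coin: capital 3 against an opponent holding 7 (total stake 10).
print(ruin_probability(3, 10, Fraction(1, 2)))   # 7/10

# Coin favoring the house (p = 2/5): ruin becomes much more likely.
print(ruin_probability(3, 10, Fraction(2, 5)))
```

With $p = 2/5$ the ruin probability jumps to roughly 0.96, illustrating how sensitive the gambler's position is to even a modest house edge.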