OpenQuant
Section 1 of 6

Markov Chains

Markov Chains are a fundamental tool for modeling systems that transition between a finite number of states, where the future state depends only on the current state, not on the sequence of events that preceded it.

I. Core Definitions and Properties

The Markov Property

A sequence of random variables $X_1, X_2, X_3, \dots$ is a Markov Chain if it satisfies the Markov Property (or memoryless property): the conditional distribution of the next state, given the present state and all past states, depends only on the present state.

$$\mathbb{P}(X_{t+1} = x_j \mid X_t = x_i, X_{t-1} = x_k, \dots) = \mathbb{P}(X_{t+1} = x_j \mid X_t = x_i)$$

Transition Matrix

For a discrete state space $\mathcal{X} = \{x_1, x_2, \dots, x_n\}$, the dynamics of the chain are governed by the $n \times n$ Transition Matrix $P$.

  • Entry $P_{ij}$: The probability of transitioning from state $x_i$ to state $x_j$.
  • Properties: Each entry $P_{ij} \in [0, 1]$, and each row sums to 1 (i.e., $\sum_{j=1}^n P_{ij} = 1$). This makes $P$ a stochastic matrix.
  • $k$-step Transition: The probability of moving from state $i$ to state $j$ in $k$ steps is the $(i, j)$-th entry of the matrix power $P^k$.
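These properties are easy to verify numerically. A minimal sketch with a made-up two-state chain (the matrix entries are illustrative):

```python
import numpy as np

# Hypothetical 2-state chain: state 0 = "sunny", state 1 = "rainy".
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Each row is a probability distribution, so rows must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# k-step transition probabilities are entries of the matrix power P^k.
P3 = np.linalg.matrix_power(P, 3)
print(P3[0, 1])  # P(X_{t+3} = rainy | X_t = sunny)
```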

II. Classification of States and Chains

The long-term behavior of a Markov Chain is determined by the properties of its states.

| Property | Definition | Relevance |
| --- | --- | --- |
| Irreducible | Every state is reachable from every other state. | Guarantees that a finite chain has a unique stationary distribution. |
| Aperiodic | The chain does not return to a state in a fixed, regular cycle. | Needed for the chain to converge to the stationary distribution regardless of the starting state. |
| Ergodic | Both irreducible and aperiodic. | Crucial: an ergodic chain has a unique stationary distribution, and the chain converges to it over time. |
| Recurrent | The chain is guaranteed to return to the state it left. | All states in a finite, irreducible chain are recurrent. |
| Transient | The chain has a non-zero probability of never returning to the state it left. | The chain eventually leaves transient states forever. |
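For a finite chain, ergodicity can be checked mechanically: the chain is irreducible and aperiodic exactly when some power $P^k$ has all strictly positive entries (a "regular" matrix). A sketch using a hypothetical helper `is_regular` and made-up matrices:

```python
import numpy as np

def is_regular(P, max_power=None):
    """True if some power of P has all strictly positive entries.
    For a finite stochastic matrix this is equivalent to the chain
    being irreducible and aperiodic (i.e., ergodic)."""
    n = P.shape[0]
    # (n-1)^2 + 1 is a classical (Wielandt) bound on the power needed.
    k_max = max_power or (n - 1) ** 2 + 1
    Q = np.eye(n)
    for _ in range(k_max):
        Q = Q @ P
        if np.all(Q > 0):
            return True
    return False

# Periodic two-state chain: it alternates states forever, so not ergodic.
P_periodic = np.array([[0.0, 1.0], [1.0, 0.0]])
# Chain with self-loops: irreducible and aperiodic, hence ergodic.
P_ergodic = np.array([[0.5, 0.5], [0.2, 0.8]])
print(is_regular(P_periodic), is_regular(P_ergodic))  # False True
```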

III. Stationary Distribution and Long-Term Behavior

The Stationary Distribution $\pi = (\pi_1, \dots, \pi_n)$ is a probability vector that, once reached, remains unchanged by further transitions.

  • Defining Equation:
    $$\pi = \pi P, \quad \sum_{i=1}^n \pi_i = 1$$
  • Interpretation: $\pi_i$ is the long-run proportion of time the chain spends in state $x_i$. In finance, this can represent the long-run probability of a market being in a certain regime (e.g., high volatility).
  • Existence and Uniqueness: A stationary distribution exists for any finite-state Markov Chain. It is unique if the chain is irreducible (more generally, whenever the chain has a single recurrent class).
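The defining equation $\pi = \pi P$ together with $\sum_i \pi_i = 1$ is just a linear system. A minimal sketch with a made-up two-regime volatility chain (the entries are illustrative, not calibrated):

```python
import numpy as np

# Hypothetical 2-regime market chain: state 0 = "low vol", state 1 = "high vol".
P = np.array([[0.95, 0.05],
              [0.20, 0.80]])

# Solve pi = pi P subject to sum(pi) = 1:
# stack (P^T - I) with a row of ones and least-squares solve.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # long-run fraction of time spent in each regime
```

A sanity check is that `pi @ P` reproduces `pi`: the distribution is unchanged by one more transition.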

IV. Absorbing Chains and Expected Hitting Time

An Absorbing State $x_i$ is a state from which the chain cannot leave (i.e., $P_{ii} = 1$). A chain is Absorbing if it has at least one absorbing state and every non-absorbing state can reach an absorbing state.

Expected Time to State (Expected Hitting Time)

To find the expected number of steps $\mu_i$ to reach a target state (often an absorbing state) starting from state $x_i$, we solve a system of linear equations.

For a target state $x_n$ (where $\mu_n = 0$):

$$\mu_i = 1 + \sum_{j=1}^{n-1} P_{ij}\mu_j \quad \text{for } i = 1, \dots, n-1$$
  • Example: To find the expected time to reach $x_3$ from $x_1$ in a $3 \times 3$ chain:
    $$\mu_1 = 1 + P_{11}\mu_1 + P_{12}\mu_2 + P_{13}\mu_3 \quad (\text{where } \mu_3 = 0)$$
    $$\mu_2 = 1 + P_{21}\mu_1 + P_{22}\mu_2 + P_{23}\mu_3 \quad (\text{where } \mu_3 = 0)$$
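This system can be solved directly: writing $Q$ for $P$ restricted to the non-target states, the equations become $(I - Q)\mu = \mathbf{1}$. A sketch with a hypothetical $3 \times 3$ chain (entries are illustrative):

```python
import numpy as np

# Hypothetical 3-state chain; we want the expected number of steps to
# reach state 2 starting from states 0 and 1 (mu_2 = 0 by definition).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])

# mu_i = 1 + sum_{j != target} P_ij * mu_j  <=>  (I - Q) mu = 1,
# where Q is P restricted to the non-target states.
Q = P[:2, :2]
mu = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(mu)  # expected hitting times of state 2 from states 0 and 1
```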

Gambler's Ruin Problem (A Classic Absorbing Chain)

This is a classic example of an absorbing Markov Chain where the states are the player's current capital, and the absorbing states are 0 (ruin) and $a+b$ (opponent's ruin).

  • Fair Coin ($p = 0.5$): Starting with capital $a$ against an opponent with capital $b$, the probability of ruin (reaching 0) is
    $$\mathbb{P}(\text{Ruin}) = \frac{b}{a+b},$$
    and the probability of winning (reaching $a+b$) is $\frac{a}{a+b}$.
  • Unfair Coin ($p \ne 0.5$): Let $\rho = \frac{1-p}{p}$ (the odds ratio of losing to winning). The probability of ruin is
    $$\mathbb{P}(\text{Ruin}) = \frac{\rho^a - \rho^{a+b}}{1 - \rho^{a+b}}.$$
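Both ruin formulas are easy to sanity-check numerically. The sketch below implements the closed forms alongside a simple Monte Carlo check (function names and parameters are illustrative):

```python
import random

def ruin_probability(a, b, p):
    """P(hit 0 before a+b) starting from capital a, win prob p per round."""
    if p == 0.5:
        return b / (a + b)
    rho = (1 - p) / p  # odds ratio of losing to winning
    return (rho**a - rho**(a + b)) / (1 - rho**(a + b))

def simulate_ruin(a, b, p, trials=100_000, seed=0):
    """Monte Carlo estimate of the ruin probability."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        x = a
        while 0 < x < a + b:
            x += 1 if rng.random() < p else -1
        ruined += (x == 0)
    return ruined / trials

print(ruin_probability(3, 2, 0.5))   # 2/5 = 0.4
print(ruin_probability(3, 2, 0.45))  # > 0.4: unfavorable odds raise ruin risk
```

The simulation should agree with the closed form to within Monte Carlo noise, which makes this a convenient self-check in an interview setting.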
