Markov chain


% used for TeXing text within eps files
% need this for including graphics (\includegraphics)
% for neatly defining theorems and propositions
% making logically defined graphics

% there are many more packages, add them here as you need them

% define commands here
\newcommand{\mv}[1]{\mathbf{#1}}	% matrix or vector
\newcommand{\mderiv}[1]{\frac{\md}{\md {#1}}} %d/dx
\newcommand{\mnthderiv}[2]{\frac{\md^{#2}}{\md {#1}^{#2}}} %d^n/dx
\newcommand{\mpderiv}[1]{\frac{\partial}{\partial {#1}}} %partial d^n/dx
\newcommand{\mnthpderiv}[2]{\frac{\partial^{#2}}{\partial {#1}^{#2}}} %partial d^n/dx
\newcommand{\bra}[1]{\langle#1 \vert}
\newcommand{\ket}[1]{\vert \hspace{1pt}#1\rangle}
\newcommand{\braket}[2]{\langle #1 \ket{#2}}
\newcommand{\esssup}{\mathrm{ess\ sup}}
We begin with a probability space $(\Omega, \mathcal{F}, \mathbb{P})$.  Let $I$ be a countable set, $(X_n)_{n \ge 0}$ be a collection of random variables taking values in $I$, $\mv{T} = (t_{ij} : i,j \in I)$ be a stochastic matrix, and $\mv{\lambda}$ be a distribution on $I$.  We call $(X_n)_{n\ge 0}$ a \emph{Markov chain} with initial distribution $\mv{\lambda}$ and \emph{transition matrix} $\mv{T}$ if:

\begin{enumerate}
\item $X_0$ has distribution $\mv{\lambda}$.
\item For $n \ge 0$, $\mathbb{P}(X_{n+1} = i_{n+1} \mid X_0 = i_0, \ldots, X_n = i_n) = t_{i_n i_{n+1}}$.
\end{enumerate}

That is, the next value of the chain depends only on the current value, not on any previous values.  This is often summed up in the pithy phrase, ``Markov chains have no memory.''
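The definition translates directly into a simulation: draw $X_0$ from $\mv{\lambda}$, then repeatedly draw the next state from the row of $\mv{T}$ indexed by the current state.  The following sketch illustrates this for a hypothetical two-state chain (the names \texttt{T}, \texttt{lam}, and \texttt{simulate} are illustrative, not from the text):

```python
import random

# A hypothetical two-state chain; row i of T is the distribution
# of X_{n+1} given X_n = i.
T = [[0.9, 0.1],
     [0.5, 0.5]]
lam = [1.0, 0.0]   # initial distribution: start in state 0

def simulate(lam, T, n_steps, rng=random.Random(0)):
    """Draw X_0 from lam, then each X_{n+1} from row X_n of T."""
    path = [rng.choices(range(len(lam)), weights=lam)[0]]
    for _ in range(n_steps):
        i = path[-1]  # only the current state matters: "no memory"
        path.append(rng.choices(range(len(T)), weights=T[i])[0])
    return path

print(simulate(lam, T, 10))
```

Note that the loop body consults only \texttt{path[-1]}; the memoryless property is exactly the fact that no earlier entries of \texttt{path} are needed.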

As a special case of (2) we have that $\mathbb{P}(X_{n+1} = j \mid X_n = i) = t_{ij}$ whenever $i,j \in I$.  The values $t_{ij}$ are therefore called
\emph{transition probabilities} for the Markov chain.
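Since $\mv{T}$ is stochastic, each of its rows sums to $1$, and the entries $t_{ij}$ can be recovered empirically as long-run transition frequencies.  A sketch, again using a hypothetical two-state matrix \texttt{T} (an assumption for illustration, not from the text):

```python
import random
from collections import Counter

T = [[0.9, 0.1],
     [0.5, 0.5]]
# Each row of a stochastic matrix is a probability distribution.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in T)

rng = random.Random(1)
state = 0
visits_to_0 = 0          # number of steps taken from state 0
counts = Counter()       # where those steps landed
for _ in range(100_000):
    nxt = rng.choices((0, 1), weights=T[state])[0]
    if state == 0:
        visits_to_0 += 1
        counts[nxt] += 1
    state = nxt

# Empirical estimate of t_{01}: fraction of moves out of state 0
# that land in state 1; should be close to T[0][1] = 0.1.
t01_hat = counts[1] / visits_to_0
print(t01_hat)
```

This is the frequentist reading of a transition probability: $t_{ij}$ is the long-run proportion of visits to $i$ that are immediately followed by $j$.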

Markov chains are arguably the simplest examples of random processes.  They come in discrete and continuous versions; the discrete version is presented above.