Markov chain

\documentclass{article}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{amsfonts}

% used for TeXing text within eps files
%\usepackage{psfrag}
% need this for including graphics (\includegraphics)
%\usepackage{graphicx}
% for neatly defining theorems and propositions
%\usepackage{amsthm}
% making logically defined graphics
%%%\usepackage{xypic} 

% there are many more packages, add them here as you need them

% define commands here
\newcommand{\md}{d}
\newcommand{\mv}[1]{\mathbf{#1}}	% matrix or vector
\newcommand{\mvt}[1]{\mv{#1}^{\mathrm{T}}}
\newcommand{\mvi}[1]{\mv{#1}^{-1}}
\newcommand{\mderiv}[1]{\frac{\md}{\md {#1}}} %d/dx
\newcommand{\mnthderiv}[2]{\frac{\md^{#2}}{\md {#1}^{#2}}} %d^n/dx
\newcommand{\mpderiv}[1]{\frac{\partial}{\partial {#1}}} %partial d^n/dx
\newcommand{\mnthpderiv}[2]{\frac{\partial^{#2}}{\partial {#1}^{#2}}} %partial d^n/dx
\newcommand{\borel}{\mathfrak{B}}
\newcommand{\integers}{\mathbb{Z}}
\newcommand{\rationals}{\mathbb{Q}}
\newcommand{\reals}{\mathbb{R}}
\newcommand{\complexes}{\mathbb{C}}
\newcommand{\naturals}{\mathbb{N}}
\newcommand{\defined}{:=}
\newcommand{\var}{\mathrm{var}}
\newcommand{\cov}{\mathrm{cov}}
\newcommand{\corr}{\mathrm{corr}}
\newcommand{\set}[1]{\left\{#1\right\}}
\newcommand{\powerset}[1]{\mathcal{P}(#1)}
\newcommand{\bra}[1]{\langle#1 \vert}
\newcommand{\ket}[1]{\vert \hspace{1pt}#1\rangle}
\newcommand{\braket}[2]{\langle #1 \ket{#2}}
\newcommand{\abs}[1]{\left|#1\right|}
\newcommand{\norm}[1]{\left|\left|#1\right|\right|}
\newcommand{\esssup}{\mathrm{ess\ sup}}
\newcommand{\Lspace}[1]{L^{#1}}
\newcommand{\Lone}{\Lspace{1}}
\newcommand{\Ltwo}{\Lspace{2}}
\newcommand{\Lp}{\Lspace{p}}
\newcommand{\Lq}{\Lspace{q}}
\newcommand{\Linf}{\Lspace{\infty}}
\newcommand{\sequence}[1]{\{#1\}}
\begin{document}
\paragraph{Definition}
We begin with a probability space $(\Omega, \mathcal{F}, \mathbb{P})$.  Let $I$ be a countable set, $(X_n : n \ge 0)$ be a collection of random variables taking values in $I$, $\mv{T} = (t_{ij} : i,j \in I)$ be a stochastic matrix, and $\mv{\lambda}$ be a distribution on $I$.  We call $(X_n)_{n\ge 0}$ a \emph{Markov chain} with \emph{initial distribution} $\mv{\lambda}$ and \emph{transition matrix} $\mv{T}$ if:

\begin{enumerate}
\item{$X_0$ has distribution $\mv{\lambda}$.}
\item{For every $n \ge 0$ and all $i_0, \ldots, i_{n+1} \in I$, $\mathbb{P}(X_{n+1} = i_{n+1} \mid X_0 = i_0, \ldots, X_n = i_n) = t_{i_n i_{n+1}}$, whenever the conditioning event has positive probability.}
\end{enumerate}

That is, the next value of the chain depends only on the current value, not on any previous values.  This is often summed up in the pithy phrase, ``Markov chains have no memory.''
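
Together, conditions (1) and (2) determine the joint distribution of the chain: for any $n \ge 0$ and states $i_0, \ldots, i_n \in I$,
\begin{equation*}
\mathbb{P}(X_0 = i_0, X_1 = i_1, \ldots, X_n = i_n) = \lambda_{i_0} t_{i_0 i_1} t_{i_1 i_2} \cdots t_{i_{n-1} i_n}.
\end{equation*}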

As a special case of (2) we have that $\mathbb{P}(X_{n+1} = j \mid X_n = i) = t_{ij}$ for all $i,j \in I$.  The values $t_{ij}$ are therefore called the
\emph{transition probabilities} of the Markov chain.
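
For a concrete (if artificial) illustration, take $I = \set{1, 2}$ and
\begin{equation*}
\mv{T} = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}.
\end{equation*}
Each row sums to $1$, as required of a stochastic matrix; from state $1$ the chain stays in state $1$ with probability $t_{11} = 0.7$ and moves to state $2$ with probability $t_{12} = 0.3$.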

\paragraph{Discussion}
Markov chains are arguably the simplest examples of random processes.  They come in discrete-time and continuous-time versions; the discrete-time version is presented above.
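
One reason the matrix formulation is so convenient: if we regard $\mv{\lambda}$ as a row vector indexed by $I$, then conditions (1) and (2) give the distribution of $X_n$ by repeated matrix multiplication,
\begin{equation*}
\mathbb{P}(X_n = j) = (\mv{\lambda}\, \mv{T}^n)_j, \qquad j \in I.
\end{equation*}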
\end{document}