# consistent estimator

Let $X_{1},\ldots,X_{n}$ be samples from a probability distribution $f$ with an unknown parameter $\theta\in\Theta$, where the parameter space $\Theta$ is a subset of $\mathbb{R}^{m}$. Let $U=U(X_{1},\ldots,X_{n})$ be an estimator of $\theta$. Allowing the sample size $n$ to vary, we get a sequence of estimators of $\theta$:

 $\displaystyle U_{1}=U(X_{1}),\quad U_{2}=U(X_{1},X_{2}),\quad\ldots,\quad U_{n}=U(X_{1},\ldots,X_{n}),\quad\ldots$

We say that the sequence of estimators $\{U_{n}\}$ is consistent (or that $U$ is a consistent estimator of $\theta$) if $U_{n}$ converges in probability to $\theta$ for every $\theta\in\Theta$. That is, for every $\varepsilon>0$,

 $\lim_{n\rightarrow\infty}P(|U_{n}-\theta|\geq\varepsilon)=0$

for all $\theta\in\Theta$.
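The definition above can be checked numerically. The following is a minimal Monte Carlo sketch (using numpy, with illustrative choices $\theta=2$, $\varepsilon=0.2$, and exponential data): the sample mean $U_{n}$ is consistent for the mean $\theta$ by the weak law of large numbers, so the empirical estimate of $P(|U_{n}-\theta|\geq\varepsilon)$ should shrink as $n$ grows.

```python
import numpy as np

# Monte Carlo sketch of the definition of consistency: for the sample
# mean U_n of n Exponential(theta) observations, approximate
# P(|U_n - theta| >= eps) by simulation and watch it decrease in n.
# (theta = 2.0 and eps = 0.2 are illustrative choices, not from the text.)
rng = np.random.default_rng(0)
theta, eps, reps = 2.0, 0.2, 2000

def exceed_prob(n):
    """Fraction of replications with |U_n - theta| >= eps."""
    samples = rng.exponential(scale=theta, size=(reps, n))
    U_n = samples.mean(axis=1)  # U_n = sample mean of n observations
    return np.mean(np.abs(U_n - theta) >= eps)

probs = [exceed_prob(n) for n in (10, 100, 1000)]
print(probs)  # each entry approximates P(|U_n - theta| >= eps)
```

The probabilities decrease toward $0$, which is exactly the limit statement in the definition; any fixed $\varepsilon>0$ would show the same pattern, only at a different rate.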

Remark. Suppose $U$ is an estimator of $\theta$ such that the sequence $\{U_{n}\}$ is consistent. If $\alpha_{n}\to\alpha\in\mathbb{R}$ and $\beta_{n}\to\beta\in\mathbb{R}^{m}$ are convergent sequences of constants with $\alpha\neq 0$, then the sequence $\{V_{n}\}$, defined by $V_{n}:=\alpha_{n}U_{n}+\beta_{n}$, is a consistent sequence of estimators of $\alpha\theta+\beta$.

###### Proof.

First, observe that

 $\displaystyle\begin{aligned}|V_{n}-(\alpha\theta+\beta)|&=|\alpha_{n}U_{n}+\beta_{n}-\alpha\theta-\beta|\\&\leq|\alpha_{n}U_{n}-\alpha\theta|+|\beta_{n}-\beta|\\&=|\alpha_{n}U_{n}-\alpha_{n}\theta+\alpha_{n}\theta-\alpha\theta|+|\beta_{n}-\beta|\\&\leq|\alpha_{n}U_{n}-\alpha_{n}\theta|+|\alpha_{n}\theta-\alpha\theta|+|\beta_{n}-\beta|\\&=|\alpha_{n}||U_{n}-\theta|+|\alpha_{n}-\alpha||\theta|+|\beta_{n}-\beta|.\end{aligned}$

This implies

 $\displaystyle\begin{aligned}P(|V_{n}-(\alpha\theta+\beta)|\geq\varepsilon)&\leq P(|\alpha_{n}||U_{n}-\theta|+|\alpha_{n}-\alpha||\theta|+|\beta_{n}-\beta|\geq\varepsilon)\\&=P\left(|U_{n}-\theta|\geq\frac{\varepsilon-|\alpha_{n}-\alpha||\theta|-|\beta_{n}-\beta|}{|\alpha_{n}|}\right).\end{aligned}$

As $n\to\infty$, $|\beta_{n}-\beta|\to 0$, $|\alpha_{n}-\alpha||\theta|\to 0$, and $|\alpha_{n}|\to|\alpha|\neq 0$, so the threshold on the right-hand side converges to $\varepsilon/|\alpha|>0$. In particular, for all sufficiently large $n$ the threshold exceeds $\varepsilon/(2|\alpha|)$, and the consistency of $\{U_{n}\}$ then forces the last probability to $0$. Therefore,

 $\lim_{n\to\infty}P(|V_{n}-(\alpha\theta+\beta)|\geq\varepsilon)=0,$

and thus $\{V_{n}\}$ is a consistent sequence of estimators of $\alpha\theta+\beta$. ∎
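The remark can also be checked numerically. Below is a small simulation sketch (all numerical choices are illustrative, not from the text): with $U_{n}$ the sample mean of exponential data with mean $\theta=2$, and hypothetical sequences $\alpha_{n}=1+1/n\to\alpha=1$ and $\beta_{n}=3-1/n\to\beta=3$, the estimated probability $P(|V_{n}-(\alpha\theta+\beta)|\geq\varepsilon)$ should shrink as $n$ grows.

```python
import numpy as np

# Numerical sketch of the remark: if U_n is consistent for theta, then
# V_n = alpha_n * U_n + beta_n is consistent for alpha*theta + beta when
# alpha_n -> alpha != 0 and beta_n -> beta.  Illustrative choices:
# alpha_n = 1 + 1/n -> 1, beta_n = 3 - 1/n -> 3, theta = 2, eps = 0.2.
rng = np.random.default_rng(1)
theta, eps, reps = 2.0, 0.2, 2000
alpha, beta = 1.0, 3.0  # limits of alpha_n and beta_n; target is 5.0

def exceed_prob_V(n):
    """Fraction of replications with |V_n - (alpha*theta + beta)| >= eps."""
    samples = rng.exponential(scale=theta, size=(reps, n))
    U_n = samples.mean(axis=1)              # consistent for theta
    V_n = (1 + 1 / n) * U_n + (3 - 1 / n)   # alpha_n * U_n + beta_n
    return np.mean(np.abs(V_n - (alpha * theta + beta)) >= eps)

probs_V = [exceed_prob_V(n) for n in (10, 100, 1000)]
print(probs_V)  # estimates of P(|V_n - (alpha*theta + beta)| >= eps)
```

Note that for small $n$ the bias terms $|\alpha_{n}-\alpha||\theta|$ and $|\beta_{n}-\beta|$ from the proof are visible in the simulation; both vanish as $n\to\infty$, matching the argument above.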

Title: consistent estimator
Canonical name: ConsistentEstimator
Date of creation: 2013-03-22 15:26:34
Last modified on: 2013-03-22 15:26:34
Owner: CWoo (3771)
Last modified by: CWoo (3771)
Numerical id: 5
Entry type: Definition
Classification: msc 62F12
Synonym: consistent sequence of estimators