sufficient statistic
Let $\{f_{\theta}\}$ be a statistical model with parameter $\theta$. Let $\bm{X}=(X_1,\dots,X_n)$ be a vector of random variables representing $n$ observations. A statistic $T=T(\bm{X})$ of $\bm{X}$ for the parameter $\theta$ is called a sufficient statistic, or a sufficient estimator, if the conditional probability distribution of $\bm{X}$ given $T(\bm{X})=t$ is not a function of $\theta$ (equivalently, does not depend on $\theta$).
In other words, all the information about the unknown parameter $\theta $ is captured in the sufficient statistic $T$. If, say, we are interested in finding out the percentage of defective light bulbs in a shipment of new ones, it is enough, or sufficient, to count the number of defective ones (sum of the ${X}_{i}$’s), rather than worrying about which individual light bulbs are the defective ones (the vector $({X}_{1},\mathrm{\dots},{X}_{n})$). By taking the sum, a certain “reduction” of data has been achieved.
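The light-bulb remark can be checked directly for a Bernoulli model. The sketch below (the function name is ours, chosen for illustration) enumerates all outcome vectors with a given number of defectives and shows that the conditional distribution given the count is the same for every defect rate $p$:

```python
from itertools import product
from fractions import Fraction

def conditional_given_sum(n, t, p):
    """P(X = x | sum(X) = t) for n i.i.d. Bernoulli(p) observations."""
    joint = {}
    for x in product((0, 1), repeat=n):
        if sum(x) == t:
            # every x with sum t has the same probability p^t (1-p)^(n-t)
            joint[x] = p**t * (1 - p)**(n - t)
    total = sum(joint.values())
    return {x: v / total for x, v in joint.items()}

# The conditional distribution is identical for two very different defect rates:
d1 = conditional_given_sum(4, 2, Fraction(1, 10))
d2 = conditional_given_sum(4, 2, Fraction(7, 10))
assert d1 == d2   # uniform over the C(4,2) = 6 arrangements
```

Since every arrangement with the same count is equally likely, the conditional distribution is uniform over those arrangements, with no trace of $p$ left.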
Examples

1.
Let $X_1,\dots,X_n$ be $n$ independent observations from a uniform distribution on the integers $1,\dots,\theta$. Let $T=\max\{X_1,\dots,X_n\}$ be a statistic for $\theta$. Then the conditional probability distribution of $\bm{X}=(X_1,\dots,X_n)$ given $T=t$ is
$$P(\bm{X}\mid t)=\frac{P(X_1=x_1,\dots,X_n=x_n,\ \max\{X_i\}=t)}{P(\max\{X_i\}=t)}.$$ The numerator is $0$ if $\max\{x_i\}\ne t$. So in this case, $P(\bm{X}\mid t)=0$ and is not a function of $\theta$. Otherwise, the numerator is $\theta^{-n}$ and $P(\bm{X}\mid t)$ becomes
$$\frac{\theta^{-n}}{P(\max\{X_i\}=t)}=\bigl(\theta^{n}\,P(X_{(1)}\le\cdots\le X_{(n)}=t)\bigr)^{-1},$$ where the $X_{(i)}$'s are the $X_i$'s rearranged in nondecreasing order from $i=1$ to $n$. For the denominator, we first note that
$$P(X_{(1)}\le\cdots\le X_{(n)}=t)=P(X_{(1)}\le\cdots\le X_{(n)}\le t)-P(X_{(1)}\le\cdots\le X_{(n)}\le t-1).$$ From the above equation, we find that there are $t^{n}-(t-1)^{n}$ ways to form nondecreasing finite sequences of $n$ positive integers such that the maximum of the sequence is $t$. So
$$\bigl(\theta^{n}\,P(X_{(1)}\le\cdots\le X_{(n)}=t)\bigr)^{-1}=\bigl(\theta^{n}\,(t^{n}-(t-1)^{n})\,\theta^{-n}\bigr)^{-1}=\bigl(t^{n}-(t-1)^{n}\bigr)^{-1}$$ again is not a function of $\theta$. Therefore, $T=\max\{X_i\}$ is a sufficient statistic for $\theta$. Here, we see that a reduction of data has been achieved by keeping only the largest member of the set of observations, not the entire set.
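The computation above can be verified by brute force for small cases. The sketch below (function name ours) enumerates every sequence, conditions on the maximum, and confirms that the answer $(t^{n}-(t-1)^{n})^{-1}$ appears regardless of $\theta$:

```python
from itertools import product
from fractions import Fraction

def conditional_given_max(n, theta, t):
    """P(X = x | max(X) = t) for n i.i.d. uniforms on {1, ..., theta}."""
    p = Fraction(1, theta) ** n          # each sequence has probability theta^(-n)
    cond = {x: p for x in product(range(1, theta + 1), repeat=n) if max(x) == t}
    total = sum(cond.values())           # equals (t^n - (t-1)^n) / theta^n
    return {x: v / total for x, v in cond.items()}

# The conditional distribution is the same under theta = 4 and theta = 7:
d4 = conditional_given_max(2, 4, 3)
d7 = conditional_given_max(2, 7, 3)
assert d4 == d7
# Each sequence with maximum t gets probability (t^n - (t-1)^n)^(-1) = 1/5:
assert all(v == Fraction(1, 3**2 - 2**2) for v in d4.values())
```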

2.
If we set $T(X_1,\dots,X_n)=(X_1,\dots,X_n)$, then we see that $T$ is trivially a sufficient statistic for any parameter $\theta$: the conditional probability distribution of $(X_1,\dots,X_n)$ given $T$ puts probability $1$ on the observed vector, and so does not depend on $\theta$. Even though this is a sufficient statistic by definition (of course, the individual observations provide all the information there is to know about $\theta$), and there is no loss of data in $T$ (which is simply a list of all observations), there is really no reduction of data to speak of here.

3.
The sample mean
$$\overline{X}=\frac{X_1+\cdots+X_n}{n}$$ of $n$ independent observations from a normal distribution $N(\mu,\sigma^2)$ (both $\mu$ and $\sigma^2$ unknown) is a sufficient statistic for $\mu$. This is a consequence of the factorization criterion. Similarly, one sees that any partition of the sum of the $n$ observations $X_i$ into $m$ subtotals is a sufficient statistic for $\mu$. For instance,
$$T({X}_{1},\mathrm{\dots},{X}_{n})=(\sum _{i=1}^{j}{X}_{i},\sum _{i=j+1}^{k}{X}_{i},\sum _{i=k+1}^{n}{X}_{i})$$ is a sufficient statistic for $\mu $.
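The sufficiency of the sum (and hence of $\overline{X}$) can be seen numerically: for the normal log-likelihood, two samples with the same sum have a log-likelihood ratio that is constant in $\mu$, so the data enter the $\mu$-dependent part of the likelihood only through the sum. A sketch (values and names are ours):

```python
import math

def log_likelihood(xs, mu, sigma):
    """Log-likelihood of i.i.d. N(mu, sigma^2) observations."""
    n = len(xs)
    return (-n / 2 * math.log(2 * math.pi * sigma**2)
            - sum((x - mu)**2 for x in xs) / (2 * sigma**2))

# Two different samples with the same sum (hence the same sample mean):
xs = [1.0, 2.0, 6.0]
ys = [2.5, 3.5, 3.0]
assert sum(xs) == sum(ys)

# Their log-likelihood ratio is the same constant for every value of mu:
ratios = [log_likelihood(xs, mu, 1.5) - log_likelihood(ys, mu, 1.5)
          for mu in (-2.0, 0.0, 3.0, 10.0)]
assert all(abs(r - ratios[0]) < 1e-9 for r in ratios)
```

This is exactly the factorization criterion at work: expanding $\sum(x_i-\mu)^2 = \sum x_i^2 - 2\mu\sum x_i + n\mu^2$, the only term coupling the data to $\mu$ is $\mu\sum x_i$.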

4.
Again, assume there are $n$ independent observations $X_i$ from a normal distribution $N(\mu,\sigma^2)$ with unknown mean and variance. The sample variance
$$\frac{1}{n-1}\sum_{i=1}^{n}(X_i-\overline{X})^{2}$$ is not a sufficient statistic for $\sigma^{2}$. However, if $\mu$ is a known constant, then
$$\frac{1}{n-1}\sum_{i=1}^{n}(X_i-\mu)^{2}$$ is a sufficient statistic for $\sigma^{2}$.
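The known-$\mu$ case can also be checked numerically: with $\mu$ fixed, the normal likelihood depends on the data only through $\sum(x_i-\mu)^2$, so two samples with the same value of that sum are equally likely under every $\sigma$. A sketch (values and names are ours):

```python
import math

def log_likelihood(xs, mu, sigma):
    """Log-likelihood of i.i.d. N(mu, sigma^2) observations."""
    n = len(xs)
    return (-n / 2 * math.log(2 * math.pi * sigma**2)
            - sum((x - mu)**2 for x in xs) / (2 * sigma**2))

mu = 5.0
xs = [5.0 + 3.0, 5.0 - 4.0, 5.0]   # squared deviations from mu: 9 + 16 + 0 = 25
ys = [5.0 + 5.0, 5.0, 5.0]         # squared deviations from mu: 25 + 0 + 0 = 25

# Equal sum of squared deviations => equal likelihood for every sigma:
for sigma in (0.5, 1.0, 2.0, 10.0):
    assert abs(log_likelihood(xs, mu, sigma) - log_likelihood(ys, mu, sigma)) < 1e-9
```

Note that the same two samples have different sample means, so they would not be equally likely if $\mu$ were also being estimated; this is why the statistic built from $\overline{X}$ instead of $\mu$ fails to be sufficient for $\sigma^2$ on its own.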
A sufficient statistic for a parameter $\theta $ is called a minimal sufficient statistic if it can be expressed as a function of any sufficient statistic for $\theta $.
Example. In example $3$ above, both the sample mean $\overline{X}$ and the finite sum $S=X_1+\cdots+X_n$ are minimal sufficient statistics for the mean $\mu$. By the factorization criterion, any sufficient statistic $T$ for $\mu$ is a vector whose coordinates form a partition of the finite sum; summing these coordinates recovers the finite sum $S$. So we have just expressed $S$ as a function of $T$, and therefore $S$ is minimal. Similarly, $\overline{X}$ is minimal.
Two sufficient statistics $T_1,T_2$ for a parameter $\theta$ are said to be equivalent provided that there is a bijection $g$ such that $g\circ T_1=T_2$. $\overline{X}$ and $S$ from the above example are two equivalent sufficient statistics. Two minimal sufficient statistics for the same parameter are equivalent.
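For the pair $S$ and $\overline{X}$, the bijection is simply division by the (fixed) sample size $n$, as this small sketch illustrates (data values are ours):

```python
n = 5
xs = [2.0, 4.0, 1.0, 3.0, 5.0]

S = sum(xs)            # T1: the finite sum
mean = S / n           # T2: the sample mean

g = lambda s: s / n    # a bijection on the reals, since n is fixed
g_inv = lambda m: m * n
assert g(S) == mean and g_inv(mean) == S
```

Because $g$ is invertible, the two statistics carry exactly the same information about $\mu$.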
Title  sufficient statistic 
Canonical name  SufficientStatistic 
Date of creation  2013-03-22 15:02:42 
Last modified on  2013-03-22 15:02:42 
Owner  CWoo (3771) 
Last modified by  CWoo (3771) 
Numerical id  11 
Author  CWoo (3771) 
Entry type  Definition 
Classification  msc 62B05 
Synonym  sufficient estimator 
Synonym  minimally sufficient statistic 
Synonym  minimal sufficient 
Synonym  minimally sufficient 
Defines  minimal sufficient statistic 
Defines  equivalent statistic 