Shannon’s entropy
Definition (Discrete)
Let $X$ be a discrete random variable on a finite set $\mathcal{X}=\{x_1,\dots,x_n\}$, with probability distribution function $p(x)=\Pr(X=x)$. The entropy $H(X)$ of $X$ is defined as
$$H(X)=-\sum _{x\in \mathcal{X}}p(x)\log_b p(x).$$  (1) 
The convention $0\log 0=0$ is adopted in the definition. The logarithm is usually taken to base 2, in which case the entropy is measured in “bits,” or to base $e$, in which case $H(X)$ is measured in “nats.”
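As a quick numerical check of the definition (an illustrative sketch, not part of the original entry; the helper name `entropy` is ours), the entropy of a coin flip can be computed directly in bits or nats:

```python
import math

def entropy(probs, base=2):
    """Shannon entropy of a discrete distribution, using the 0*log(0) = 0 convention."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A fair coin carries exactly one bit of uncertainty.
fair = entropy([0.5, 0.5])                 # 1.0 bit
# A biased coin is less uncertain, hence has lower entropy.
biased = entropy([0.9, 0.1])               # ≈ 0.469 bits
# In nats (base e) the fair coin gives ln 2 ≈ 0.693.
fair_nats = entropy([0.5, 0.5], base=math.e)
```

Note that outcomes with probability zero are simply skipped, which is exactly the $0\log 0=0$ convention.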
If $X$ and $Y$ are random variables on $\mathcal{X}$ and $\mathcal{Y}$ respectively, the joint entropy of $X$ and $Y$ is
$$H(X,Y)=-\sum _{(x,y)\in \mathcal{X}\times \mathcal{Y}}p(x,y)\log_b p(x,y),$$ 
where $p(x,y)$ denotes the joint distribution of $X$ and $Y$.
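As an illustration (a sketch under our own naming, not from the original entry), for two independent fair bits the joint entropy is the sum of the individual entropies:

```python
import math

def entropy(probs, base=2):
    # Shannon entropy with the 0*log(0) = 0 convention
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Joint pmf p(x, y) of two independent fair coin flips:
# each of the four outcomes has probability 1/4.
joint = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
H_XY = entropy(joint.values())   # 2 bits, i.e. H(X) + H(Y) = 1 + 1
```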
Discussion
The Shannon entropy was first introduced by Shannon in 1948 in his landmark paper “A Mathematical Theory of Communication.” The entropy is a functional of the probability distribution function $p(x)$, and is sometimes written as
$$H(p({x}_{1}),p({x}_{2}),\mathrm{\dots},p({x}_{n})).$$ 
Note that the entropy of $X$ does not depend on the actual values of $X$; it depends only on $p(x)$. The definition of Shannon’s entropy can be written as an expectation
$$H(X)=E[-\log_b p(X)].$$ 
The quantity $-\log_b p(x)$ is interpreted as the information content of the outcome $x\in \mathcal{X}$, and is also called the Hartley information of $x$. Hence Shannon’s entropy is the average amount of information contained in the random variable $X$; equivalently, it is the uncertainty removed once the actual outcome of $X$ is revealed.
Characterization
We write $H(X)$ as ${H}_{n}({p}_{1},\mathrm{\dots},{p}_{n})$. The Shannon entropy satisfies the following properties.

1.
For any $n$, ${H}_{n}({p}_{1},\mathrm{\dots},{p}_{n})$ is a continuous and symmetric function of the variables ${p}_{1}$, ${p}_{2},\mathrm{\dots},{p}_{n}$.

2.
Events of probability zero do not contribute to the entropy; that is, for any $n$,
$${H}_{n+1}({p}_{1},\mathrm{\dots},{p}_{n},0)={H}_{n}({p}_{1},\mathrm{\dots},{p}_{n}).$$ 
3.
Entropy is maximized when the probability distribution is uniform. For all $n$,
$${H}_{n}({p}_{1},\mathrm{\dots},{p}_{n})\le {H}_{n}(\frac{1}{n},\mathrm{\dots},\frac{1}{n}).$$ This follows from Jensen’s inequality applied to the concave function $\log_b$:
$$H(X)=E\left[{\mathrm{log}}_{b}\left(\frac{1}{p(X)}\right)\right]\le {\mathrm{log}}_{b}\left(E\left[\frac{1}{p(X)}\right]\right)={\mathrm{log}}_{b}(n).$$ 
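The maximization property can be checked numerically (an illustrative sketch; the helper names are ours): no randomly drawn distribution on $n$ outcomes exceeds the entropy of the uniform one.

```python
import math
import random

def entropy(probs):
    # base-2 Shannon entropy, skipping zero-probability outcomes
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 4
uniform_H = entropy([1 / n] * n)   # log2(4) = 2 bits, the maximum

random.seed(0)
# Entropy of 1000 random distributions on n outcomes; none exceeds uniform_H.
max_random_H = 0.0
for _ in range(1000):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    max_random_H = max(max_random_H, entropy([x / s for x in w]))
```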
4.
If ${p}_{ij}$, $1\le i\le m$, $1\le j\le n$ are nonnegative real numbers summing up to one, and ${q}_{i}={\sum}_{j=1}^{n}{p}_{ij}$, then
$${H}_{mn}({p}_{11},\mathrm{\dots},{p}_{mn})={H}_{m}({q}_{1},\mathrm{\dots},{q}_{m})+\sum _{i=1}^{m}{q}_{i}{H}_{n}(\frac{{p}_{i1}}{{q}_{i}},\mathrm{\dots},\frac{{p}_{in}}{{q}_{i}}).$$ If we partition the $mn$ outcomes of the random experiment into $m$ groups, each containing $n$ elements, we can perform the experiment in two steps: first determine the group to which the actual outcome belongs, and then find the outcome within that group. The probability of observing group $i$ is ${q}_{i}$. The conditional probability distribution function given group $i$ is $({p}_{i1}/{q}_{i},\mathrm{\dots},{p}_{in}/{q}_{i})$. The entropy
$${H}_{n}(\frac{{p}_{i1}}{{q}_{i}},\mathrm{\dots},\frac{{p}_{in}}{{q}_{i}})$$ is the entropy of the probability distribution conditioned on group $i$. Property 4 says that the total information is the sum of the information you gain in the first step, ${H}_{m}({q}_{1},\mathrm{\dots},{q}_{m})$, and a weighted sum of the entropies conditioned on each group.
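The grouping identity in Property 4 can be verified numerically on a small example (an illustrative sketch; the variable names are ours, with $m=2$ groups of $n=3$ outcomes):

```python
import math

def entropy(probs):
    # base-2 Shannon entropy with the 0*log(0) = 0 convention
    return -sum(p * math.log2(p) for p in probs if p > 0)

# m = 2 groups of n = 3 outcomes each; the p[i][j] sum to 1.
p = [[0.1, 0.2, 0.1],
     [0.3, 0.2, 0.1]]
q = [sum(row) for row in p]                       # group marginals q_i

lhs = entropy([pij for row in p for pij in row])  # H_{mn} of the full distribution
rhs = entropy(q) + sum(                           # H_m plus weighted conditional entropies
    qi * entropy([pij / qi for pij in row])
    for qi, row in zip(q, p))
```

The two sides agree exactly, as the identity predicts.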
Khinchin showed in 1957 that the only function satisfying the above assumptions is of the form:
$${H}_{n}({p}_{1},\mathrm{\dots},{p}_{n})=-k\sum _{i=1}^{n}{p}_{i}\log {p}_{i}$$ 
where $k$ is a positive constant, essentially a choice of unit of measure.
Definition (Continuous)
Entropy in the continuous case is called differential entropy (http://planetmath.org/DifferentialEntropy).
Discussion—Continuous Entropy
Despite its seductively analogous form, continuous entropy cannot be obtained as a limiting case of discrete entropy.
We wish to obtain a generally finite measure as the “bin size” goes to zero. In the discrete case, the bin size is the (implicit) width of each of the $n$ (finite or infinite) bins/buckets/states whose probabilities are the ${p}_{i}$. As we generalize to the continuous domain, we must make this width explicit.
To do this, start with a continuous function $f$ discretized as shown in the figure:
As the figure indicates, by the mean value theorem there exists a value ${x}_{i}$ in each bin such that
$$f({x}_{i})\mathrm{\Delta}={\int}_{i\mathrm{\Delta}}^{(i+1)\mathrm{\Delta}}f(x)\mathit{d}x$$  (2) 
and thus the integral of the function $f$ can be approximated (in the Riemann sense) by
$${\int}_{-\infty}^{\infty}f(x)\,dx=\underset{\Delta\to 0}{\lim}\sum _{i=-\infty}^{\infty}f({x}_{i})\Delta,$$  (3) 
where this limit and “bin size goes to zero” are equivalent.
We will denote
$${H}^{\Delta}:=-\sum _{i=-\infty}^{\infty}\Delta f({x}_{i})\log\bigl(\Delta f({x}_{i})\bigr)$$  (4) 
and expanding the $\mathrm{log}$ we have
$${H}^{\Delta}=-\sum _{i=-\infty}^{\infty}\Delta f({x}_{i})\log\bigl(\Delta f({x}_{i})\bigr)$$  (5) 
$$=-\sum _{i=-\infty}^{\infty}\Delta f({x}_{i})\log f({x}_{i})-\sum _{i=-\infty}^{\infty}f({x}_{i})\Delta\log\Delta.$$  (6) 
As $\mathrm{\Delta}\to 0$, we have
$$\sum _{i=-\infty}^{\infty}f({x}_{i})\Delta\to \int f(x)\,dx=1\quad\text{and}$$  (7) 
$$\sum _{i=-\infty}^{\infty}\Delta f({x}_{i})\log f({x}_{i})\to \int f(x)\log f(x)\,dx.$$  (8) 
This leads us to our definition of the differential entropy (continuous entropy):
$$h[f]=\underset{\Delta\to 0}{\lim}\left[{H}^{\Delta}+\log\Delta\right]=-{\int}_{-\infty}^{\infty}f(x)\log f(x)\,dx.$$  (9) 
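This limit can be observed numerically (an illustrative sketch; the helper names are ours). For the uniform density on $[0,2]$ the differential entropy is $\ln 2$ nats, and the discretized quantity $H^{\Delta}+\log\Delta$ recovers it at every bin width:

```python
import math

def discrete_H(pmf):
    # natural-log (nat) entropy with the 0*log(0) = 0 convention
    return -sum(p * math.log(p) for p in pmf if p > 0)

# f is the uniform density on [0, 2], whose differential entropy is ln 2 nats.
a, b = 0.0, 2.0
f = lambda x: 1.0 / (b - a)

for n_bins in (10, 100, 1000):
    delta = (b - a) / n_bins
    # bin probabilities f(x_i) * delta, with x_i taken at each bin's midpoint
    pmf = [f(a + (i + 0.5) * delta) * delta for i in range(n_bins)]
    h_approx = discrete_H(pmf) + math.log(delta)
    # h_approx stays at ln 2 ≈ 0.6931 for every bin width
```

Note that $H^{\Delta}$ itself diverges as $\Delta\to 0$ (it grows like $-\log\Delta$), which is why the correction term $\log\Delta$ is needed.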
Title  Shannon’s entropy 
Canonical name  ShannonsEntropy 
Date of creation  2013-03-22 12:00:53 
Last modified on  2013-03-22 12:00:53 
Owner  kshum (5987) 
Last modified by  kshum (5987) 
Numerical id  26 
Author  kshum (5987) 
Entry type  Definition 
Classification  msc 94A17 
Synonym  entropy 
Synonym  Shannon entropy 
Related topic  Logarithm 
Related topic  ConditionalEntropy 
Related topic  MutualInformation 
Related topic  DifferentialEntropy 
Related topic  HartleyFunction 
Defines  joint entropy 