existence of the conditional expectation
Let (Ω,ℱ,ℙ) be a probability space and X be a random variable. For any σ-algebra 𝒢⊆ℱ, we show the existence of the conditional expectation 𝔼[X∣𝒢]. Although it is possible to do this using the Radon-Nikodym theorem, a different approach is used here which relies on the completeness of the vector space L².
The defining property of the conditional expectation Y=𝔼[X∣𝒢] is
𝔼[1_G Y]=𝔼[1_G X]   (1)
for sets G∈𝒢. We shall prove the existence of the conditional expectation for all nonnegative random variables and, more generally, whenever 𝔼[|X|∣𝒢] is almost surely finite.
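As a concrete illustration of the defining property (1): on a finite probability space, with 𝒢 generated by a partition of Ω, the conditional expectation is simply the average of X over each partition block, and (1) can be checked directly. The following is a minimal numerical sketch; the sample space, weights, and partition are illustrative assumptions, not part of the entry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite probability space Omega = {0,...,7}; p are the point masses.
p = rng.dirichlet(np.ones(8))
X = rng.normal(size=8)          # a random variable: one value per outcome

# G is generated by the partition {0,1,2,3}, {4,5}, {6,7} of Omega.
blocks = [[0, 1, 2, 3], [4, 5], [6, 7]]

# For a partition-generated sigma-algebra, E[X|G] averages X over each block.
Y = np.empty(8)
for b in blocks:
    Y[b] = np.dot(p[b], X[b]) / p[b].sum()

# Defining property (1): E[1_G Y] = E[1_G X] for each generating block G.
for b in blocks:
    assert np.isclose(np.dot(p[b], Y[b]), np.dot(p[b], X[b]))
```

Since every G∈𝒢 is a union of blocks, checking (1) on the blocks suffices.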
First, the conditional expectation of every square-integrable random variable exists.
Theorem 1.
Suppose that 𝔼[X²]<∞. Then there is a 𝒢-measurable random variable Y satisfying 𝔼[Y²]<∞ such that equation (1) is satisfied for all G∈𝒢.
Proof.
Consider the norm ∥Y∥₂≡𝔼[Y²]^{1/2} on the vector space V=L²(Ω,ℱ,ℙ) of real valued random variables Y satisfying 𝔼[Y²]<∞ (up to ℙ almost everywhere equivalence). This norm is given by the inner product
⟨Y₁,Y₂⟩≡𝔼[Y₁Y₂].
As Lᵖ-spaces are complete, this makes V into a Hilbert space (see also L2-spaces are Hilbert spaces (http://planetmath.org/L2SpacesAreHilbertSpaces)). Then, U≡L²(Ω,𝒢,ℙ) is a complete, and hence closed, subspace of V.
By the existence of orthogonal projections (http://planetmath.org/ProjectionsAndClosedSubspaces) onto closed subspaces of Hilbert spaces, there is an orthogonal projection π:V→U. In particular, ⟨πY-Y,Z⟩=0 for all Y∈V and Z∈U. Setting Y=πX gives
𝔼[1_G Y]-𝔼[1_G X]=⟨1_G,πX-X⟩=0
as required. ∎
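The proof identifies 𝔼[X∣𝒢] with the orthogonal projection of X onto U=L²(Ω,𝒢,ℙ). On a finite space this projection can be computed as a weighted least-squares problem, and it coincides with the block averages. The sketch below assumes, for illustration, a finite space with 𝒢 generated by a partition.

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite probability space Omega = {0,...,5} with point masses p.
p = rng.dirichlet(np.ones(6))
X = rng.normal(size=6)
blocks = [[0, 1], [2, 3, 4], [5]]   # partition generating G

# U = L2(Omega, G, P) is spanned by the block indicator variables.
B = np.zeros((6, len(blocks)))
for j, b in enumerate(blocks):
    B[b, j] = 1.0

# Orthogonal projection of X onto U in the inner product <Y,Z> = E[YZ],
# i.e. a least-squares problem weighted by p (solved via normal equations).
W = np.diag(p)
coef = np.linalg.solve(B.T @ W @ B, B.T @ W @ X)
Y = B @ coef

# The projection is exactly the block-average conditional expectation,
for b in blocks:
    assert np.allclose(Y[b], np.dot(p[b], X[b]) / p[b].sum())

# and property (1) holds for each generating block G.
for b in blocks:
    assert np.isclose(np.dot(p[b], Y[b]), np.dot(p[b], X[b]))
```

The key relation ⟨πX-X,Z⟩=0 for Z∈U is exactly what the normal equations encode.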
We can now prove the existence of conditional expectations of nonnegative random variables. Note that here there are no integrability conditions on X.
Theorem 2.
Let X be a nonnegative random variable taking values in ℝ∪{∞}. Then, there exists a nonnegative 𝒢-measurable random variable Y taking values in ℝ∪{∞} and satisfying (1) for all G∈𝒢. Furthermore, Y is uniquely defined ℙ-almost everywhere (http://planetmath.org/AlmostSurely).
Proof.
First, let Xₙ=min(n,X). As this is bounded, and hence square-integrable, theorem 1 says that the conditional expectations Yₙ=𝔼[Xₙ∣𝒢] exist. Furthermore, as X₀=0, we may take Y₀=0.
For any n, setting G={Yₙ₊₁<Yₙ}∈𝒢 gives
𝔼[1_G(Yₙ-Yₙ₊₁)]=𝔼[1_G(Xₙ-Xₙ₊₁)]≤0.
So 1_G(Yₙ-Yₙ₊₁) is a nonnegative random variable with nonpositive expectation, hence is almost surely equal to zero. Therefore, Yₙ₊₁≥Yₙ (almost surely) and, by replacing Yₙ with the maximum of Y₁,…,Yₙ, we may suppose that (Yₙ) is an increasing sequence of random variables. Setting Y=supₙYₙ, the monotone convergence theorem gives
𝔼[1_G Y]=limₙ𝔼[1_G Yₙ]=limₙ𝔼[1_G Xₙ]=𝔼[1_G X]
as required. For uniqueness, suppose that Ỹ also satisfies these conditions. Setting G={Y<Ỹ,Y≤n} gives 𝔼[1_G(Ỹ-Y)]=𝔼[1_G X]-𝔼[1_G X]=0, with both terms finite, so 1_G(Ỹ-Y) is a nonnegative random variable with zero expectation and ℙ(G)=0. Letting n increase to infinity shows that Ỹ≤Y almost surely and, by symmetry, Y=Ỹ almost surely. ∎
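The truncation argument can be observed numerically: for Xₙ=min(n,X), the conditional expectations Yₙ increase with n and converge to 𝔼[X∣𝒢]. The sketch below uses an assumed finite space with heavy-tailed sample values and a partition generating 𝒢, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Finite space with uniform weights; X is nonnegative and heavy-tailed.
p = np.full(8, 1.0 / 8.0)
X = rng.pareto(0.5, size=8)
blocks = [[0, 1, 2, 3], [4, 5, 6, 7]]   # partition generating G

def cond_exp(Z):
    """Block-average conditional expectation E[Z|G]."""
    out = np.empty_like(Z)
    for b in blocks:
        out[b] = np.dot(p[b], Z[b]) / p[b].sum()
    return out

# Y_n = E[min(n, X) | G] is increasing in n, as shown in the proof above.
prev = np.zeros(8)
for n in [1, 10, 100, 1000]:
    Yn = cond_exp(np.minimum(float(n), X))
    assert np.all(Yn >= prev - 1e-12)
    prev = Yn

# Once n exceeds max(X), the truncation no longer binds and Y_n = E[X|G].
Y = cond_exp(X)
assert np.allclose(cond_exp(np.minimum(X.max() + 1.0, X)), Y)
```

On an infinite space the supremum Y may take the value ∞; here the finite sample keeps everything finite.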
Finally, we show existence of the conditional expectation of every random variable X satisfying 𝔼[|X|∣𝒢]<∞ almost surely. Note, in particular, that this is satisfied whenever X is integrable, as 𝔼[𝔼[|X|∣𝒢]]=𝔼[|X|]<∞.
Theorem 3.
Let X be a random variable such that 𝔼[|X|∣𝒢]<∞ almost surely. Then, there exists a 𝒢-measurable random variable Y such that |Y|<∞ almost surely and (1) is satisfied for every G∈𝒢 with 𝔼[1_G|X|]<∞.
Furthermore, Y is uniquely defined up to ℙ-a.e. equivalence.
Proof.
The positive and negative parts X±≡max(±X,0) of X satisfy
𝔼[X⁺∣𝒢],𝔼[X⁻∣𝒢]≤𝔼[|X|∣𝒢]<∞
almost surely. We can therefore set Y±=𝔼[X±∣𝒢] and Y=Y⁺-Y⁻.
If G∈𝒢 satisfies 𝔼[1_G|X|]<∞ then 𝔼[1_G X±]≤𝔼[1_G|X|]<∞, so 𝔼[1_G Y±]=𝔼[1_G X±] are finite and,
𝔼[1_G Y]=𝔼[1_G Y⁺]-𝔼[1_G Y⁻]=𝔼[1_G X⁺]-𝔼[1_G X⁻]=𝔼[1_G X]
as required.
Finally, suppose that Ỹ satisfies the same conditions as Y. For any n, set Gₙ={Y>Ỹ, 𝔼[|X|∣𝒢]≤n}∈𝒢. Then,
𝔼[1_{Gₙ}|X|]=𝔼[1_{Gₙ}𝔼[|X|∣𝒢]]≤n.
So, 𝔼[1_{Gₙ}Y] and 𝔼[1_{Gₙ}Ỹ] are finite, hence (1) gives
𝔼[1_{Gₙ}(Y-Ỹ)]=𝔼[1_{Gₙ}X]-𝔼[1_{Gₙ}X]=0.
So ℙ(Gₙ)=0 and, letting n increase to infinity, Y≤Ỹ almost surely. Similarly, Ỹ≤Y and therefore Y=Ỹ almost surely. ∎
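Theorem 3's construction via positive and negative parts can likewise be checked on a finite space. The sketch below (with assumed weights and partition, for illustration only) verifies that Y=𝔼[X⁺∣𝒢]-𝔼[X⁻∣𝒢] agrees with the conditional expectation of the signed variable X and satisfies (1).

```python
import numpy as np

rng = np.random.default_rng(3)

p = rng.dirichlet(np.ones(6))       # point masses on Omega = {0,...,5}
X = rng.normal(size=6)              # a signed random variable
blocks = [[0, 1, 2], [3, 4, 5]]     # partition generating G

def cond_exp(Z):
    """Block-average conditional expectation E[Z|G]."""
    out = np.empty_like(Z)
    for b in blocks:
        out[b] = np.dot(p[b], Z[b]) / p[b].sum()
    return out

# Theorem 3's construction: split X into positive and negative parts.
Xp, Xm = np.maximum(X, 0.0), np.maximum(-X, 0.0)
Y = cond_exp(Xp) - cond_exp(Xm)

# Y agrees with E[X|G] (by linearity of the block average), and
assert np.allclose(Y, cond_exp(X))

# property (1) holds for each generating block G.
for b in blocks:
    assert np.isclose(np.dot(p[b], Y[b]), np.dot(p[b], X[b]))
```

On a finite space every 𝔼[1_G|X|] is finite, so the integrability restriction in the theorem is automatic here.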
Title | existence of the conditional expectation
---|---
Canonical name | ExistenceOfTheConditionalExpectation |
Date of creation | 2013-03-22 18:39:28 |
Last modified on | 2013-03-22 18:39:28 |
Owner | gel (22282) |
Last modified by | gel (22282) |
Numerical id | 5 |
Author | gel (22282) |
Entry type | Theorem |
Classification | msc 60A10 |