proof of Lindemann–Weierstrass theorem and that $e$ and $\pi$ are transcendental
This article provides a proof of the Lindemann–Weierstrass theorem, using a method similar to those used by Ferdinand von Lindemann and Karl Weierstrass. This material is taken from [1] and expanded for clarity.
Before attacking the general case, we first use the same methods to prove two earlier theorems, namely that both $e$ and $\pi$ are transcendental. These proofs introduce the methods to be used in the more general theorem. At the end, we present some trivial but important corollaries.
Both $e$ and $\pi$ were known to be irrational in the 1700s (Euler showed the former; Lambert the latter). But $e$ was not shown to be transcendental until 1873 (by Hermite, see [3] and [4]), and Lindemann showed $\pi$ to be transcendental as well in 1882. He also sketched a proof of the general theorem, which was fleshed out by Weierstrass and Hilbert among others in the late 1800s.
The following construct is used in all three proofs. Suppose $f(x)$ is a real polynomial, and let
$I(t)=\int_{0}^{t}e^{{t-x}}f(x)\,dx.$ 
Integrating by parts, we get
$I(t)=-e^{{t-x}}f(x)\Big|_{0}^{t}+\int_{0}^{t}e^{{t-x}}f^{{\prime}}(x)\,dx=e^{t}f(0)-f(t)+\int_{0}^{t}e^{{t-x}}f^{{\prime}}(x)\,dx.$ 
Continuing, and integrating by parts a total of $m=\deg f$ times, we get
$I(t)=e^{t}\sum_{{j=0}}^{m}f^{{(j)}}(0)-\sum_{{j=0}}^{m}f^{{(j)}}(t)$  (1) 
where $f^{{(j)}}(x)$ is the $j^{{\mathrm{th}}}$ derivative of $f$.
If $f(x)=\sum a_{i}x^{i}$, let $F(x)=\sum\lvert a_{i}\rvert x^{i}$; i.e., the polynomial whose coefficients are the absolute values of those for $f$. Then using trivial bounds on the integrand, we get
$\lvert I(t)\rvert\leq\int_{0}^{t}\lvert e^{{t-x}}f(x)\rvert\,dx\leq\lvert t\rvert e^{{\lvert t\rvert}}F(\lvert t\rvert).$  (2) 
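Equation (1) and the bound (2) can be sanity-checked numerically. The following sketch (the polynomial $f(x)=2x^{2}-3x+1$ and the value $t=1.5$ are arbitrary choices, not taken from the text) compares a direct numerical integration of $I(t)$ against the right-hand side of (1):

```python
from math import exp

# Sample check of equation (1) and bound (2); f and t are arbitrary choices.
def f(x):  return 2 * x * x - 3 * x + 1
def f1(x): return 4 * x - 3          # f'
def f2(x): return 4.0                # f''

def I(t, steps=2000):
    """I(t) = integral of e^{t-x} f(x) over [0, t], by composite Simpson's rule."""
    g = lambda x: exp(t - x) * f(x)
    h = t / steps
    s = g(0) + g(t)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3

t = 1.5
# Equation (1) with m = deg f = 2:
# I(t) = e^t (f(0)+f'(0)+f''(0)) - (f(t)+f'(t)+f''(t))
rhs = exp(t) * (f(0) + f1(0) + f2(0)) - (f(t) + f1(t) + f2(t))
assert abs(I(t) - rhs) < 1e-8

# Bound (2): |I(t)| <= |t| e^{|t|} F(|t|), here F(x) = 2x^2 + 3x + 1
F = lambda x: 2 * x * x + 3 * x + 1
assert abs(I(t)) <= abs(t) * exp(abs(t)) * F(abs(t))
```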
We now proceed to prove the theorems. The proofs of all three are similar, although the proof for $e$ is the easiest. The steps of the proofs are as follows:

Assume the theorem is false, and write down an equation in exponential form that shows that the number in question is algebraic (for $\pi$, we will use $\displaystyle e^{{i\pi}}=-1$ to write the equation in that form).

Define a polynomial or set of polynomials $f$, and an associated number $J$ (or a sequence of numbers) that is a linear combination of the values of $I$ at the exponents in question. The motivation for the choice of $f$ used in each theorem is not given in Hermite’s proof. An excellent exposition of how these definitions are relevant to the problem is given in [2]. In essence, the ratio of the terms of $I(t)$ in the proof that $e$ is transcendental relates to a Padé approximation to $e^{t}$; this approximation is better the larger $\deg f$ gets.

Analyze $J$ to show that it is integral and nonzero, and derive a lower bound on $J$.

Use equation (2) to derive a trivial upper bound on $J$.

Note that the upper bound is lower than the lower bound, disproving the original assumption.
The “magic” in the proof consists of finding the appropriate choice of $f$, as well as in defining the transformation $I$ to begin with. Once that is done, the work in the proof is in showing $J$ integral (which is harder for the more general theorems) and in deriving the lower bound. But the outline of the proof remains the same across all three theorems.
Theorem 1.
$e$ is transcendental.
Proof.
Suppose not, so that $e$ is algebraic. Then $e$ satisfies some integer polynomial $\sum a_{i}x^{i}=0$ with $a_{0}\neq 0$. This means that
$a_{0}+a_{1}e+a_{2}e^{2}+\cdots+a_{n}e^{n}=0.$ 
Let $p$ be a (sufficiently large) prime, define a polynomial of degree $m=(n+1)p-1$ by
$f(x)=x^{{p-1}}(x-1)^{p}\cdots(x-n)^{p}$ 
and let
$J=a_{0}I(0)+a_{1}I(1)+\cdots+a_{n}I(n).$ 
We derive two sets of inconsistent bounds on $J$, thus showing that the original hypothesis is false and $e$ is transcendental.
First, apply equation (1) to $J$:
$\displaystyle J$  $\displaystyle=\sum_{{k=0}}^{n}a_{k}I(k)$  
$\displaystyle=\sum_{{k=0}}^{n}a_{k}\left(e^{k}\sum_{{j=0}}^{m}f^{{(j)}}(0)-\sum_{{j=0}}^{m}f^{{(j)}}(k)\right)$  
$\displaystyle=\left(\sum_{{k=0}}^{n}a_{k}e^{k}\right)\sum_{{j=0}}^{m}f^{{(j)}}(0)-\sum_{{k=0}}^{n}a_{k}\sum_{{j=0}}^{m}f^{{(j)}}(k)$  
$\displaystyle=-\sum_{{j=0}}^{m}\sum_{{k=0}}^{n}a_{k}f^{{(j)}}(k)$ 
where the last equality follows because of the assumed linear dependence of the powers of $e$.
Consider the values of $f^{{(j)}}(k)$. If $j<p-1$, then every term of $f^{{(j)}}(x)$ retains each factor of $f$ to a positive power, so $f^{{(j)}}(k)=0$ for all $k=0,\ldots,n$. If $j=p-1$, then only the initial factor $x^{{p-1}}$ can be differentiated away, and so $f^{{(j)}}(k)=0$ if $k>0$. In the case $j=p-1$, $k=0$, the only term in the derivative that contributes a nonzero value is the one in which $x^{{p-1}}$ is differentiated each time; that term gives
$f^{{(p-1)}}(0)=(p-1)!\,(-1)^{{np}}(n!)^{p}.$ 
Finally, if $j\geq p$, then the terms in $f^{{(j)}}(k)$ that are nonzero are those in which $(x-k)^{p}$ has been differentiated away; such terms carry a coefficient that is a multiple of $p!$.
Now assume $p>n$; then $f^{{(p-1)}}(0)$ is a multiple of $(p-1)!$ but not of $p!$. Putting together the above computations, we get
$J=Np!+a_{0}M(p-1)!=(p-1)!(Np+a_{0}M)$ 
where $M,N$ are integers and $p\nmid M$. Assume also $p>\lvert a_{0}\rvert$; then $Np+a_{0}M$ must be nonzero, and thus $\lvert J\rvert\geq(p-1)!$.
On the other hand, obviously $F(k)\leq(2n)^{m}$ and thus
$\lvert J\rvert\leq\lvert a_{1}\rvert eF(1)+\cdots+\lvert a_{n}\rvert ne^{n}F(n)\leq ane^{n}(2n)^{{(n+1)p-1}}=\frac{ane^{n}}{2n}\left((2n)^{{n+1}}\right)^{p}\leq c^{p}$ 
where $a=\max(\lvert a_{1}\rvert,\ldots,\lvert a_{n}\rvert)$ and $c$ is some constant that does not depend on $p$.
But for $p$ large enough, $(p-1)!>c^{p}$ no matter what $c$ is, so these two bounds on $J$ are contradictory. ∎
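The vanishing and divisibility facts at the heart of this proof can be verified computationally for small parameters, using exact integer arithmetic on coefficient lists. In this sketch the values $n=3$, $p=5$ are arbitrary samples (so $f(x)=x^{4}(x-1)^{5}(x-2)^{5}(x-3)^{5}$):

```python
from math import factorial

def pmul(a, b):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] += ca * cb
    return out

def pdiff(a):
    """Formal derivative of a coefficient list."""
    return [i * c for i, c in enumerate(a)][1:] or [0]

def peval(a, x):
    return sum(c * x**i for i, c in enumerate(a))

n, p = 3, 5                              # sample values with p > n (assumptions)
f = [0] * (p - 1) + [1]                  # x^{p-1}
for k in range(1, n + 1):
    for _ in range(p):
        f = pmul(f, [-k, 1])             # multiply by (x - k), p times

derivs = [f]                             # f, f', f'', ..., f^{(m)}
for _ in range((n + 1) * p - 1):
    derivs.append(pdiff(derivs[-1]))

# f^{(j)}(k) = 0 for j < p-1 (all k), and for j = p-1 when k = 1..n
assert all(peval(derivs[j], k) == 0 for j in range(p - 1) for k in range(n + 1))
assert all(peval(derivs[p - 1], k) == 0 for k in range(1, n + 1))

# f^{(p-1)}(0) = (p-1)! (-1)^{np} (n!)^p: a multiple of (p-1)! but not of p!
v = peval(derivs[p - 1], 0)
assert v == factorial(p - 1) * (-1) ** (n * p) * factorial(n) ** p
assert v % factorial(p - 1) == 0 and v % factorial(p) != 0

# for j >= p, every value f^{(j)}(k) is a multiple of p!
assert all(peval(derivs[j], k) % factorial(p) == 0
           for j in range(p, len(derivs)) for k in range(n + 1))

# and (p-1)! eventually beats c^p for any fixed c, e.g. c = 100
assert any(factorial(q - 1) > 100 ** q for q in range(2, 500))
```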
Theorem 2.
$\pi$ is transcendental.
Proof.
Again suppose not. Then $i\pi$ is also algebraic. Suppose the minimal polynomial, $f$, of $i\pi$ has degree $d$, say
$f(x)=\sum_{{i=0}}^{d}a_{i}x^{i}$ 
with $a_{0}\neq 0$, and let $\theta_{1}=i\pi,\theta_{2},\ldots,\theta_{d}$ be the conjugates of $i\pi$ (the other roots of $f$). Then since $e^{{i\pi}}=-1$, we have
$(1+e^{{\theta_{1}}})(1+e^{{\theta_{2}}})\cdots(1+e^{{\theta_{d}}})=0.$ 
Note that since each $\theta_{i}$ is algebraic, $a_{d}\theta_{i}$ is an algebraic integer.
Each term in this product can be written as a power of $e$, where the exponent is of the form
$\beta_{{\epsilon_{1},\ldots,\epsilon_{d}}}=\epsilon_{1}\theta_{1}+\epsilon_{2}% \theta_{2}+\cdots+\epsilon_{d}\theta_{d}$ 
and each $\epsilon_{i}$ is either $0$ or $1$. Denote by $\alpha_{1},\ldots,\alpha_{n}$ those exponents that are nonzero. Note that at least one exponent is zero, and thus $n<2^{d}$. We then have
$(2^{d}-n)+e^{{\alpha_{1}}}+\cdots+e^{{\alpha_{n}}}=0$  (3) 
We will show that if we define $f$ by
$f(x)=a_{d}^{{np}}x^{{p-1}}(x-\alpha_{1})^{p}\cdots(x-\alpha_{n})^{p}$ 
with $p$ a (sufficiently large) prime, then
$J=I(\alpha_{1})+\cdots+I(\alpha_{n})$ 
satisfies the same incompatible bounds as in the previous theorem.
Let $m=\deg f=(n+1)p-1$. As before, we see that
$\displaystyle J$  $\displaystyle=\sum_{{k=1}}^{n}I(\alpha_{k})$  
$\displaystyle=\sum_{{k=1}}^{n}\left(e^{{\alpha_{k}}}\sum_{{j=0}}^{m}f^{{(j)}}(0)-\sum_{{j=0}}^{m}f^{{(j)}}(\alpha_{k})\right)$  
$\displaystyle=\left(\sum_{{k=1}}^{n}e^{{\alpha_{k}}}\right)\sum_{{j=0}}^{m}f^{{(j)}}(0)-\sum_{{k=1}}^{n}\sum_{{j=0}}^{m}f^{{(j)}}(\alpha_{k})$  
$\displaystyle=(n-2^{d})\sum_{{j=0}}^{m}f^{{(j)}}(0)-\sum_{{j=0}}^{m}\sum_{{k=1}}^{n}f^{{(j)}}(\alpha_{k})$ 
where the last equality follows from equation (3).
The remainder of the proof is quite similar to the above proof for $e$, except that we must first show that the sum over $k$ above is an integer; this was clear in the previous theorem.
Consider the inner sum over $k$. It is clear that this is a symmetric polynomial with integer coefficients in $a_{d}\alpha_{1},\ldots,a_{d}\alpha_{n}$. The $a_{d}\alpha_{i}$ are algebraic integers since the $a_{d}\theta_{i}$ are. By the fundamental theorem of symmetric polynomials, it follows that the inner sum is in fact a polynomial in the elementary symmetric functions on the $a_{d}\alpha_{i}$. Since the $\alpha_{i}$ are the nonzero elements of the $\beta_{{\ldots}}$, we see that the sum is also a polynomial in the elementary symmetric functions of the $a_{d}\beta_{{\ldots}}$, and thus is a symmetric polynomial with integer coefficients in the $a_{d}\theta_{i}$. Again applying the fundamental theorem of symmetric polynomials, we see that the sum over $k$ must be a polynomial in the elementary symmetric functions of the $a_{d}\theta_{i}$. But these elementary symmetric functions are simply the coefficients of the minimal polynomial of $a_{d}i\pi$, which are integers. Thus the inner sum is an integer.
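The key fact used in this paragraph—that a symmetric polynomial with integer coefficients, evaluated at a complete set of conjugates scaled by the leading coefficient $a_{d}$, takes integer values—can be illustrated numerically with power sums. In this sketch the polynomial $2x^{2}-2x-1$ and the range of exponents are arbitrary sample choices:

```python
from math import sqrt, isclose

# theta_1, theta_2: roots of the non-monic integer polynomial 2x^2 - 2x - 1
# (an arbitrary sample); a_d = 2 is its leading coefficient.
a_d = 2
thetas = [(1 + sqrt(3)) / 2, (1 - sqrt(3)) / 2]

# a_d * theta_i are algebraic integers (roots of the monic y^2 - 2y - 2), and
# any symmetric integer polynomial in them -- e.g. each power sum -- is an
# integer, even though the theta_i themselves are irrational.
for k in range(1, 8):
    s = sum((a_d * th) ** k for th in thetas)
    assert isclose(s, round(s), abs_tol=1e-6)
```

For instance the second power sum is $(1+\sqrt{3})^{2}+(1-\sqrt{3})^{2}=8$, an integer, exactly as the fundamental theorem of symmetric polynomials predicts.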
By arguments identical to those in the previous theorem, we have that $f^{{(j)}}(\alpha_{k})=0$ when $j<p$; that $f^{{(j)}}(\alpha_{k})$ is an integral multiple of $p!$ when $j\geq p$; that $f^{{(j)}}(0)$ is an integral multiple of $p!$ when $j\neq p-1$; and that
$f^{{(p-1)}}(0)=(p-1)!\,(-a_{d})^{{np}}(\alpha_{1}\cdots\alpha_{n})^{p}$ 
is an integral multiple of $(p-1)!$ but is not divisible by $p$ if $p$ is chosen to exceed $\lvert a_{d}^{n}\alpha_{1}\cdots\alpha_{n}\rvert$. Thus if also $p>2^{d}-n$, we have that $J$ is nonzero and divisible by $(p-1)!$, and thus $\lvert J\rvert\geq(p-1)!$. But again, similar to the proof in the previous theorem, we have that
$\lvert J\rvert\leq\lvert\alpha_{1}\rvert e^{{\lvert\alpha_{1}\rvert}}F(\lvert\alpha_{1}\rvert)+\cdots+\lvert\alpha_{n}\rvert e^{{\lvert\alpha_{n}\rvert}}F(\lvert\alpha_{n}\rvert)\leq c^{p}$ 
and again these estimates are contradictory. ∎
The Lindemann–Weierstrass theorem generalizes both of these statements and their proofs.
Theorem 3.
If $\alpha_{1},\ldots,\alpha_{n}$ are algebraic and distinct, and if $\beta_{1},\ldots,\beta_{n}$ are algebraic and nonzero, then
$\beta_{1}e^{{\alpha_{1}}}+\cdots+\beta_{n}e^{{\alpha_{n}}}\neq 0.$ 
Note that the facts that $e$ and $\pi$ are transcendental follow trivially from this theorem. For example, if $e$ were algebraic, then $e$ is the root of a polynomial $\sum\beta_{i}x^{i}$ where $\beta_{i}\in\mathbb{Q}$, in contradiction to the theorem.
Proof.
The proof follows the same general lines as above, but there are additional complexities introduced by the arbitrary $\alpha_{i}$. In the proof of the transcendence of $\pi$, we were able to use facts about the relationships among the exponents; no such relationship is available to us in this more general setting.
Again start by supposing
$\beta_{1}e^{{\alpha_{1}}}+\cdots+\beta_{n}e^{{\alpha_{n}}}=0$  (4) 
where the $\alpha_{i},\beta_{i}$ are as given.
We claim we can assume, without loss of generality, that $\beta_{i}\in\mathbb{Z}$. For if not, consider all the expressions formed by substituting for one or more of the $\beta_{i}$ one of its conjugates, and multiply them all together with the equation above. The result is a new expression of the same form (with different $\alpha_{i}$) whose coefficients are rational numbers. Clearing denominators proves the claim.
Next, claim we can assume that the $\alpha_{i}$ are a complete set of conjugates, and that if $\alpha_{i},\alpha_{j}$ are conjugates, then $\beta_{i}=\beta_{j}$. To see this, choose an irreducible integral polynomial having $\alpha_{1},\ldots,\alpha_{n}$ as roots; let $\alpha_{{n+1}},\ldots,\alpha_{N}$ be the remaining roots, and define $\beta_{{n+1}}=\cdots=\beta_{N}=0$. Then clearly we have
$\prod_{{\sigma\in S_{N}}}\left(\beta_{1}e^{{\alpha_{{\sigma(1)}}}}+\cdots+\beta_{N}e^{{\alpha_{{\sigma(N)}}}}\right)=0$ 
(Note the similarity with the proof for $\pi$.) There are $N!$ factors in this product, so expanding the product, it is a sum of terms of the form
$e^{{h_{1}\alpha_{1}+\cdots+h_{N}\alpha_{N}}}$ 
with integral coefficients, and $h_{1}+\cdots+h_{N}=N!$. Clearly the set of all such exponents forms a complete set of conjugates. By symmetry considerations, we see that the coefficients of two conjugate terms are equal. Also, the product is not identically zero. To see this, consider the term in the product formed by multiplying together, from each factor, the nonzero terms with the largest exponents in the lexicographic order on $\mathbb{C}$. Since the $\alpha_{i}$ are distinct (because the polynomial is irreducible), there is only one term with this largest exponent, and it has a nonzero coefficient by construction.
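The symmetrization step can be checked numerically in the smallest case $N=2$. In this sketch the conjugate exponents are the roots $\pm\sqrt{2}$ of $x^{2}-2$ and the integer coefficients $b_{1}=3$, $b_{2}=5$ are arbitrary samples:

```python
from math import exp, sqrt, isclose

# Sample data: conjugate exponents +-sqrt(2) (roots of x^2 - 2) and arbitrary
# integer coefficients b1, b2.
b1, b2 = 3, 5
s = sqrt(2)

# product over both permutations of the exponents
prod_val = (b1 * exp(s) + b2 * exp(-s)) * (b1 * exp(-s) + b2 * exp(s))

# expanded, the exponents are 2*sqrt(2), 0, -2*sqrt(2) -- again a complete
# conjugate set -- with the conjugate exponents sharing the same integer
# coefficient b1*b2, and the zero exponent carrying b1^2 + b2^2
expanded = b1 * b2 * exp(2 * s) + (b1 ** 2 + b2 ** 2) + b1 * b2 * exp(-2 * s)
assert isclose(prod_val, expanded, rel_tol=1e-12)
```

With these samples the product expands to $15e^{{2\sqrt{2}}}+34+15e^{{-2\sqrt{2}}}$: conjugate exponents carry equal integer coefficients, and the product is visibly nonzero.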
Finally, order the terms so that the conjugates of a particular $\alpha_{i}$ appear together. That is, for the remainder of the proof we may assume that
$\beta_{1}e^{{\alpha_{1}}}+\cdots+\beta_{n}e^{{\alpha_{n}}}=0$ 
with the $\beta_{i}\in\mathbb{Z}$, and that there are integers $0=n_{0}<n_{1}<\cdots<n_{r}=n$ chosen so that, for each $0\leq t<r$, we have
$\displaystyle\alpha_{{n_{t}+1}},\ldots,\alpha_{{n_{{t+1}}}}\text{ form a complete set of conjugates}$  
$\displaystyle\beta_{{n_{t}+1}}=\beta_{{n_{t}+2}}=\cdots=\beta_{{n_{{t+1}}}}$ 
Now, since $\alpha_{i},\beta_{i}$ are algebraic, we can choose $l$ such that $l\alpha_{i},l\beta_{i}$ are algebraic integers. Let
$f_{i}(x)=l^{{np}}\frac{\left((x-\alpha_{1})\cdots(x-\alpha_{n})\right)^{p}}{x-\alpha_{i}},\ 1\leq i\leq n$ 
where again $p$ is a (large) prime. We will develop contradictory estimates for $\lvert J_{1}\cdots J_{n}\rvert$, where
$J_{i}=\beta_{1}I_{i}(\alpha_{1})+\cdots+\beta_{n}I_{i}(\alpha_{n}),\ 1\leq i\leq n$ 
and $I_{i}$ is the integral associated with $f_{i}$, as above (see equation (1)).
Using equations (1) and (4), we see that
$\displaystyle J_{i}$  $\displaystyle=\sum_{{k=1}}^{n}\beta_{k}I_{i}(\alpha_{k})$  
$\displaystyle=\sum_{{k=1}}^{n}\left(\beta_{k}e^{{\alpha_{k}}}\sum_{{j=0}}^{{np-1}}f_{i}^{{(j)}}(0)\right)-\sum_{{k=1}}^{n}\left(\beta_{k}\sum_{{j=0}}^{{np-1}}f_{i}^{{(j)}}(\alpha_{k})\right)$  
$\displaystyle=\left(\sum_{{j=0}}^{{np-1}}f_{i}^{{(j)}}(0)\right)\left(\sum_{{k=1}}^{n}\beta_{k}e^{{\alpha_{k}}}\right)-\sum_{{k=1}}^{n}\left(\beta_{k}\sum_{{j=0}}^{{np-1}}f_{i}^{{(j)}}(\alpha_{k})\right)$  
$\displaystyle=-\sum_{{j=0}}^{{np-1}}\sum_{{k=1}}^{n}\beta_{k}f_{i}^{{(j)}}(\alpha_{k}).$ 
Arguing similarly to the foregoing proofs, we see that $f_{i}^{{(j)}}(\alpha_{k})$ is an algebraic integer divisible by $p!$ unless $j=p-1$ and $k=i$. In this particular case, we have that
$f_{i}^{{(p-1)}}(\alpha_{i})=l^{{np}}(p-1)!\prod_{{\substack{k=1\\ k\neq i}}}^{n}(\alpha_{i}-\alpha_{k})^{p}$ 
and so again, if $p$ is large enough, this is divisible by $(p-1)!$ but not by $p!$. Thus each $J_{i}$ is an algebraic integer divisible by $(p-1)!$ but not by $p!$; in particular, as before, $J_{i}\neq 0$.
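The displayed formula for $f_{i}^{{(p-1)}}(\alpha_{i})$ can be verified numerically for small sample data. In this sketch the values $\alpha=(0.5,2,-1.5)$, $l=2$, $n=3$, $p=3$, $i=1$ are all arbitrary assumptions, not taken from the text:

```python
from math import factorial, isclose, prod

def pmul(a, b):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] += ca * cb
    return out

def pdiff(a):
    """Formal derivative of a coefficient list."""
    return [i * c for i, c in enumerate(a)][1:] or [0.0]

def peval(a, x):
    return sum(c * x**i for i, c in enumerate(a))

# sample data (all assumptions): n = 3 exponents, p = 3, l = 2, index i = 0
alphas, l, p = [0.5, 2.0, -1.5], 2, 3
n, i = len(alphas), 0

# f_i(x) = l^{np} (x - alpha_i)^{p-1} * prod_{k != i} (x - alpha_k)^p
f_i = [float(l ** (n * p))]
for k, a in enumerate(alphas):
    for _ in range(p - 1 if k == i else p):
        f_i = pmul(f_i, [-a, 1.0])

# differentiate p-1 times and evaluate at alpha_i
d = f_i
for _ in range(p - 1):
    d = pdiff(d)
lhs = peval(d, alphas[i])

# the closed form: l^{np} (p-1)! prod_{k != i} (alpha_i - alpha_k)^p
rhs = l ** (n * p) * factorial(p - 1) * prod(
    (alphas[i] - a) ** p for k, a in enumerate(alphas) if k != i)
assert isclose(lhs, rhs, rel_tol=1e-9)
```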
$J_{i}$ can be written as follows:
$J_{i}=-\sum_{{j=0}}^{{np-1}}\sum_{{t=0}}^{{r-1}}\beta_{{n_{{t+1}}}}\left(f_{i}^{{(j)}}(\alpha_{{n_{t}+1}})+\cdots+f_{i}^{{(j)}}(\alpha_{{n_{{t+1}}}})\right)$ 
Note that by construction, $f_{i}(x)$ can be written as a polynomial in $x$ whose coefficients are polynomials in $\alpha_{i}$ with integer coefficients that do not depend on $i$. Thus, noting that the $\alpha_{i}$ form a complete set of conjugates and using the fundamental theorem on symmetric polynomials as in the previous proof, we see that the product of the $J_{i}$ is in fact a rational number. But it is also an algebraic integer, hence an integer. Thus $J_{1}\cdot\ldots\cdot J_{n}\in\mathbb{Z}$, and it is divisible by $((p-1)!)^{n}$. Thus $\lvert J_{1}\cdots J_{n}\rvert\geq((p-1)!)^{n}$. But the same estimate as in the previous proofs shows that for each $i$,
$\lvert J_{i}\rvert\leq\sum_{{k=1}}^{n}\lvert\beta_{k}\rvert\lvert I_{i}(\alpha_{k})\rvert\leq\sum_{{k=1}}^{n}\lvert\beta_{k}\alpha_{k}\rvert e^{{\lvert\alpha_{k}\rvert}}F_{i}(\lvert\alpha_{k}\rvert)$ 
which as before is $\leq c^{p}$ for some sufficiently large $c$. These estimates are again in contradiction, proving the theorem.
∎
Note that Theorems 1 and 2 are trivial corollaries of Theorem 3, as one would expect. (To prove that $\pi$ is transcendental, note that if it were algebraic, then $e^{{i\pi}}=-1$ would be transcendental, a contradiction.)
Here are some other more or less trivial corollaries.
Corollary 4.
If $\alpha\neq 0$ is algebraic, then $e^{\alpha}$ is transcendental.
Proof.
If it were algebraic, say $e^{\alpha}=\beta$, then we would have
$e^{\alpha}-\beta e^{0}=0$ 
in contradiction to the above theorem, since $\alpha\neq 0$. ∎
Corollary 5.
If $\alpha\neq 0$ is algebraic, then $\cos\alpha$ and $\sin\alpha$ are both transcendental.
Proof.
Recall that $\cos\alpha+i\sin\alpha=e^{{i\alpha}}$, which is transcendental by Corollary 4. If either $\cos\alpha$ or $\sin\alpha$ were algebraic, then the other would be as well, since $\cos^{2}\alpha+\sin^{2}\alpha=1$ and the algebraic numbers are closed under taking square roots; but then their sum $\cos\alpha+i\sin\alpha$ would be algebraic, a contradiction. Therefore, both $\cos\alpha$ and $\sin\alpha$ are transcendental. ∎
Corollary 6.
If $\alpha>0$ is algebraic with $\alpha\neq 1$, then $\ln\alpha$ is transcendental.
Proof.
If $\beta=\ln\alpha$ were algebraic, then since $\alpha\neq 1$ gives $\beta\neq 0$, Corollary 4 would make $\alpha=e^{\beta}$ transcendental, a contradiction. ∎
References
 1 A. Baker, Transcendental Number Theory, Cambridge University Press, 1990.
 2 H. Cohn, A Short Proof of the Simple Continued Fraction Expansion of e, American Mathematical Monthly, Jan. 2006, pp. 57–62.
 3 C. Hermite, Sur la fonction exponentielle, Comptes Rendus Acad. Sci. Paris 77 (1873), pp. 18–24, 74–79, 226–233, and 285–293; also in Œuvres, v. 3, Gauthier-Villars, Paris, 1912, pp. 150–181.
 4 U. G. Mitchell, M. Strain, Osiris, Vol. 1, Jan. 1936, pp. 476–496.
Mathematics Subject Classification
11J85, 12D99