Lagrange multipliers on manifolds

In this article we discuss the theoretical aspects of the Lagrange multiplier method.

To enhance understanding, proofs and intuitive explanations of the Lagrange multiplier method are given from several different viewpoints, both elementary and advanced.

1 Statements of the theorem

Let N be an n-dimensional differentiable manifold (without boundary), and let f: N → ℝ and gᵢ: N → ℝ, for i = 1, …, k, be continuously differentiable. Set M = g₁⁻¹({0}) ∩ ⋯ ∩ gₖ⁻¹({0}).

1.1 Formulation with differential forms

Theorem 1.

Suppose the dgᵢ are linearly independent at each point of M. If p ∈ M is a local minimum or maximum point of f restricted to M, then there exist Lagrange multipliers λ₁, …, λₖ ∈ ℝ, depending on p, such that

    df(p) = λ₁ dg₁(p) + ⋯ + λₖ dgₖ(p).
Here, d denotes the exterior derivative.

Of course, as in one-dimensional calculus, the condition df(p) = ∑ᵢ λᵢ dgᵢ(p) by itself does not guarantee that p is a minimum or maximum point, even locally.
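To make this concrete, here is a minimal numerical sketch (the example, with N = ℝ², g(x, y) = y, and f(x, y) = x³, is chosen for illustration and is not from the original text). At p = (0, 0) the condition holds with λ = 0, yet p is not a local extremum of f on M = {y = 0}:

```python
# Hypothetical example: f(x, y) = x^3 constrained to M = {y = 0}.
# The multiplier condition df(p) = λ dg(p) holds at p = (0, 0) with λ = 0,
# but p is neither a local minimum nor a local maximum of f on M.
def f(x, y):
    return x**3

def grad_f(x, y):
    return (3 * x**2, 0.0)   # df = 3x^2 dx

def grad_g(x, y):
    return (0.0, 1.0)        # g(x, y) = y, so dg = dy

p = (0.0, 0.0)
lam = 0.0
# The multiplier condition holds componentwise:
assert all(abs(a - lam * b) < 1e-12 for a, b in zip(grad_f(*p), grad_g(*p)))
# Yet f takes both signs along M arbitrarily near p:
eps = 1e-3
assert f(eps, 0.0) > f(*p) > f(-eps, 0.0)
```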

1.2 Formulation with gradients

The version of Lagrange multipliers typically used in calculus is the special case N = ℝⁿ in Theorem 1. In this case, the conclusion of the theorem can also be written in terms of gradients instead of differential forms:

Theorem 2.

Suppose the gradients ∇gᵢ are linearly independent at each point of M. If p ∈ M is a local minimum or maximum point of f restricted to M, then there exist Lagrange multipliers λ₁, …, λₖ ∈ ℝ, depending on p, such that

    ∇f(p) = λ₁ ∇g₁(p) + ⋯ + λₖ ∇gₖ(p).
This formulation and the first one are equivalent, since the 1-form df can be identified with the gradient ∇f via the formula ∇f(p) ⋅ v = df(p; v) = dfₚ(v).
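As a worked sketch of Theorem 2 (the objective and constraint here are chosen for illustration, not taken from the original): maximize f(x, y) = x + y on the unit circle g(x, y) = x² + y² − 1 = 0. The condition ∇f = λ∇g reads (1, 1) = λ(2x, 2y), which together with the constraint forces x = y = 1/√2 and λ = 1/√2 at the maximum. A numerical check:

```python
import math

# Maximize f(x, y) = x + y subject to g(x, y) = x^2 + y^2 - 1 = 0.
# Solving ∇f = λ∇g by hand gives the candidate below; verify it numerically.
lam = 1 / math.sqrt(2)
p = (1 / math.sqrt(2), 1 / math.sqrt(2))

grad_f = (1.0, 1.0)            # ∇f = (1, 1)
grad_g = (2 * p[0], 2 * p[1])  # ∇g = (2x, 2y)

# ∇f(p) = λ ∇g(p) componentwise:
assert all(abs(a - lam * b) < 1e-12 for a, b in zip(grad_f, grad_g))

# p really is the maximum over the constraint circle (sampled check):
f = lambda x, y: x + y
best = max(f(math.cos(t), math.sin(t))
           for t in (2 * math.pi * i / 10000 for i in range(10000)))
assert f(*p) >= best - 1e-6
```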

1.3 Formulation with tangent maps

The functions gᵢ can also be coalesced into a vector-valued function g: N → ℝᵏ. Then we have:

Theorem 3.

Let g = (g₁, …, gₖ): N → ℝᵏ. Suppose the tangent map Dg is surjective at each point of M. If p ∈ M is a local minimum or maximum point of f restricted to M, then there exists a Lagrange multiplier vector λ ∈ (ℝᵏ)*, depending on p, such that

    df(p) = Dg(p)* λ.
Here, Dg(p)*: (ℝᵏ)* → (TₚN)* denotes the pullback of the linear transformation Dg(p): TₚN → ℝᵏ.

If Dg is represented by its Jacobian matrix, then the condition that it be surjective is equivalent to the Jacobian matrix having full rank, namely rank k.
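For instance (a hypothetical pair of constraints chosen for this sketch): with g₁(x, y, z) = x² + y² + z² − 1 and g₂(x, y, z) = z, the Jacobian at p = (1, 0, 0) has rows ∇g₁ = (2, 0, 0) and ∇g₂ = (0, 0, 1), and a small rank computation confirms full rank k = 2:

```python
def rank(rows, tol=1e-12):
    """Rank of a small matrix (list of row lists) by Gaussian elimination."""
    rows = [r[:] for r in rows]
    r = 0
    for col in range(len(rows[0])):
        # find a pivot row for this column
        pivot = next((i for i in range(r, len(rows)) if abs(rows[i][col]) > tol), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            factor = rows[i][col] / rows[r][col]
            rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# Jacobian of g = (g1, g2) at p = (1, 0, 0); row i is the gradient of g_i.
Dg = [[2.0, 0.0, 0.0],   # ∇g1 = (2x, 2y, 2z) at p
      [0.0, 0.0, 1.0]]   # ∇g2 = (0, 0, 1)
assert rank(Dg) == 2     # full rank: Dg(p) is surjective
```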

Note the deliberate use of the space (ℝᵏ)* instead of ℝᵏ (to which the former is isomorphic) for the Lagrange multiplier vector. It turns out that the Lagrange multiplier vector naturally lives in the dual space and not in the original vector space ℝᵏ. This distinction is particularly important in the infinite-dimensional generalizations of Lagrange multipliers. But even in the finite-dimensional setting, we see hints that the dual space has to be involved, because a transpose appears in the matrix expression for Lagrange multipliers.

If the expression Dg(p)*λ is written out in coordinates, then it is apparent that the components λᵢ of the vector λ are exactly the Lagrange multipliers from Theorems 1 and 2.

2 Proofs

The proof of the Lagrange multiplier theorem is surprisingly short and elegant when properly phrased in the language of abstract manifolds and differential forms.

However, for the benefit of readers not versed in these topics, we provide, in addition to the abstract proof, a concrete translation of the arguments in the more familiar setting N = ℝⁿ.

2.1 Beautiful abstract proof


Since the dgᵢ are linearly independent at each point of M = g₁⁻¹({0}) ∩ ⋯ ∩ gₖ⁻¹({0}), M is an embedded submanifold of N, of dimension m = n − k. Let α: U → M, with U open in ℝᵐ, be a coordinate chart for M such that α(0) = p. Then α*f has a local minimum or maximum at 0, and therefore 0 = d(α*f) = α*df at 0. But α* at p is an isomorphism (TₚM)* → (T₀ℝᵐ)*, so the preceding equation says that df vanishes on TₚM.

Now, by the definition of the gᵢ, we have α*gᵢ = 0, so 0 = d(α*gᵢ) = α*dgᵢ. So, like df, each dgᵢ vanishes on TₚM.

In other words, dgᵢ(p) is in the annihilator (TₚM)⁰ of the subspace TₚM ⊆ TₚN. Since TₚM has dimension m = n − k, and TₚN has dimension n, the annihilator (TₚM)⁰ has dimension k. Now the dgᵢ(p) ∈ (TₚM)⁰ are linearly independent, so they must in fact be a basis for (TₚM)⁰. But we had argued that df(p) ∈ (TₚM)⁰. Therefore df(p) may be written as a unique linear combination of the dgᵢ(p):

    df(p) = λ₁ dg₁(p) + ⋯ + λₖ dgₖ(p). ∎
The last paragraph of the previous proof can also be rephrased, based on the same underlying ideas, to make evident the fact that the Lagrange multiplier vector lives in the dual space (ℝᵏ)*.

Alternative argument.

A general theorem in linear algebra states that for any linear transformation L, the image of the pullback L* is the annihilator of the kernel of L. Since ker Dg(p) = TₚM and df(p) ∈ (TₚM)⁰, it immediately follows that there exists λ ∈ (ℝᵏ)* such that df(p) = Dg(p)*λ. ∎
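For completeness, the quoted linear-algebra fact can be verified by a standard dimension count; the following sketch uses the finite-dimensional rank equality rank L* = rank L.

```latex
% For a linear map L : V --> W between finite-dimensional vector spaces,
% the pullback L^* : W^* --> V^* is defined by (L^*\omega)(v) = \omega(Lv).
%
% Step 1: \operatorname{im} L^* \subseteq (\ker L)^0, since for v \in \ker L:
%   (L^*\omega)(v) = \omega(Lv) = \omega(0) = 0.
%
% Step 2: both sides have the same dimension:
\begin{align*}
\dim \operatorname{im} L^*
  &= \operatorname{rank} L^* = \operatorname{rank} L \\
  &= \dim V - \dim \ker L
   = \dim (\ker L)^0 .
\end{align*}
% Hence \operatorname{im} L^* = (\ker L)^0.
```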

Yet another proof could be devised by observing that the result is obvious if N = ℝⁿ and the constraint functions are just the coordinate projections on ℝⁿ:

    gᵢ(y₁, …, yₙ) = yᵢ,  i = 1, …, k.

We clearly must have ∂f/∂yₖ₊₁ = ⋯ = ∂f/∂yₙ = 0 at a point p that minimizes f(y) over y₁ = ⋯ = yₖ = 0. The general case can be reduced to this by a coordinate change:

Alternate argument.

Since the dgᵢ are linearly independent, we can find a coordinate chart for N about the point p, with coordinate functions y₁, …, yₙ: N → ℝ such that yᵢ = gᵢ for i = 1, …, k. Then

    df = (∂f/∂y₁) dy₁ + ⋯ + (∂f/∂yₙ) dyₙ,

but ∂f/∂yₖ₊₁ = ⋯ = ∂f/∂yₙ = 0 at the point p. Set λᵢ = ∂f/∂gᵢ at p. ∎

2.2 Clumsy, but down-to-earth proof


We assume that N = ℝⁿ. Consider the vector-valued function g = (g₁, …, gₖ) discussed earlier, and its Jacobian matrix Dg in Euclidean coordinates. The ith row of this matrix is

    ( ∂gᵢ/∂x₁  ⋯  ∂gᵢ/∂xₙ ),

which is the gradient ∇gᵢ written as a row vector.
So the matrix Dg has full rank (i.e. rank Dg = k) if and only if the k gradients ∇gᵢ are linearly independent.

Consider each solution q ∈ M of g(q) = 0. Since Dg has full rank, we can apply the implicit function theorem, which states that there exist smooth solution parameterizations α: U → M around each point q ∈ M. (Here U is an open set in ℝᵐ, with m = n − k.) These α are the coordinate charts which give M = g⁻¹({0}) a manifold structure.

We now consider in particular the point q = p; without loss of generality, assume α(0) = p. Then f∘α is a function on Euclidean space having a local minimum or maximum at 0, so its derivative vanishes at 0. Calculating by the chain rule, we have 0 = D(f∘α)(0) = Df(p) Dα(0). In other words, ker Df(p) ⊇ range Dα(0) = TₚM. Intuitively, this says that the directional derivatives of f at p in directions lying in the tangent space TₚM of the manifold M vanish.
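The vanishing of D(f∘α)(0) can be checked numerically in a toy case (chosen for illustration): parameterize the unit circle by α(t) = (cos(t + π/4), sin(t + π/4)), so that α(0) = p = (1/√2, 1/√2), the point where f(x, y) = x + y attains its constrained maximum; then the derivative of f∘α at 0 is zero.

```python
import math

# Toy check of 0 = D(f ∘ α)(0): parameterize the unit circle so that
# α(0) = p = (1/√2, 1/√2), the constrained maximizer of f(x, y) = x + y.
def alpha(t):
    return (math.cos(t + math.pi / 4), math.sin(t + math.pi / 4))

def f(x, y):
    return x + y

# Central finite difference of f ∘ α at t = 0:
h = 1e-6
deriv = (f(*alpha(h)) - f(*alpha(-h))) / (2 * h)
assert abs(deriv) < 1e-6   # the derivative along the constraint vanishes at p
```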

By the definition of g and α, we have g∘α = 0. By the chain rule again, we derive 0 = Dg(p) Dα(0).

Let the columns of Dα(0) be the column vectors v₁, …, vₘ, which span the m-dimensional space TₚM, and look at the matrix equation 0 = Df(p) Dα(0) again. The equation for each entry of this matrix, which consists of only one row, is:

    0 = Df(p) vⱼ = ∇f(p) ⋅ vⱼ,  j = 1, …, m.
In other words, ∇f(p) is orthogonal to v₁, …, vₘ, and hence it is orthogonal to the entire tangent space TₚM.

Similarly, the matrix equation 0 = Dg(p) Dα(0) can be split into individual scalar equations:

    0 = ∇gᵢ(p) ⋅ vⱼ,  i = 1, …, k,  j = 1, …, m.
Thus each ∇gᵢ(p) is orthogonal to TₚM. But the ∇gᵢ(p) are, by hypothesis, linearly independent, and there are k of these gradients, so they must form a basis for the orthogonal complement of TₚM, which has n − m = k dimensions. Hence ∇f(p) can be written as a unique linear combination of the ∇gᵢ(p):

    ∇f(p) = λ₁ ∇g₁(p) + ⋯ + λₖ ∇gₖ(p). ∎
3 Intuitive interpretations

We now discuss the intuitive and geometric interpretations of Lagrange multipliers.

3.1 Normals to tangent hyperplanes

Each equation gᵢ = 0 defines a hypersurface Mᵢ in ℝⁿ, a manifold of dimension n − 1. If we consider the tangent hyperplane at p of these hypersurfaces, TₚMᵢ, the gradient ∇gᵢ(p) gives the normal vector to these hyperplanes.

The manifold M is the intersection of the hypersurfaces Mᵢ. Presumably, the tangent space TₚM is the intersection of the TₚMᵢ, and the subspace perpendicular to TₚM would be spanned by the normals ∇gᵢ(p). Now, the directional derivatives at p of f with respect to each vector in TₚM, as we have proved, vanish. So the direction of ∇f(p), the direction of greatest change in f at p, should be perpendicular to TₚM. Hence ∇f(p) can be written as a linear combination of the ∇gᵢ(p).

Note, however, that this geometric picture, and the manipulations with the gradients ∇f(p) and ∇gᵢ(p), do not carry over to abstract manifolds. The notions of gradients and normals to surfaces depend on the inner product structure of ℝⁿ, which is not present in an abstract manifold (without a Riemannian metric).

On the other hand, this explains the mysterious appearance of annihilators in the last paragraph of the abstract proof. Annihilators and dual space theory serve as the proper tools to formalize the manipulations we made with the matrix equations 0 = Df(p) Dα(0) and 0 = Dg(p) Dα(0), without resorting to Euclidean coordinates, which, of course, are not even defined on an abstract manifold.

3.2 With infinitesimals

If we are willing to interpret the quantities df and dgᵢ as infinitesimals, even the abstract version of the result has an intuitive explanation. Suppose we are at the point p of the manifold M, and consider an infinitesimal movement Δp about this point. The infinitesimal movement Δp is a vector in the tangent space TₚM, because, near p, M looks like the linear space TₚM. And as p moves, the function f changes by a corresponding infinitesimal amount df that is approximately linear in Δp.

Furthermore, the change df may be decomposed as the sum of a change as p moves along the manifold M, and a change as p moves out of the manifold M. But if f has a local minimum at p, then there cannot be any first-order change of f along M; thus f only changes when moving out of M. Now M is described by the equations gᵢ = 0, so a movement out of M is described by the infinitesimal changes dgᵢ. As df is linear in the change Δp, we ought to be able to write it as a weighted sum of the changes dgᵢ. The weights are, of course, the Lagrange multipliers λᵢ.

The linear algebra performed in the abstract proof can be regarded as the precise, rigorous translation of the preceding argument.

3.3 As rates of substitution

Observe that the formula for Lagrange multipliers is formally very similar to the standard formula for expressing a differential form in terms of a basis:

    df = (∂f/∂x₁) dx₁ + ⋯ + (∂f/∂xₙ) dxₙ.
In fact, if the dgᵢ(p) are linearly independent, then they do form a basis for (TₚM)⁰, which can be extended to a basis for (TₚN)*. By the uniqueness of the basis representation, we must have

    λᵢ = ∂f/∂gᵢ at p,  i = 1, …, k.
That is, λᵢ is the derivative of f with respect to changes in gᵢ.

In applications of Lagrange multipliers to economic problems, the multipliers λᵢ are rates of substitution: they give the rate of improvement in the objective function f as the constraints gᵢ are relaxed.
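A small numerical sketch of this interpretation (the problem data are invented for illustration): maximize f(x, y) = xy subject to x + y = c. Solving ∇f = λ∇g gives x = y = c/2 and λ = c/2, and the optimal value is V(c) = c²/4; the multiplier indeed equals dV/dc, the rate of improvement as the constraint is relaxed.

```python
# Maximize f(x, y) = x*y subject to x + y = c.
# ∇f = (y, x) and ∇g = (1, 1), so x = y = λ = c/2 at the optimum,
# and the optimal value is V(c) = (c/2)^2.
def optimal_value(c):
    return (c / 2) ** 2

c = 2.0
lam = c / 2                      # multiplier at the optimum
h = 1e-6
dV_dc = (optimal_value(c + h) - optimal_value(c - h)) / (2 * h)
assert abs(dV_dc - lam) < 1e-6   # λ = marginal value of relaxing the constraint
```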

4 Stationary points

In applications, sometimes we are interested in finding stationary points p of f: points such that df vanishes on TₚM, or equivalently, such that the Taylor expansion of f at p, under any system of coordinates for M, has no terms of first order. The Lagrange multiplier method works for this situation too.

The following theorem incorporates the more general notion of stationary points.

Theorem 4.

Let N be an n-dimensional differentiable manifold (without boundary), and let f: N → ℝ and gᵢ: N → ℝ, for i = 1, …, k, be continuously differentiable. Suppose p ∈ M = g₁⁻¹({0}) ∩ ⋯ ∩ gₖ⁻¹({0}), and the dgᵢ(p) are linearly independent.

Then p is a stationary point (e.g. a local extremum point) of f restricted to M if and only if there exist λ₁, …, λₖ ∈ ℝ such that

    df(p) = λ₁ dg₁(p) + ⋯ + λₖ dgₖ(p).
The Lagrange multipliers λᵢ, which depend on p, are unique when they exist.

In this formulation, M is not necessarily a manifold, but it is one when intersected with a sufficiently small neighborhood about p. So it makes sense to talk about TₚM, although we are abusing notation here. The subspace in question can be described more accurately as the subspace of TₚN annihilated by span{dgᵢ(p)}, that is, ker dg₁(p) ∩ ⋯ ∩ ker dgₖ(p).

It is also enough that the dgᵢ be linearly independent only at the point p. For the dgᵢ are continuous, so they remain linearly independent at points near p; we may then restrict our attention to a sufficiently small neighborhood of p, and the proofs carry through.

The proof involves only simple modifications to that of Theorem 1: for instance, the converse implication follows because we have already proved that the dgᵢ(p) form a basis for the annihilator of TₚM, independently of whether p is a stationary point of f on M.
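To illustrate a stationary point that is not a local extremum (an example constructed for this sketch): take f(x, y) = x³ on the unit circle g(x, y) = x² + y² − 1 = 0. At p = (0, 1) we have df(p) = 0 = 0 · dg(p), so the multiplier condition holds with λ = 0 and p is a stationary point; yet f changes sign along the circle through p.

```python
import math

# f(x, y) = x^3 restricted to the unit circle; p = (0, 1) is stationary
# (df(p) = 0 = λ dg(p) with λ = 0) but not a local extremum on the circle.
def f(x, y):
    return x**3

p = (0.0, 1.0)
grad_f = (3 * p[0]**2, 0.0)      # ∇f = (3x^2, 0), which is (0, 0) at p
grad_g = (2 * p[0], 2 * p[1])    # ∇g = (2x, 2y)
lam = 0.0
assert all(abs(a - lam * b) < 1e-12 for a, b in zip(grad_f, grad_g))

# Moving along the circle through p, f takes both signs:
t = 1e-3
q_plus = (math.sin(t), math.cos(t))     # nearby points on the circle
q_minus = (math.sin(-t), math.cos(-t))
assert f(*q_plus) > f(*p) > f(*q_minus)
```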


Title Lagrange multipliers on manifolds
Canonical name LagrangeMultipliersOnManifolds
Date of creation 2013-03-22 15:25:45
Last modified on 2013-03-22 15:25:45
Owner stevecheng (10074)
Last modified by stevecheng (10074)
Numerical id 24
Author stevecheng (10074)
Entry type Topic
Classification msc 58C05
Classification msc 49-00
Related topic Manifold