# lecture notes on polynomial interpolation

## 2 The polynomial interpolation problem

###### Definition 1.

Let $\mathbf{x}\in\mathbb{R}^{n}$ be given. Define $\operatorname{ev}_{\mathbf{x}}:\mathcal{P}_{m}\to\mathbb{R}^{n}$, the multi-evaluation mapping, to be the linear transformation given by

 $\operatorname{ev}_{\mathbf{x}}:p\mapsto\begin{bmatrix}p(x_{1})\\ \vdots\\ p(x_{n})\end{bmatrix},\quad p\in\mathcal{P}_{m}.$
###### Problem 2 (Polynomial interpolation).

Given $(x_{1},y_{1}),\ldots,(x_{n},y_{n})\in\mathbb{R}^{2}$, find a polynomial $p\in\mathcal{P}_{m}$ such that $p(x_{i})=y_{i}$ for $i=1,\ldots,n$. Equivalently, given $\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}$, find the preimage $\operatorname{ev}_{\mathbf{x}}^{-1}(\mathbf{y})$.

## 3 Vandermonde matrix, polynomial, and determinant

###### Definition 4.

For a given $\mathbf{x}\in\mathbb{R}^{n}$, the following matrix

 $\mathsf{VM}(\mathbf{x})=\mathsf{VM}(x_{1},\ldots,x_{n})=\begin{bmatrix}1&x_{1}&x_{1}^{2}&\ldots&x_{1}^{n-1}\\ 1&x_{2}&x_{2}^{2}&\ldots&x_{2}^{n-1}\\ 1&x_{3}&x_{3}^{2}&\ldots&x_{3}^{n-1}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&x_{n}&x_{n}^{2}&\ldots&x_{n}^{n-1}\end{bmatrix}$

is called the Vandermonde matrix. The expression,

 $\mathsf{V}(\mathbf{x})=\mathsf{V}(x_{1},\ldots,x_{n})=\prod_{1\leq i<j\leq n}(x_{j}-x_{i})$

is called the Vandermonde polynomial. Note: we define $\mathsf{V}(x_{1})=1$; the usual convention is that the empty product is equal to $1$.

###### Proposition 5.

The Vandermonde matrix is the transformation matrix of $\operatorname{ev}_{\mathbf{x}}:\mathcal{P}_{n-1}\to\mathbb{R}^{n}$ with the monomial basis $[1,x,\ldots,x^{n-1}]$ as the input basis and the standard basis $[\mathbf{e}_{1},\ldots,\mathbf{e}_{n}]$ as the output basis.
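In other words, evaluating a polynomial at all the nodes is the same as multiplying its coefficient vector by $\mathsf{VM}(\mathbf{x})$. A short NumPy sketch illustrates this (the particular nodes and coefficients are arbitrary; coefficients are listed from the constant term up, as `numpy.polynomial` expects):

```python
import numpy as np

# Nodes x_1, ..., x_n and monomial coefficients of p in P_{n-1}.
x = np.array([0.0, 1.0, 2.0, 3.0])
c = np.array([2.0, -1.0, 0.5, 1.0])   # p(t) = 2 - t + 0.5 t^2 + t^3

# Vandermonde matrix with columns 1, x, x^2, x^3 (increasing powers).
VM = np.vander(x, increasing=True)

# ev_x(p) computed two ways: direct evaluation, and matrix multiplication.
ev_direct = np.polynomial.polynomial.polyval(x, c)
ev_matrix = VM @ c

print(np.allclose(ev_direct, ev_matrix))   # True
```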

###### Theorem 6.

The Vandermonde polynomial gives the determinant  of the Vandermonde matrix:

 $\left|\begin{matrix}1&x_{1}&\ldots&x_{1}^{n-1}\\ 1&x_{2}&\ldots&x_{2}^{n-1}\\ \vdots&\vdots&\ddots&\vdots\\ 1&x_{n}&\ldots&x_{n}^{n-1}\end{matrix}\right|=\prod_{1\leq i<j\leq n}(x_{j}-x_{i})\qquad(1)$

or more succinctly,

 $\det\mathsf{VM}(\mathbf{x})=\mathsf{V}(\mathbf{x}).$
###### Proof.
 $\mathsf{VM}(x_{1})=\begin{bmatrix}1\end{bmatrix},\quad\mathsf{V}(x_{1})=1.$

Evidently then, the formula holds for $n=1$. Next, suppose that the formula holds for a given $n$; we show that it is then valid for $n+1$. Fix $x_{1},\ldots,x_{n}\in\mathbb{R}$, let $x$ be a variable, and consider the $n^{\text{th}}$ degree polynomial

 $p(x)=\det\mathsf{VM}(x_{1},\ldots,x_{n},x)=\left|\begin{matrix}1&x_{1}&\ldots&x_{1}^{n-1}&x_{1}^{n}\\ 1&x_{2}&\ldots&x_{2}^{n-1}&x_{2}^{n}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 1&x_{n}&\ldots&x_{n}^{n-1}&x_{n}^{n}\\ 1&x&\ldots&x^{n-1}&x^{n}\end{matrix}\right|$

By the properties of determinants (a determinant with two equal rows vanishes), $x_{1},\ldots,x_{n}$ are roots of $p(x)$. Taking the cofactor expansion along the bottom row, we see that the coefficient of $x^{n}$ is $\mathsf{V}(x_{1},\ldots,x_{n})$. Since a polynomial of degree $n$ is determined by its $n$ roots together with its leading coefficient,

 $p(x)=\mathsf{V}(x_{1},\ldots,x_{n})(x-x_{1})\cdots(x-x_{n})=\mathsf{V}(x_{1},\ldots,x_{n},x),$

as was to be shown. ∎
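Theorem 6 lends itself to a quick numerical sanity check. The following NumPy sketch (the sample nodes are arbitrary) compares the determinant of the Vandermonde matrix against the product formula:

```python
import numpy as np
from itertools import combinations
from math import prod

x = np.array([0.5, 1.0, 2.0, 3.5, 4.0])   # arbitrary distinct nodes

# Left-hand side: determinant of the Vandermonde matrix.
det_VM = np.linalg.det(np.vander(x, increasing=True))

# Right-hand side: the Vandermonde polynomial, prod over pairs i < j.
V = prod(x[j] - x[i] for i, j in combinations(range(len(x)), 2))

print(np.isclose(det_VM, V))   # True
```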

## 4 Lagrange interpolation formula

Let $x_{1},\ldots,x_{n}\in\mathbb{R}$ be distinct. We know that $\operatorname{ev}_{\mathbf{x}}:\mathcal{P}_{n-1}\to\mathbb{R}^{n}$ is invertible. Let $\operatorname{pol}_{\mathbf{x}}:\mathbb{R}^{n}\to\mathcal{P}_{n-1}$ denote the inverse. In principle, this inverse is described by the inverse of the Vandermonde matrix. Is there another way to solve the interpolation problem? For $i=1,\ldots,n$ let us define the polynomial

 $p_{i}(x)=\prod_{j\neq i}\frac{x-x_{j}}{x_{i}-x_{j}}.$

These degree $n-1$ polynomials have been “engineered” so that $p_{i}(x_{i})=1$ and $p_{i}(x_{j})=0$ for $j\neq i$.

###### Theorem 7 (Lagrange interpolation formula).

Let $x_{1},\ldots,x_{n}\in\mathbb{R}$ be distinct. Then,

 $\operatorname{pol}_{\mathbf{x}}(\mathbf{y})=y_{1}p_{1}+\cdots+y_{n}p_{n}.$
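The formula can be sketched directly in NumPy (the nodes and data below are illustrative; `lagrange_basis` and `pol` are hypothetical helper names, not a library API):

```python
import numpy as np

def lagrange_basis(x, i, t):
    """Evaluate p_i(t) = prod_{j != i} (t - x[j]) / (x[i] - x[j])."""
    terms = [(t - x[j]) / (x[i] - x[j]) for j in range(len(x)) if j != i]
    return np.prod(terms, axis=0)

def pol(x, y, t):
    """Evaluate the Lagrange interpolant y_1 p_1 + ... + y_n p_n at t."""
    return sum(y[i] * lagrange_basis(x, i, t) for i in range(len(x)))

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 7.0])    # values of t^2 + t + 1 at the nodes

# The interpolant reproduces the data ...
print(np.allclose(pol(x, y, x), y))        # True
# ... and, since t^2 + t + 1 lies in P_2, agrees with it everywhere.
print(np.isclose(pol(x, y, 5.0), 31.0))    # True
```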

## 5 Underdetermined and overdetermined interpolation

###### Definition 8.

Let $T:\mathcal{U}\to\mathcal{V}$ be a linear transformation of finite dimensional vector spaces. A linear problem is an equation of the form

 $T(\mathbf{u})=\mathbf{v},$

where $\mathbf{v}\in\mathcal{V}$ is given, and $\mathbf{u}\in\mathcal{U}$ is the unknown. To be more precise, the problem is to determine the preimage $T^{-1}(\mathbf{v})$ for a given $\mathbf{v}\in\mathcal{V}$. We say that the problem is overdetermined if $\dim\mathcal{V}>\dim\mathcal{U}$, i.e. if there are more equations than unknowns. The linear problem is said to be underdetermined if $\dim\mathcal{V}<\dim\mathcal{U}$, i.e., if there are more variables than equations.

###### Remark 9.

By the rank-nullity theorem, an underdetermined linear problem is either inconsistent or has multiple solutions. Thus, an underdetermined system arises when $T$ is not one-to-one, i.e., when the kernel $\ker(T)$ is non-trivial. An overdetermined linear system is inconsistent unless $\mathbf{v}$ satisfies a number of linear compatibility constraints, equations that describe the image $\operatorname{Im}(T)$. To put it another way, an overdetermined system arises when $T$ is not onto.

###### Definition 10.

Let $x_{1},\ldots,x_{n}\in\mathbb{R}$ be distinct and let $\operatorname{ev}_{\mathbf{x}}:\mathcal{P}_{m}\to\mathbb{R}^{n}$ be the corresponding multi-evaluation mapping. Let $\mathbf{y}\in\mathbb{R}^{n}$ be given. The linear equation

 $\operatorname{ev}_{\mathbf{x}}(p)=\mathbf{y},\quad p\in\mathcal{P}_{m}$

is called an underdetermined interpolation problem if $m\geq n$. If $m\leq n-2$, we call the above equation an overdetermined interpolation problem. Note that if $m=n-1$, the interpolation problem is “just right”; there is exactly one solution, namely the polynomial $\operatorname{pol}_{\mathbf{x}}(\mathbf{y})$ given by the Lagrange interpolation formula.

###### Proposition 11 (Underdetermined interpolation).

Let $x_{1},\ldots,x_{n}\in\mathbb{R}$ be distinct. Define

 $q(x)=(x-x_{1})\cdots(x-x_{n})$

to be the $n^{\text{th}}$ degree polynomial with the $x_{i}$ as its roots. Suppose that $m\geq n$. Then, the multi-evaluation mapping $\operatorname{ev}_{\mathbf{x}}:\mathcal{P}_{m}\to\mathbb{R}^{n}$ is not one-to-one. A basis for the kernel is given by $q(x),xq(x),\ldots,x^{m-n}q(x)$. Let $y_{1},\ldots,y_{n}\in\mathbb{R}$ be given. The solution set to the interpolation problem $\operatorname{ev}_{\mathbf{x}}(p)=\mathbf{y}$, is given by

 $\operatorname{ev}_{\mathbf{x}}^{-1}(\mathbf{y})=\{r+sq:s\in\mathcal{P}_{m-n}\},$

where $r=\operatorname{pol}_{\mathbf{x}}(\mathbf{y})$ is the degree $n-1$ polynomial given by the Lagrange interpolation formula. To put it another way, the general solution of the equation

 $\operatorname{ev}_{\mathbf{x}}(p)=\mathbf{y}$

is given by

 $p=r+sq,\quad s\in\mathcal{P}_{m-n}\text{ free.}$
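
A numerical sketch of this general solution, using NumPy's `numpy.polynomial.polynomial` module (the nodes, data, and the particular choice of $s$ are arbitrary):

```python
import numpy as np
from numpy.polynomial import polynomial as P

x = np.array([0.0, 1.0, 2.0])        # n = 3 distinct nodes
y = np.array([1.0, 2.0, 5.0])        # values of t^2 + 1 at the nodes
m = 5                                 # seek p in P_5, so m >= n

# q(t) = (t - x_1)(t - x_2)(t - x_3), coefficients from constant term up.
q = P.polyfromroots(x)

# r = pol_x(y): the unique interpolant in P_{n-1}, via the Vandermonde system.
r = np.linalg.solve(np.vander(x, increasing=True), y)

# Any s in P_{m-n} gives another solution p = r + s q.
s = np.array([1.0, -2.0, 3.0])        # arbitrary element of P_{m-n} = P_2
p = P.polyadd(r, P.polymul(s, q))

print(np.allclose(P.polyval(x, p), y))   # True: p still interpolates the data
```
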
###### Proposition 12 (Overdetermined interpolation).

Let $x_{1},\ldots,x_{n}\in\mathbb{R}$ be distinct. Suppose that $m\leq n-2$. Then, the multi-evaluation mapping $\operatorname{ev}_{\mathbf{x}}:\mathcal{P}_{m}\to\mathbb{R}^{n}$ is not onto. If $m=n-2$, then $\mathbf{y}\in\mathbb{R}^{n}$ belongs to $\operatorname{Im}(\operatorname{ev}_{\mathbf{x}})$ if and only if

 $\left|\begin{matrix}1&x_{1}&\ldots&x_{1}^{m}&y_{1}\\ 1&x_{2}&\ldots&x_{2}^{m}&y_{2}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 1&x_{n-1}&\ldots&x_{n-1}^{m}&y_{n-1}\\ 1&x_{n}&\ldots&x_{n}^{m}&y_{n}\end{matrix}\right|=0$

More generally, for any $m\leq n-2$, the interpolation problem $\operatorname{ev}_{\mathbf{x}}(p)=\mathbf{y},\;\mathbf{y}\in\mathbb{R}^{n}$ has a solution if and only if $\operatorname{rank}M(\mathbf{x},\mathbf{y})=m+1$ where

 $M(\mathbf{x},\mathbf{y})=\begin{bmatrix}1&x_{1}&\ldots&x_{1}^{m}&y_{1}\\ 1&x_{2}&\ldots&x_{2}^{m}&y_{2}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 1&x_{m+1}&\ldots&x_{m+1}^{m}&y_{m+1}\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ 1&x_{n}&\ldots&x_{n}^{m}&y_{n}\end{bmatrix}$
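
The rank criterion is easy to test numerically. In the following NumPy sketch (the data are illustrative and `solvable` is a hypothetical helper name), we fit a line ($m=1$) to four points:

```python
import numpy as np

def solvable(x, y, m):
    """Rank test: ev_x(p) = y has a solution in P_m iff
    rank [1 | x | ... | x^m | y] equals m + 1."""
    M = np.column_stack([np.vander(x, m + 1, increasing=True), y])
    return np.linalg.matrix_rank(M) == m + 1

x = np.array([0.0, 1.0, 2.0, 3.0])    # n = 4 nodes
m = 1                                  # m <= n - 2: overdetermined

y_on_line  = 2.0 * x + 1.0             # values of a degree-1 polynomial
y_off_line = np.array([1.0, 3.0, 5.0, 8.0])   # last value breaks the line

print(solvable(x, y_on_line, m))       # True
print(solvable(x, y_off_line, m))      # False
```
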
###### Remark 13.

The compatibility constraints on $y_{1},\ldots,y_{n}$ for an overdetermined system amount to the condition that $\mathbf{y}$ lie in the image of $\operatorname{ev}_{\mathbf{x}}$, or what is equivalent, belong to the column space of the corresponding transformation matrix. We can therefore obtain the constraint equations by row reducing the augmented matrix $M(\mathbf{x},\mathbf{y})$ to echelon form. The last entries of the final $n-m-1$ rows will hold the constraint equations. Equivalently, we can solve the interpolation problem for $(x_{1},y_{1}),\ldots,(x_{m+1},y_{m+1})$ to obtain a polynomial $p=y_{1}p_{1}+\cdots+y_{m+1}p_{m+1}\in\mathcal{P}_{m}$. The additional equations

 $y_{i}=y_{1}p_{1}(x_{i})+\cdots+y_{m+1}p_{m+1}(x_{i}),\;i=m+2,\ldots,n$

are the desired compatibility constraints on $y_{1},\ldots,y_{n}$.
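The second procedure of the remark — interpolate the first $m+1$ points, then test the remaining data against the resulting polynomial — can be sketched as follows (NumPy; the data are illustrative):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.0, 5.0, 10.0, 17.0])   # values of t^2 + 1
m = 2                                        # P_2: overdetermined since m <= n - 2

# Coefficients of the p in P_m through (x_1,y_1), ..., (x_{m+1},y_{m+1}).
coeffs = np.linalg.solve(np.vander(x[:m + 1], increasing=True), y[:m + 1])

# Compatibility constraints: y_i must equal p(x_i) for i = m+2, ..., n.
residual = y[m + 1:] - np.polynomial.polynomial.polyval(x[m + 1:], coeffs)
print(np.allclose(residual, 0.0))            # True: y lies in Im(ev_x)
```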

---

- Title: lecture notes on polynomial interpolation
- Canonical name: LectureNotesOnPolynomialInterpolation
- Date of creation: 2013-03-22 16:51:59
- Last modified on: 2013-03-22 16:51:59
- Owner: rmilson (146)
- Last modified by: rmilson (146)
- Numerical id: 7
- Author: rmilson (146)
- Entry type: Topic
- Classification: msc 41A05
- Related topics: EvaluationHomomorphism, LagrangeInterpolationFormula, VanderMondeMatrix
- Defines: multi-evaluation mapping, interpolation mapping, polynomial interpolation