# Frobenius method

Let us consider the linear homogeneous differential equation

 $\sum_{\nu=0}^{n}k_{\nu}(x)y^{(n-\nu)}(x)\;=\;0$

of order $n$.  If the coefficient functions $k_{\nu}(x)$ are continuous and the coefficient $k_{0}(x)$ of the highest order derivative does not vanish on a certain interval (resp. in a domain of $\mathbb{C}$), then all solutions $y(x)$ are continuous on this interval (resp. domain).  If all coefficients have continuous derivatives up to a certain order, the same holds for the solutions.

If, instead, $k_{0}(x)$ vanishes at a point $x_{0}$, this point is in general a singular point.  Dividing the differential equation by $k_{0}(x)$ brings it to the form

 $y^{(n)}(x)+\sum_{\nu=1}^{n}c_{\nu}(x)y^{(n-\nu)}(x)\;=\;0,$

where some of the new coefficients $c_{\nu}(x)$ are discontinuous at the singular point.  However, if the discontinuity is such that the products

 $(x-x_{0})c_{1}(x),\quad(x-x_{0})^{2}c_{2}(x),\quad\ldots,\quad(x-x_{0})^{n}c_{n}(x)$

are continuous, and even analytic, at $x_{0}$, then the point $x_{0}$ is a regular singular point of the differential equation.

We introduce the so-called Frobenius method for finding solution functions in a neighbourhood of the regular singular point $x_{0}$, confining ourselves to the case of a second-order differential equation.  When we use the quotient forms

 $(x-x_{0})c_{1}(x)\;:=\;\frac{p(x)}{r(x)},\quad(x-x_{0})^{2}c_{2}(x)\;:=\;\frac{q(x)}{r(x)},$

where $r(x)$, $p(x)$ and $q(x)$ are analytic in a neighbourhood of $x_{0}$ and  $r(x)\neq 0$,  our differential equation reads

 $\displaystyle(x-x_{0})^{2}r(x)y^{\prime\prime}(x)+(x-x_{0})p(x)y^{\prime}(x)+q% (x)y(x)\;=\;0.$ (1)

Since the change of variable  $x\!-\!x_{0}\mapsto x$  moves the singular point to the origin, we may assume this situation from the start.  Thus we can study the equation

 $\displaystyle x^{2}r(x)y^{\prime\prime}(x)+xp(x)y^{\prime}(x)+q(x)y(x)\;=\;0,$ (2)

where the coefficients have the convergent power series expansions

 $\displaystyle r(x)\;=\;\sum_{n=0}^{\infty}r_{n}x^{n},\quad p(x)\;=\;\sum_{n=0}^{\infty}p_{n}x^{n},\quad q(x)\;=\;\sum_{n=0}^{\infty}q_{n}x^{n}$ (3)

and

 $r_{0}\;\neq\;0.$
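
As a concrete illustration (not part of the original text), Bessel's equation of order $\alpha$ is already of the form (2):

```latex
x^{2}\,y''(x)+x\,y'(x)+(x^{2}-\alpha^{2})\,y(x)\;=\;0,
\qquad r(x)\equiv 1,\quad p(x)\equiv 1,\quad q(x)=x^{2}-\alpha^{2},
```

so that here  $r_{0}=1\neq 0$,  $p_{0}=1$  and  $q_{0}=-\alpha^{2}$.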

In the Frobenius method one examines whether the equation (2) allows a series solution of the form

 $\displaystyle y(x)\;=\;x^{s}\sum_{n=0}^{\infty}a_{n}x^{n}\;=\;a_{0}x^{s}+a_{1}x^{s+1}+a_{2}x^{s+2}+\ldots,$ (4)

where $s$ is a constant and  $a_{0}\neq 0$.

Substituting (3) and (4) into the differential equation (2) converts its left-hand side to

 $\displaystyle[r_{0}s(s\!-\!1)\!+\!p_{0}s\!+\!q_{0}]a_{0}x^{s}+[[r_{0}(s\!+\!1)s\!+\!p_{0}(s\!+\!1)\!+\!q_{0}]a_{1}\!+\![r_{1}s(s\!-\!1)\!+\!p_{1}s\!+\!q_{1}]a_{0}]x^{s+1}+[[r_{0}(s\!+\!2)(s\!+\!1)\!+\!p_{0}(s\!+\!2)\!+\!q_{0}]a_{2}\!+\![r_{1}(s\!+\!1)s\!+\!p_{1}(s\!+\!1)\!+\!q_{1}]a_{1}\!+\![r_{2}s(s\!-\!1)\!+\!p_{2}s\!+\!q_{2}]a_{0}]x^{s+2}+\ldots$

Our equation becomes clearer with the notation  $f_{\nu}(s):=r_{\nu}s(s\!-\!1)+p_{\nu}s+q_{\nu}$:

 $\displaystyle f_{0}(s)a_{0}x^{s}+[f_{0}(s\!+\!1)a_{1}+f_{1}(s)a_{0}]x^{s+1}+[f_{0}(s\!+\!2)a_{2}+f_{1}(s\!+\!1)a_{1}+f_{2}(s)a_{0}]x^{s+2}+\ldots\;=\;0$ (5)

Thus the condition for (4) to satisfy the differential equation is the infinite system of equations

 $\displaystyle\begin{cases}f_{0}(s)a_{0}\;=\;0\\ f_{0}(s\!+\!1)a_{1}+f_{1}(s)a_{0}\;=\;0\\ f_{0}(s\!+\!2)a_{2}+f_{1}(s\!+\!1)a_{1}+f_{2}(s)a_{0}\;=\;0\\ \qquad\cdots\qquad\cdots\qquad\cdots\end{cases}$ (6)
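
The general member of the system (6), i.e. the vanishing of the coefficient of $x^{s+n}$, may be written compactly as

```latex
\sum_{k=0}^{n} f_{k}(s\!+\!n\!-\!k)\,a_{n-k}\;=\;0,
\qquad\text{i.e.}\qquad
f_{0}(s\!+\!n)\,a_{n}\;=\;-\sum_{k=1}^{n} f_{k}(s\!+\!n\!-\!k)\,a_{n-k},
```

which exhibits each $a_{n}$ as determined by its predecessors whenever  $f_{0}(s\!+\!n)\neq 0$.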

By the first of these equations, since  $a_{0}\neq 0$,  the indicial equation

 $\displaystyle f_{0}(s)\equiv r_{0}s^{2}+(p_{0}-r_{0})s+q_{0}\;=\;0$ (7)

must be satisfied.  Because  $r_{0}\neq 0$,  this quadratic equation determines two values of $s$, which in special cases may coincide.
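
As a quick numerical check, the indicial roots can be computed directly from $r_{0}$, $p_{0}$, $q_{0}$.  The helper `indicial_roots` below is illustrative (not from the text) and assumes real coefficients with a non-negative discriminant; it is applied to Bessel's equation of order two, where $r_{0}=1$, $p_{0}=1$, $q_{0}=-4$.

```python
import math

def indicial_roots(r0, p0, q0):
    """Roots of the indicial equation r0*s^2 + (p0 - r0)*s + q0 = 0 (r0 != 0).

    A minimal sketch: assumes real coefficients and a non-negative
    discriminant; for complex roots one would use cmath instead.
    """
    b = p0 - r0
    disc = b * b - 4 * r0 * q0
    root = math.sqrt(disc)
    return ((-b + root) / (2 * r0), (-b - root) / (2 * r0))

# Bessel's equation of order 2: r0 = 1, p0 = 1, q0 = -4.
# The exponents s = 2 and s = -2 differ by an integer, so only the
# larger root is guaranteed to yield a Frobenius series directly.
print(indicial_roots(1, 1, -4))
```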

The first of the equations (6) leaves $a_{0}\,(\neq 0)$ arbitrary.  The subsequent equations, linear in the $a_{n}$, allow one to solve successively for the constants $a_{1},\,a_{2},\,\ldots$, provided that the leading coefficients $f_{0}(s\!+\!1)$, $f_{0}(s\!+\!2),\,\ldots$ do not vanish; this is evidently the case when the roots of the indicial equation do not differ by an integer (e.g. when they are complex conjugates, or when $s$ is the root having the greater real part).  In any case, one obtains, at least for one of the roots of the indicial equation, definite values of the coefficients $a_{n}$ in the series (4).  It is not hard to show that this series then converges in a neighbourhood of the origin.
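
The successive solution of the system (6) can be sketched numerically.  The example below (an illustration, not from the text) takes Bessel's equation of order zero, $x^{2}y''+xy'+x^{2}y=0$, so $r(x)\equiv 1$, $p(x)\equiv 1$, $q(x)=x^{2}$ and the indicial root is $s=0$ (double); the resulting coefficients reproduce the series of the Bessel function $J_{0}$.

```python
from fractions import Fraction

def f(nu, s):
    """f_nu(s) = r_nu*s*(s-1) + p_nu*s + q_nu for Bessel's equation of
    order zero: r_0 = 1, p_0 = 1, q_2 = 1, all other coefficients 0."""
    r = 1 if nu == 0 else 0
    p = 1 if nu == 0 else 0
    q = 1 if nu == 2 else 0
    return r * s * (s - 1) + p * s + q

def frobenius_coeffs(s, n_max):
    """Solve f_0(s+n) a_n = -sum_{k=1}^{n} f_k(s+n-k) a_{n-k}, with a_0 = 1."""
    a = [Fraction(1)]
    for n in range(1, n_max + 1):
        rhs = -sum(Fraction(f(k, s + n - k)) * a[n - k] for k in range(1, n + 1))
        a.append(rhs / f(0, s + n))
    return a

a = frobenius_coeffs(0, 6)
# Matches J_0(x) = sum_m (-1)^m x^{2m} / (4^m (m!)^2):
# a_2 = -1/4, a_4 = 1/64, and all odd coefficients vanish.
print(a)
```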

For obtaining the general solution of the differential equation (2) it suffices to have only one solution $y_{1}(x)$ of the form (4), because another solution $y_{2}(x)$, linearly independent of $y_{1}(x)$, is obtained by mere integrations; in the case  $s_{1}\!-\!s_{2}\in\mathbb{Z}$  it is possible that $y_{2}(x)$ has no expansion of the form (4).

---

- Title: Frobenius method
- Canonical name: FrobeniusMethod
- Date of creation: 2013-03-22 17:43:49
- Last modified: 2013-03-22 17:43:49
- Author: pahio (2872)
- Entry type: Topic
- Classification: msc 15A06; msc 34A05
- Synonym: method of Frobenius
- Related topics: FuchsianSingularity, BesselsEquation, SpecialCasesOfHypergeometricFunction
- Defines: indicial equation