
# proof of cyclic vector theorem

First, let’s assume $f$ has a cyclic vector $v$. Then $B=\{v,f(v),\ldots,f^{n-1}(v)\}$ is a basis for $V$. Suppose $g$ is a linear transformation which commutes with $f$. Consider the coordinates $(\alpha_{0},\ldots,\alpha_{n-1})$ of $g(v)$ in $B$, that is,

$g(v)=\sum_{i=0}^{n-1}\alpha_{i}f^{i}(v).$

Let

$P=\sum_{i=0}^{n-1}\alpha_{i}X^{i}\in k[X].$

We show that $g=P(f)$. For $w\in V$, write

$w=\sum_{j=0}^{n-1}\beta_{j}f^{j}(v),$

then

\begin{align*}
g(w)&=\sum_{j=0}^{n-1}\beta_{j}g(f^{j}(v))=\sum_{j=0}^{n-1}\beta_{j}f^{j}(g(v))\\
&=\sum_{j=0}^{n-1}\beta_{j}f^{j}\Bigl(\sum_{i=0}^{n-1}\alpha_{i}f^{i}(v)\Bigr)=\sum_{j=0}^{n-1}\sum_{i=0}^{n-1}\beta_{j}\alpha_{i}f^{j+i}(v)\\
&=\sum_{i=0}^{n-1}\sum_{j=0}^{n-1}\beta_{j}\alpha_{i}f^{j+i}(v)=\sum_{i=0}^{n-1}\alpha_{i}f^{i}\Bigl(\sum_{j=0}^{n-1}\beta_{j}f^{j}(v)\Bigr)=\sum_{i=0}^{n-1}\alpha_{i}f^{i}(w).
\end{align*}
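This first direction can be checked numerically. Below is a minimal sketch with NumPy, using a hypothetical $3\times 3$ companion matrix as $f$ (for which the first standard basis vector is cyclic) and a $g$ chosen as a known polynomial in $f$; recovering the coordinates of $g(v)$ in $B$ reproduces the coefficients of $P$:

```python
import numpy as np

# Hypothetical example: F is the companion matrix of X^3 - 1,
# which has the first standard basis vector e1 as a cyclic vector.
F = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])
v = np.array([1., 0., 0.])

# B = {v, Fv, F^2 v} is a basis; stack its vectors as columns.
B = np.column_stack([v, F @ v, F @ F @ v])
assert np.linalg.matrix_rank(B) == 3

# A g that commutes with F: here g = F^2 + 2F + I by construction.
G = F @ F + 2. * F + np.eye(3)
assert np.allclose(G @ F, F @ G)

# The coordinates of g(v) in B are the coefficients alpha_i of P.
alpha = np.linalg.solve(B, G @ v)

# Reassemble P(F) = sum_i alpha_i F^i and compare with G.
P_of_F = sum(a * np.linalg.matrix_power(F, i) for i, a in enumerate(alpha))
```

As expected, the recovered coefficients are $(1,2,1)$ and $P(F)$ coincides with $g$.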

Now, to finish the proof, suppose $f$ does not have a cyclic vector; we want to find a linear transformation $g$ which commutes with $f$ but is not a polynomial evaluated in $f$. Since $f$ has no cyclic vector, the cyclic decomposition theorem gives a basis of $V$ of the form

$B=\{v_{1},f(v_{1}),\ldots,f^{j_{1}}(v_{1}),v_{2},f(v_{2}),\ldots,f^{j_{2}}(v_{2}),\ldots,v_{r},f(v_{r}),\ldots,f^{j_{r}}(v_{r})\}.$

Let $g$ be the linear transformation defined on $B$ as follows:

$g(f^{k}(v_{1}))=0\ \textrm{ for }k=0,\ldots,j_{1},\qquad g(f^{k_{i}}(v_{i}))=f^{k_{i}}(v_{i})\ \textrm{ for }i=2,\ldots,r\textrm{ and }k_{i}=0,\ldots,j_{i}.$

The fact that $f$ and $g$ commute is a consequence of $g$ being defined as zero on one $f$-invariant subspace and as the identity on a complementary $f$-invariant subspace. Note that it is enough to check that $g$ and $f$ commute on the basis $B$. Indeed, if $k=0,\ldots,j_{1}-1$, then

$(gf)(f^{k}(v_{1}))=g(f^{k+1}(v_{1}))=0\quad\mbox{ and }\quad(fg)(f^{k}(v_{1}))=f(g(f^{k}(v_{1})))=f(0)=0.$

If $k=j_{1}$, we know there are $\lambda_{0},\ldots,\lambda_{j_{1}}$ such that

$f^{j_{1}+1}(v_{1})=\sum_{k=0}^{j_{1}}\lambda_{k}f^{k}(v_{1}),$

so

$(gf)(f^{j_{1}}(v_{1}))=\sum_{k=0}^{j_{1}}\lambda_{k}g(f^{k}(v_{1}))=0\quad\mbox{ and }\quad(fg)(f^{j_{1}}(v_{1}))=f(0)=0.$

Now, let $i=2,\ldots,r$ and $k_{i}=0,\ldots,j_{i}-1$; then

$(gf)(f^{k_{i}}(v_{i}))=g(f^{k_{i}+1}(v_{i}))=f^{k_{i}+1}(v_{i})\quad\mbox{ and }\quad(fg)(f^{k_{i}}(v_{i}))=f(g(f^{k_{i}}(v_{i})))=f^{k_{i}+1}(v_{i}).$

In the case $k_{i}=j_{i}$, we know there are $\lambda_{0,i},\ldots,\lambda_{j_{i},i}$ such that

$f^{j_{i}+1}(v_{i})=\sum_{k=0}^{j_{i}}\lambda_{k,i}f^{k}(v_{i}),$

then

$(gf)(f^{j_{i}}(v_{i}))=g(f^{j_{i}+1}(v_{i}))=\sum_{k=0}^{j_{i}}\lambda_{k,i}g(f^{k}(v_{i}))=\sum_{k=0}^{j_{i}}\lambda_{k,i}f^{k}(v_{i})=f^{j_{i}+1}(v_{i}),$

and

$(fg)(f^{j_{i}}(v_{i}))=f(g(f^{j_{i}}(v_{i})))=f(f^{j_{i}}(v_{i}))=f^{j_{i}+1}(v_{i}).$

This proves that $g$ and $f$ commute on $B$. Suppose now that $g$ is a polynomial evaluated in $f$, so there is a

$P=\sum_{k=0}^{h}c_{k}X^{k}\in k[X]$

such that $g=P(f)$. Then $0=g(v_{1})=P(f)(v_{1})$, so the annihilator polynomial $m_{v_{1}}$ of $v_{1}$ divides $P$. But then, as the annihilator $m_{v_{2}}$ of $v_{2}$ divides $m_{v_{1}}$ (see the cyclic decomposition theorem), $m_{v_{2}}$ divides $P$, and then $0=P(f)(v_{2})=g(v_{2})=v_{2}$, which is absurd because $v_{2}$ is a vector of the basis $B$. This finishes the proof.
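The counterexample in this second direction can also be made concrete. A minimal sketch, taking $f$ to be the identity on $\mathbb{R}^{2}$ (which has no cyclic vector, with $v_{1}=e_{1}$, $v_{2}=e_{2}$, $j_{1}=j_{2}=0$) and $g$ defined as in the proof:

```python
import numpy as np

# f = identity on R^2 has no cyclic vector: {v, f(v), ...} spans
# only the line through v.
F = np.eye(2)

# g as in the proof: zero on the f-invariant subspace span{v1 = e1},
# identity on its complement span{v2 = e2}.
G = np.array([[0., 0.],
              [0., 1.]])

# g commutes with f ...
assert np.allclose(G @ F, F @ G)

# ... but every polynomial P evaluated at F is the scalar matrix
# P(1) * I, and no scalar multiple of the identity equals G:
# c*I maps e1 to c*e1, while G maps e1 to 0 and e2 to e2.
c = 3.7  # any sample scalar value of P(1)
assert not np.allclose(c * np.eye(2), G)
```

The design mirrors the proof exactly: $g$ is built from the cyclic decomposition, not from any polynomial in $f$.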

## Mathematics Subject Classification

15A04


## Comments

## Cyclic Vector

Given some linear operator T, I'm having trouble understanding what a cyclic vector is and how to find one.

For example, given some matrix representing a linear operator on, say, R^2, how can I determine if it has a cyclic vector?

## Re: Cyclic Vector

Hi, I am not sure what you mean by a cyclic vector. I suppose $v$ is cyclic if there is a natural number $k$ such that $A^{k}v=v$, where $A$ is the matrix of your transformation (is it?).

If so, you need to find $k$ such that the kernel of $A^k-Id$ is non-zero.

I hope that helps,

Alvaro

## Re: Cyclic Vector

Usually, the term cyclic vector refers to a vector v such that the set {v, Av, A^2 v, A^3 v, ...} spans the vector space. (In the infinite dimensional case, one usually interprets this to mean that the span of the set is dense.)
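In finite dimensions, this definition reduces to a rank computation: v is cyclic exactly when the n vectors v, Av, ..., A^{n-1} v are linearly independent. A quick sketch (the helper name `is_cyclic` is my own, not standard):

```python
import numpy as np

def is_cyclic(A, v):
    """Return True if v is a cyclic vector for the n x n matrix A,
    i.e. if {v, Av, ..., A^{n-1} v} spans the whole space."""
    n = A.shape[0]
    K = np.column_stack([np.linalg.matrix_power(A, k) @ v
                         for k in range(n)])
    return np.linalg.matrix_rank(K) == n

# e1 is cyclic for this swap matrix: {e1, e2} spans R^2 ...
A = np.array([[0., 1.],
              [1., 0.]])
assert is_cyclic(A, np.array([1., 0.]))

# ... but no vector is cyclic for the identity.
assert not is_cyclic(np.eye(2), np.array([1., 1.]))
```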

I have not yet seen anyone use the term "cyclic vector" in the sense which is A. L. R. suggested in his reply to your question.

To make the job of looking for a cyclic vector easier, one could simplify the transformation using a similarity transform. In the case of a finite-dimensional matrix, one could put it into Jordan normal form.

Let us see what happens for 2x2 matrices. There are three possibilities: 1) The matrix is a multiple of the identity. 2) The matrix has two distinct eigenvalues. 3) The matrix has two equal eigenvalues but is not the identity matrix.

Case I. There can be no cyclic vector --- if we multiply any vector v by A, all we obtain are multiples of v. The span of these is a one-dimensional subspace, so we cannot span the whole vector space by repeatedly applying A to a vector.

Case II. The matrix can be diagonalized. Call the two distinct eigenvalues p and q (one of which may be zero). Then the matrix looks like

p 0

0 q

I claim that the vector v = (1,1) is cyclic. Note that A v = (p,q). Since p is not equal to q, the vectors (1,1) and (p,q) are linearly independent, hence they span the two-dimensional vector space.

Case III. The matrix can be put in the Jordan form

x 1

0 x

Again, consider the vector v = (1,1). Then Av = (x+1,x). Since the set {(1,1), (1+x,x)} is linearly independent for any possible value of x, the vector (1,1) is cyclic.
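All three cases can be verified numerically; a quick sketch with hypothetical sample values p = 2, q = 5 for Case II and x = 4 for Case III:

```python
import numpy as np

A2 = np.diag([2., 5.])                  # Case II: distinct eigenvalues p, q
A3 = np.array([[4., 1.],
               [0., 4.]])               # Case III: Jordan block with x = 4

v = np.array([1., 1.])
for A in (A2, A3):
    M = np.column_stack([v, A @ v])     # the set {v, Av}
    assert np.linalg.matrix_rank(M) == 2   # independent, so v is cyclic

# Case I: a multiple of the identity never yields independence.
M1 = np.column_stack([v, 3. * np.eye(2) @ v])
assert np.linalg.matrix_rank(M1) == 1
```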

Hope this helps,

Ray

## Re: Cyclic Vector

Thanks,

That really helps clear it up for me. My textbook is really ambiguous, as is my instructor. I'm just trying to understand it.

Melinda