In the textbook ‘Linear Algebra Done Right’, Sheldon Axler presents the abstract theory of linear algebra in a way that eschews the use of determinants until the final chapter. The cornerstone of this approach is a straightforward iterative argument that proves that every linear operator has at least one complex eigenvalue. This proof is, however, more than just a theoretical tool. In fact, in the paper ‘Computing Eigenvalues and Eigenvectors Without Determinants’, William A. McWorter, Jr. and Leroy F. Meyers extend this idea to produce an algorithm for finding eigenvectors and eigenvalues. For a brief summary of the historical context of this idea, please see the introduction to that paper.

The purpose of the following series of notes is to present the ideas of McWorter and Meyers in a more relaxed fashion.

A *dynamical system* consists of an initial state and a function that prescribes how the state of the system changes over time.
For instance, if we release from rest a hammer (or a feather) one metre above the ground, Newton’s second law tells us that the distance above the ground at time $t$ is:

$$h(t) = 1 - \tfrac{1}{2}gt^2,$$

where $g$ is the acceleration due to gravity and we have assumed that we can neglect resistance forces.
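As a quick numerical sanity check of this one-metre drop (assuming the familiar constant-acceleration formula $h(t) = 1 - \tfrac{1}{2}gt^2$ with $g \approx 9.81\,\mathrm{m/s^2}$):

```python
import math

g = 9.81  # acceleration due to gravity, m/s^2


def height(t):
    """Height above the ground (in metres) of an object released
    from rest one metre up, neglecting air resistance."""
    return 1.0 - 0.5 * g * t * t


# Time to hit the ground: solve 1 - g t^2 / 2 = 0 for t > 0.
t_ground = math.sqrt(2.0 / g)  # about 0.45 seconds
```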
If we instead only measure the system at discrete time intervals then the resulting system is called a *discrete dynamical system*.
The case that we are interested in is even simpler: we assume that the function is a linear operator.

A *discrete linear dynamical system* consists of a sequence of vectors

$$v_0, v_1, v_2, \dots$$

such that $v_{n+1} = Tv_n$ for some linear operator $T$.

This means that the sequence is equivalently described as:

$$v_0, Tv_0, T^2v_0, \dots$$

where $T^n$ denotes the $n$-fold action of $T$.

Recall that $v$ is an *eigenvector* with *eigenvalue* $\lambda$ for the linear operator $T$ iff

$$Tv = \lambda v$$

and $v \neq 0$.

Eigenvectors and eigenvalues give us valuable information about the long-term behaviour of discrete linear dynamical systems. For instance, suppose that $T$ is a linear operator on $\mathbb{R}^2$ and we are in the nicest possible situation: $T$ has two (linearly independent) eigenvectors $v$ and $w$ with eigenvalues $\lambda$ and $\mu$ respectively. Then we can find real numbers $a$ and $b$ such that $v_0 = av + bw$, whence:

$$v_n = T^n v_0 = a\lambda^n v + b\mu^n w.$$

Now this expression is simple enough that we can read off the long-term behaviour of this system once we have worked out the values of $\lambda$ and $\mu$. For instance, if $\lambda$ and $\mu$ have magnitude less than one, the system will converge to zero. If instead $\lambda = 1$ and $\mu = -1$, the system will eventually oscillate between the vectors $av + bw$ and $av - bw$.
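The oscillating case is easy to watch numerically. Here is a minimal sketch (the particular matrix is my own illustrative choice, not from the paper): the swap matrix has eigenvalues $1$ and $-1$ with eigenvectors $(1,1)$ and $(1,-1)$, so any initial state bounces between two vectors.

```python
import numpy as np

# Swap matrix: eigenvalue 1 with eigenvector [1, 1],
# eigenvalue -1 with eigenvector [1, -1].
T = np.array([[0.0, 1.0],
              [1.0, 0.0]])

v0 = np.array([3.0, 1.0])  # initial state: 2*[1, 1] + 1*[1, -1]

# Iterate the system and record the orbit.
state = v0
orbit = []
for _ in range(4):
    orbit.append(state)
    state = T @ state
# The orbit alternates between [3, 1] and [1, 3].
```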

Clearly the evolution of a dynamical system depends on its initial conditions. If we return to the example of the falling hammer: throwing the hammer gently up into the air will result in a different trajectory than just dropping it. However, provided that the throw is not too strong, the long-term behaviour of the system is similar: the hammer eventually falls down to the ground. When we consider the long-term behaviour of the system we think of these outcomes as equivalent. By contrast, if we accelerate the hammer with so much force that it escapes the Earth’s gravitational field then the long-term behaviour will be different. In this way we obtain different initial conditions that produce non-equivalent long-term behaviours.

Unsurprisingly the long-term behaviour of a general discrete linear dynamical system also depends on its initial conditions. We will exploit this fact to simplify our calculations of eigenvalues and eigenvectors. In fact the initial vectors can be collected into linear subspaces with the property that vectors in the same subspace induce the same long-term behaviour.

We have seen that knowledge of the eigenvalues and eigenvectors tells us about the long-term behaviour of a discrete linear dynamical system. But how do we know if a given linear operator has any eigenvectors? As a first step, let us recall the following proof, which appears for instance as Theorem 5.10 on page 81 in ‘Linear Algebra Done Right’ by Sheldon Axler.

Theorem: Every linear operator $T \colon V \to V$ on an $n$-dimensional vector space $V$ has a complex eigenvalue when $n \geq 1$.

Proof: Starting with an arbitrary non-zero initial vector $v$, consider the sequence

$$v, Tv, T^2v, \dots$$

and ask the question: are there any dependence relations between the vectors in this sequence?

Of course, since the dimension of $V$ is $n$, we see that *at the very least* the first $n+1$ vectors $v, Tv, \dots, T^n v$ in the sequence must necessarily be linearly dependent.
Hence we can find real numbers $a_0, a_1, \dots, a_n$, not all zero, such that

$$a_n T^n v + a_{n-1} T^{n-1} v + \dots + a_1 Tv + a_0 v = 0,$$
which we can rewrite as

$$\left(a_n T^n + a_{n-1} T^{n-1} + \dots + a_1 T + a_0 I\right)v = 0,$$

where $I$ is the identity matrix.
By factorising the polynomial $a_n x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0$ we can find *complex* numbers $c \neq 0$ and $\lambda_1, \dots, \lambda_m$ such that

$$c(T - \lambda_1 I)(T - \lambda_2 I) \cdots (T - \lambda_m I)v = 0,$$
from which we find at least one eigenvalue and eigenvector for $T$ as follows:

- if $(T - \lambda_m I)v = 0$ then $v$ is an eigenvector with eigenvalue $\lambda_m$;
- if $(T - \lambda_m I)v \neq 0$ then check if $(T - \lambda_{m-1} I)(T - \lambda_m I)v = 0$:
    - if so then $(T - \lambda_m I)v$ is an eigenvector with eigenvalue $\lambda_{m-1}$;
    - if not then iterate this process until you either find an eigenvalue or conclude that $u = (T - \lambda_2 I) \cdots (T - \lambda_m I)v$ is non-zero, from which $(T - \lambda_1 I)u = 0$ and we conclude that $u$ is an eigenvector with eigenvalue $\lambda_1$.
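The proof is constructive, which is exactly what McWorter and Meyers exploit. A rough numerical sketch of the resulting algorithm in NumPy follows; the function names, tolerance handling, and example matrix are my own, and a serious implementation would need more care with conditioning:

```python
import numpy as np


def minimal_dependence(T, v, tol=1e-9):
    """Coefficients (highest degree first) of a monic polynomial p of
    least degree with p(T)v = 0, found from the first linear dependence
    in the sequence v, Tv, T^2 v, ..."""
    n = len(v)
    seq = [v.astype(complex)]
    for _ in range(n):
        w = T @ seq[-1]
        basis = np.column_stack(seq)  # columns v, Tv, ..., T^{k-1} v
        coeffs, *_ = np.linalg.lstsq(basis, w, rcond=None)
        if np.linalg.norm(basis @ coeffs - w) <= tol * max(1.0, np.linalg.norm(w)):
            # T^k v = c_{k-1} T^{k-1} v + ... + c_0 v, hence
            # p(x) = x^k - c_{k-1} x^{k-1} - ... - c_0 satisfies p(T)v = 0.
            return np.concatenate(([1.0], -coeffs[::-1]))
        seq.append(w)
    raise RuntimeError("no linear dependence detected within n steps")


def eigenpair(T, v, tol=1e-9):
    """Peel off factors (T - lam I) until one annihilates the current
    vector; the vector just before that happens is an eigenvector."""
    p = minimal_dependence(T, v, tol)
    w = v.astype(complex)
    for lam in np.roots(p):
        u = (T - lam * np.eye(T.shape[0])) @ w
        if np.linalg.norm(u) <= tol * max(1.0, np.linalg.norm(w)):
            return lam, w  # here T w = lam w
        w = u
    raise RuntimeError("rounding error: no factor annihilated the vector")


T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam, w = eigenpair(T, np.array([0.0, 1.0]))  # recovers an eigenvalue of 2 or 3
```

Note that the factors $(T - \lambda I)$ commute, so the order in which `np.roots` returns the roots only affects which eigenpair is found, not correctness.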

Note that the use of complex numbers was only necessary to factorise the polynomial into linear factors. If we wanted to work only with real numbers then at this stage we could only factorise the polynomial into a mixture of linear and irreducible quadratic factors. If all the factors were irreducible quadratic factors then we would not obtain any real eigenvalues.

Note further that although we are guaranteed to have a dependence relation between the first $n+1$ terms in the sequence

$$v, Tv, T^2v, \dots$$

it is quite possible that a smaller subset of these vectors is linearly dependent. If this happens it is advantageous to use the smaller subset, because the polynomial we then need to factorise will be of lower degree. Contrast this with the method of finding eigenvalues using a determinant: with that method one always needs to calculate an $n \times n$ determinant and then solve a degree-$n$ polynomial equation.
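To illustrate the saving, here is a hypothetical $3 \times 3$ example of my own in which the initial vector lies in a $2$-dimensional invariant subspace, so a dependence appears one step early and the polynomial to factorise is quadratic rather than cubic:

```python
import numpy as np

# A 3x3 operator and an initial vector lying in the 2-dimensional
# invariant subspace spanned by the first two coordinate axes.
T = np.diag([1.0, 2.0, 3.0])
v = np.array([1.0, 1.0, 0.0])

Tv = T @ v
T2v = T @ Tv
# T^2 v - 3 T v + 2 v = 0, so the polynomial to factorise is
# x^2 - 3x + 2 = (x - 1)(x - 2): degree 2 rather than degree 3.
residual = T2v - 3 * Tv + 2 * v
```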