If you take a square matrix M, subtract x from the elements on the diagonal, and take the determinant, you get a polynomial in x called the characteristic polynomial of M.
For example, let

    M = [[5, -2], [1, 2]].

Then the characteristic polynomial is

    det(M - xI) = (5 - x)(2 - x) + 2 = x^2 - 7x + 12.

The characteristic equation is the equation that sets the characteristic polynomial to zero.
The roots of this polynomial are the eigenvalues of the matrix; here the eigenvalues are 3 and 4.
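As a quick check (this snippet is my addition, not part of the original post), NumPy can recover both the characteristic polynomial and its roots:

```python
import numpy as np

M = np.array([[5, -2], [1, 2]])

# np.poly on a square matrix returns the coefficients of its monic
# characteristic polynomial, here x^2 - 7x + 12.
coeffs = np.poly(M)

# The roots of that polynomial are the eigenvalues of M.
eigenvalues = np.linalg.eigvals(M)

print(coeffs)       # [ 1. -7. 12.]
print(eigenvalues)
```

The polynomial factors as (x - 3)(x - 4), so the eigenvalues come out as 3 and 4.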
The Cayley-Hamilton theorem says that if you take the original matrix and stick it into the polynomial, you’ll get the zero matrix.
In brief, a matrix satisfies its own characteristic equation.
Note that for this to hold we interpret constants, like 12 and 0, as corresponding multiples of the identity matrix.
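To make this concrete, here is the direct check (my sketch in NumPy, not code from the post): plug M into x^2 - 7x + 12, reading the constant 12 as 12 times the identity matrix.

```python
import numpy as np

M = np.array([[5, -2], [1, 2]])
I = np.eye(2)

# Cayley-Hamilton: M^2 - 7M + 12I is the zero matrix.
result = M @ M - 7*M + 12*I
print(result)
```

Because the arithmetic here is exact matrix multiplication on integer entries, the result is exactly zero, not merely zero to within floating point error.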
You could verify the Cayley-Hamilton theorem in Python using scipy.linalg.funm to compute a polynomial function of a matrix.
    >>> from numpy import array
    >>> from scipy.linalg import funm
    >>> m = array([[5, -2], [1, 2]])
    >>> funm(m, lambda x: x**2 - 7*x + 12)

This returns a zero matrix, almost.
The function funm doesn’t exactly return a zero matrix, but returns a matrix whose entries are about as close to zero as we should expect given the precision of floating point arithmetic, with errors on the order of 10⁻¹⁶.
The function funm is not directly sticking the matrix m into the polynomial, or else it would return exactly zero, since the entries are all integers.
It is usually used to compute things like the exponential of a matrix and is optimized for such functions.
I imagine funm is factoring M into PDP⁻¹ where D is a diagonal matrix. Then f(M) = P f(D) P⁻¹.
This is because f can be applied to a diagonal matrix by simply applying f to each diagonal entry independently.
The slight departure of the computed result from exactly the zero matrix probably comes from computing P or P⁻¹.
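Here is a sketch of that diagonalization route (my guess at the mechanism, not scipy's actual implementation): factor M with numpy.linalg.eig, apply f to the eigenvalues on the diagonal, and conjugate back.

```python
import numpy as np

def f(x):
    return x**2 - 7*x + 12

M = np.array([[5, -2], [1, 2]])

# Eigendecomposition: w holds the eigenvalues, columns of P the eigenvectors.
w, P = np.linalg.eig(M)

# f(M) = P f(D) P^-1, where f acts entrywise on the diagonal of D.
fM = P @ np.diag(f(w)) @ np.linalg.inv(P)

# fM is zero only up to floating point error in computing P and P^-1.
print(np.abs(fM).max())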
Related posts

- Cosine of a matrix
- How far is xy from yx on average for quaternions?