This comes up quite frequently, but I’ve been stuck for an easy memory-friendly way to do this. I trawled through the 1A Vectors and Matrices course notes, and found the following mechanical proof. (It’s not a discovery-proof - I looked it up.)

## Lemma

Let $A$ be a real symmetric matrix. Then any eigenvectors corresponding to different eigenvalues are orthogonal. (This is a very standard fact that is probably hammered very hard into your head if you have ever studied maths post-secondary-school.) The proof of this is of the “write it down, and you can’t help proving it” variety:

Suppose $\lambda \neq \mu$ are different eigenvalues of $A$, corresponding to eigenvectors $u, v$. Then $Au = \lambda u$, $Av = \mu v$. Hence (transposing the first equation) $u^T A = \lambda u^T$, since $A^T = A$; multiplying on the right by $v$, the left hand side is $u^T A v$. Hence $u^T A v = \lambda u^T v$; but $Av = \mu v$ so this is also $\mu u^T v$. Since $\lambda \neq \mu$, this means $u^T v = 0$.
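As a quick numerical sanity check of the Lemma (not part of the proof), here is a small Python sketch using a concrete symmetric matrix whose eigenpairs are known exactly; the helper names `matvec` and `dot` are mine, chosen for the illustration:

```python
# Check the Lemma on A = [[2, 1], [1, 2]], a symmetric matrix with
# eigenvalues 3 and 1 and eigenvectors (1, 1) and (1, -1).

def matvec(A, v):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    """Standard inner product of two vectors."""
    return sum(a * b for a, b in zip(u, v))

A = [[2, 1], [1, 2]]
u, lam = [1, 1], 3    # eigenpair for eigenvalue 3
v, mu = [1, -1], 1    # eigenpair for eigenvalue 1

assert matvec(A, u) == [lam * x for x in u]  # A u = 3 u
assert matvec(A, v) == [mu * x for x in v]   # A v = 1 v
assert dot(u, v) == 0                        # orthogonal, as the Lemma says
```

Note that $u$ and $v$ are orthogonal but not orthonormal here; normalising is a separate (and optional) step.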

## Theorem

Now, suppose $A$ has eigenvalues $\lambda_1, \dots, \lambda_n$. They might not be distinct; take the ones which are, say $\lambda_1, \dots, \lambda_m$, with corresponding eigenvectors $u_1, \dots, u_m$. Then extend $u_1, \dots, u_m$ to a basis of $\mathbb{R}^n$, and orthonormalise that basis using the Gram-Schmidt process; the $u_i$ survive this up to scaling, because by the Lemma they are already mutually orthogonal. (This can be proved - it’s tedious but not hard, as long as you remember what the Gram-Schmidt process is, and I think it’s safe to assume.) With respect to this basis, $A$ is a matrix which is diagonal in the first $m$ entries. Moreover, we are performing an orthonormal change of basis, and conjugation by orthogonal matrices preserves the property of “symmetricness” (proof: $(P^T A P)^T = P^T A^T P = P^T A P$), so the $(m+1)$th to $n$th row/column block is symmetric. It is also real (because we have performed a conjugation by a real matrix). And we have that the first $m$ columns of $P^T A P$ are filled with zeros below the diagonal (being the images of eigenvectors), so it is also filled with zeros in the first $m$ rows above the diagonal, because it is a symmetric matrix.
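The basis-extension step can also be sketched numerically. The following Python is an illustrative sketch only (the helpers `gram_schmidt` and `conjugate`, and the choice of matrix, are my own): start from a known eigenvector $(1,1)$ of the symmetric matrix $[[2,1],[1,2]]$, extend it with a standard basis vector, orthonormalise, and check that conjugating by the resulting orthogonal matrix leaves the first row and column diagonal, with the conjugate still symmetric:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalise a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            c = dot(w, b)
            w = [wi - c * bi for wi, bi in zip(w, b)]  # subtract projection
        norm = math.sqrt(dot(w, w))
        basis.append([wi / norm for wi in w])
    return basis

def conjugate(A, basis):
    """Compute P^T A P, where the columns of P are the basis vectors."""
    n = len(A)
    Ab = [[sum(A[i][k] * b[k] for k in range(n)) for i in range(n)] for b in basis]
    return [[dot(basis[i], Ab[j]) for j in range(n)] for i in range(n)]

A = [[2, 1], [1, 2]]
# Eigenvector for eigenvalue 3 first, then any vector extending it to a basis.
P_cols = gram_schmidt([[1, 1], [1, 0]])
M = conjugate(A, P_cols)

assert abs(M[0][0] - 3) < 1e-9                 # eigenvalue on the diagonal
assert abs(M[0][1]) < 1e-9 and abs(M[1][0]) < 1e-9  # zeros off the first row/column
assert abs(M[0][1] - M[1][0]) < 1e-9           # conjugation preserved symmetry
```

In this $2 \times 2$ case the remaining block is a single entry, so the conjugate is already fully diagonal; in general the lower-right block is the smaller symmetric matrix the induction runs on.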

Now, by induction, that $(n-m) \times (n-m)$ sub-matrix is diagonalisable by an orthogonal matrix. Hence we are done: every real symmetric matrix is diagonalisable by an orthogonal change of basis. (The eigenvectors produced by the inductive step must be orthogonal to the ones we’ve already found, because they lie in a subspace which is orthogonal to the span of the ones we already found.)
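The inductive step can be summarised in block-matrix form (my notation, not the original’s): after the change of basis by the orthogonal matrix $P_1$ constructed above, we have

```latex
P_1^T A P_1 =
\begin{pmatrix}
  \operatorname{diag}(\lambda_1, \dots, \lambda_m) & 0 \\
  0 & B
\end{pmatrix},
\qquad B = B^T \in \mathbb{R}^{(n-m) \times (n-m)},
```

and the induction hypothesis supplies an orthogonal $Q$ with $Q^T B Q$ diagonal, so conjugating by $P_1 \cdot \operatorname{diag}(I_m, Q)$ (itself orthogonal) diagonalises $A$.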