Okay, so I figured this part out: a mean-centered data matrix multiplied by its transpose gives a covariance matrix (up to dividing by the number of samples). By which I mean: the larger the value at a given (row, col), the more strongly the data along those two axes are correlated.
https://en.wikipedia.org/wiki/Covariance_matrix
To simplify, consider a matrix `A` whose rows are the axes (x, y, z) and whose columns are data points, and multiply `A` by `transpose(A)` to get a 3x3 result.
Each cell of the result tells you how likely it is that when the value on the row axis changes, the value on the column axis changes the same way. The diagonal will always be large, because data on an axis always correlates with itself: when you change the value of `x`, the value of `x` changes in *exactly* the same way, and the cell is just a sum of `x*x = x^2` terms. Cell (0, 2), on the other hand, tells you how much changing `x` tends to change `z` the same way. If it matches both cell (0, 0) and cell (2, 2), the points lie on a diagonal in the xz-plane: every change in `x` comes with the exact same change in `z`.
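Here's a small NumPy sketch of the idea. The data is made up (x and z deliberately move together, y is independent), and the centering step is the assumption doing the work: `A @ A.T` only behaves like a covariance matrix once every row has zero mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows are axes (x, y, z), columns are data points.
# z is built to follow x closely, so cells (0,2)/(2,0) come out large.
n = 1000
x = rng.normal(size=n)
y = rng.normal(size=n)            # independent of x
z = x + 0.1 * rng.normal(size=n)  # nearly equal to x
A = np.vstack([x, y, z])

# Center each axis first -- without this, A @ A.T mixes means
# into the products and is not a covariance matrix.
A = A - A.mean(axis=1, keepdims=True)

C = (A @ A.T) / (n - 1)  # same thing np.cov(A) computes

print(np.round(C, 2))
# Diagonal entries are each axis's variance (~1 here);
# C[0, 2] is large because x and z move together;
# C[0, 1] is near 0 because x and y are unrelated.
```

Note that `C[0, 2]` here comes out roughly equal to `C[0, 0]` *and* `C[2, 2]`, which is the "points on the diagonal of the xz-plane" case described above.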
I still need to cogitate a bit on why the eigenvector with the largest eigenvalue of this matrix is the axis along which the data has the highest variance in the original coordinate space.