If you read the Wikipedia page on the Cramér-Rao bound in statistics, there is an elegant and concise proof given of the scalar version of the bound. However, no proof of the full multivariate case is given there.
Indeed, it seems at first like the same approach will not work, because multivariate Cramér-Rao is a matrix inequality, while the scalar proof relies on the Cauchy-Schwarz inequality, which is a statement about inner products. Since an inner product is just a real-valued number, surely a different approach is required for proofs about matrices?
But after reading this 1980 paper of Bultheel I think the same short proof goes through, if we generalise the definition of “inner product” slightly. In fact, this form of Cauchy-Schwarz holds for the familiar outer product and the inner product version is just a special case!
Below we’ll confine ourselves to the reals for simplicity, unlike Bultheel who works more abstractly.
We’ll review the scalar case, then extend it to matrices.
Inner product definition
An inner product $\langle u, v \rangle$ on a vector space over the reals takes two vectors and returns a real number. The prototypical example is the dot product on $\mathbb{R}^n$, $\langle u, v \rangle = u^T v$, but we can allow others if they satisfy these requirements:
- Symmetry: $\langle u, v \rangle = \langle v, u \rangle$.
- Bilinearity: $\langle a u + b w, v \rangle = a \langle u, v \rangle + b \langle w, v \rangle$ for scalars $a, b$, and similarly in the second argument.
- Positive definiteness: $\langle v, v \rangle \geq 0$, with equality if and only if $v = 0$.
Cauchy-Schwarz from inner products
The Cauchy-Schwarz inequality is the following statement about products of inner products:

$$\langle u, v \rangle^2 \leq \langle u, u \rangle \, \langle v, v \rangle.$$
We can show this using the definition of the inner product above. Take the vector $u - \lambda v$, where $\lambda$ is a real scalar.
Positive definiteness says:

$$\langle u - \lambda v, u - \lambda v \rangle \geq 0.$$
We can use bilinearity to expand this:

$$\langle u, u \rangle - \lambda \langle u, v \rangle - \lambda \langle v, u \rangle + \lambda^2 \langle v, v \rangle \geq 0,$$
and symmetry to obtain

$$\langle u, u \rangle - 2 \lambda \langle u, v \rangle + \lambda^2 \langle v, v \rangle \geq 0.$$
Now if we make the choice $\lambda = \langle u, v \rangle / \langle v, v \rangle$ (assuming $v \neq 0$; the inequality is trivial otherwise) and simplify:

$$\langle u, u \rangle - \frac{\langle u, v \rangle^2}{\langle v, v \rangle} \geq 0.$$
We obtain the desired inequality:

$$\langle u, v \rangle^2 \leq \langle u, u \rangle \, \langle v, v \rangle.$$
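As a sanity check, the scalar inequality is easy to verify numerically for the Euclidean dot product (a minimal sketch using numpy; the dimension and number of trials are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

for _ in range(1000):
    u = rng.normal(size=5)
    v = rng.normal(size=5)
    # <u, v>^2 <= <u, u> <v, v> for the Euclidean dot product
    lhs = np.dot(u, v) ** 2
    rhs = np.dot(u, u) * np.dot(v, v)
    assert lhs <= rhs + 1e-12
```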
Matrix-valued inner product axioms
Now we’d like something a little more powerful. We can get this if we are willing to generalise the notion of an inner product to something that returns a matrix instead of a number. I’ll denote this new “inner product” by $[u, v]$.
We also need to generalise our axioms slightly for this wider definition. I will follow Bultheel’s definition, simplifying by considering only real-valued matrices. So:
- Symmetry holds up to a transpose. Now that $[u, v]$ is a matrix, we need to add a matrix transposition when we swap the arguments, but we still have:

  $$[u, v] = [v, u]^T.$$
- Bilinearity still applies, not only with scalar coefficients but also with matrix ones. We have to be careful about whether we multiply on the left or the right, because matrix multiplication is not commutative. So we have:

  $$[A u + B w, v] = A [u, v] + B [w, v], \qquad [u, A v + B w] = [u, v] A^T + [u, w] B^T.$$
- Positive definiteness: we will demand that $[v, v]$ is itself positive semidefinite, i.e. $[v, v] \succeq 0$ as a matrix inequality. We can also insist that $[v, v] = 0$ implies $v = 0$.
Now a multivariate Cauchy-Schwarz follows from these axioms just as it did in the scalar case, though again we must take care of the transpositions.
As in the scalar case, positive definiteness gives $[u - \Lambda v, u - \Lambda v] \succeq 0$ for any matrix $\Lambda$, which bilinearity expands to

$$[u, u] - \Lambda [v, u] - [u, v] \Lambda^T + \Lambda [v, v] \Lambda^T \succeq 0.$$

We substitute $\Lambda = [u, v] [v, v]^{-1}$ (assuming $[v, v]$ is invertible):

$$[u, u] - [u, v] [v, v]^{-1} [v, u] - [u, v] \left([v, v]^{-1}\right)^T [v, u] + [u, v] [v, v]^{-1} [v, v] \left([v, v]^{-1}\right)^T [v, u] \succeq 0.$$
Using transpose symmetry to tidy up ($[v, v]^T = [v, v]$, so $\left([v, v]^{-1}\right)^T = [v, v]^{-1}$, and the last three terms collapse into one):

$$[u, u] - [u, v] [v, v]^{-1} [v, u] \succeq 0,$$
we obtain the matrix form of Cauchy-Schwarz:

$$[u, v] [v, v]^{-1} [v, u] \preceq [u, u].$$
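To see the matrix inequality in action, here is a numerical sketch using one concrete matrix-valued inner product: the second-moment matrix $[X, Y] = \mathrm{E}[X Y^T]$ between mean-zero random vectors (the choice relevant to Cramér-Rao). The particular vectors, dimensions, and sample-based estimation are assumptions of this illustration, not part of the derivation above. Positive semidefiniteness is checked via the smallest eigenvalue of the gap.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample-based matrix-valued "inner product" [X, Y] = E[X Y^T],
# estimated from n draws of correlated mean-zero random vectors.
n = 100_000
Z = rng.normal(size=(n, 7))
X = Z[:, :4]            # 4-dimensional random vector
Y = Z[:, 2:]            # 5-dimensional, correlated with X via shared coordinates

def ip(A, B):
    return A.T @ B / n  # sample estimate of E[A B^T]

Guu = ip(X, X)          # [u, u]
Guv = ip(X, Y)          # [u, v]
Gvv = ip(Y, Y)          # [v, v], invertible here

# Matrix Cauchy-Schwarz: [u, u] - [u, v] [v, v]^{-1} [v, u] should be PSD.
gap = Guu - Guv @ np.linalg.solve(Gvv, Guv.T)
eigs = np.linalg.eigvalsh(gap)
assert eigs.min() >= -1e-10
```

The gap is exactly a Schur complement of the joint second-moment matrix, which is why it comes out positive semidefinite.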
In particular, when the “matrices” are $1 \times 1$, this reduces to the usual scalar form of the inequality.
I think this is cute! For one thing, we’ve just defined the outer product to be an inner product!
(The outer product between two $n$-dimensional vectors $u$ and $v$ is the $n \times n$ matrix $u v^T$, while the Euclidean dot product is the scalar $u^T v$.)
Yet since the outer product is transpose symmetric, bilinear, and yields a positive semidefinite matrix $v v^T$ when applied to a single vector, it’s a perfectly good inner product for these purposes. I wonder why Cauchy-Schwarz is more commonly known in the less general inner product form?
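These three properties of the outer product are easy to spot-check numerically (a small numpy sketch; the dimension and the coefficient matrix $A$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
u, v, w = rng.normal(size=(3, 4))
A = rng.normal(size=(4, 4))

def op(a, b):
    return np.outer(a, b)  # [a, b] = a b^T

# Transpose symmetry: [u, v] = [v, u]^T
assert np.allclose(op(u, v), op(v, u).T)

# Bilinearity with matrix coefficients: [A u + w, v] = A [u, v] + [w, v]
assert np.allclose(op(A @ u + w, v), A @ op(u, v) + op(w, v))

# Positive semidefiniteness: v v^T has no negative eigenvalues
assert np.linalg.eigvalsh(op(v, v)).min() >= -1e-12
```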
I’m also intrigued by the geometric connotations of “matrix-valued inner products”. The inner product is an algebraic construction which is geometrically motivated, and so bridges these two aspects of mathematics. The inner product is at the core of geometry and defines:
- length of vectors (from the induced norm $\|v\| = \sqrt{\langle v, v \rangle}$)
- angles between vectors (from $\cos \theta = \langle u, v \rangle / (\|u\| \, \|v\|)$), and in particular orthogonality when $\langle u, v \rangle = 0$
- projections onto sets (by minimizing the norm, or by orthogonality)
So – what would it mean geometrically for a length or an angle to be matrix valued?
I don’t know! But it does occur to me that if you have two ordinary, independent scalar metrics $\langle \cdot, \cdot \rangle_1$ and $\langle \cdot, \cdot \rangle_2$, you can always compose these into a new “matrix-valued metric” $[u, v] = \operatorname{diag}\left(\langle u, v \rangle_1, \langle u, v \rangle_2\right)$. (This is still positive definite in the sense above.) That declares two vectors to be orthogonal when both of the component metrics do: this means that the trace of our matrix, which is the sum of its eigenvalues, will be zero. (If we had used the determinant instead, it would declare orthogonality whenever any of the constituents did.)
Furthermore, the trace of the outer product recovers the inner product: $\operatorname{tr}(u v^T) = u^T v$. In fact, the trace already gives a proper inner product between square matrices, thought of as a vector space: $\langle A, B \rangle = \operatorname{tr}(A^T B)$. So we can squash our matrix-valued inner product back to an ordinary scalar inner product by taking the trace. And if we do this for our diagonal matrix of independent metrics, we recover the usual metric on the product space! It was there all along, but the matrix-valued metric additionally preserves more information about the basis directions along which the vectors agree and disagree.
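Both trace facts are one-liners to confirm numerically (again a small numpy sketch with arbitrary dimensions):

```python
import numpy as np

rng = np.random.default_rng(3)
u, v = rng.normal(size=(2, 6))
A, B = rng.normal(size=(2, 4, 4))

# The trace of the outer product recovers the dot product: tr(u v^T) = u^T v
assert np.isclose(np.trace(np.outer(u, v)), np.dot(u, v))

# tr(A^T B) is the Euclidean inner product of the matrices flattened to vectors
assert np.isclose(np.trace(A.T @ B), np.dot(A.ravel(), B.ravel()))
```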