
Pick functions and operator monotones

Any time you can order mathematical objects, it is productive to ask what operations preserve the ordering. For example, real numbers have a natural ordering, and we have $x \geq y \Rightarrow x^k \geq y^k$ for any odd natural number $k$. If we further impose the assumption $y \geq 0,$ then order preservation holds for $k$ any positive real number.

Self-adjoint operators on a Hilbert space have a natural (partial) order as well. We write $A \geq 0$ for a self-adjoint operator $A$ if we have
$$\langle \psi | A | \psi \rangle \geq 0$$
for every vector $|\psi\rangle,$ and we write $A \geq B$ for self-adjoint operators $A$ and $B$ if we have $(A - B) \geq 0.$ Curiously, many operations that are monotonic for real numbers are not monotonic for matrices. For example, the matrices
$$P = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$$
and
$$Q = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$$
are both self-adjoint and positive, so we have $P+Q \geq P \geq 0$, but a straightforward calculation shows that $(P+Q)^2 - P^2$ is not a positive matrix.
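
For concreteness, here is a quick numerical check of this counterexample; the sketch below just uses numpy to diagonalize the relevant matrices.

```python
import numpy as np

# The matrices from the counterexample above.
P = 0.5 * np.array([[1.0, 1.0],
                    [1.0, 1.0]])
Q = np.array([[0.0, 0.0],
              [0.0, 1.0]])

# P and Q are positive, so P + Q >= P >= 0 ...
print(np.linalg.eigvalsh(P))   # [0. 1.]
print(np.linalg.eigvalsh(Q))   # [0. 1.]

# ... but squaring breaks the order: one eigenvalue is negative.
print(np.linalg.eigvalsh((P + Q) @ (P + Q) - P @ P))   # approx [-0.118, 2.118]
```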

There is a very beautiful theory due to Charles Loewner that explains exactly when a function of matrices preserves order. For any function $f : \mathbb{R} \to \mathbb{R},$ we can define a corresponding function acting on Hermitian matrices by diagonalizing and acting on each eigenvalue individually. Loewner proved first that the only functions that are order-preserving for every pair of Hermitian matrices are the affine functions $f(x) = \alpha x + \beta.$ On its own this is a bit disappointing, but Loewner went on to observe that there are additional functions that are order-preserving for certain restricted families of Hermitian matrices. In particular, he considered the family of functions that are order-preserving for all matrices with eigenvalues contained in some fixed subset of the real line. For example, there are many more functions that satisfy $A \geq B > 0 \Rightarrow f(A) \geq f(B)$ than there are functions that satisfy the same implication without the requirement that $A$ and $B$ are individually positive. Quite generally, Loewner showed the following:

Let $(a,b)$ be an open interval in $\mathbb{R}$. Then in order to have $f(A) \geq f(B)$ for every pair $A \geq B$ of Hermitian matrices with eigenvalues contained in $(a, b)$, it is necessary and sufficient that $f : (a, b) \to \mathbb{R}$ be a Pick function. The function $f : (a, b) \to \mathbb{R}$ is a Pick function if it is real analytic and has an analytic continuation that maps the upper half-plane to itself.

This is a very interesting theorem, as it relates an algebraic structure (matrix order) to an analytic one (holomorphy). The "necessary" part of Loewner's theorem is quite hard to show; this is the part that says any matrix monotone function has an appropriate analytic continuation. But the "sufficient" part is usually what is needed in practice, and the proof is both digestible and instructive. In this post, we will (i) explore the structure of Pick functions and show that they always have interesting integral representations, (ii) use these integral representations to show that Pick functions are monotones for bounded Hermitian operators (going beyond the special case of finite-dimensional matrices), (iii) introduce and study an order relation for unbounded operators, and (iv) explore when Pick functions are monotone for unbounded operators.
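
Before diving in, a quick numerical sanity check of the "sufficient" direction may be reassuring. The sketch below (plain numpy; the helper functions are just for this illustration) draws random ordered pairs $A \geq B \geq 0$ and tests whether $t \mapsto \sqrt{t}$, which is a Pick function on $(0, \infty)$, and $t \mapsto t^2$, which is not, preserve the matrix order.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_psd():
    """A random positive matrix (small ridge added for numerical safety)."""
    X = rng.standard_normal((n, n))
    return X @ X.T + 1e-6 * np.eye(n)

def apply_fn(f, A):
    """Apply f to a symmetric matrix by acting on each eigenvalue."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(f(w)) @ V.T

def fraction_order_preserved(f, trials=200):
    """Fraction of random pairs A >= B >= 0 for which f(A) >= f(B)."""
    count = 0
    for _ in range(trials):
        B = random_psd()
        A = B + random_psd()
        diff = apply_fn(f, A) - apply_fn(f, B)
        count += np.min(np.linalg.eigvalsh(diff)) >= -1e-9
    return count / trials

print(fraction_order_preserved(np.sqrt))    # 1.0: the square root is operator monotone
print(fraction_order_preserved(np.square))  # typically well below 1: squaring is not
```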

I learned this material by studying Schmüdgen's book Unbounded self-adjoint operators on Hilbert space and Simon's book Loewner's theorem on monotone matrix functions.

Prerequisites: For section 1, complex analysis. For section 2, the spectrum of an operator in terms of its resolvents and the spectral theorem in terms of spectral measures. For sections 3 and 4, basic definitions and propositions concerning unbounded operators.

Table of Contents

  1. Pick functions and integral representations
  2. Pick functions as bounded monotones
  3. Order relations for unbounded operators
  4. Pick functions as unbounded monotones

1. Pick functions and integral representations

In this section, we will use the general term Pick function for a holomorphic map from the upper half-plane to itself: $f : \mathbb{C}^+ \to \mathbb{C}^+.$ We will later consider special cases where $f$ has a real analytic limit on some subset of the real line.

We will soon see that any Pick function has an integral representation in terms of a measure on the real line. The measure is related to the imaginary part of the Pick function; consequently, on any part of the real line where the Pick function has a real limit, the measure vanishes. This is why Pick functions can be controlled when they act on operators with spectrum in a certain range: applying the integral representation produces an integral against a measure whose support avoids the operator's spectrum.

To construct an integral representation for a Pick function $f_{\text{plane}},$ we first transform to the unit disk via the conformal mapping
$$z_{\text{disk}} = \frac{z_{\text{plane}} - i}{z_{\text{plane}}+i}.$$
So we have a function
$$f_{\text{disk}}(z_{\text{disk}}) = f_{\text{plane}}(z_{\text{plane}})$$
which maps the interior of the unit disk holomorphically into the upper half-plane. For any radius $r < 1,$ we can use Cauchy's integral formula to write, for $|z_{\text{disk}}| < r,$ the identity
$$f_{\text{disk}}(z_{\text{disk}}) = \frac{1}{2 \pi i} \int_{|\zeta|=r} d\zeta\, \frac{f_{\text{disk}}(\zeta)}{\zeta - z_{\text{disk}}}.$$
We would like to obtain an integral identity for $f_{\text{plane}}$ by taking $r \to 1$ and undoing the conformal map to exchange $z_{\text{disk}}$ for $z_{\text{plane}}.$ But the limit does not necessarily converge, and if it does we have no guarantee that the integrand will vanish on any portion of the boundary of the half-plane (or the boundary of the disk). Instead, it is productive to write out this identity explicitly in terms of a real variable $\theta$, restrict to $z_{\text{disk}}=0,$ and look only at the imaginary part:
$$\text{Im}\left[f_{\text{disk}}(0)\right] = \frac{1}{2 \pi} \int d\theta\, \text{Im}\left[f_{\text{disk}}(r e^{i \theta})\right].$$
This is an integral identity for the imaginary part of $f_{\text{disk}}$ alone; the real part of the function does not appear anywhere. Applying the same identity to the composition of $f_{\text{disk}}$ with a Möbius transformation of the disk that sends $0$ to $z_{\text{disk}},$ and then changing variables, we obtain
$$\text{Im}\left[f_{\text{disk}}(z_{\text{disk}})\right] = \frac{1}{2\pi} \int d\theta\, \frac{r^2 - |z_{\text{disk}}|^2}{|z_{\text{disk}} - r e^{i \theta}|^2} \text{Im}\left[f_{\text{disk}}(r e^{i \theta})\right],$$
then we can rewrite this suggestively as
$$\text{Im}\left[f_{\text{disk}}(z_{\text{disk}})\right] = \text{Im}\left[ \frac{1}{2\pi} \int d\theta\, i \frac{r e^{i \theta}+z_{\text{disk}}}{r e^{i \theta}-z_{\text{disk}}} \text{Im}\left[f_{\text{disk}}(r e^{i \theta})\right]\right].$$
So we have the imaginary part of $f_{\text{disk}}$ written in terms of the imaginary part of a holomorphic function. Two holomorphic functions with the same imaginary part can only differ by a real constant; from this one can easily deduce the expression
$$f_{\text{disk}}(z_{\text{disk}}) = \text{Re}[f_{\text{disk}}(0)] + \frac{1}{2\pi} \int d\theta\, i \frac{r e^{i \theta}+z_{\text{disk}}}{r e^{i \theta}-z_{\text{disk}}} \text{Im}\left[f_{\text{disk}}(r e^{i \theta})\right].$$

Thus far we haven't used the Pick property of $f_{\text{disk}}$ in any way, namely that its image lies in the upper half-plane. It enters in the following way. We write
$$f_{\text{disk}}(z_{\text{disk}}) = \lim_{r \to 1} f_{\text{disk}}(r z_{\text{disk}}) = \text{Re}[f_{\text{disk}}(0)] + \lim_{r \to 1} \frac{1}{2\pi} \int d\theta\, i\, \frac{e^{i \theta} + z_{\text{disk}}}{e^{i \theta} - z_{\text{disk}}} \text{Im}\left[f_{\text{disk}}(r e^{i \theta})\right].$$
For each $r$, the expression
$$d \mu_r(\theta) = \frac{d\theta}{2 \pi} \text{Im}\left[f_{\text{disk}}(r e^{i \theta})\right]$$
defines a measure by the positivity of the imaginary part of $f_{\text{disk}}.$ A general theorem (see Simon's book if you are curious; we won't need it in practice because in any actual example we can do an explicit computation) shows that the $r \to 1$ limit of this measure always exists, and we have a measure $d\mu(\theta)$ on the unit circle with
$$f_{\text{disk}}(z_{\text{disk}}) = \text{Re}[f_{\text{disk}}(0)] + i\, \int d\mu(\theta)\, \frac{e^{i \theta}+z_{\text{disk}}}{e^{i \theta} - z_{\text{disk}}}.$$
Undoing the transformation from the disk back to the plane (the change of variables turns the kernel $i\, \frac{e^{i \theta}+z_{\text{disk}}}{e^{i \theta}-z_{\text{disk}}}$ into $\frac{1+x z_{\text{plane}}}{x - z_{\text{plane}}},$ absorbing the factor of $i$), we obtain
$$f_{\text{plane}}(z_{\text{plane}}) = \text{Re}[f_{\text{plane}}(i)] + \tilde{\mu}(\infty) z_{\text{plane}} + \int_{-\infty}^{\infty} d\tilde{\mu}(x)\, \frac{1+x z_{\text{plane}}}{x - z_{\text{plane}}}.$$
The measure $\tilde{\mu}$ is obtained by transforming the measure $\mu$ from the unit circle to the real line. Generically it may have some contribution from the point at infinity. Away from infinity, we have
$$d \tilde{\mu}(x) = \lim_{\epsilon \to 0} \frac{dx}{\pi(1+x^2)}\text{Im}[f_{\text{plane}}(x+i \epsilon)].$$ 
Dropping all of the superfluous subscripts, we learn that for any Pick function $f : \mathbb{C}^+ \to \mathbb{C}^+,$ we have
$$f(z) = \text{Re}[f(i)] + c z + \int_{-\infty}^{\infty} d\tilde{\mu}(x)\, \frac{1+x z}{x - z}$$
with
$$d \tilde{\mu}(x) = \lim_{\epsilon \to 0} \frac{dx}{\pi(1+x^2)}\text{Im}[f(x+i \epsilon)]$$
 and $c = \tilde{\mu}(\infty) \geq 0$ a nonnegative constant. This is our general integral formula for a Pick function.
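
As a sanity check on this formula, here is a short numerical sketch (using scipy's quad) that evaluates the representation for $f(z) = \log z,$ for which $c = 0,$ $\text{Re}[f(i)] = 0,$ and the measure (computed in the examples below) is $d\tilde{\mu}(x) = dx/(1+x^2)$ supported on $x \leq 0$:

```python
import numpy as np
from scipy.integrate import quad

z = 1.0 + 2.0j   # a point in the upper half-plane

def integrand(x):
    # The kernel (1 + x z)/(x - z) against the measure dx / (1 + x^2) on x < 0.
    return (1.0 + x * z) / ((x - z) * (1.0 + x**2))

re, _ = quad(lambda x: integrand(x).real, -np.inf, 0.0)
im, _ = quad(lambda x: integrand(x).imag, -np.inf, 0.0)

print(re + 1j * im)   # approx 0.8047 + 1.1071j
print(np.log(z))      # the principal-branch logarithm, same value
```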

For a real number $t$ in an open interval where $d \tilde{\mu}$ vanishes, we can take the limit $z \to t$ and obtain the integral identity
$$f(t) = \text{Re}[f(i)] + c t + \int_{-\infty}^{\infty} d\tilde{\mu}(x)\, \frac{1+x t}{x - t}.$$
It is often useful to pick a reference point $t_0$ in the same interval and consider the difference $f(t) - f(t_0)$; combining the two fractions produces a factor of $(t - t_0)(1+x^2)$ in the numerator, so that
$$f(t) = f(t_0) + c (t-t_0) + (t - t_0) \int_{-\infty}^{\infty} d\tilde{\mu}(x)\, \frac{1+x^2}{(x-t)(x-t_0)}.$$
The utility of this expression is that when $t$ and $t_0$ are in the same open interval outside of the support of $d\tilde{\mu}(x),$ the numbers $(x-t)$ and $(x-t_0)$ always have the same sign, so the integrand in this expression is positive. Integrals with positive integrand are much easier to manipulate than general integrals.

Let us conclude with a few examples.

For $f(z) = \log{z},$ it is easy to compute $\tilde{\mu}(\infty) = 0$ and
$$d\tilde{\mu}(x) = \frac{dx}{1+x^2} \Theta(x \leq 0),$$
which gives the integral identity
$$\log{z} = \int_{-\infty}^{0} \frac{dx}{1+x^2} \frac{1+ x z}{x - z},$$
which can be simplified to
$$\log{z} = \int_{0}^{\infty} dx\, \left( \frac{x}{1+x^2} - \frac{1}{x+z} \right).$$
The reference-subtracted formula with $t_0 = 1$ is
$$\log{t} = \int_0^{\infty} dx\, \left( \frac{1}{x+1} - \frac{1}{x+t}\right).$$
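
This identity is easy to check numerically; here is a small scipy sketch comparing the two sides for a few values of $t$:

```python
import numpy as np
from scipy.integrate import quad

# Check the reference-subtracted formula for log, with t0 = 1.
for t in [0.1, 0.5, 2.0, 10.0]:
    val, _ = quad(lambda x: 1.0 / (x + 1.0) - 1.0 / (x + t), 0.0, np.inf)
    print(t, val, np.log(t))   # the last two columns agree
```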

For $f(z) = z^{\alpha}$ with $0 < \alpha < 1,$ one can compute $\tilde{\mu}(\infty) = 0$ and
$$d\tilde{\mu} = \frac{\sin{\pi \alpha}}{\pi} dx\, \frac{(-x)^{\alpha}}{1+x^2} \Theta(x \leq 0).$$
From this one obtains
$$z^{\alpha} = \cos\frac{\pi \alpha}{2} + \frac{\sin{\pi \alpha}}{\pi} \int_{0}^{\infty} dx\, x^{\alpha} \left(\frac{x}{1+ x^2} - \frac{1}{z+x}\right).$$
The reference-subtracted formula in the limit $t_0 \to 0$ is
$$t^{\alpha} = t \frac{\sin{\pi \alpha}}{\pi} \int_0^{\infty} dx\, \frac{x^{\alpha - 1}}{t+x}.$$
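
Again, a short numerical check of the last formula (splitting the integral at $x = 1$ to keep quad happy with the integrable singularity at the origin):

```python
import numpy as np
from scipy.integrate import quad

alpha, t = 0.3, 2.5
v1, _ = quad(lambda x: x**(alpha - 1.0) / (t + x), 0.0, 1.0)
v2, _ = quad(lambda x: x**(alpha - 1.0) / (t + x), 1.0, np.inf)

print(t * np.sin(np.pi * alpha) / np.pi * (v1 + v2))   # should match ...
print(t**alpha)                                         # ... t^alpha
```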

2. Pick functions as bounded monotones

Let $A$ and $B$ be bounded, Hermitian operators with spectra contained in the interval $(a, b).$ Let $f$ be a Pick function with real analytic limit in $(a, b).$ We want to show that $A \geq B$ implies $f(A) \geq f(B).$

Let $|\psi\rangle$ be a generic vector in Hilbert space with unit norm. Because $A$ is Hermitian, there is a spectral measure $d\nu_{\psi, \psi}$ satisfying
$$\langle \psi | f(A) | \psi \rangle = \int d\nu_{\psi, \psi}(t) f(t).$$ 
We can use our Pick function integral representation to write
$$\langle \psi | f(A) |\psi \rangle = \text{Re}[f(i)] + c \langle \psi | A |\psi \rangle + \int d\nu_{\psi, \psi}(t) \int_{-\infty}^{\infty} d\tilde{\mu}(x)\, \frac{1+x t}{x-t}.$$
One can apply Fubini's theorem to switch the order of the integrals, and obtain
$$\langle \psi | f(A) |\psi \rangle = \text{Re}[f(i)] + c \langle \psi | A |\psi \rangle + \int_{-\infty}^{\infty} d\tilde{\mu}(x)\, \langle \psi | (1 + x A)(x - A)^{-1} |\psi \rangle.$$
Applying this also to $f(B)$, we obtain
$$\langle \psi | [f(A) - f(B)] |\psi \rangle = c \langle \psi | (A - B) |\psi \rangle + \int_{-\infty}^{\infty} d\tilde{\mu}(x)\, \langle\psi | [(1 + x A)(x - A)^{-1} - (1 + x B)(x-B)^{-1}] |\psi\rangle.$$
The first term on the RHS is nonnegative because we have $c \geq 0$ and $A \geq B.$ So to show $f(A) \geq f(B)$, it suffices to show the operator inequality
$$(1 + x A) (x - A)^{-1} \geq (1 + x B)(x - B)^{-1}$$
for $x$ in the support of $d\tilde{\mu}.$ Using the formula
$$(1 + x A)(x-A)^{-1} = (1 + x^2)(x - A)^{-1} - x,$$
we see that what we actually need to show is
$$(x - A)^{-1} \geq (x - B)^{-1}.$$

Because we have $A \geq B,$ we have $(x -A) \leq (x - B).$ Because $x$ is outside of the interval where the spectra of $A$ and $B$ are supported, the operators $(x-A)$ and $(x-B)$ are either both positive or both negative. So to show $(x - A)^{-1} \geq (x - B)^{-1},$ it suffices to show that for $S \geq T > 0,$ we have $S^{-1} \leq T^{-1}.$ This is shown using a trick having to do with norms. From $S \geq T,$ we have
$$1 \geq S^{-1/2} T S^{-1/2},$$
which implies
$$1 \geq \lVert S^{-1/2} T S^{-1/2}\rVert = \lVert T^{1/2} S^{-1/2} \rVert^2 = \lVert S^{-1/2} T^{1/2} \rVert^2 = \lVert T^{1/2} S^{-1} T^{1/2} \rVert,$$
from which we obtain
$$1 \geq T^{1/2} S^{-1} T^{1/2},$$
hence
$$T^{-1} \geq S^{-1}.$$

Working backwards through this chain of reductions, we have demonstrated that Pick functions are monotone on bounded Hermitian operators.
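
The two operator inequalities that did the real work, namely the resolvent comparison $(x - A)^{-1} \geq (x - B)^{-1}$ for $x$ outside the spectral interval and the lemma $S \geq T > 0 \Rightarrow T^{-1} \geq S^{-1},$ are easy to test numerically. Here is a small numpy sketch (the helper functions are just for this illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
I = np.eye(n)

def random_psd_with_norm(norm):
    """A random positive matrix rescaled to have the given operator norm."""
    X = rng.standard_normal((n, n))
    S = X @ X.T
    return norm * S / np.linalg.norm(S, 2)

def min_eig(M):
    return np.min(np.linalg.eigvalsh((M + M.T) / 2))

# A >= B with both spectra inside (a, b) = (1, 3).
B = 1.1 * I + random_psd_with_norm(0.8)   # spectrum in [1.1, 1.9]
A = B + random_psd_with_norm(0.5)         # spectrum in [1.1, 2.4]

# For x outside (a, b), on either side, (x - A)^{-1} >= (x - B)^{-1}.
for x in [0.5, 4.0]:
    diff = np.linalg.inv(x * I - A) - np.linalg.inv(x * I - B)
    print(x, min_eig(diff))   # nonnegative up to roundoff

# The lemma: S >= T > 0 implies T^{-1} >= S^{-1}.
T = 0.5 * I + random_psd_with_norm(1.0)
S = T + random_psd_with_norm(1.0)
print(min_eig(np.linalg.inv(T) - np.linalg.inv(S)))   # nonnegative up to roundoff
```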

An interesting subtlety occurs when the spectrum of $A$ or $B$ goes all the way to the endpoints of the interval $(a, b).$ That is, so far we have assumed that $A$ and $B$ have spectra contained within the open interval $(a, b)$; but what if the spectra are only contained in the closed interval $[a, b]$? Part of the argument above then breaks down: the resolvent $(a-A)^{-1}$ is no longer necessarily bounded. In other words, the integrand in the Pick function integral representation can blow up near the endpoints of its support.

Showing how to deal with this subtlety will be instructive for our investigation of unbounded operators in the next section. First let us suppose that the Pick function $f$ is bounded on $(a, b)$; otherwise the operators $f(A)$ or $f(B)$ could be unbounded, and we have not learned how to deal with those. From the integral formula for a Pick function it is easy to see that it is monotonically increasing on $(a, b)$, so if it is bounded, then it has a continuous extension to $[a, b].$

If $\tilde{\mu}$ has no point mass at the endpoints $a$ and $b$, then we can use the reference-subtracted formula to write
$$f(t) = f(t_0) + c (t - t_0) + (t - t_0) \int_{-\infty}^{\infty} d\tilde{\mu}(x)\, \frac{1+x^2}{(x-t)(x-t_0)}$$
and use dominated/monotone convergence theorems to take the limit $t_0 \to a$ and obtain
$$f(t) = f(a) + c(t-a) + \int_{x \notin [a, b]} d\tilde{\mu}(x)\, \frac{(t-a)(1+x^2)}{(x-t)(x-a)}.$$
One can repeat the arguments above and use positivity of the integrand to apply Fubini's theorem even without being able to show convergence a priori; from this one obtains
$$\langle \psi | f(A) | \psi \rangle = f(a) + c \langle \psi| (A - a) |\psi \rangle + \int_{x \notin [a,b]} d\tilde{\mu}(x) \frac{1+x^2}{x-a} \langle\psi|(A-a)(x-A)^{-1} |\psi \rangle,$$
and from this one can show $f(A) \geq f(B)$ from $A \geq B.$

We put in the absence of point masses at the endpoints, $\tilde{\mu}(\{a\}) = \tilde{\mu}(\{b\}) = 0,$ as an assumption, but this is actually guaranteed when $f$ is bounded on $(a,b).$ This is because $f$ is continuous at the endpoints of the interval, so from the integral representation for $f,$ we see that the $t \to a$ and $t \to b$ limits of
$$\int_{-\infty}^{\infty} d\tilde{\mu}(x) \frac{1+x^2}{(x-t)(x-t_0)}$$
must exist (the prefactor $(t - t_0)$ tends to a nonzero constant, so it does not affect this conclusion). But the integrand is positive and blows up at $x=a$ and $x=b$ in the $t \to a$ or $t \to b$ limits, so for the limits to be finite, the measure cannot have point masses at $a$ or $b.$

3. Order relations for unbounded operators

How do we apply the above considerations to unbounded operators? For bounded Hermitian operators, it was easy to say $A \geq B$ if $\langle \psi | A | \psi \rangle \geq \langle \psi | B | \psi \rangle$ for every $|\psi\rangle$ in Hilbert space. But if $A$ and $B$ are unbounded self-adjoint operators, then due to domain issues, there may not be any nontrivial vectors $|\psi\rangle$ for which $\langle \psi | A | \psi \rangle$ and $\langle \psi | B | \psi \rangle$ are both defined. For this reason, any definition of order for unbounded operators must include some statement about the domains.

For completely general unbounded operators, I'm not aware of any good definition. But for bounded below operators, there is one. Let us first restrict to positive self-adjoint operators. What does it mean to say $A \geq B \geq 0$? First, note that the expression $\langle \psi | A | \psi \rangle$ makes sense not just for every $|\psi \rangle$ in the domain of $A$, but for every $|\psi\rangle$ in the (generally larger) domain of $A^{1/2},$ since we can consider the quantity $\langle A^{1/2} \psi | A^{1/2} \psi \rangle.$ We would like $A \geq B \geq 0$ to be some statement like
$$\langle A^{1/2} \psi | A^{1/2} \psi \rangle \geq \langle B^{1/2} \psi | B^{1/2} \psi \rangle \geq 0.$$
We should think of the statement that $|\psi\rangle$ is in the domain of $B^{1/2}$ as equivalent to the statement that $\langle B^{1/2} \psi | B^{1/2} \psi\rangle$ is finite. So our above order can be stated concisely as: $A \geq B \geq 0$ if the domain of $A^{1/2}$ is contained in the domain of $B^{1/2}$, and for any $|\psi\rangle$ in this shared domain we have
$$\langle A^{1/2} \psi | A^{1/2} \psi \rangle \geq \langle B^{1/2} \psi | B^{1/2} \psi \rangle \geq 0.$$
A similar consideration works for any operators $A$ and $B$ that are bounded below, because in this case we can just say $A \geq B \geq -c$ if we have $c + A \geq c + B \geq 0.$

It is very useful to know that for bounded-below operators, the statement $A \geq B \geq -c$ is equivalent to the statement $(s + A)^{-1} \leq (s + B)^{-1}$ for $s > c.$ These operators are bounded, so the sense in which they are ordered is the usual one. In fact, it suffices to check $(s + A)^{-1} \leq (s + B)^{-1}$ for a single $s > c$ to know that it is true for all $s > c$ and also that we have $A \geq B \geq -c$ in the domain sense explained above.
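
The domain statements are the real content here and have no finite-dimensional analogue, but the direction "order implies resolvent order" is easy to illustrate with matrices; a minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
I = np.eye(n)

def random_psd():
    X = rng.standard_normal((n, n))
    return X @ X.T

def min_eig(M):
    return np.min(np.linalg.eigvalsh((M + M.T) / 2))

# A >= B >= 0, so (s + A)^{-1} <= (s + B)^{-1} for every s > 0.
B = random_psd()
A = B + random_psd()

for s in [0.1, 1.0, 10.0]:
    diff = np.linalg.inv(s * I + B) - np.linalg.inv(s * I + A)
    print(s, min_eig(diff))   # nonnegative up to roundoff
```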

To see this, we first note that $(s+A)^{1/2}$ has the same domain as $A^{1/2},$ and so we have $(s + A) \geq (s + B)$ if and only if we have $A \geq B.$ So it suffices to show that for $A \geq B \geq 0$ with $A, B$ invertible, we have $A^{-1} \leq B^{-1}.$ Note that none of these operators is necessarily bounded.

To show this, let $P_n$ be the spectral projection for $A$ on the range $[1/n, n].$ For any vector $|\psi\rangle,$ the vector $P_n |\psi\rangle$ is in the domain of every power of $A$. By the spectral theorem, we have
$$\int \frac{1}{|x|} d\nu^{(A)}_{\psi, \psi}(x) = \lim_{n \to \infty} \langle A^{-1/2} P_n \psi | A^{-1/2} P_n \psi \rangle = \lim_{n \to \infty} \lVert A^{-1/2} P_n |\psi \rangle \rVert^2.$$
We can rewrite this using
$$\lVert A^{1/2} A^{-1} P_n |\psi \rangle \rVert^2 = \langle \psi | A^{-1} P_n \psi \rangle = \langle A^{-1/2} P_n \psi | A^{-1/2} P_n \psi \rangle$$
hence
$$\lVert A^{1/2} A^{-1} P_n |\psi \rangle \rVert^2 = \frac{|\langle \psi | A^{-1} P_n \psi \rangle |^2}{\langle A^{-1/2} P_n \psi | A^{-1/2} P_n \psi \rangle}.$$
So we have
$$\int \frac{1}{|x|} d\nu^{(A)}_{\psi, \psi}(x) = \lim_{n \to \infty} \frac{|\langle \psi | A^{-1} P_n \psi \rangle |^2}{\langle A^{-1/2} P_n \psi | A^{-1/2} P_n \psi \rangle} = \sup_{\chi} \frac{|\langle \psi | \chi \rangle |^2}{\langle A^{1/2} \chi | A^{1/2} \chi \rangle},$$
where the supremum is taken over all $|\chi\rangle$ in the domain of $A^{1/2}.$

The vector $|\psi\rangle$ is in the domain of $A^{-1/2}$ if and only if this expression is finite, in which case it equals $\langle A^{-1/2} \psi | A^{-1/2} \psi \rangle.$ So for $|\psi\rangle$ in the domain of $B^{-1/2}$, we have
$$\langle B^{-1/2} \psi | B^{-1/2} \psi \rangle = \sup_{\chi \in D_{B^{1/2}}} \frac{|\langle \psi | \chi \rangle |^2}{\langle B^{1/2} \chi | B^{1/2} \chi \rangle} \geq \sup_{\chi \in D_{A^{1/2}}} \frac{|\langle \psi | \chi \rangle |^2}{\langle A^{1/2} \chi | A^{1/2} \chi \rangle} = \langle A^{-1/2} \psi | A^{-1/2} \psi \rangle,$$
which gives $B^{-1} \geq A^{-1},$ as desired.
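
The variational formula that powered this argument, $\langle \psi | A^{-1} | \psi \rangle = \sup_{\chi} |\langle \psi | \chi \rangle|^2 / \langle \chi | A | \chi \rangle,$ is also easy to check in finite dimensions, where the supremum is attained at $\chi = A^{-1}|\psi\rangle.$ A small numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6

X = rng.standard_normal((n, n))
A = X @ X.T + 0.1 * np.eye(n)        # positive definite
psi = rng.standard_normal(n)

def ratio(chi):
    """The variational functional |<psi|chi>|^2 / <chi|A|chi>."""
    return (psi @ chi) ** 2 / (chi @ A @ chi)

target = psi @ np.linalg.solve(A, psi)          # <psi| A^{-1} |psi>

print(target, ratio(np.linalg.solve(A, psi)))   # the sup is attained at chi = A^{-1} psi
print(max(ratio(rng.standard_normal(n)) for _ in range(1000)) <= target + 1e-9)  # True
```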

4. Pick functions as unbounded monotones

Now that we know what it means for unbounded operators to be ordered, we can ask whether Pick functions are monotonic when applied to unbounded operators. The answer is sort of. We've only defined an order for bounded-below unbounded operators; consequently, we should only expect to be able to talk about Pick functions being monotonic on unbounded operators when the images of the bounded-below operators under Pick functions are also bounded below. We'll go a little beyond this at the end, but let's start with the simpler case first.

Suppose $f$ is a bounded-below Pick function with real analytic limit on the half-line $(0, \infty).$ (The general case follows just by shifting the lower endpoint of the interval.) Let $A$ and $B$ be unbounded positive self-adjoint operators. We will now show that when we have $A \geq B$ in the sense of unbounded operators, we have $f(A) \geq f(B)$ in the sense of unbounded operators.

The trick is that we have already shown that $A \geq B$ is equivalent to the bounded-operator inequality $(s + A)^{-1} \leq (s + B)^{-1}$ for any (every) $s > 0.$ We also have the reference-subtracted integral relation for the Pick function,
$$f(t) = f(t_0) + c (t-t_0) + \int d\tilde{\mu}(x) \frac{1+x^2}{(x-t)(x-t_0)} (t - t_0),$$
and using the monotone convergence theorem one can take $t_0 \to 0$ to obtain
$$f(t) = f(0) + c t + \int d\tilde{\mu}(x) \frac{1+x^2}{(x-t)x} t.$$
The integrand is positive. For a unit vector $|\psi\rangle,$ we can use the same spectral integral tricks as in section 2 to obtain
$$\int f(\lambda) d\nu^{(A)}_{\psi, \psi}(\lambda) = f(0) + c \int \lambda d\nu^{(A)}_{\psi, \psi}(\lambda) + \int d\tilde{\mu}(x) \frac{1+x^2}{x}  \langle \psi | A (x - A)^{-1} |\psi \rangle.$$
Every term that can diverge is positive, so the integral on the LHS converges if and only if each term on the RHS converges. From $A \geq B$, we clearly have
$$\int \lambda d\nu^{(A)}_{\psi, \psi} \geq \int \lambda d\nu^{(B)}_{\psi, \psi},$$
and the resolvent inequality gives us a similar inequality for the third term on the RHS (using $\frac{A}{x-A} = \frac{x}{x-A} - 1$). Putting all of this together we obtain
$$\int f(\lambda) d\nu^{(A)}_{\psi, \psi}(\lambda) \geq \int f(\lambda) d\nu^{(B)}_{\psi, \psi}(\lambda).$$
So if the LHS converges then the RHS converges as well (recall that $f$ is bounded below), and convergence is related to whether or not $|\psi\rangle$ is in the domain of the appropriate operator. So we have that if $|\psi \rangle$ is in the domain of $f(A)^{1/2},$ then it is in the domain of $f(B)^{1/2},$ and we have $f(A) \geq f(B)$ in the sense of unbounded operators.

Finally, we note that there are interesting cases where the Pick function is not bounded below. For example, what if we want to compare the logarithms of positive operators with spectra that go all the way down to zero? If we have $A \geq B \geq 0$ but the spectra extend continuously to zero, then $\log{A}$ and $\log{B}$ are not bounded below. So our notion of ordering for unbounded operators goes out the window. We can no longer use bounded-below tricks to show that the domain of $|\log{A}|^{1/2}$ is contained in the domain of $|\log{B}|^{1/2},$ and in fact this is not generally true. What is true is that for any $|\psi\rangle$ that is known to be in the domain of both, one can show
$$\int \log{(\lambda)} d\nu^{(A)}_{\psi, \psi}(\lambda) \geq \int \log{(\lambda)} d\nu^{(B)}_{\psi, \psi}(\lambda).$$
But because the integrands are not positive, convergence of the LHS does not guarantee convergence of the RHS, and the inequality is only guaranteed if one can come up with some independent argument that the integrals converge. This causes interesting issues when thinking about e.g. monotonicity of modular Hamiltonians in quantum field theory; one must be careful about domain issues that arise due to this problem of not having guaranteed convergence.
