
Unbounded generators of unitary groups

Stone's theorem tells us when a one-parameter unitary group has a self-adjoint generator. If $U(t)$ is a group --- i.e., it satisfies $U(t_1 + t_2) = U(t_1) U(t_2)$ --- then we can write $U(t) = e^{iHt}$ for some unbounded $H$ if and only if we have $\lim_{t \to t_0} U(t)|\psi\rangle = U(t_0)|\psi\rangle$ for every $|\psi\rangle$ in Hilbert space. This is the condition that $U(t)$ is strongly continuous.
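
Before moving on, it may help to see the ingredients of Stone's theorem in a setting with no domain subtleties. Below is a minimal numerical sketch in finite dimensions (with a hypothetical $2 \times 2$ self-adjoint matrix standing in for $H$), checking the group law, unitarity, and the recovery of the generator from a difference quotient:

```python
# Finite-dimensional sketch of Stone's theorem (hypothetical 2x2 example):
# for self-adjoint H, U(t) = exp(iHt) is a one-parameter unitary group,
# and H is recovered as a difference quotient of U at t = 0.
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 2.0], [2.0, -1.0]])    # self-adjoint generator
U = lambda t: expm(1j * H * t)             # the unitary group U(t) = e^{iHt}

# Group law: U(t1 + t2) = U(t1) U(t2)
assert np.allclose(U(1.0), U(0.3) @ U(0.7))

# Unitarity: U(t)^dag U(t) = 1
assert np.allclose(U(0.5).conj().T @ U(0.5), np.eye(2))

# Recover H from the difference quotient (U(t) - 1) / (it) as t -> 0
eps = 1e-6
H_rec = (U(eps) - np.eye(2)) / (1j * eps)
assert np.allclose(H_rec, H, atol=1e-4)
```

In infinite dimensions the difference quotient only converges on the (dense) domain of $H$, which is where all the subtleties discussed below come from.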

What if we have a unitary group with multiple parameters? Within any one-parameter subgroup we should be able to find a generator. But is there a connection between the generators of different subgroups? Concretely, imagine we have a two-parameter group that forms a representation of $\mathbb{R}^2$. I.e., assume we have
$$U(t_1 + t_2, s_1 + s_2) = U(t_1, s_1)\, U(t_2, s_2).$$
Assume further that the map from $\mathbb{R}^2$ to unitary operators is strongly continuous. Then any one-parameter subgroup has a generator; in particular, we have
$$U(t, 0) = e^{i H_1 t}$$
and
$$U(0, t) = e^{i H_2 t}.$$
But what about the generator of $U(t, t)$? We have some operator $K$ with
$$U(t, t) = e^{i K t}.$$
What is the connection between $K$ and $H_1$ and $H_2$? By looking at the structure of $\mathbb{R}^2$ we expect to be able to write something like $K = H_1 + H_2$, but is this really correct? These operators are all unbounded; how do we know that we can add them together?
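In finite dimensions, where all operators are bounded, the answer is a straightforward yes; here is a numerical sketch (with hypothetical commuting diagonal matrices standing in for $H_1$ and $H_2$; commutativity is forced by the $\mathbb{R}^2$ group law):

```python
# A representation of R^2 in finite dimensions: commuting self-adjoint
# generators H1, H2 (hypothetical choices), with the diagonal subgroup
# U(t, t) generated by K = H1 + H2.
import numpy as np
from scipy.linalg import expm

H1 = np.diag([1.0, -1.0, 2.0])
H2 = np.diag([0.5, 3.0, -2.0])

U = lambda t, s: expm(1j * (H1 * t + H2 * s))

# Group law U(t1+t2, s1+s2) = U(t1, s1) U(t2, s2) holds since [H1, H2] = 0
assert np.allclose(U(0.4, 0.9), U(0.1, 0.2) @ U(0.3, 0.7))

# The generator of the diagonal subgroup t -> U(t, t) is H1 + H2
eps = 1e-6
K = (U(eps, eps) - np.eye(3)) / (1j * eps)
assert np.allclose(K, H1 + H2, atol=1e-4)
```

The entire difficulty of the unbounded case is that "$H_1 + H_2$" is only defined on the intersection of two domains, which could in principle be very small.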

The correct statement is, in fact, $K = \overline{H_1 + H_2}$, where the overline denotes the closure of the unbounded operator $H_1 + H_2$. This means that $K$ can be obtained from the sum of $H_1$ and $H_2$ by taking limits in their shared domain. The fact that $H_1$ and $H_2$ have any vectors in their shared domain is highly nontrivial; it is even stronger to claim that one can learn everything about $K$ by studying its restriction to this shared domain. More generally, if $G$ is a Lie group and $\rho : G \to \mathcal{U}(\mathcal{H})$ is a strongly continuous representation, then for any Lie algebra element $X$ we can study the one-parameter unitary group
$$U_X(t) = \rho(\exp[tX]).$$
This group has a self-adjoint generator $H_X$. The theorem we will prove in this post is that generators compose the way you expect, in that we have
$$H_{X+Y} = \overline{H_X + H_Y}.$$
The fact that $H_X$, $H_Y$, and $H_{X+Y}$ all live together in some representation of a Lie group is crucial in establishing this result.

To proceed, we will first (i) find a dense subspace of Hilbert space on which all of the generators HX can be studied simultaneously, then (ii) establish that the expected relation holds within this subspace, and (iii) show that the main theorem can be proved by taking appropriate limits.

I learned this material by studying chapter 10 of Konrad Schmüdgen's book Unbounded Operator Algebras and Representation Theory, and filling in some gaps with my own calculations.

Prerequisites: Basics of unbounded operators; in particular, it will be very helpful to know what the closure of an operator is before trying to read this post, though I will explain it as it arises.

Table of contents

  1. Smooth vectors
  2. Proving the main theorem

1. Smooth vectors

Let $G$ be a Lie group and $\rho : G \to \mathcal{U}(\mathcal{H})$ a strongly continuous unitary representation. Because $G$ is a Lie group, it always has a left Haar measure $\mu$. To obtain vectors in Hilbert space that are "well behaved" with respect to $G$, we will start with a generic vector $|\psi\rangle$ and smear it with the action of $G$ using the Haar measure. For any compactly supported, smooth function $f : G \to \mathbb{C}$, we will define
$$|\psi_f\rangle = \int_G d\mu(g)\, f(g)\, \rho(g)\, |\psi\rangle.$$
This integral is defined in the sense of the Bochner integral.

Any vector constructed this way has the property that the map $g \mapsto \rho(g)|\psi_f\rangle$ is a smooth map from $G$ to $\mathcal{H}$. Any vector $|\phi\rangle$ for which the map $g \mapsto \rho(g)|\phi\rangle$ is smooth is called a smooth vector; we have constructed a broad class of examples above. In fact, by taking a sequence of functions $f_n$ that approaches a delta function at the identity of $G$, one can show $|\psi_{f_n}\rangle \to |\psi\rangle$. From this we see that every vector in Hilbert space is arbitrarily well approximated by smooth vectors, so the space of smooth vectors is dense in Hilbert space.
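For intuition, consider the simplest case $G = \mathbb{R}$ acting on $L^2(\mathbb{R})$ by translation, where smearing is just convolution. The sketch below (a discretized toy model of my own, not a rigorous construction) mollifies a step function with Gaussians of shrinking width: each smeared vector is smooth, and they converge back to the original in $L^2$:

```python
# Toy model of smearing for G = R acting by translation: |psi_f> is the
# convolution f * psi. Narrowing Gaussian mollifiers f_n approach a delta
# function, and the smooth vectors psi_{f_n} converge to psi in L^2.
import numpy as np

x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
psi = (x > 0).astype(float)          # a non-smooth vector (step function)

def smear(v, width):
    f = np.exp(-x**2 / (2 * width**2))
    f /= f.sum() * dx                # normalized mollifier, integral = 1
    return np.convolve(v, f, mode="same") * dx

psi_n = [smear(psi, w) for w in (0.5, 0.1, 0.02)]

# L^2 distance to the original shrinks as the mollifier narrows
errs = [np.sqrt(np.sum((p - psi)**2) * dx) for p in psi_n]
assert errs[0] > errs[1] > errs[2]
```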

Now consider an arbitrary Lie algebra element $X$, and let $H_X$ be the generator of the unitary group $e^{i H_X t} = U_X(t) = \rho(\exp[tX])$. While this is an unbounded operator, we will show that every smooth vector is in its domain.

To see this, we note from Stone's theorem that the domain of $H_X$ consists of all those vectors $|\psi\rangle$ for which the limit
$$H_X|\psi\rangle = \lim_{t \to 0} \frac{U_X(t)|\psi\rangle - |\psi\rangle}{it}$$
exists. If $|\psi\rangle$ is a smooth vector, then the map $t \mapsto U_X(t)|\psi\rangle$ is smooth, so this limit exists and is just the derivative of that map at $t = 0$.

So far we have shown that the space of smooth vectors lies simultaneously in the domain of every $H_X$. We also know that the space of smooth vectors is dense in Hilbert space. This makes it sound as though, to understand each $H_X$, it is sufficient to study the restriction of $H_X$ to the space of smooth vectors. But this is not necessarily true!

For an unbounded operator $T$ with domain $D$, we say its closure (if it exists) is the operator $\overline{T}$ with domain $\overline{D}$, where $\overline{D}$ consists of all vectors $|\psi\rangle$ that can be obtained as the limit of a sequence $|\psi_n\rangle \in D$ such that $T|\psi_n\rangle$ also converges. For such a vector, $\overline{T}$ is defined by
$$\overline{T}|\psi\rangle = \lim_{n \to \infty} T|\psi_n\rangle.$$
In other words, the closure of an operator is obtained by studying limits of sequences for which the image sequence is convergent.
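A concrete example (my own, not from Schmüdgen) may help: take $T$ to be the "number operator" on $\ell^2(\mathbb{N})$, defined initially only on finitely supported sequences.

```latex
% The number operator on l^2(N), defined on finitely supported sequences:
\[
  (T\psi)_n = n\,\psi_n, \qquad
  D = \{\psi \in \ell^2(\mathbb{N}) : \psi_n = 0
        \text{ for all but finitely many } n\}.
\]
% Truncating any psi to its first k entries gives a sequence psi^(k) in D
% converging to psi, and the image sequence T psi^(k) converges exactly when
\[
  \sum_n n^2 |\psi_n|^2 < \infty,
\]
% so the closure acts by the same formula on this strictly larger domain:
\[
  \overline{D} = \Big\{\psi \in \ell^2(\mathbb{N}) :
                       \sum_n n^2 |\psi_n|^2 < \infty\Big\},
  \qquad (\overline{T}\psi)_n = n\,\psi_n.
\]
```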

Given a closed operator $S$ with domain $D$, and a dense sub-domain $\tilde{D}$, the identity
$$S = \overline{S|_{\tilde{D}}}$$
may or may not be true depending on the details of $\tilde{D}$. If it is true, then $\tilde{D}$ is said to be a core for $S$, since anything you want to know about $S$ can be understood by taking limits of its restriction to $\tilde{D}$.

What we will now show is that for each generator $H_X$, the space of smooth vectors is a core. To see this, let $f_n$ be a sequence of compactly supported smooth functions on $G$ that limits to a delta function at the identity element. As mentioned above, we have $|\psi_{f_n}\rangle \to |\psi\rangle$ for any $|\psi\rangle$, and each $|\psi_{f_n}\rangle$ is a smooth vector. The action of $H_X$ on this sequence is given by
$$H_X|\psi_{f_n}\rangle = \lim_{t \to 0} \frac{U_X(t)|\psi_{f_n}\rangle - |\psi_{f_n}\rangle}{it},$$
or
$$H_X|\psi_{f_n}\rangle = \lim_{t \to 0} \frac{1}{it} \left( \int_G d\mu(g)\, f_n(g)\, \big(\rho(\exp[tX]) - 1\big)\, \rho(g)\, |\psi\rangle \right).$$
When $|\psi\rangle$ is in the domain of $H_X$, the limit can be taken inside the integral to obtain
$$H_X|\psi_{f_n}\rangle = \int_G d\mu(g)\, f_n(g)\, H_X\, \rho(g)\, |\psi\rangle,$$
and since $f_n$ approaches a delta function at the identity, we can deduce
$$\lim_{n \to \infty} H_X|\psi_{f_n}\rangle = H_X|\psi\rangle.$$
We also had $|\psi_{f_n}\rangle \to |\psi\rangle$, and each $|\psi_{f_n}\rangle$ is smooth, so this completes the claim that the smooth vectors form a core for $H_X$.
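The interchange of limit and integral can be made plausible in the translation toy model again: for $G = \mathbb{R}$ acting on $L^2(\mathbb{R})$, the generator is (up to a factor of $-i$) differentiation, smearing commutes with it, and the smeared derivatives converge back to the derivative. A discretized sketch (my toy model, not a proof):

```python
# Toy check of the core argument for G = R acting by translation, where
# H = -i d/dx: smearing commutes with H (derivative of a convolution equals
# convolution of the derivative), and H psi_{f_n} -> H psi as f_n -> delta.
import numpy as np

x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2)              # a vector in the domain of H
dpsi = np.gradient(psi, dx)      # H psi, up to the factor -i

def smear(v, width):
    f = np.exp(-x**2 / (2 * width**2))
    f /= f.sum() * dx            # normalized mollifier
    return np.convolve(v, f, mode="same") * dx

# Derivative and smearing commute: H psi_f = (H psi)_f
for w in (0.5, 0.1):
    assert np.allclose(np.gradient(smear(psi, w), dx), smear(dpsi, w),
                       atol=1e-3)

# (H psi)_{f_n} -> H psi as the mollifier narrows
errs = [np.max(np.abs(smear(dpsi, w) - dpsi)) for w in (0.5, 0.1, 0.02)]
assert errs[0] > errs[1] > errs[2]
```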

2. Proving the main theorem

Continuing with the notation of the previous section, suppose that $X$ and $Y$ are two Lie algebra elements of $G$, with $H_X$ and $H_Y$ the corresponding self-adjoint generators. We want to show $H_{X+Y} = \overline{H_X + H_Y}$. We know that the space of smooth vectors is a core for each of these operators individually, so we have
$$H_{X+Y} = \overline{H_{X+Y}|_{\text{smooth}}}.$$
For a smooth vector $|\psi\rangle$, we clearly have
$$H_{X+Y}|\psi\rangle = \lim_{t \to 0} \frac{\rho(\exp[t(X+Y)]) - 1}{it}|\psi\rangle.$$
This is the derivative at $t = 0$ of the smooth function $t \mapsto \rho(\exp[t(X+Y)])|\psi\rangle$, which can be computed using continuity of $\rho$ and the derivative of the exponential map to obtain
$$H_{X+Y}|\psi\rangle = H_X|\psi\rangle + H_Y|\psi\rangle.$$
So the fundamental identity we want to prove holds on smooth vectors, and we have
$$H_{X+Y} = \overline{H_{X+Y}|_{\text{smooth}}} = \overline{(H_X + H_Y)|_{\text{smooth}}}.$$

So $H_{X+Y}$ is the closure of a restriction of $H_X + H_Y$; to show that it is the closure of $H_X + H_Y$ itself, it suffices to show that the domain of $H_{X+Y}$ contains the full domain of $H_X + H_Y$.
To show this, we note that for $|\phi\rangle$ in the domain of $H_X + H_Y$ and $|\psi\rangle$ a smooth vector, we can use the self-adjointness of $H_X$ and $H_Y$ individually to obtain
$$\langle \phi | (H_X + H_Y) | \psi \rangle = \langle (H_X + H_Y) \phi | \psi \rangle.$$
This implies that $|\phi\rangle$ is in the domain of the adjoint of $(H_X + H_Y)|_{\text{smooth}}$; but since this operator has the self-adjoint closure $H_{X+Y}$, the domain of its adjoint equals the domain of its closure, so $|\phi\rangle$ is in the domain of $H_{X+Y}$, as desired.
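
As a closing sanity check (my own, in finite dimensions where every operator is bounded and no closures are needed), the additivity of generators holds even when $X$ and $Y$ fail to commute, so that the one-parameter groups themselves do not compose. With anti-Hermitian matrices built from Pauli matrices:

```python
# H_{X+Y} = H_X + H_Y in a finite-dimensional representation, even for
# non-commuting Lie algebra elements X, Y (anti-Hermitian matrices built
# from Pauli matrices, so each exp(tZ) is unitary).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
X, Y = 1j * sx, 1j * sy

def generator(Z, eps=1e-6):
    # H_Z from the difference quotient (U_Z(t) - 1) / (it) at t = 0
    return (expm(eps * Z) - np.eye(2)) / (1j * eps)

# The one-parameter groups do NOT compose: exp(t(X+Y)) != exp(tX) exp(tY)...
t = 0.8
assert not np.allclose(expm(t * (X + Y)), expm(t * X) @ expm(t * Y))

# ...but the generators still add: H_{X+Y} = H_X + H_Y
assert np.allclose(generator(X + Y), generator(X) + generator(Y), atol=1e-4)
```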
