% Keywords: Adiabatic Approximation, Exponential Asymptotics
\documentclass[12pt,fleqn]{article} \textheight=9in
\textwidth=6.5in
\topmargin=-.75in \oddsidemargin=0mm
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\renewcommand{\baselinestretch}{1}
\newcommand{\ra}{\rightarrow}
\newcommand{\bra}{\langle} \newcommand{\ket}{\rangle}
\newcommand{\be}{\begin{equation}}
\newcommand{\ee}{\end{equation}}
\newcommand{\bea}{\begin{eqnarray}}
\newcommand{\eea}{\end{eqnarray}}
\newcommand{\eps}{\epsilon}
\newcommand{\E}{\mbox{e}}
\newcommand{\e}{\mbox{\scriptsize e}}
\newcommand{\ffi}{\varphi}
\newcommand{\sign}{\mbox{sign}}
%\newcommand{\ep}{\hfill {$\Box$}}
\newcommand{\ep}{\qquad {\vrule height 10pt width 8pt depth 0pt}}
\newcommand{\ode}{{\cal O}}
\newcommand{\w}{\cal A}
\newcommand{\z}{\cal B}
\newcommand{\grintl}{[\kern-.14em [}
\newcommand{\grintr}{]\kern-.14em ]}
\newcommand{\ds}{\displaystyle}
\newtheorem{lem}{Lemma}[section]
\newtheorem{prop}{Proposition}[section]
\newtheorem{thm}{Theorem}[section]
\newtheorem{cor}{Corollary}[section]
\renewcommand{\thefootnote}{\alph{footnote}}
\def\R{\hbox{$\mit I$\kern-.277em$\mit R$}}
\def\C{\hbox{$\mit I$\kern-.6em$\mit C$}}
\def\un{\hbox{$\mit I$\kern-.77em$\mit I$}}
\def\0{\hbox{$\mit I$\kern-.70em$\mit O$}}
%\def\r{\mbox{\bf \scriptsize R}}
\def\r{I\kern-.277em R}
\def\z{\mbox{\bf \scriptsize Z}}
\def\N{\mbox{\bf N}}
\begin{document}
\title{Elementary Exponential Error Estimates for the Adiabatic
Approximation}
\author{George Hagedorn\thanks{Partially
Supported by National Science Foundation
Grant DMS--0071692.} \\ Center for Statistical Mechanics and Mathematical
Physics \\
Virginia Polytechnic Institute and State
University \\ Blacksburg, Virginia 24061-0123, U.S.A.\\ \\
Alain Joye\\ Institut Fourier\\ Unit\'e Mixte de Recherche CNRS-UJF
5582 \\ Universit\'e de Grenoble I \\ 38402 Saint-Martin-d'H\`eres
Cedex, France}
\maketitle
\begin{abstract}
We present an elementary proof that the quantum adiabatic approximation
is correct up to exponentially small errors for Hamiltonians that depend
analytically on the time variable.
Our proof uses optimal truncation of a straightforward asymptotic
expansion. We estimate the terms of the expansion
with standard Cauchy estimates.
\end{abstract}
\newpage
\section{Introduction}
\vskip .5cm
The adiabatic theorem of quantum mechanics describes the
asymptotic behavior of solutions to the time-dependent Schr\"odinger
equation when the Hamiltonian depends slowly on the time variable.
By rescaling the time variable by a factor of $\epsilon$, which measures
the slowness of the Hamiltonian's variation, the problem is usually
restated in the following way:
Let $\{ H(t)\}_{t\in \r}$ be a smooth family of self-adjoint
operators that satisfies the following gap condition:
$H(t)$ possesses a smooth non-degenerate eigenvalue $E(t)$ that is
isolated from the rest of the spectrum for all times. Then the solution to
\be\label{schro}
i\,\epsilon\,\frac{\partial\psi}{\partial t}\ =\ H(t)\,\psi ,\qquad
\mbox{for small}\quad \epsilon>0,
\ee
with initial condition $\psi(t_{0})$ in the eigenspace associated
with $E(t_{0})$, will evolve to a state $\psi(t)$ that belongs to
the eigenspace associated with $E(t)$,
up to an $O(\epsilon)$ error as $\epsilon\ra 0$.
The square of the norm of the orthogonal projection
of $\psi(t)$ onto the complement of the instantaneous eigenspace
defines the non-adiabatic transition probability.
According to the adiabatic theorem, it is of order $\epsilon^{2}$.
The statement that solutions to (\ref{schro}) follow the instantaneous
eigenspaces of the Hamiltonian was made as early as 1928 by Born and Fock
in \cite{bf} for discrete, non-degenerate
Hamiltonians, and was generalized over the years by several authors.
A few milestones in the history of the adiabatic theorem were the following:
Kato \cite{k} proved the theorem for Hamiltonians
with a non-degenerate eigenvalue separated from the rest of the spectrum,
without any assumption on the nature of the rest of the spectrum.
In \cite{n1}, Nenciu showed that the adiabatic theorem holds for bounded
Hamiltonians if one replaces the isolated eigenvalue $E(t)$ by an
isolated component of the spectrum $\sigma (t)$ and the instantaneous eigenspace associated with $E(t)$ by the instantaneous spectral
subspace associated with $\sigma(t)$. This result was further
generalized to unbounded Hamiltonians by Avron, Seiler
and Yaffe in \cite{asy}.
At the same time that the adiabatic theorem was qualitatively generalized
to handle more situations, it was quantitatively
improved to compute transition probabilities more
accurately. For discrete Hamiltonians Lenard \cite{l} and Garrido \cite{g}
developed techniques that provide
asymptotic expressions for certain solutions to (\ref{schro}),
with $O(\epsilon^{\infty})$ error estimates. These techniques have
been generalized by Nenciu and Rasche \cite{n2,nr,n3} to fit
the general setting described above. Typical results say that
if all time derivatives of the Hamiltonian at both the initial
and final times are zero, then the transition probability is
$O(\epsilon^{\infty})$.
When these
derivatives are non--zero, there exists a smooth $\epsilon$--dependent
subspace, close to the instantaneous spectral subspace, to which certain
solutions belong, up to $O(\epsilon^{\infty})$ errors.
A scattering theory analog was proved in \cite{asy}, where the derivatives
were assumed to vanish as $t\rightarrow \pm \infty$.
When the Hamiltonian is an analytic function of time,
one expects the transition probability to be exponentially small
in the scattering context mentioned above. This is suggested by
non--rigorous analyses of $2\times 2$ matrix Hamiltonians,
certain explicitly solvable models (see \cite{z,lan,d}),
and the success of the Landau--Zener formula.
It was proved to be true in a general setting
only recently by Joye and Pfister in \cite{jp1}.
Earlier works proved it for matrix Hamiltonians \cite{f,hp,jkp}
or for discrete Hamiltonians with special time dependence
\cite{js}. These papers solve
(\ref{schro}) for complex values of the time
variable along carefully chosen paths in the complex plane.
Subsequent works on the exponential accuracy of the adiabatic approximation
have been performed by Nenciu \cite{n4} who proved the existence
of ``superadiabatic evolution operators,'' {\it i.e.},
exponentially accurate approximations of the
evolution generated by (\ref{schro}).
Superadiabatic evolutions were first
introduced and studied in the physics literature
by Berry and Lim \cite{b1,b2}.
Joye and Pfister \cite{jp2,j} used superadiabatic evolutions to set up
a reduction theory and to prove the Landau--Zener formula. Their
method of proof was to derive asymptotic expansions of the evolution
operator and to control them sufficiently well that
optimal truncation would yield exponential accuracy.
Exponential accuracy of the adiabatic
theorem was also tackled using powerful pseudo-differential operator
techniques by Sj\"ostrand \cite{sj} and by Martinez \cite{m} who
studied the exponential decay rate of the transition probability as a
function of the parameters of the problem using this method.
Further details and results on other aspects of the adiabatic theorem
can be found in the references quoted in the recent reviews
\cite{iamp} and \cite{ae}.
In the present paper we provide an elementary proof of the exponential
accuracy of the adiabatic theorem, in the spirit of the results
concerning superadiabatic evolutions for Hamiltonians that have
a non--degenerate isolated eigenvalue.
Our result is neither new nor the most general, but our proof uses
only simple techniques of elementary analysis.
Our approach is to use a straightforward asymptotic expansion \cite{ghold}
for the solution to (\ref{schro}).
We estimate the individual terms in the expansion by using Cauchy estimates.
From this, it follows that when the expansion is truncated after an optimal
number of terms, the resulting approximation is exponentially accurate.
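The mechanism behind optimal truncation can be seen in a model
computation (for illustration only): suppose the $n$th term of the
expansion is bounded by $C\,(K\,n)^{n}\,\epsilon^{n}$ for some constants
$C,K>0$, which is the form the Cauchy estimates produce. Truncating at
$N=\grintl g/\epsilon\grintr$ with $0<g\le 1/(e\,K)$ yields
$$
C\,(K\,N)^{N}\,\epsilon^{N}\ \le\ C\,(K\,g)^{N}\ \le\ C\,e^{-N}
\ \le\ C\,e\ e^{-g/\epsilon},
$$
since $N\,\epsilon\le g$ and $N\ge g/\epsilon-1$. Thus the last term
retained, and with it the truncation error, is exponentially small
in $1/\epsilon$.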
%
%Finally, we get
%or exponentially accurate approximation by optimally truncating
%the asymptotic series at its least term, a procedure also known as
%the ``astronomers' rule''.
\subsection{The Main Results}
We assume two hypotheses. The first one states that
the (possibly unbounded) Hamiltonian is analytic in an appropriate
sense in a neighborhood of the real axis.
\vspace{.3cm}\noindent
{\bf H1}:\quad Let $\{ H(t)\}_{t\in \r}$ be a family of self-adjoint operators
in a separable Hilbert space ${\cal H}$ with common dense domain
$D\subset {\cal H}$. We assume that $\{ H(t)\}_{t\in \r}$ admits
an extension to the set
$S_{\delta_{0}}=\{t\in\C \,:\,|\mbox{Im}\,t|<\delta_{0}\}$
which forms an analytic family of type A.
\vspace{.3cm}
The second hypothesis asserts the existence of a non-degenerate
eigenvalue in the spectrum of $H(t)$ for all times.
\vspace{.3cm}\noindent
{\bf H2}:\quad For $t\in \R$, let $E(t)$ be a simple eigenvalue
of $H(t)$ that remains a distance $d(t)>d_{0}>0$ away from the rest of
the spectrum of $H(t)$.
\vspace{.3cm}
We let $\grintl x\grintr$ denote the greatest integer less than or equal
to $x$, and let $\phi^{\perp}(t)$ denote the projection of any
vector $\phi(t)$ onto the orthogonal complement of the instantaneous
eigenspace associated with $E(t)$.
\vskip .2cm
Our main result is the following:
\vskip .2cm
\begin{thm}\label{mr}
Assume hypotheses H1 and H2. Then, for all $t\in \R$, there exists a sequence
$\{\psi_{n}(t)\}_{n\in {\bf N}}$ of vectors in ${\cal H}$
that is determined by
an explicit recurrence relation. For each $N\in\N$, we construct
$$
\Psi_{N}(t,\epsilon)\ =\ e^{-i\int_{t_{0}}^{t}\,E(s)\,ds/\epsilon}\,
\left(\,\psi_{0}(t)\,+\,\epsilon \psi_{1}(t)\,+\,\cdots\,+\,
\epsilon^{N}\psi_{N}(t)\,+\,\epsilon^{N+1}\psi^{\perp}_{N+1}(t)\,\right).
$$
For any $t_0$ and $t$ in an arbitrary compact interval of $\R$,
there exist positive $G$, $C(g)$, and $\Gamma(g)$ (given in
(\ref{constants})), such that for all $g\in (0,\,G)$,
the vector
$\Psi_{*}(t,\eps)\,=\,\Psi_{\grintl g/\epsilon\grintr}(t,\epsilon)$
satisfies
$$\|\psi(t,\eps)\,-\,\Psi_{*}(t,\eps)\|\ \leq\ C(g)\
e^{-\Gamma(g)/\epsilon},$$
for all $\epsilon\leq 1$.
Here $\psi(t,\eps)$ is the exact solution to
the Schr\"odinger equation (\ref{schro})
with initial condition $\psi(t_{0},\eps)\,=\,\Psi_{*}(t_{0},\eps)$.
\end{thm}
\vskip .3cm \noindent
{\bf Remarks}:\quad {\bf 1.}
By keeping track of how $\Gamma(g)$ depends on the
minimum gap $d_{0}$, we recover the expected behavior
$\Gamma(g)\simeq d_{0}$ as $d_{0}\ra\infty $. See
\cite{jp2} and \cite{m}.\newline
{\bf 2.} The theorem implies that the range of the
projector
$\ds \frac{|\Psi_{*}(t,\eps)\ket\,\bra \Psi_{*}(t,\eps)|}
{\|\Psi_{*}(t,\eps)\|^{2}}$
is a smooth subspace that solutions follow,
up to $O(e^{-\Gamma(g)/\epsilon})$ errors.\newline
{\bf 3.} At the cost of some more technicalities, it is possible to get
the same result when the analyticity and gap hypotheses only hold
in a neighborhood of some bounded interval of the real axis, or when
$t$ and $t_{0}$ tend to minus and plus infinity, respectively.
\vspace{.3cm}
The rest of the paper is devoted to the proof of Theorem \ref{mr}.
\vskip .5cm
\section{Adiabatic Expansion in Powers of $\epsilon$}\label{adiabat}
\setcounter{equation}{0}
\vskip .5cm
In this section we develop the expansion in powers of $\epsilon$ for certain
solutions to the evolution determined by
\be\label{scheqn}
i\,\epsilon\,\frac{\partial\psi}{\partial t}\ =\ H(t)\,\psi.
\ee
We assume H1 and H2 so that the resolvent of $H(t)$ and the isolated
eigenvalue $E(t)$ of multiplicity $1$ are $C^\infty$ in $t\in \R$.
Without loss of generality, we assume the initial time is $t_{0}=0$.
We prove that $\psi(t,\eps)$ has an expansion of the form
\be\label{expansion}
\psi(t,\eps)\ =\ e^{-i\int_0^t\,E(s)\,ds/\epsilon}\,\left(\,\psi_0(t)\,+\,
\epsilon\,\psi_1(t)\,+\,\epsilon^2\,\psi_2(t)\,+\,\dots\,\right).
\ee
We choose $\Phi(t)$ to be a smooth normalized eigenvector of $H(t)$ corresponding to
$E(t)$, and we assume its phase has been chosen so that
\be\label{phasechoice}
\langle\,\Phi(t),\,\Phi'(t)\,\rangle\ =\ 0
\ee
for each $t$. The existence of such an eigenvector follows {\it e.g.}, from
Problem 15, Chapter XII of \cite{rs4}.
We substitute the expression (\ref{expansion}) into (\ref{scheqn}) and equate
terms on the two sides of the resulting equation that are formally of the same
orders in $\epsilon$.
\vskip .5cm
\noindent {\bf Order $0$.}\qquad The terms of order zero require
$$\left[\,H(t)-E(t)\,\right]\,\psi_0(t)\ =\ 0.$$
This equation forces us to take
\be\label{psi0}
\psi_0(t)\ =\ f_0(t)\,\Phi(t),
\ee
for some yet to be determined function $f_0(t)$.
\vskip .5cm
\noindent {\bf Order $1$.}\qquad The terms of order $\epsilon$ require
$$i\,\frac{\partial\psi_0}{\partial t}\ =\
\left[\,H(t)-E(t)\,\right]\,\psi_1(t).$$
From (\ref{psi0}) this implies
$$i\,\frac{\partial f_0}{\partial t}(t)\,\Phi(t)\,+\,
i\,f_0(t)\,\frac{\partial \Phi}{\partial t}(t)\ =\
\left[\,H(t)-E(t)\,\right]\,\psi_1(t).$$
We solve this equation by separately examining those components of this
equation that are multiples of $\Phi(t)$ and those that are perpendicular to
$\Phi(t)$. Using (\ref{phasechoice}), we thus obtain two conditions:
\be\label{psi1a}
i\,\frac{\partial f_0}{\partial t}(t)\ =\ 0,
\ee
and
\be\label{psi1b}
i\,f_0(t)\,\frac{\partial \Phi}{\partial t}(t)\ =\
\left[\,H(t)-E(t)\,\right]\,\psi_1(t).
\ee
Equation (\ref{psi1a}) requires that $f_0$ be constant, and without loss of
generality, we choose it to be
\be\label{f0}
f_0(t)\ =\ 1.
\ee
Equation (\ref{psi1b}) then forces us to choose
\be\label{psi1}
\psi_1(t)\ =\ f_1(t)\,\Phi(t)\,+\,\psi_1^\perp(t),
\ee
where $f_1$ is yet to be determined, and
\be\label{psi1perp}
\psi_1^\perp(t)\ =\ i\,[\,H(t)-E(t)\,]_r^{-1}\,\Phi'(t).
\ee
In this expression we have used $[\,H(t)-E(t)\,]_r^{-1}$ to denote the reduced
resolvent operator of $H(t)$ on the orthogonal complement of the span of
$\Phi(t)$.
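As a consistency check, note that (\ref{psi1perp}) does solve
(\ref{psi1b}) with $f_0(t)=1$: since
$[\,H(t)-E(t)\,]\,[\,H(t)-E(t)\,]_r^{-1}\,=\,P_\perp(t)$, where
$P_\perp(t)=I-|\Phi(t)\rangle\,\langle\Phi(t)|$, we have
$$
[\,H(t)-E(t)\,]\,\psi_1^\perp(t)\ =\ i\,P_\perp(t)\,\Phi'(t)
\ =\ i\,\Phi'(t),
$$
the last equality following from the phase condition
(\ref{phasechoice}); the component $f_1(t)\,\Phi(t)$ of $\psi_1(t)$ is
annihilated by $H(t)-E(t)$.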
\vskip .5cm
\noindent {\bf Order $n\ge 2$.}\qquad We assume inductively that
we have solved the equations of order $j\le n-1$ to obtain
\be\label{psij}
\psi_j(t)\ =\ f_j(t)\,\Phi(t)\,+\,\psi_j^\perp(t).
\ee
Here, the scalar
function $f_j$ has been determined for $j\le n-2$, and the vector--valued
function $\psi_j^\perp$ has been determined for $j\le n-1$.
Equating terms of order $n$ requires
$$i\,\frac{\partial\psi_{n-1}}{\partial t}\ =\
\left[\,H(t)-E(t)\,\right]\,\psi_n(t).$$
From (\ref{psij}) this implies
$$i\,\frac{\partial f_{n-1}}{\partial t}(t)\,\Phi(t)\,+\,
i\,f_{n-1}(t)\,\frac{\partial \Phi}{\partial t}(t)\,+\,
i\,\frac{\partial \psi_{n-1}^\perp}{\partial t}(t)
\ =\ \left[\,H(t)-E(t)\,\right]\,\psi_n(t).$$
Using (\ref{phasechoice}), we separately examine the components of this
equation that are multiples of $\Phi(t)$ and those that are perpendicular to
$\Phi(t)$ to obtain two conditions:
\be\label{psina}
i\,\frac{\partial f_{n-1}}{\partial t}(t)\,+\,
i\,\langle\,\Phi(t),\,\frac{\partial \psi_{n-1}^\perp}{\partial t}(t)\,\rangle
\ =\ 0,
\ee
and
\be\label{psinb}
i\,f_{n-1}(t)\,\frac{\partial\Phi}{\partial t}(t)\,+\,
i\,P_\perp(t)\,\frac{\partial\psi_{n-1}^\perp}{\partial t}(t)
\ =\
\left[\,H(t)-E(t)\,\right]\,\psi_n(t),
\ee
where $P_\perp(t)\,=\,I\,-\,|\Phi(t)\rangle\,\langle\Phi(t)|$.
Equation (\ref{psina}) is solved simply by integration.
It determines $f_{n-1}$
up to a constant of integration that we take to be zero:
\be\label{fn}
f_{n-1}(t)\ =\ -\,\int_0^t\,
\langle\,\Phi(s),\,\frac{\partial\psi_{n-1}^\perp}{\partial t}(s)\,\rangle
\,ds.\qquad\qquad (n\ge 2)
\ee
Equation (\ref{psinb}) determines $\psi_n^\perp$ to be
\be\label{psinperp}
\psi_n^\perp(t)\ =\ i\,[\,H(t)-E(t)\,]_r^{-1}\,\left(\,f_{n-1}(t)\,\Phi'(t)\,+\,
P_\perp(t)\,\frac{\partial\psi_{n-1}^\perp}{\partial t}(t)\,\right).
\ee
We have thus determined $f_{n-1}$ and $\psi_n^\perp$, and the induction can
proceed.
\vskip .5cm
By using Lemma 2.1 of \cite{ghold}, we now easily prove that
\bea\nonumber
\Psi_N(t,\eps)\!
&=&\!e^{-i\int_0^t\,E(s)\,ds/\epsilon}\left(\,\psi_0(t)\,+\,
\epsilon\,\psi_1(t)\,+\,\epsilon^2\,\psi_2(t)\,+\,\dots\,+\,
\epsilon^N\,\psi_N(t)\,+\,\epsilon^{N+1}\,\psi_{N+1}^\perp(t)\,\right)\\[9pt]
&&\label{PsiN}
\eea
agrees with an exact solution of (\ref{scheqn}) up to an error that is bounded
by $A_N\,\epsilon^{N+1}$ for some $A_N$, as long as $t$ is kept in a fixed
compact interval.
Lemma 2.1 of \cite{ghold} states that if $\chi(t,\eps)$ approximately
solves (\ref{scheqn}) in the sense that
\be
i\,\epsilon\,\frac{\partial\chi}{\partial
t}(t,\eps)\,-\,H(t)\,\chi(t,\eps)
=\zeta(t,\eps),
\ee
where $\zeta(t,\eps)$ is non-zero but small, then there exists an exact
solution
$\psi(t,\eps)$ to (\ref{scheqn}), such that
\be
\|\psi(t,\eps)-\chi(t,\eps)\|\leq \int_{0}^{t}\, \|\zeta(s,\eps)\|\,ds/
\epsilon.
\ee
We compute the error when (\ref{PsiN}) is substituted
into (\ref{scheqn}):
\bea\nonumber
\zeta_N(t,\eps)&=&
i\,\epsilon\,\frac{\partial\Psi_N}{\partial
t}(t,\eps)\,-\,H(t)\,\Psi_N(t,\eps)\\
&=&i\,\epsilon^{N+2}\ e^{-i\int_0^t\,E(s)\,ds/\epsilon}\
\frac{\partial\psi_{N+1}^\perp}{\partial t}(t).
\label{zetaN}
\eea
Then $\Psi_N(t,\eps)$ agrees with an exact solution of
(\ref{scheqn}) up to an error whose norm is bounded by $A_N\,\epsilon^{N+1}$,
where
\be\label{CN}
A_N\ \le\ \int_0^t\,\left\|\,
\frac{\partial\psi_{N+1}^\perp}{\partial t}(s)\,\right\|\,ds.
\ee
\vskip .5cm
\noindent {\bf Remarks}:\quad {\bf 1.}\quad
For future reference, we note that an integration
by parts in (\ref{fn}) yields an alternative expression for $f_{n-1}(t)$.
Since $\psi_{n-1}^\perp(s)$ is orthogonal to $\Phi(s)$ for each $s$, the
boundary terms vanish, and
\be\label{fn1}
f_{n-1}(t)\ =\ \int_0^t\,
\langle\,\Phi'(s),\,\psi_{n-1}^\perp(s)\,\rangle
\,ds.
\ee
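For instance, with $n=2$, formulas (\ref{fn1}) and (\ref{psi1perp})
combine to give the first coefficient explicitly:
$$
f_1(t)\ =\ \int_0^t\,
\langle\,\Phi'(s),\,i\,[\,H(s)-E(s)\,]_r^{-1}\,\Phi'(s)\,\rangle\,ds .
$$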
{\bf 2.}\quad
So far we have used only smoothness of $H(t)$, rather than analyticity.
If $\frac{d^{n}}{dt^{n}}(H(t)-i)^{-1}=0$ for all $n\geq 1$, then
$\psi_{0}(t)=\Phi(t)$ and $\psi_{n}(t)\equiv 0$ for $n\geq 1$. This
implies that the transition probability is $O(\epsilon^{\infty})$
for initial and final times where the derivatives of the Hamiltonian vanish.
\vskip .5cm
\section{Cauchy Estimates}\label{cauchy}
\setcounter{equation}{0}
To prove exponential estimates by using optimal truncation of (\ref{expansion}),
we estimate the dependence on $N$ of the quantity $A_N$ in (\ref{CN}).
In this section we prove a simple lemma that we use to
estimate this dependence.
\vskip .5cm
\begin{lem}\label{cauchylem}
Define $B(0)=1$ and $B(k)=k^k$ for integers $k\ge 1$.
Suppose $g$ is an analytic vector--valued function on the strip
$S_\delta\ =\ \{\,t\,:\,|\mbox{\rm Im}\,t|<\delta\,\}$.
If $g$ satisfies
$$\|g(t)\|\ \le\ C\,B(k)\,(\delta\,-\,|\mbox{\rm Im}\,t|)^{-k},$$
for some $k\ge 0$, then $g'$ satisfies
$$\|g'(t)\|\ \le\ C\,B(k+1)\,(\delta\,-\,|\mbox{\rm Im}\,t|)^{-k-1},$$
for all $t\in S_\delta$.
\end{lem}
\vskip .5cm
\noindent
{\bf Proof:}\quad Let us first consider the case $k\ge 1$.
By Cauchy's formula, we can write
\be\label{cauchyformula}
g'(t)\ =\ \frac 1{2\pi i}\,\int_\Gamma\,\frac{g(s)}{(t-s)^2}\,ds,
\ee
where $\Gamma$ is the circular contour with center $t$ and radius
$\ds\frac 1{k+1}\ (\delta\,-\,|\mbox{\rm Im}\,t|)$.
For $s$ on $\Gamma$, we have
$\ds (\delta-|\mbox{\rm Im}\,s|)\,\ge\,
\frac k{k+1}(\delta\,-\,|\mbox{\rm Im}\,t|)$. Thus,
\bea\nonumber
\|g(s)\|&\le&C\ k^k\ (\delta\,-\,|\mbox{\rm Im}\,s|)^{-k}\\
&\le&C\ k^k\ \left[\,\frac{k}{k+1}\,
(\delta\,-\,|\mbox{\rm Im}\,t|)\,\right]^{-k}
\nonumber\eea
So, by putting the norm inside the integral in (\ref{cauchyformula}), we have
\bea\nonumber
\|g'(t)\|&\le&\frac 1{2\pi}\
\frac{2\pi}{k+1}(\delta\,-\,|\mbox{\rm Im}\,t|)\
C\,k^k\,\left[\frac{k}{k+1}(\delta\,-\,|\mbox{\rm Im}\,t|)\right]^{-k}\
\left[\frac{1}{k+1}(\delta\,-\,|\mbox{\rm Im}\,t|)\right]^{-2}\\ \nonumber
&=&C\ (k+1)^{k+1}\ (\delta\,-\,|\mbox{\rm Im}\,t|)^{-k-1}.\eea
For $k=0$ we use the same argument with the radius of $\Gamma$ replaced by
$\ds\alpha\,\,(\delta\,-\,|\mbox{\rm Im}\,t|)$ for any $\alpha<1$. This yields the bound
$$\|g'(t)\|\ \le\ C\,\alpha^{-1}\,(\delta\,-\,|\mbox{\rm Im}\,t|)^{-1}.$$
The lemma follows because $\alpha<1$ is arbitrary.\qquad\qquad\ep
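In Section \ref{main} the lemma is applied repeatedly. A typical
consequence: if $g$ is analytic on $S_\delta$ and satisfies
$\|g(t)\|\le C$ there, then $k$ successive applications of the lemma give
$$
\left\|\,g^{(k)}(t)\,\right\|\ \le\ C\,B(k)\,
(\delta\,-\,|\mbox{\rm Im}\,t|)^{-k}\ =\
C\,k^{k}\,(\delta\,-\,|\mbox{\rm Im}\,t|)^{-k},
\qquad t\in S_\delta .
$$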
\vskip .5cm
\section{Preliminary Estimates}\label{prelim}
\setcounter{equation}{0}
In this section we derive
preliminary estimates for derivatives of the resolvent operator and the
eigenvector $\Phi(t)$.
We know that $H(t)$ is a self-adjoint analytic family in
$S_{\delta_0}$. We arbitrarily choose $\delta\in(0,\,\delta_0)$.
Taking $\delta_0$ small enough, we can assume
that for each $t\in S_{\delta_{0}}$, the analytic function $E(t)$ is a
distance $d(t)>d>0$ from the rest of the spectrum of $H(t)$,
where $d\geq d_{0}/2$.
We can further assume that for
$t\in S_{\delta_{0}}$,
\be\label{assume}
\left\|\,(z+E(t)-H(t))^{-1}\right\|\,\le C_1
\ee
whenever $|z|=d/2$.
The reduced resolvent $[\,H(t)-E(t)\,]_r^{-1}$ can be written as
$$[\,H(t)-E(t)\,]_r^{-1}\ =\
\frac 1{2\pi i}\ \int_{|z|=d/2}\,(H(t)-E(t)-z)^{-1}\,\frac{dz}z.$$
From this representation, we see that
\be\label{reduced}
\left\|\,[\,H(t)-E(t)\,]_r^{-1}\,\right\|\ \le\ C_1,
\ee
for $t\in S_{\delta_{0}}$.
Similarly, the spectral projection associated with eigenvalue $E(t)$ is given by
$$P(t)\ =\ \frac {-1}{2\pi i}\ \int_{|z|=d/2}\,(H(t)-E(t)-z)^{-1}\,dz.$$
From this representation, we see that there exists $C_2$, such that both
$P(t)$ and
$P_\perp(t)\,=\,I\,-\,P(t)$ satisfy
\be\label{projest1}
\left\|\,P(t)\,\right\|\ \le\ C_2,
\ee
and
\be\label{projest2}
\left\|\,P_\perp(t)\,\right\|\ \le\ C_2,
\ee
for $t\in S_{\delta_{0}}$.
Assumption (\ref{assume}) also implies estimates on the derivatives of
the vector $\Phi(t)$ of Section \ref{adiabat}. To prove these estimates, we
note that Problem 15, Chapter XII of \cite{rs4} and (\ref{assume}) imply
the existence of a vector-valued function $\Phi(t)$ that is bounded,
analytic, and non-vanishing in $S_{\delta_0}$, normalized for real
$t\in S_{\delta_0}$, and satisfies
(\ref{phasechoice}) for real $t\in S_{\delta_0}$. We choose $C_3$, such that
\be
\left\|\,\Phi(t)\,\right\|\ \le\ C_3,\label{Phiest}
\ee
for $t\in S_{\delta_0}$.
Since $\delta<\delta_0$, $\Phi'(t)$ is bounded and analytic in $S_{\delta}$, so
there exists $C_4$, such that
\be
\left\|\,\Phi'(t)\right\|\ \le\ C_4,\label{Phiest1}
\ee
for $t\in S_{\delta}$.
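In fact, a concrete choice is available: since $\|\Phi(t)\|\le C_3$ on
$S_{\delta_0}$, the Cauchy formula over a circle of radius
$\delta_0-\delta$ centered at $t\in S_\delta$ shows that one may take
$$
C_4\ =\ \frac{C_3}{\delta_0-\delta}\,;
$$
only the existence of some finite $C_4$ is used below.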
\vspace{.3cm}
{\bf Remark}:\quad
It is not difficult to see by means of
the second resolvent identity that $C_{1}\simeq 1/d_{0}$, whereas
the other constants are uniform in $d_{0}$.
\vskip .5cm
\section{The Main Estimates}\label{main}
\setcounter{equation}{0}
In this section we prove estimates for $f_n(t)$ and $\psi_n^\perp(t)$ that
lead to exponential results in an optimal truncation strategy. The idea is to
use an induction based on formulas (\ref{fn}) and (\ref{psinperp}) with
technical help from Sections \ref{cauchy} and \ref{prelim}. Let us
introduce the set
$S_{\delta, T}=\{ t\in S_{\delta}\ :\ |t|\leq T\}$, for any $T>0$.
\vskip .5cm
\begin{lem}\label{mainlem} Assume the hypotheses of Section \ref{prelim} and the
notation of Sections \ref{adiabat} and \ref{cauchy}. Define
$C_5\ =\
C_1\,\left(\,C_3\,C_4\,T\,+\,C_2\,\right)$.
Then for $t\in S_{\delta,\,T}$ and $n\ge 1$, we have
\be\label{fnest}
|f_n(t)|
\ \le\ T\,C_1\,C_3\,C_4\,C_5^{n-1}\,B(n)\,(\delta-|\mbox{\rm Im}\,t|)^{-n}.
\ee
and
\be\label{psinest}
\left\|\,\psi_n^\perp(t)\,\right\|
\ \le\ C_1\,C_4\,C_5^{n-1}\,B(n-1)\,(\delta-|\mbox{\rm Im}\,t|)^{-n+1}.
\ee
\end{lem}
\vskip .5cm
\noindent {\bf Proof:}\quad We prove this by induction on $n$.
To get the induction started, we estimate $\psi_1^\perp$ and $f_1(t)$.
The function $\psi_1^\perp$ is given by (\ref{psi1perp}).
By (\ref{reduced}) and (\ref{Phiest1}),
we have
\be\label{psi1est}
\left\|\,\psi_1^\perp(t)\,\right\|
\ \le\ C_1\,C_4.
\ee
By Lemma \ref{cauchylem}, this implies
$$\left\|\,\frac{\partial\psi_{1}^\perp}{\partial t}(t)\,\right\|
\ \le\ C_1\,C_4\,\,B(1)\,(\delta-|\mbox{Im}\,t|)^{-1}.$$
Using this and integrating along a straight contour in (\ref{fn}), we see
that
\be\label{f1est}
|f_1(t)|\ \le\ T\,C_1\,C_3\,C_4\,B(1)\,(\delta-|\mbox{Im}\,t|)^{-1}.
\ee
Note that we have used the estimate
$(\delta-|\mbox{Im}\,s|)^{-1}\le(\delta-|\mbox{Im}\,t|)^{-1}$,
as $s$ goes from $0$ to $t$ along the straight contour.
For the induction step, suppose for some $N\ge 2$, that the lemma's conclusion
is true for all $n