\documentclass{article}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\newtheorem{Theorem}{Theorem}[section]
\newtheorem{Proposition}[Theorem]{Proposition}
\newtheorem{Lemma}[Theorem]{Lemma}
\newtheorem{Definition}[Theorem]{Definition}
\newtheorem{Corollary}[Theorem]{Corollary}
\begin{document}
\title{Schr\"odinger operators with
sparse potentials: asymptotics of the Fourier transform of
the spectral measure}
\author{Denis Krutikov$^1$
and Christian Remling$^2$}
\date{June 27, 2001}
\maketitle
\begin{center}
(to appear in {\it Commun.\ Math.\ Phys.})
\end{center}
\vspace{0.5cm}
\noindent
1.\ Universit\"at Essen, Fachbereich Mathematik/Informatik,
45117 Essen, GERMANY\\
E-mail: denis.kroutikov@uni-essen.de\\[0.1cm]
2.\ Universit\"at Osnabr\"uck,
Fachbereich Mathematik/Informatik,
49069 Osnabr\"uck, GERMANY\\
E-mail:
cremling@mathematik.uni-osnabrueck.de\\[0.3cm]
2000 AMS Subject Classification: primary 34L40, 81Q10, secondary
42A38
\\[0.3cm]
Key words: Schr\"odinger operator, Fourier transform, sparse potentials,
singular continuous spectrum
\\[0.3cm]
\begin{abstract}
We study the {\it pointwise} behavior of the Fourier transform
of the spectral measure for discrete one-dimensional Schr\"odinger
operators with sparse potentials. We find a resonance structure
which admits a physical interpretation in terms of a simple quasiclassical
model. We also present an improved version of known results on the
spectrum of such operators.
\end{abstract}
\section{Introduction}
Let $H$ be the Hamiltonian
of a quantum mechanical system, acting on a Hilbert space $\mathcal H$.
If the initial state is denoted by $\psi$ (so $\psi\in\mathcal H$ and
$\|\psi\|=1$), then
$\left| \langle \psi, e^{-itH}\psi\rangle \right|^2$ is the probability
of finding the system again in the state $\psi$ at time $t$. Clearly,
$\langle \psi, e^{-itH}\psi\rangle = \widehat{\rho}_{\psi}(t)$, where
$\rho_{\psi}$ is the spectral measure of $\psi$ and the hat denotes the
Fourier transform. It is therefore interesting to study
the Fourier transform of the spectral measures of $H$.
Usually, one does not analyze dynamical properties directly,
but rather tries to connect them to the spectral properties of $H$.
For instance, the time average $(1/2T)\int_{-T}^T
|\widehat{\rho}(t)|^2 \, dt$ is related to the continuity properties
of $\rho$ with respect to Hausdorff measures \cite{L}. These properties,
in turn, can be (and have been) studied successfully
for many interesting models.
In this paper, however, we are interested in the {\it pointwise}
behavior of $\widehat{\rho}(t)$ as $t\to\pm\infty$. Clearly, this
quantity carries additional information which gets lost in the
averaging process. In particular, it is often interesting to know
whether $\lim_{t\to\pm\infty} \widehat{\rho}(t) =0$ (the measures $\rho$
with this property are called Rajchman measures). On the other hand,
the pointwise behavior of $\widehat{\rho}(t)$
is usually difficult to analyze and it may depend in
a subtle way on number theoretic properties of $\rho$. For example,
a classical result of Salem says that a Cantor set with ratio of dissection
$\theta>2$ supports no non-zero Rajchman measure if and only if
$\theta$ is a Pisot number, that is, if $\theta$ is an algebraic
integer whose conjugates are strictly less than one in absolute value
(see \cite[Chapter III]{Meyer}).
Furthermore, Lyons \cite{Ly} characterized the Rajchman measures as the
measures annihilating all Weyl sets, and the property of being a
Weyl set again depends on arithmetic properties.
However, there are also two obvious remarks that can be made: an absolutely
continuous measure is Rajchman (by the Riemann-Lebesgue Lemma), while
a point measure is not Rajchman (by Wiener's Theorem). So the distinction
between Rajchman and non-Rajchman measures really concerns the
singular continuous part of a measure.
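These two remarks are easy to see in a toy computation. The following sketch (purely illustrative; the measures and sample points are our own choices, not part of the argument) evaluates $\widehat{\rho}(t)$ for Lebesgue measure on $[0,1]$ and for a unit point mass:

```python
import cmath

def ft_lebesgue(t):
    # rho-hat(t) = integral_0^1 e^{-itE} dE = (1 - e^{-it}) / (it)
    if t == 0:
        return 1.0
    return (1 - cmath.exp(-1j * t)) / (1j * t)

def ft_point_mass(t, a=0.5):
    # rho-hat(t) for the unit point mass at E = a
    return cmath.exp(-1j * t * a)

# absolutely continuous => Rajchman (Riemann-Lebesgue): |rho-hat(t)| -> 0
print(abs(ft_lebesgue(10)), abs(ft_lebesgue(1000)))
# point mass: |rho-hat(t)| = 1 for all t, so rho-hat does not tend to zero
print(abs(ft_point_mass(10)), abs(ft_point_mass(1000)))
```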
In this paper, we will discuss one specific model where the pointwise
behavior of $\widehat{\rho}(t)$ can be analyzed rather completely.
Indeed, the estimates we will prove below cannot be substantially
improved as this would be inconsistent with the spectral properties --
compare the discussion following Theorem \ref{T1.2}.
We will study discrete one-dimensional Schr\"odinger operators
with sparse potentials. These potentials can lead to singular continuous
spectra, as was first shown by Pearson in his celebrated paper
\cite{Pea1}. Pearson's results were recently improved and extended
in \cite{KLS,Mol,Remsparse,Remsc}.
The discrete Schr\"odinger
equation reads
\begin{equation}
\label{se}
y(n-1)+y(n+1)+V(n)y(n)=Ey(n)\quad\quad\quad (n\in\mathbb N);
\end{equation}
let $H:\ell_2(\mathbb N)\to\ell_2(\mathbb N)$
be the associated Schr\"odinger operator, that is,
$(Hy)(n)$ equals the left-hand side of \eqref{se} (where we
put $y(0):=0$). The potential $V$ will have the form
\begin{equation}
\label{pot}
V(n)=\sum_{m=1}^{\infty} g_m \delta_{n,x_m},
\end{equation}
where the $g_n$ are bounded and $x_1<x_2<\cdots$.
\begin{Theorem}
\label{T1.2}
Fix $\epsilon>0$ (arbitrarily small) and define the resonant set $R$
by
\[
R= \bigcup_{n\in\mathbb N} [(1-\epsilon)x_n, x_n (\ln x_n)^{1+\epsilon}].
\]
a) If $x_{n-1}/x_n\to 0$, then
$(f\, d\rho)\, \widehat{ }\, (t) \to 0$ as $t\to\pm\infty$ for every
$f\in C_0^{\infty}(-2,2)$.\\
b) Suppose that for some $C>0$, $\mu>0$, we
have $x_n\le Cx_{n+1}^{1-\mu}$ for all $n\in\mathbb N$. Then:\\
(i) For
every $m\in\mathbb N$ and every $f\in C_0^{\infty}(-2,2)$, there exists
a constant $C$ so that
\[
\left| (f\, d\rho)\, \widehat{ }\, (t) \right| \le C (1+|t|)^{-m}
\]
for all $t$ with $|t|\notin R$.\\
(ii) For every $\gamma<\min\{ 1/2, \mu\}$
and every $f\in C_0^{\infty}(-2,2)$ with $0\notin\text{supp }f$,
there exists a constant $C$ so that
\[
\left| (f\, d\rho)\, \widehat{ }\, (t) \right| \le C (1+|t|)^{-\gamma}
\]
for all $t$.
\end{Theorem}
Here, $\rho$ is the spectral measure associated with the vector
$\delta_1\in \ell_2$ ($\delta_1(1)=1$ and $\delta_1(n)=0$ if
$n\not= 1$). Since $\delta_1$ is a cyclic vector for $H$, any other
spectral measure $\rho_{\psi}$ is absolutely continuous with respect
to $\rho$.
Some comments on Theorem \ref{T1.2}
are in order. First of all, Killip and one of us have
shown \cite{KR} that
\[
\mathcal H_{Raj} :=
\{ \psi\in\ell_2: \rho_{\psi}\text{ is a Rajchman measure} \}
\]
is a reducing subspace for $H$. So, since $C_0^{\infty}(-2,2)$ is
dense in $L_2((-2,2),d\rho)$, part a) of Theorem \ref{T1.2} tells us
that the Schr\"odinger operator $H$ is purely Rajchman on $(-2,2)$,
that is, $E((-2,2))\ell_2\subset {\mathcal H}_{Raj}$ (where $E$ denotes the
spectral projection of $H$).
Simon \cite{Sim} earlier obtained a very general result
which goes in the same direction. Roughly speaking, it states that for
many models with singular continuous spectrum,
one can achieve that $\mathcal H_{sc}=\mathcal H_{Raj}$
(and, in fact, $\widehat{\rho}(t)=O(|t|^{-1/2}\ln |t|)$) by making
the potential sufficiently sparse. However, there is little control on the
rate with which the barrier separations have to increase. Simon's
techniques are quite different from ours.
Theorem \ref{T1.2}b) shows that under a stronger assumption
on the $x_n$'s, we also get information on the rate with
which $(f\, d\rho)\,\widehat{ }$ goes to zero.
Namely, according to part (i),
the Fourier transform decays very rapidly off the
resonant set $R$. Part (ii) is especially interesting if
the $x_n$ grow so rapidly that
$x_n\le Cx_{n+1}^{1/2}$. Then $\mu=1/2$, and Theorem \ref{T1.2}b)
says that for arbitrary $m\in\mathbb N$, $\delta>0$,
\begin{equation}
\label{concl}
\left| \left( f\, d\rho\right)\, \widehat{}\, (t) \right| \le
\begin{cases} C(1+|t|)^{-m} & |t|\notin R \\
C (1+|t|)^{-1/2+\delta} & |t|\in R
\end{cases} .
\end{equation}
This conclusion can also be proved under weaker assumptions
on the increase of $x_n$ if there is some regularity in the
way in which the $x_n$'s tend to infinity. For example, if
$x_n = [\exp(a^n)]$ with $a>1$, then \eqref{concl} also holds.
These estimates must be rather accurate,
at least if $\sum g_n^2=\infty$. Indeed, Theorem \ref{T1.1}b) then shows
that the spectral measure is purely singular on $(-2,2)$, so
$(f\, d\rho)\,\widehat{ } \notin L_2$. This means, first of all,
that on the resonant set, the exponent of $(1+|t|)$ cannot be smaller
than $-1/2$. By the same token, our definition of the resonant set
is close to optimal in that it cannot be true that for all large $n$,
the interval containing $x_n$ is smaller
than $Cx_n^{1-\epsilon}$, with $\epsilon>0$. Indeed, if such an
estimate held, then (writing $I_n=[x_n-Cx_n^{1-\epsilon},
x_n+Cx_n^{1-\epsilon}]$)
\[
\int_{I_n} \left| \left( f\, d\rho\right)\, \widehat{}\, (t) \right|^2
\, dt \le C_0 x_n^{1-\epsilon} \left( x_n^{-1/2+\delta} \right)^2
= C_0 x_n^{2\delta-\epsilon}.
\]
Hence by taking $\delta<\epsilon/2$,
we see that $\left( f\, d\rho\right)\, \widehat{} \in L_2$.
As mentioned above, this conclusion
contradicts the fact that $\rho$ is singular.
Since our intervals have a
size of $\approx x_n(\ln x_n)^{1+\epsilon}$, we may be off by at
most a factor which is $o(x_n^{\epsilon})$ for all $\epsilon>0$.
Note also that
the intervals contained in
$R$ are disjoint and
large for large $n$, but there are also huge gaps between them,
so that the complementary
set of non-resonant times covers a considerable portion
of the real line.
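For orientation, the geometry of the resonant set is easy to inspect numerically. The following sketch uses the illustrative choice $x_n=[\exp(2^n)]$ and $\epsilon=0.1$ (both assumptions made only for this example) and confirms that consecutive resonant intervals are disjoint, with relative gaps that grow:

```python
import math

eps = 0.1  # the (arbitrarily small) epsilon in the definition of R
x = [math.floor(math.exp(2 ** n)) for n in range(1, 6)]  # sample: x_n = [exp(2^n)]

# resonant intervals [(1 - eps) x_n, x_n (ln x_n)^(1 + eps)]
intervals = [((1 - eps) * xn, xn * math.log(xn) ** (1 + eps)) for xn in x]

for (_, b1), (a2, _) in zip(intervals, intervals[1:]):
    assert b1 < a2                         # consecutive intervals are disjoint ...
    print(f"relative gap: {a2 / b1:.1f}")  # ... and the relative gaps grow
```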
Theorem \ref{T1.2}b) very neatly supports a naive quasiclassical
picture of quantum motion under the influence of a sparse potential.
Namely, play the following game: Start with a particle localized at
the origin $n=1$ at time $t=0$, and let it move towards the first
barrier (which is at $x_1$). When the particle hits the first barrier,
it is either reflected or transmitted (the corresponding probabilities
should presumably be determined from the reflection and transmission
coefficients from stationary scattering theory, but this is quite
irrelevant here). In the case of reflection, the particle returns
to the origin, while in the case of transmission, it moves on to
the second barrier, where it is again either transmitted or reflected.
Recalling that $|\widehat{\rho}(t)|^2$ is the probability
of finding the particle again at $n=1$ at time $t$ if it was initially
at $n=1$, we see that the above model suggests that $\widehat{\rho}$
should have a resonance structure since return to the origin is possible
only at certain times. Because of the spreading of the wave packets,
we should not expect very sharp resonances. Of course, mathematically
speaking, there is little reason to have much confidence in this
simplistic model, and indeed the actual analysis proceeds along
different lines.
Still, the final result (compare equation \eqref{concl}) is
exactly what the model predicts!
We can now also understand the role of the assumption $0\notin
\text{supp }f$ in Theorem \ref{T1.2}b)(ii): Namely, the spreading
of wave packets under the free evolution is slower for wave packets
localized (in energy) around $E=0$. Our methods also work if
$0\in \text{supp }f$ is allowed, but one obtains weaker estimates.
In particular, under the same assumptions as above ($\mu=1/2$),
one can prove that $\left( f\, d\rho\right)\, \widehat{}(t)
=O(|t|^{-1/6+\epsilon})$ for every $\epsilon>0$. See
\cite{Kr} for details on this.
Our approach for proving Theorem \ref{T1.2} depends on a representation
of the Fourier transform of the spectral measure as a rather complicated
looking limit of (an increasing number of) series of integrals
(= Theorem \ref{T2.1}). This formula is completely general, but if
(and probably only if) the potential is sparse, it is also useful
because most of the integrals are oscillatory and hence small.
These terms will be estimated in Sect.\ 4, the result being
Theorem \ref{T4.1}. There are other terms which cannot be
treated in this way; these contributions are discussed in Sect.\ 5.
Armed with these estimates, we can then prove Theorem \ref{T1.2}
in Sect.\ 6; in fact, this result is a rather straightforward
consequence of Theorems \ref{T4.1}, \ref{T5.1}. Finally, in Sect.\ 7,
we prove Theorem \ref{T1.1}.
It is also possible to treat the case of unbounded $g_n$'s with our
methods, although the technical difficulties increase and the results
are somewhat less satisfactory. See again \cite{Kr} for further
information.
{\bf Acknowledgment:} C.R.\ acknowledges financial support by
the Heisenberg program of the Deutsche Forschungsgemeinschaft.
\section{Preliminaries}
In this section, we collect some basic material that will be
needed in the sequel. First of all, we will use a Pr\"ufer
type transformation (compare \cite{KLS,KRS})
to rewrite the Schr\"odinger equation
\eqref{se}.
So, suppose that $E\in (-2,2)$, and
let $y$ be the solution of \eqref{se} with
initial values $y(0)=0$, $y(1)=1$ (say). Write $E=2\cos k$ with
$k\in (0,\pi)$ and define $R(n)>0$, $\psi(n)$ by
\[
\begin{pmatrix} y(n-1)\sin k \\ y(n)-y(n-1)\cos k \end{pmatrix}
= R(n) \begin{pmatrix} \sin(\psi(n)/2-k) \\
\cos(\psi(n)/2-k) \end{pmatrix} .
\]
In fact, the angle $\psi(n)$ is defined only modulo $4\pi$.
One then checks that $R$ and $\psi$ obey the equations
\begin{gather*}
\frac{R(n+1)^2}{R(n)^2} = 1 - \frac{V(n)}{\sin k} \sin\psi(n)
+\frac{V(n)^2}{\sin^2 k}\sin^2 (\psi(n)/2), \\
\cot \left( \psi(n+1)/2-k\right) = \cot (\psi(n)/2) -
\frac{V(n)}{\sin k} .
\end{gather*}
There is no problem with the singularities of $\cot$ because we
can as well use a similar equation with $\tan$ instead of $\cot$.
Actually, a tiny bit of information got lost when we passed from
\eqref{se} to
these new equations. This is reflected in the fact that now
$\psi(n+1)$ is only determined modulo $2\pi$ by
the equations. We must in fact impose the additional requirement
that $\sin(\psi(n)/2)$ and
$\sin(\psi(n+1)/2-k)$ have the same sign (and
if $\sin(\psi(n)/2)=0$, then
$\cos(\psi(n+1)/2-k)=\cos(\psi(n)/2)$). Fortunately, these points will not
cause any inconvenience.
Note that the evolution of $R,\psi$ is especially simple
if $V=0$: $R$ is constant and $\psi(n+1)=\psi(n)+2k$. If the
potential is sparse (that is, of the form \eqref{pot}), we use
a slightly different notation in that we write $R_n=R(x_n)$
and $\psi_n=\psi(x_n)$; also, it is often useful to make the
dependence on $k$ explicit. We then have that
$R(m)=R_n$ for $x_{n-1}< m\le x_n$ and
\begin{gather}
\label{eqR}
\frac{R_{n+1}^2}{R_n^2} = 1 - \frac{g_n}{\sin k}\sin\psi_n
+ \frac{g_n^2}{\sin^2 k} \sin^2 (\psi_n/2), \\
\label{eqtheta}
\psi_n = \psi(x_{n-1}+1) + 2k (x_n-x_{n-1}-1) , \\
\label{eqpsi}
\cot \left( \psi(x_{n-1}+1)/2 - k \right) = \cot (\psi_{n-1}/2)
-\frac{g_{n-1}}{\sin k} .
\end{gather}
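The Pr\"ufer equations can be verified against the transfer recursion for $y$. The sketch below (an independent numerical check, with an arbitrary sample potential and a sample value of $k$) iterates $y(n+1)=(E-V(n))y(n)-y(n-1)$ and tests the stated formula for $R(n+1)^2/R(n)^2$ at every site:

```python
import math

def prufer_check(k, V, N):
    """Iterate y(n+1) = (E - V(n)) y(n) - y(n-1), E = 2 cos k, and verify the
    stated recursion for R(n+1)^2 / R(n)^2 at every site n = 1, ..., N-1."""
    s, c = math.sin(k), math.cos(k)
    y_prev, y_cur = 0.0, 1.0                     # y(0) = 0, y(1) = 1
    for n in range(1, N):
        v = V(n)
        y_next = (2 * c - v) * y_cur - y_prev
        R2 = y_prev ** 2 - 2 * c * y_prev * y_cur + y_cur ** 2       # R(n)^2
        R2_next = y_cur ** 2 - 2 * c * y_cur * y_next + y_next ** 2  # R(n+1)^2
        # sin(psi/2) = y(n) sin k / R, cos(psi/2) = (y(n) cos k - y(n-1)) / R
        half = math.atan2(y_cur * s, y_cur * c - y_prev)             # psi(n)/2
        rhs = 1 - (v / s) * math.sin(2 * half) + (v / s) ** 2 * math.sin(half) ** 2
        assert abs(R2_next / R2 - rhs) < 1e-10
        y_prev, y_cur = y_cur, y_next
    return True

# sample sparse potential: barriers of height 0.5 at x_m = 10, 100, 1000
print(prufer_check(1.0, lambda n: 0.5 if n in {10, 100, 1000} else 0.0, 1001))
```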
As a second tool, we need a representation of the spectral measure
as a weak star limit of absolutely continuous measures involving
the solutions of \eqref{se}. We again use the spectral measure
associated with $\delta_1$, and we denote this measure by $\rho$.
In other words,
$\rho(M)=\|E(M)\delta_1\|^2$, where $E(\cdot)$ is the spectral
resolution of $H$.
\begin{Proposition}
\label{P2.1}
Let $w$ be a Herglotz function (that is, a holomorphic mapping from
$\mathbb C^+=\{ z\in \mathbb C : \text{\rm Im } z>0\}$
to itself), and let $I\subset\mathbb R$ be a
bounded, open interval.
Suppose that $w$ extends continuously to $\mathbb C^+ \cup I$ and that
$\text{\rm Im }w(E) >0$ for all $E\in I$. Then
\[
\int f(E)\, d\rho(E) = \lim_{n\to\infty}
\frac{1}{\pi}\int f(E) \frac{\text{\rm Im }w(E)}
{\left| y(n,E) - w(E)y(n+1,E) \right|^2}\, dE
\]
for all continuous functions $f$ with support in $I$.
Here, $y$ is the solution of \eqref{se} with the initial values
$y(0,E)=0$, $y(1,E)=1$.
\end{Proposition}
Basically,
this result is from \cite{Pea2};
the special
case $w\equiv i$ has been known before
\cite{Car,LS}. The proof we give below does not depend on the
methods used in these papers; it
is based on an idea of Atkinson
(unpublished manuscript).
{\it Proof of Proposition
\ref{P2.1}.}
Let $y$ be as above, and also introduce $v$ as the solution
of \eqref{se} with the initial values
$v(0,E)=1$, $v(1,E)=0$. In fact, the spectral parameter
$E$ will also take complex values in this proof,
and in that case we usually denote it by $z$ instead
of $E$. Fix $N\in\mathbb N$,
write $f(n,z)=v(n,z)-M_N(z)y(n,z)$ and determine $M_N$ from the
(non-selfadjoint) boundary condition $f(N,z)=w(z) f(N+1,z)$
($z\in\mathbb C^+$). A brief computation shows that
\begin{equation}
\label{MN}
M_N(z) = \frac{v(N,z)-v(N+1,z)w(z)}{y(N,z)-y(N+1,z)w(z)} .
\end{equation}
Moreover, we have Green's identity
\[
\sum_{n=1}^N \left( \overline{g(n)}(\tau h)(n) - \overline{(\tau g)
(n)} h(n) \right) = \left. \left( \overline{g(n)}h(n+1)-
\overline{g(n+1)}h(n) \right) \right|_{n=0}^{n=N} .
\]
Here, $g,h$ are arbitrary functions from $\mathbb N_0$ to $\mathbb C$,
and $(\tau y)(n)$ is short-hand for the left-hand side of \eqref{se}.
If we apply this to
\[
\sum_{n=1}^N \left| f(n,z)\right|^2 =
\frac{1}{z-\overline{z}} \sum_{n=1}^N \left( \overline{f(n,z)}
(\tau f)(n,z) - \overline{(\tau f)(n,z)}f(n,z) \right)
\]
with the function $f$ from above,
we obtain
\[
\sum_{n=1}^N \left| f(n,z)\right|^2 = \frac{\text{Im }M_N(z)}
{\text{Im }z} - \left| f(N+1,z)\right|^2 \frac{\text{Im }w(z)}
{\text{Im }z}.
\]
This equation together with \eqref{MN} shows that $M_N$ is a Herglotz
function. Clearly, $\text{Im }M_N \ge \text{Im }z \sum_{n=1}^N
\left| f(n,z)\right|^2$, which is precisely the condition for
$M_N$ to lie inside the Weyl circle $K_N(z)$ (see, for example,
\cite[Sect.\ 9.2]{CodLev} and \cite[Sect.\ 2.4]{Te}).
By standard Weyl theory, the
Weyl circles shrink to a point as $N\to\infty$, and this point
is nothing but the $m$-function of the half-line problem:
$m(z)=\langle \delta_1, (H-z)^{-1} \delta_1 \rangle$.
In particular, we have that $M_N(z)\to m(z)$
for fixed $z\in\mathbb C^+$. It now
follows that the measures associated with $M_N$ converge (in a
sense that will be made precise shortly)
to $\rho$. This part of the argument is similar to the construction
of the spectral measure $\rho$ in standard Weyl theory (compare
the discussion in \cite[Sect.\ 9.3]{CodLev}) and will thus only
be sketched.
Write down the Herglotz representation
of $M_N$:
\[
M_N(z)= a_N + b_N z + \int_{\mathbb R} \left(\frac{1}{t-z}
-\frac{t}{t^2+1}\right) \, d\rho_N(t).
\]
Here $a_N\in\mathbb R$, $b_N\ge 0$, and $\rho_N$ is a positive
Borel measure with $\int \frac{d\rho_N(t)}{t^2+1}<\infty$.
By analyzing the asymptotics of $M_N(iy)$ as $y\to\infty$, one
can in fact show that $b_N=0$. It is nice to have finite measures,
so we introduce $d\mu_N(t)=\frac{d\rho_N(t)}{t^2+1}$ and write
$M_N$ as
\[
M_N(z)= a_N + \int_{\mathbb R} \frac{tz+1}{t-z}\,
d\mu_N(t) .
\]
Note that $\text{Im }M_N(i)=\mu_N(\mathbb R)$; since this sequence
is bounded (even convergent), the Banach-Alaoglu Theorem shows that
the $\mu_N$ converge on a subsequence to a limit measure $\mu$
in the weak star topology (where the finite, complex Borel measures
on $\mathbb R$ are viewed as the dual of $C_0(\mathbb R)$).
By passing to the limit in the equation
\[
\frac{\text{Im }M_N(z)}{\text{Im }z} - \text{Im }M_N(i)
=\int_{\mathbb R}\left( \frac{t^2+1}{|t-z|^2} - 1\right)\,
d\mu_N(t) ,
\]
we thus see that
\[
\frac{\text{Im }m(z)}{\text{Im }z} - \text{Im }m(i)
=\int_{\mathbb R}\left( \frac{t^2+1}{|t-z|^2} - 1\right)\,
d\mu(t).
\]
Since the measure associated with a Herglotz function is already
determined by the imaginary part of that function, we must have that
$d\mu(t)=\frac{d\rho(t)}{t^2+1}$. In particular, this measure is
the only possible weak star limit point of the $\mu_N$'s, and thus
it was not necessary to pass to a subsequence. Rather, we have
$\frac{d\rho_N(t)}{t^2+1}\to\frac{d\rho(t)}{t^2+1}$ in the weak
star topology.
Finally, a computation
using \eqref{MN} and constancy of the Wronskian
$W(n)=v(n)y(n+1)-v(n+1)y(n)$ shows that for all $E\in I$, the limit
$M_N(E)\equiv \lim_{\epsilon\to 0+} M_N(E+i\epsilon)$ exists, with
imaginary part
\[
\text{Im } M_N(E)=\frac{\text{Im }w(E)}{\left| y(N,E)-y(N+1,E)w(E)\right|^2} .
\]
By general facts on Herglotz functions, the measures $\rho_N$ are
therefore purely absolutely continuous in $I$ with density
$(1/\pi)\,\text{Im }M_N(E)$. $\square$
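In the free case $V=0$, the convergence $M_N(z)\to m(z)$ can be observed directly, since the $m$-function is explicit there: $m$ is the root of $m^2+zm+1=0$ with $\text{Im }m>0$ on $\mathbb C^+$. The sketch below (illustrative only; $w\equiv i$ and the sample points are our own choices) checks this:

```python
import cmath

def M_N(z, N, w=1j):
    # y, v solve u(n+1) = z u(n) - u(n-1) (free case V = 0),
    # with y(0), y(1) = 0, 1 and v(0), v(1) = 1, 0
    y_prev, y_cur = 0.0, 1.0
    v_prev, v_cur = 1.0, 0.0
    for _ in range(N):
        y_prev, y_cur = y_cur, z * y_cur - y_prev
        v_prev, v_cur = v_cur, z * v_cur - v_prev
    # now y_prev = y(N), y_cur = y(N+1), and likewise for v
    return (v_prev - v_cur * w) / (y_prev - y_cur * w)

def m_free(z):
    # m-function of the free half-line operator: the root of m^2 + z m + 1 = 0
    # with positive imaginary part on the upper half-plane
    root = cmath.sqrt(z * z - 4)
    m = (-z + root) / 2
    return m if m.imag > 0 else (-z - root) / 2

for z in (2j, 1 + 1j, -0.5 + 0.3j):
    m = m_free(z)
    assert abs(m * m + z * m + 1) < 1e-12   # m solves the quadratic
    assert abs(M_N(z, 80) - m) < 1e-8       # the Weyl circles shrink to m(z)
print("M_N converges to the free m-function")
```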
\begin{Corollary}
\label{C2.1}
Suppose $f$ is a continuous function with support contained
in $(-2,2)$. Then
\[
\int f(E) \, d\rho(E) = \frac{2}{\pi} \lim_{n\to\infty}
\int_0^{\pi} f(2\cos k) \frac{\sin^2 k}{R^2(n,k)}\, dk .
\]
\end{Corollary}
{\it Proof.} We want to
apply Proposition \ref{P2.1} with $I=(-2,2)$ and
\[
w(z)= \frac{z}{2} + i \sqrt{ 1 -\frac{z^2}{4}},
\]
but we first have to check that this is a Herglotz function.
More precisely, we will choose the square root on $z\in (-2,2)$ so
that $\text{Im }w>0$ there and then continue holomorphically to
the upper half-plane. The continuation is possible because the
branch points of
$(w-z/2)^2= z^2/4-1$ are $z=\pm 2$, neither of which is in
the upper half-plane.
By the monodromy theorem,
the continuation is also unique. Moreover, $w(z)$ extends continuously
to the closure of $\mathbb C^+$ (in the Riemann sphere
$\mathbb C_{\infty}$), and then the image of
$\mathbb R \cup \{ \infty \}$ is
the closed curve
\begin{equation}
\label{curve}
(-\infty, -2) \cup \{2e^{i\varphi}: \pi \ge
\varphi \ge 0 \} \cup (2,\infty) \cup \{ \infty \} .
\end{equation}
Therefore, the set $\{ w(z): z\in\mathbb C^+ \}$ must be
contained in one of the two regions
into which the sphere is divided by \eqref{curve}. It now follows
easily that this image
must actually be contained in the region lying
in the upper half-plane,
so $w(z)$ is
a Herglotz function, as required.
Now the claim follows from Proposition \ref{P2.1} together with
the substitution $E=2\cos k$. $\square$
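In the free case $V=0$ (so $R\equiv 1$, given $y(0)=0$, $y(1)=1$), the spectral measure is $d\rho(E)=(2\pi)^{-1}\sqrt{4-E^2}\,dE$ on $(-2,2)$, and the Corollary reduces to the substitution $E=2\cos k$. A quick numerical sanity check, with an arbitrary bump function $f$ of our own choosing:

```python
import math

def f(E):  # an arbitrary C_0^infinity bump supported in (-2, 2)
    return math.exp(-1.0 / (1 - (E / 2) ** 2)) if abs(E) < 2 else 0.0

n = 20000
# integral of f against the free spectral measure (2 pi)^{-1} sqrt(4 - E^2) dE
Es = [(j + 0.5) * 4 / n - 2 for j in range(n)]
lhs = sum(f(E) * math.sqrt(4 - E * E) for E in Es) / (2 * math.pi) * (4 / n)
# (2 / pi) * integral_0^pi f(2 cos k) sin^2 k dk   (the Corollary, with R = 1)
ks = [(j + 0.5) * math.pi / n for j in range(n)]
rhs = 2 / math.pi * sum(f(2 * math.cos(k)) * math.sin(k) ** 2 for k in ks) * (math.pi / n)
print(lhs, rhs)   # the two midpoint sums agree
assert abs(lhs - rhs) < 1e-6
```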
We now use Corollary \ref{C2.1} to derive a formula for the
Fourier transform of $\rho$. Since we are interested only in
the part of the operator on $(-2,2)$,
we will study
\[
(f\, d\rho)\,\widehat{ }\,(t) =
\int_{-\infty}^{\infty}
f(E) e^{-itE}\, d\rho(E),
\]
with $f\in C_0^{\infty}(-2,2)$.
\begin{Theorem}
\label{T2.1}
\begin{multline}
\label{form}
(f\, d\rho)\, \widehat{ }\, (t) =\\
\lim_{N\to\infty} \sum_{n_1,\ldots,n_N=-\infty}^{\infty}
\int_0^{\pi} g(k) \left( \prod_{j=1}^N c(n_j,g_j/\sin k) \right)
e^{i \left( \sum_{l=1}^N n_l \psi_l(k) - 2t\cos k \right) } \, dk,
\end{multline}
where $g(k)=(2/\pi) f(2\cos k)\sin^2 k\, R_1^{-2}(k)
\in C_0^{\infty}(0,\pi)$ and
\[
c(0,a)=1,\quad
c(n,a) = \left( 1 + \frac{2i}{a}\, \frac{n}{|n|} \right)^{-|n|}
\quad (n\not= 0).
\]
\end{Theorem}
{\it Proof.} By Corollary \ref{C2.1} and \eqref{eqR}, we have
\begin{multline*}
(f\, d\rho)\,\widehat{ }\,(t) =
\frac{2}{\pi} \lim_{N\to\infty}
\int_0^{\pi} \frac{f(2\cos k)\sin^2 k}{R^2_1(k)} \, e^{-2it\cos k}
\times \\
\prod_{j=1}^N \left( 1 - \frac{g_j}{\sin k}\sin \psi_j(k)
+\frac{g_j^2}{\sin^2 k} \sin^2 (\psi_j(k)/2) \right)^{-1}\, dk.
\end{multline*}
The factors in the product can be expanded in a
Fourier series:
\[
\frac{1}{1- a\sin\psi + a^2 \sin^2(\psi/2)}
=\sum_{n=-\infty}^{\infty} c(n,a) e^{in\psi},
\]
with the coefficients $c(n,a)$ defined in the statement of
the Theorem. This can be checked by summing the
series. As the convergence is uniform in $\psi$, we may interchange
the order of integration and summation.
Finally, the factor $(2/\pi)\sin^2 k\, R_1^{-2}(k)$ can be absorbed
by $g$, and the claim now follows. $\square$
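The Fourier expansion used in the proof can be confirmed numerically; the geometric decay $|c(n,a)|=|1+2i/a|^{-|n|}$ makes the series converge rapidly. A small sketch, with sample values of $a$ and $\psi$ chosen arbitrarily:

```python
import cmath, math

def c(n, a):
    if n == 0:
        return 1.0 + 0j
    return (1 + 2j * (1 if n > 0 else -1) / a) ** (-abs(n))

a, K = 0.8, 200   # sample coupling a = g / sin k; truncation index of the series
for psi in (0.3, 1.7, 4.0):
    lhs = 1.0 / (1 - a * math.sin(psi) + a * a * math.sin(psi / 2) ** 2)
    rhs = sum(c(n, a) * cmath.exp(1j * n * psi) for n in range(-K, K + 1))
    assert abs(lhs - rhs) < 1e-10
print("Fourier expansion of 1 / (1 - a sin psi + a^2 sin^2(psi/2)) verified")
```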
\section{Estimates on the Pr\"ufer angle}
The integrals from \eqref{form} contain
rapidly oscillating exponentials. As usual, we will exploit this
by integrating by parts. We will then need the following estimates
on the derivatives of the Pr\"ufer angles $\psi_n$.
From now on and throughout the rest of this paper, we assume
that the potential is given by \eqref{pot} and that
$x_{n-1}/x_n\to 0$ and $\sup |g_n| < \infty$.
\begin{Lemma}
\label{L3.1}
\begin{align*}
\psi_n'(k) & = 2x_n \left( 1 + O(x_{n-1}/x_n) \right) \\
\left| \psi_n^{(j)}(k)\right| & \le C_j x_{n-1}^j\quad\quad (j\ge 2)
\end{align*}
These estimates hold uniformly for $k$ from a compact subset of
$(0,\pi)$.
\end{Lemma}
The estimates on the first two derivatives were also proved in
\cite{KLS}. Since we will integrate by parts many times (not only
once, as in \cite{KLS}), we really need Lemma \ref{L3.1} in full
generality. Actually, in Sect.\ 7, we will also need a slightly
different version of the first statement (which will be more
accurate for small $g_n$'s), but this will be discussed later.
{\it Proof.} Let $\theta_n=\psi(x_{n-1}+1)$. Then \eqref{eqpsi}
says that
\[
\cot \left( \frac{\theta_n}{2}-k \right) =
\cot \frac{\psi_{n-1}}{2} - \frac{g_{n-1}}{\sin k}.
\]
We differentiate this equation
and solve for $\theta_n'$ to obtain
\begin{multline*}
\theta_n'=2 + \frac{1}{\sin^2\frac{\psi_{n-1}}{2} +
\left( \cos\frac{\psi_{n-1}}{2} - \frac{g_{n-1}}{\sin k}
\sin\frac{\psi_{n-1}}{2}\right)^2} \, \psi_{n-1}' - \\
\frac{g_{n-1}\,\frac{\cos k}{\sin^2 k}\sin^2\frac{\psi_{n-1}}{2}}
{\sin^2\frac{\psi_{n-1}}{2} +
\left( \cos\frac{\psi_{n-1}}{2} - \frac{g_{n-1}}{\sin k}
\sin\frac{\psi_{n-1}}{2}\right)^2}.
\end{multline*}
Now the $g_n$'s are bounded
and $\sin k$ is bounded away from zero (since $k$ varies over
a compact subset of $(0,\pi)$).
Taking \eqref{eqtheta} into account, we therefore obtain
\[
\psi_n'=2(x_n-x_{n-1}) + O(1)\psi_{n-1}' + O(1),
\]
where the constants implicit in $O(1)$ only depend on $\sup
|g_n|$ and $\inf \sin k$.
The $x_n$'s grow more
rapidly than exponentially, so the claim on $\psi_n'$ follows
by iterating this equation.
To prove the assertion on the higher derivatives, we note
that $\psi_n^{(j)}=\theta_n^{(j)}$ for $j\ge 2$. Thus, for these
$j$,
\begin{multline*}
\psi_n^{(j)} = \left(
\frac{\psi_{n-1}'}{\sin^2\frac{\psi_{n-1}}{2} +
\left( \cos\frac{\psi_{n-1}}{2} - \frac{g_{n-1}}{\sin k}
\sin\frac{\psi_{n-1}}{2}\right)^2} \right)^{(j-1)} - \\
\left(
\frac{g_{n-1}\,\frac{\cos k}{\sin^2 k}\sin^2\frac{\psi_{n-1}}{2}}
{\sin^2\frac{\psi_{n-1}}{2} +
\left( \cos\frac{\psi_{n-1}}{2} - \frac{g_{n-1}}{\sin k}
\sin\frac{\psi_{n-1}}{2}\right)^2} \right)^{(j-1)} .
\end{multline*}
Denote the denominator by $D$, that is,
\[
D=\sin^2\frac{\psi_{n-1}}{2} +
\left( \cos\frac{\psi_{n-1}}{2} - \frac{g_{n-1}}{\sin k}
\sin\frac{\psi_{n-1}}{2}\right)^2.
\]
If the derivatives are evaluated using the product rule
$j-1$ times, we get a sum of many terms. Fortunately, it
suffices to observe the following facts:\\
(i) The only term containing $\psi_{n-1}^{(j)}$ is
$\psi_{n-1}^{(j)}/D$.\\
(ii) Everything else is of the form
\[
D^{-m}\left( \prod_i \left( \psi_{n-1}^{(r_i)}\right)^{p_i}
\right) f(\psi_{n-1},k),
\]
where $f$ is a bounded function, $m\le j$, and the numbers $r_i,p_i$
satisfy $\sum_i r_ip_i \le j$.
We can now complete the proof by induction on $j$. By
the induction hypothesis (and a direct argument
for $j=2$), the above remarks imply that
\[
\left| \psi_n^{(j)}\right| \le C_j \left(
\left| \psi_{n-1}^{(j)}\right| + x_{n-1}^j \right) .
\]
The claimed estimates follow by iterating this. $\square$
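The leading-order behavior $\psi_n'(k)=2x_n(1+O(x_{n-1}/x_n))$ can also be observed numerically by differentiating an unwrapped Pr\"ufer angle. The sketch below is illustrative only: the potential is a sample sparse potential, and the branch of the angle is resolved under the assumption that each one-step increment stays within $\pi$ of the free increment $k$ (which holds here):

```python
import math

def psi_unwrapped(k, barriers, g, N):
    """Return the Pruefer angle psi(N), unwrapped continuously in n.
    phi(n) = psi(n)/2 - k is the angle of (y(n-1) sin k, y(n) - y(n-1) cos k);
    branches are fixed by assuming each one-step increment of phi lies
    within pi of the free increment k."""
    s, c = math.sin(k), math.cos(k)
    y_prev, y_cur = 0.0, 1.0                           # y(0) = 0, y(1) = 1
    phi = math.atan2(y_prev * s, y_cur - y_prev * c)   # phi(1) = 0
    for n in range(1, N):
        v = g if n in barriers else 0.0
        y_prev, y_cur = y_cur, (2 * c - v) * y_cur - y_prev
        raw = math.atan2(y_prev * s, y_cur - y_prev * c)
        step = (raw - phi - k) % (2 * math.pi)
        if step > math.pi:
            step -= 2 * math.pi          # increment is k + step, step in (-pi, pi]
        phi += k + step
    return 2 * phi + 2 * k               # psi(N) = 2 phi(N) + 2 k

barriers, xN, h = {10, 100, 1000}, 1000, 1e-6   # sample sparse potential, x_N = 1000
d = (psi_unwrapped(1.0 + h, barriers, 0.3, xN)
     - psi_unwrapped(1.0 - h, barriers, 0.3, xN)) / (2 * h)
print(d / (2 * xN))   # close to 1, as psi_N' = 2 x_N (1 + O(x_{N-1}/x_N))
assert abs(d / (2 * xN) - 1) < 0.2
```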
\section{Non-resonant terms}
The heading of this section refers to those terms from \eqref{form}
for which the exponential
is rapidly oscillating as a function of $k$.
It is useful to first make explicit in the notation the
largest index $j$ with $n_j\not= 0$. To this end, we denote
the expression from the right-hand side of \eqref{form}, with
no limit taken, by $I_N(t)$ (so $(f\,d\rho)\,\widehat{}\,
(t)=\lim_{N\to\infty} I_N(t)$). Also, let
\[
J_N(t)=
\sum_{\substack{n_1,\ldots,n_N\in\mathbb Z \\ n_N\not=0}}
\int_0^{\pi} g(k) \left( \prod_{j=1}^N c(n_j,g_j/\sin k) \right)
e^{i \left( \sum_{l=1}^N n_l \psi_l(k) - 2t\cos k \right) } \, dk.
\]
Then $I_N(t)=J_N(t)+I_{N-1}(t)$.
We can now describe our general strategy for estimating
\eqref{form}. By Lemma \ref{L3.1},
the derivative of the phase is roughly equal to
\[
\sum_{j=1}^N n_j \psi_j'(k) + 2t\sin k \approx
2 \sum_{j=1}^N n_j x_j + 2t\sin k .
\]
Since the $x_j$'s are rapidly increasing, we may expect this to be
of the order $2n_Nx_N+2t\sin k$. So if $|t|$ is either much larger
or much smaller than $x_N$ (and if $N$ is not too small), the exponential
will be heavily oscillating and the corresponding contribution to
\eqref{form} will be small. If $|t|$ is of the order of $x_N$
(``resonance''), a different treatment is necessary (see the next
section). Of course, the above reasoning is not literally true because
the $n_j$'s with $j0$ (arbitrarily small).
We first study the case when
\[
|t| \le \frac{1-\epsilon}{a}\, x_N .
\]
More specifically, we will analyze $J_N(t)$, assuming this inequality.
The series will be cut off at
\[
M= \left[ b x_N/x_{N-1} \right] ,
\]
where $[x]$ denotes the largest integer $\le x$, and $b>0$ will
be chosen later. So we have to distinguish two (sub-)cases:\\
a) $|n_j|\le M$ for all $j\in \{ 1,2,\ldots, N-1 \}$;\\
b) $|n_j| > M$ for some $j\in \{ 1,2,\ldots, N-1 \}$.
Before we go on, a general remark on the notation we will use
may be helpful. Namely, the term ``constant'' will refer to a number
that is independent of $t,N$, and the $n_j$'s (later, we will
sum over these latter parameters, anyway). It may depend, however,
on the other parameters of the problem, which are $\sup |g_n|$,
the $x_n$'s and the function $g\in C_0^{\infty}(0,\pi)$. It may also
depend on additional parameters we introduce like the $\epsilon$ from
above. A constant is usually denoted by $C$; the actual value of
$C$ may change from one formula to the next. Also, we sometimes write
$a\lesssim b$ instead of $a\le Cb$.
Now let us start with case a). Abbreviate
\[
\varphi(k)=\sum_{j=1}^N n_j \psi_j(k) - 2t\cos k .
\]
Using Lemma \ref{L3.1}, we then see that
\begin{align*}
\left| \varphi' \right| & \ge |n_N\psi_N'|-\sum_{j=1}^{N-1}
|n_j\psi_j'| - 2(1-\epsilon)x_N\\
& \ge 2\left( |n_N|-1+\epsilon \right) x_N -C|n_N|x_{N-1}
-2Cb (x_N/x_{N-1}) \sum_{j=1}^{N-1} x_j .
\end{align*}
If $N$ is sufficiently large
and if $b$ is chosen sufficiently small, then we may
further estimate this by, let us say,
\begin{equation}
\label{phi}
\left| \varphi' \right| \ge \epsilon |n_N| x_N .
\end{equation}
In order to obtain good estimates, we must now integrate by
parts sufficiently many times. To do this, we introduce the
differential expression
\[
L= \frac{-i}{\varphi'(k)}\, \frac{d}{dk}.
\]
Note that $L(e^{i\varphi})=e^{i\varphi}$. Therefore, we can
manipulate the integrals from the expression for $J_N(t)$ as
follows.
\[
\int g \left( \prod c \right) e^{i\varphi}\, dk
= \int g \left( \prod c \right) \left( L^m e^{i\varphi}
\right) \, dk =
\int e^{i\varphi} \left[ {L'}^m \left( g \prod c \right) \right]
\, dk
\]
Here, $m\in\mathbb N$ may still be chosen and
\[
L'= \frac{d}{dk} \, \frac{i}{\varphi'(k)}
\]
is the transpose of $L$. There are no boundary terms because $g$
has compact support. We obtain the estimate
\begin{equation}
\label{4.4}
\left| \int g \left( \prod c \right) e^{i\varphi}\, dk \right|
\le \pi \max_{k\in\text{supp }g} \left| {L'}^m \left( g \prod c \right)
\right| ;
\end{equation}
we expect the right-hand side to be small because $\varphi'$ is
large by \eqref{phi}.
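The mechanism behind \eqref{4.4} is the standard non-stationary phase estimate: a smooth, compactly supported amplitude integrated against $e^{i\varphi}$ with $\varphi'$ bounded away from zero gives super-polynomial decay. A toy illustration (linear phase $\varphi=\lambda k$ and a bump amplitude, both our own choices):

```python
import cmath, math

def bump(k):                     # a C_0^infinity amplitude supported in (0, 1)
    return math.exp(-1.0 / (k * (1 - k))) if 0 < k < 1 else 0.0

def osc_integral(lam, n=40000):  # midpoint rule for integral of bump(k) e^{i lam k} dk
    h = 1.0 / n
    return sum(bump((j + 0.5) * h) * cmath.exp(1j * lam * (j + 0.5) * h)
               for j in range(n)) * h

I5, I40 = abs(osc_integral(5)), abs(osc_integral(40))
print(I5, I40)
# the phase derivative lam never vanishes on the support, so repeated
# integration by parts gives decay faster than any power of lam
assert I40 < I5 / 10
```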
So, our next task is to control ${L'}^m \left( g \prod c \right)$.
Each of the $m$ derivatives contained in ${L'}^m$ can act either
on $g$ or on some $c(n_j,g_j/\sin k)$ or on one of the factors
$1/\varphi'$. The function $g$ is smooth, so $|g^{(j)}|\le C_m$.
Next, note that
\[
\frac{d}{dk} c(n,g/\sin k) = c(n,g/\sin k)
\frac{\mp 2i\cos k}{g\pm 2i\sin k} \, |n|,
\]
where the signs depend on the sign of $n$. Since $c$ itself decays
exponentially -- $|c(n,g/\sin k)| \le e^{-\gamma |n|}$, where
$\gamma>0$ depends only on $\sup |g_n|$ and $\inf \sin k$ -- we obtain
the bound
\begin{equation}
\label{estc}
\left| \frac{d^j}{dk^j} c(n,g/\sin k) \right| \le
C_j |n|^j e^{-\gamma |n|} .
\end{equation}
Finally, $(1/\varphi')^{(T)}$ is a sum of terms of the form
\begin{equation}
\label{4.1}
C\, \frac{\varphi^{(r_1)}\cdots \varphi^{(r_s)}}{\left(
\varphi'\right)^q},
\end{equation}
where $r_i\ge 2$ and
\begin{equation}
\label{4.2}
\sum_{i=1}^s r_i= q+T-1;
\end{equation}
the $r_i$'s need not be distinct.
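As a quick check of \eqref{4.1} and \eqref{4.2} (a routine computation, included for the reader's convenience), take $T=2$:
\[
\left( \frac{1}{\varphi'} \right)'' = -\frac{\varphi'''}{(\varphi')^2}
+ \frac{2(\varphi'')^2}{(\varphi')^3} .
\]
The first term has $s=1$, $r_1=3$, $q=2$; the second has $s=2$, $r_1=r_2=2$, $q=3$. In both cases $\sum r_i = q+T-1$, as claimed.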
To bound these expressions, we use Lemma \ref{L3.1} which implies that
(for $2\le r \le m$)
\begin{equation}
\label{estphi'}
\left| \varphi^{(r)}\right| \le C_m \sum_{j=1}^N |n_j|x_{j-1}^r + 2|t|
\lesssim (x_N/x_{N-1}) x_{N-2}^r + |n_N|x_{N-1}^r + x_N .
\end{equation}
We introduce the abbreviation $A_N(r)$ for this latter bound.
Recalling that $|\varphi'|\gtrsim |n_N|x_N$ (by \eqref{phi}), we can thus
bound \eqref{4.1} by $(|n_N|x_N)^{-q} \prod_{i=1}^s A_N(r_i)$.
The above considerations show that ${L'}^m \left( g \prod c \right)$
is a sum of many terms
each of which admits a bound of the form
\begin{equation}
\label{4.3}
C_m \left( |n_N|x_N \right)^{-P} \prod_{i=1}^s A_N(r_i)
\prod_{j=1}^N |n_j|^{p_j} e^{-\gamma |n_j|} .
\end{equation}
More precisely, such a bound results if
$p_j$ derivatives act on $c(n_j,g_j/\sin k)$.
Consequently, the remaining derivatives (if any) act on some
factor $1/\varphi'$ or on $g$.
For later use, we record
the fact that the number of different terms of the
form \eqref{4.3} admits a bound of the form $CN^m$, where $C$
depends on $m$ only. To prove this, observe that the product
rule, applied to $\left( \prod_{j=1}^N c \right)^{(l)}$ with
$0\le l \le m$, produces at most $N^l\le N^m$ terms. Furthermore,
the number of possibilities of distributing the remaining
$m-l$ derivatives among $g$ and the factors $1/\varphi'$ does
not depend on $N$.
We now claim that there are the following restrictions on the
parameters: $P\ge m$, $s\ge 0$, $r_i\ge 2$, $p_j\ge 0$ and
\[
\sum_{i=1}^s r_i + \sum_{j=1}^N p_j \le P .
\]
The first inequality just says that the number of factors
$1/\varphi'$ increases when derivatives act on them, and the following
three relations are obvious. The last inequality is obtained as follows.
$\sum p_j$ is the number of derivatives acting on $\prod c$, thus if
$T$ denotes the number of
derivatives that act on some factor
$1/\varphi'$, then $T\le m-\sum p_j$. Assume for the moment that
these $T$ derivatives all act on the same factor $1/\varphi'$.
Then expressions of the form \eqref{4.1} result, and the exponent
$q$ must be related to $P$ by $P=q+m-1$. Hence \eqref{4.2} gives
\[
\sum_{i=1}^s r_i = P-m+1 + T -1 \le P-\sum_{j=1}^N p_j ,
\]
as claimed. We need not pay special attention to the case
where the $T$ derivatives act on different factors
$1/\varphi'$ because only terms of the type already handled
can arise in this way.
To simplify \eqref{4.3}, we observe that
\begin{align*}
\frac{A_N(r)}{\left( |n_N|x_N\right)^r} & \lesssim
\frac{1}{|n_N|^r} \left( \frac{x_{N-2}}{x_{N-1}}\right)^r
\left( \frac{x_{N-1}}{x_N} \right)^{r-1} + \frac{1}{|n_N|^{r-1}}
\left( \frac{x_{N-1}}{x_N} \right)^r +
\frac{1}{|n_N|^r x_N^{r-1}}\\
& \lesssim \left( \frac{x_{N-1}}{|n_N|x_N} \right)^{r-1} .
\end{align*}
Hence
\[
\eqref{4.3} \lesssim \left( \frac{x_{N-1}}{|n_N|x_N}
\right)^{\sum (r_i-1)} \left( \frac{1}{|n_N|x_N}
\right)^{P-\sum r_i} \prod_{j=1}^N |n_j|^{p_j}
e^{-\gamma |n_j|},
\]
and these bounds can now be summed over the range
$n_i\in\mathbb Z$, $n_N\not=0$, $|n_i|\le M$ (actually,
this latter restriction is not needed at this point). So, let
\[
D_p = \sum_{n\in\mathbb Z} |n|^p e^{-\gamma |n|},
\]
and use the
conditions on the various exponents (see the discussion following
\eqref{4.3}); we obtain
\begin{align*}
\sum_{\substack{n_1,\ldots ,n_N\\ n_N\not= 0}}
\eqref{4.3} & \le C_m
\left( \frac{x_{N-1}}{x_N} \right)^{\sum (r_i-1)}
\left( \frac{1}{x_N}\right)^{P-\sum r_i} \prod_{j=1}^N D_{p_j}\\
& = C_m \frac{x_{N-1}^{\sum (r_i-1)}}
{x_N^{P-s}} \prod_{j=1}^N D_{p_j}\\
& \le C_m
\left( \frac{x_{N-1}}{x_N} \right)^{P-s}
\prod_{j=1}^N \left( D_{p_j} x_{N-1}^{-p_j}\right)\\
& \le C_m
\left( \frac{x_{N-1}}{x_N} \right)^{m/2}
\prod_{j=1}^N \left( D_{p_j} x_{N-1}^{-p_j} \right).
\end{align*}
The last inequality holds because $r_i\ge 2$ and $\sum_{i=1}^s r_i
\le P$, hence $s\le P/2$, and thus $P-s\ge P/2 \ge m/2$.
We can now find an $N_0=N_0(m)$ so that $D_p\le D_0 x_{N-1}^p$
for all $N\ge N_0$, $p=0,1,\ldots,m$. We use this observation
and also replace $m/2$ by $m$ to obtain
\[
\sum_{\substack{n_1,\ldots ,n_N\\ n_N\not= 0}}
\eqref{4.3} \le C_m D_0^N \left( \frac{x_{N-1}}{x_N} \right)^m
\quad\quad (N\ge N_0).
\]
Up to now, we have estimated only the typical term from the
decomposition of ${L'}^m \left( g \prod c \right)$ performed
above, but, as already noted, the number of such terms is bounded
by $CN^m$,
so ${L'}^m \left( g \prod c \right)$ satisfies
the same estimate (with a possibly larger constant and $D_0$
replaced by, let us say, $2D_0$).
Because of \eqref{4.4}, the discussion of case a) is thus
complete.
Case b) is much easier. Now $|n_j|>M$ for some $j\in
\{1,\ldots, N-1 \}$, where $M=[bx_N/x_{N-1}]$. Use \eqref{estc}
(with $j=0$) and sum over all $n_1,\ldots, n_N$ for which
we are in case b). This gives
\begin{align*}
\sum_{\text{Case b)}}
\left| \int g \left( \prod c \right) e^{i\varphi} \right| & \lesssim
\sum_{j=1}^{N-1} \sum_{n_1\in\mathbb Z}e^{-\gamma |n_1|} \cdots
\sum_{|n_j|>M} e^{-\gamma |n_j|} \cdots \sum_{n_N\in\mathbb Z}
e^{-\gamma |n_N|} \\
& \lesssim ND_0^N e^{-\gamma b x_N/x_{N-1}}
\le (2D_0)^N e^{-\gamma b x_N/x_{N-1}}.
\end{align*}
We summarize:
\begin{Lemma}
\label{L4.1}
Suppose that $|t|\le (1/a-\epsilon)x_N$ ($\epsilon>0$). Then,
for any $m\in\mathbb N$, there are constants $C_m, D$, not depending
on $t$ or $N$,
so that $|J_N(t)|\le C_m D^N (x_{N-1}/x_N)^m$.
Moreover, $D$ is also independent of $m$.
\end{Lemma}
{\it Proof.} It suffices to prove this for large $N$
because then validity of the bound for all $N$ is achieved
by simply adjusting the constant. By combining the above estimates,
we obtain
\[
|J_N(t)|\le C_m D^N \left[
\left(\frac{x_{N-1}}{x_N}\right)^m
+ e^{-\gamma b x_N/x_{N-1}}\right]\quad\quad
(N\ge N_0(m)),
\]
and the second term is much smaller than the first one
for large $N$ and can
thus be dropped. $\square$
The opposite case ($|t|$ much larger than $x_N$) can be treated using
similar ideas. It will thus suffice to provide a sketch of the
argument. We fix once and for all a sequence $B_N\le \ln x_N$ (say)
that tends to
infinity. In fact, the point is that $B_N$ may go to infinity arbitrarily
slowly (for instance, $B_N=(\ln x_N)^{\epsilon}$
is a reasonable choice). We now assume that
\[
|t|\ge B_N x_N \ln x_N .
\]
We can again prescribe an arbitrarily large exponent $m\in
\mathbb N$, and
we again distinguish two subcases:\\
a) $|n_j|\le (m/\gamma) \ln |t|$ (where $\gamma$ is from \eqref{estc})
for $j=1,\ldots, N$. We will estimate $I_N$ (not
$J_N$), so we do not assume that $n_N\not= 0$.\\
b) $|n_j|> (m/\gamma) \ln |t|$ for some $j\in\{ 1,\ldots,N\}$.
In case a), we have that for sufficiently large $N$,
\begin{align*}
|\varphi'| & \ge 2a_0 |t| - \sum_{j=1}^N 2x_j\left(
1+ O(x_{j-1}/x_j) \right) \frac{m}{\gamma} \ln |t| \\
& \ge 2 a_0 |t| - 3 x_N \frac{m}{\gamma} \ln |t|,
\end{align*}
where $a_0=\min_{k\in\text{supp }g} \sin k>0$.
Now $x/\ln x$ is an increasing function of $x$ for $x>e$, so
\[
\frac{|t|}{\ln |t|} \ge \frac{B_Nx_N \ln x_N}{\ln x_N + \ln (B_N\ln x_N)},
\]
which, for large $N$, is bigger than $(B_N/2)x_N$, say.
Hence
\[
|\varphi'| \ge 2a_0|t| - \frac{6m}{\gamma B_N} |t| \ge a_0 |t|
\]
for large $N$.
We now integrate by parts sufficiently many times (the exact
number of integrations depends
on $m$), as above.
Lemma \ref{L3.1} now gives
\[
\left| \varphi^{(r)} \right| \le C_m \sum_{j=1}^N |n_j| x_{j-1}^r + 2|t|
\lesssim x_{N-1}^r \ln |t| + |t|,
\]
and this estimate replaces \eqref{estphi'}. If this bound is again denoted
by $A_N(r)$, then one shows that $A_N(r)/|t|^r \lesssim (x_{N-1}/|t|)^{r-1}$.
It is this combination, with $|t|$ in the denominator, that is of
interest here because now $|\varphi'|\gtrsim |t|$.
Having made these adjustments, the argument now proceeds as above;
the final result is the bound
\[
\sum_{\substack{n_1,\ldots,n_N \\ \text{Case a)}}}
\left| \int g \left( \prod c\right) e^{i\varphi} \right| \le C_mD^N \left(
\frac{x_{N-1}}{|t|} \right)^m .
\]
As usual, the constant $C_m$ depends on $m$ and the sequence $B_N$, but
of course not on $t$ or $N$. Moreover, the constant $D$ is also
independent of $m$.
In case b), we can argue as in case b) above to obtain
\[
\sum_{\substack{n_1,\ldots,n_N \\ \text{Case b)}}}
\left| \int g \left( \prod c \right) e^{i\varphi} \right| \le
C N D_0^N e^{-\gamma (m/\gamma)\ln |t|} = C N D_0^N |t|^{-m} .
\]
Putting things together, this gives:
\begin{Lemma}
\label{L4.2}
Suppose that $|t|\ge B_Nx_N\ln x_N$. Then,
for any $m\in\mathbb N$, there are constants $C_m, D$, independent of
$t,N$,
so that $|I_N(t)|\le C_m D^N(x_{N-1}/|t|)^m$.
Moreover, $D$ is also independent of $m$.
\end{Lemma}
{\it Proof.} Combine the above estimates, just as in the proof of
Lemma \ref{L4.1}. $\square$
For a large set of times $t$, we are in one of the two situations
treated by Lemmas \ref{L4.1} and \ref{L4.2}, respectively, for
every $N\in\mathbb N$. In view of the physical interpretation
attempted in the Introduction, we call this set the set
of non-resonant times. More precisely, define the resonant set $R$ by
\begin{equation}
\label{defR}
R=\bigcup_{n\in\mathbb N} \left[ \left( \frac{1}{a} -
\epsilon \right) x_n, B_nx_n\ln x_n \right] .
\end{equation}
For $a=1$ and $B_n = (\ln x_n)^{\epsilon}$, this reduces to the
definition given in the formulation of Theorem \ref{T1.2}.
\begin{Theorem}
\label{T4.1}
For any $m\in\mathbb N$, the following holds.
If $|t| \notin R$ and if $N\in\mathbb N$ is such that
\begin{equation}
\label{4.5}
B_Nx_N \ln x_N < |t| < (1/a-\epsilon) x_{N+1} ,
\end{equation}
then
\[
\left| (f\, d\rho)\,\widehat{}\, (t) \right| \le
C_m \left[ D^N \left( \frac{x_{N-1}}{|t|} \right)^m
+ \sum_{n=N+1}^{\infty} D^n \left( \frac{x_{n-1}}{x_n} \right)^m\right] .
\]
The constant $D$ is independent of $m$.
\end{Theorem}
{\it Remark.} Of course, since we only assumed that $x_{n-1}/x_n
\to 0$, the series can diverge, in which case Theorem \ref{T4.1} is
vacuous.
{\it Proof.} By \eqref{form} and the definition of $I_N,J_n$, we
can write
\[
(f\, d\rho)\,\widehat{}\, (t) = I_N(t) + \sum_{n=N+1}^{\infty}
J_n(t),
\]
where we use the $N$ from \eqref{4.5}. We now apply Lemma \ref{L4.2}
to estimate $I_N(t)$ and Lemma \ref{L4.1} to bound the
$J_n(t)$ ($n\ge N+1$).
$\square$
\section{Resonant terms}
It remains to analyze the case when $t\in R$. So suppose that
\[
(1/a-\epsilon) x_N \le |t| \le B_Nx_N\ln x_N.
\]
The point $k=\pi/2$ (which corresponds to the energy $E=0$)
plays a special role now because the second derivative of
$\cos k$ is zero there. Therefore, we also assume that
$\pi/2\notin \text{supp }g$.
We introduce the new phase
\[
\theta(k)= 2k \sum_{j=1}^N n_jx_j -2t\cos k .
\]
Then, using the notation from the preceding section, we have that
$\varphi=\theta+\eta$, where
\[
\eta(k)= \sum_{j=1}^N n_j(\psi_j(k)-2x_jk).
\]
As usual, we need information on the derivatives. By Lemma \ref{L3.1},
\[
\left| \eta' \right| \lesssim \sum_{j=1}^N |n_j| x_{j-1}
\]
(where we put $x_0:=1$). Also,
\[
\theta'= 2\sum_{j=1}^N n_jx_j + 2t\sin k,\quad
\theta'' = 2t\cos k .
\]
In particular, our assumption $\pi/2\notin \text{supp }g$ ensures
that $|\theta''|\approx |t|$.
We regard $\eta$ as a perturbation of $\theta$. Resonance is possible
now, that is, $\theta'(k)$ can be small, but since $|\theta''|$ is
large, this can only happen for a small set of $k$'s, and outside
this set, we still have oscillatory integrals.
To make these ideas precise, introduce the sets
\begin{align*}
S_0 & = \text{supp }g,\\
S_1 & = \{ k\in S_0 : \left| \theta'(k) \right| \le \delta_1 x_N\} ,\\
S_2 & = \{ k\in S_1 : \left| \theta'(k) \right| \le \delta_2 x_N\},
\ldots
\end{align*}
The numbers $\delta_j>0$ will be chosen later; they will satisfy
$1=:\delta_0 \gg \delta_1\gg \delta_2\gg \ldots$. Clearly, $S_0
\subset [\epsilon,\pi/2-\epsilon]\cup [\pi/2+\epsilon,\pi-\epsilon]$
for some $\epsilon>0$. By treating these two parts of
the support of $g$ separately and replacing the actual support
with the corresponding interval, we may assume that $S_0$ is
an interval. Then $\theta''$ does not change sign on $S_0$, and
hence all the sets $S_n$ are intervals. Clearly,
$S_0\supset S_1\supset S_2 \supset \cdots$. It also follows that
\begin{equation}
\label{estSn}
\left| S_n \right| \lesssim \delta_n \frac{x_N}{|t|}\lesssim
\delta_n .
\end{equation}
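The estimate \eqref{estSn} is a consequence of the mean value theorem (a routine step, made explicit here): since $\pi/2\notin\text{supp }g$, we have $|\theta''|\gtrsim |t|$ on $S_0$, so for $k_1,k_2\in S_n$,
\[
2\delta_n x_N \ge \left| \theta'(k_1)-\theta'(k_2) \right|
= \left| \theta''(\xi) \right| |k_1-k_2| \gtrsim |t|\, |k_1-k_2| ,
\]
whence $|S_n|\lesssim \delta_n x_N/|t|$; the second inequality of \eqref{estSn} then follows because $|t|\gtrsim x_N$ in the resonant regime.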
Note also that the sets $S_l$ depend on the $n_j$'s.
Our goal is to estimate $I_N(t)$. The integrals $J_n(t)$ ($n>N$)
do not contain resonant terms, and we can use the results of
Sect.\ 4.
We must estimate
$\int g\left( \prod c\right) e^{i(\theta+\eta)}$.
Using the sets $S_n$, we can split the integrals
as follows:
\[
\int_{S_0}\cdots \; = \int_{S_m}\cdots \; + \sum_{l=0}^{m-1}
\int_{S_l\setminus S_{l+1}}\cdots
\]
The number $m$ is a parameter which we leave unspecified for
the time being.
The integrals over
$S_l\setminus S_{l+1}$ are again handled by integrating by
parts. More precisely, we have that
\begin{multline}
\label{5.1}
\left| \int_{S_l\setminus S_{l+1}}
g\left( \prod c\right) e^{i(\theta+\eta)}\right|
=\left| \int_{S_l\setminus S_{l+1}}
g\left( \prod c\right) \frac{(e^{i\theta})'}
{i\theta'} e^{i\eta} \right| \\
\le \text{boundary terms } + |S_l| \sup_{k\in S_l\setminus S_{l+1}}
\left| \left( \frac{ g \left( \prod c \right) e^{i\eta}}{\theta'}
\right)' \right| .
\end{multline}
Since $S_l\setminus S_{l+1}$ consists of at most two disjoint intervals,
the boundary terms are obtained by inserting the endpoints of these
intervals into $g\left( \prod c \right) / \theta'$. As a result,
these boundary terms may be estimated by
\[
\left| \text{boundary terms } \right| \lesssim
\frac{e^{-\gamma \sum |n_j|}}{\delta_{l+1}x_N} .
\]
For the second term from the right-hand side of \eqref{5.1}, we
use the by now familiar arguments from the preceding section. We obtain
the bound
\[
\left( \frac{x_{N-1}\sum |n_j|}{\delta_{l+1} x_N} +
\frac{|t|}{\delta_{l+1}^2 x_N^2} \right)
\delta_l e^{-\gamma\sum |n_j|}.
\]
We have used \eqref{estSn} here.
The numerator of the first term in parentheses
is a bound on $|\eta'|$; the second ratio bounds the contribution where
the derivative acts on $1/\theta'$. Finally, the derivative may also
act on $\prod c$ or on $g$, but this leads to contributions that are
smaller than the ones already obtained.
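In more detail (writing $G=g\prod c$ for brevity; this expansion is implicit in the preceding discussion), the derivative from \eqref{5.1} reads
\[
\left( \frac{G\, e^{i\eta}}{\theta'} \right)'
= \frac{\left( G' + iG\eta' \right) e^{i\eta}}{\theta'}
- \frac{G\, \theta''\, e^{i\eta}}{(\theta')^2} ,
\]
and on $S_l\setminus S_{l+1}$ we have $|\theta'|>\delta_{l+1}x_N$, $|\theta''|\lesssim |t|$, $|\eta'|\lesssim x_{N-1}\sum |n_j|$, and $|G|\lesssim e^{-\gamma\sum |n_j|}$. Combined with $|S_l|\lesssim \delta_l$ from \eqref{estSn}, this yields the bound displayed above.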
As usual, these bounds will now be summed over the $n_j$'s.
This gives
\[
\sum_{n_1,\ldots,n_N\in\mathbb Z}
\left| \int_{S_l\setminus S_{l+1}}
g\left( \prod c \right) e^{i\varphi} \right| \le
CD^N \left( \frac{\delta_l x_{N-1}}
{\delta_{l+1}x_N} + \frac{\delta_l B_N \ln x_N}{\delta_{l+1}^2 x_N}
\right) .
\]
The bound $CD^N/(\delta_{l+1}x_N)$ on the boundary terms does not
occur here because it is dominated by the second term from the
right-hand side of the above inequality.
We also need an estimate on $\int_{S_m}$, but this is easy,
since we clearly have that
\[
\left| \int_{S_m} g \left( \prod c
\right) e^{i\varphi} \right| \lesssim \delta_m e^{-\gamma\sum |n_j|}.
\]
After summing over the $n_j$'s, we thus get the bound $CD^N \delta_m$.
Combining the facts just established, we see that
\begin{multline}
\label{5.2}
\sum_{n_1,\ldots,n_N\in\mathbb Z}
\left| \int_{S_0}
g\left( \prod c \right) e^{i\varphi} \right| \le CD^N \times\\
\left( \delta_m
+ \frac{x_{N-1}}{x_N} \sum_{l=0}^{m-1} \frac{\delta_l}
{\delta_{l+1}} +\frac{B_N\ln x_N}{x_N} \sum_{l=0}^{m-1}
\frac{\delta_l}{\delta_{l+1}^2}\right) .
\end{multline}
\begin{Theorem}
\label{T5.1}
Suppose that $0\notin\text{supp }f$ and
\[
(1/a-\epsilon) x_N \le |t| \le B_N x_N \ln x_N .
\]
a) Then for arbitrary $\sigma>0$, $m\in\mathbb N$, there exist
constants $C,D$, independent of $N,t$, so that
\begin{multline*}
\left| (f\, d\rho)\,\widehat{}\, (t) \right| \le
C \sum_{n=N+1}^{\infty} D^n \left(\frac{x_{n-1}}{x_n}\right)^m +\\
CD^N \left[ \left( \frac{x_{N-1}}{x_N} \right)^{1/2}
+ B_N \ln x_N \left( \frac{x_N}{x_{N-1}} \right)^{\sigma}
\frac{1}{(x_{N-1}x_N)^{1/2}} \right] .
\end{multline*}
The constant $D$ is also independent
of $m$ and $\sigma$.
b) We also have the estimate
\[
\left| (f\, d\rho)\,\widehat{}\, (t) \right| \le
C \sum_{n=N+1}^{\infty} D^n \left(\frac{x_{n-1}}{x_n}\right)^m +
CD^N \left[ \frac{x_{N-1}}{x_N^{1-\sigma}}
+ \frac{B_N \ln x_N}{x_N^{1/2-\sigma}} \right] .
\]
\end{Theorem}
{\it Proof.} a) Here, we take $\delta_l=(x_{N-1}/x_N)^{\sigma l}$.
Then \eqref{5.2} yields
\begin{multline}
\label{5.3}
\sum_{n_1,\ldots,n_N\in\mathbb Z}
\left| \int_{S_0}
g\left( \prod c \right) e^{i\varphi} \right| \le CD^N \times\\
\left( \left( \frac{x_{N-1}}{x_N}\right)^{\alpha}
+ \left( \frac{x_{N-1}}{x_N}\right)^{1-\sigma} +
B_N \ln x_N \left( \frac{x_N}{x_{N-1}}\right)^{\sigma}
\frac{1}{x_{N-1}^{\alpha} x_N^{1-\alpha}} \right) ,
\end{multline}
where $\alpha=\sigma m$. The constant $D$ is independent of $m$
and $\sigma$.
But, as in the proof of Theorem \ref{T4.1},
\[
(f\, d\rho)\,\widehat{}\, (t) = I_N(t) + \sum_{n=N+1}^{\infty}
J_n(t);
\]
$I_N(t)$ has just been estimated in \eqref{5.3},
and the $J_n(t)$ can be bounded using Lemma \ref{L4.1}.
So $|J_n(t)|\le CD^n (x_{n-1}/x_n)^m$; also, in \eqref{5.3}, we
specialize to $\alpha=1/2$.
The claim now follows since we may clearly assume that $\sigma
\le 1/2$.
b) Proceed as in the proof of part a), but with $\delta_l=
x_N^{-\sigma l}$ (and again $\alpha=1/2$). $\square$
\section{Proof of Theorem \ref{T1.2}}
a) The hypothesis says that $x_n/x_{n-1}=e^{a_n n}$, where
$a_n\to\infty$. It is now straightforward to check that the
bounds of Theorems \ref{T4.1}, \ref{T5.1}a) tend to zero as
$N\to\infty$, provided the parameters are chosen appropriately.
For instance, we can take $B_N=\ln x_N$
and $\sigma\in (0,1/2)$.
(In fact, Theorem \ref{T5.1} has the additional hypothesis
that $0\notin\text{supp }f$, but this causes no problems since
$C_0^{\infty}$ functions with this property are still dense
in $L_2((-2,2), d\rho)$.)
b) Here, we put $B_N=(\ln x_N)^{\epsilon}$. Note also that $a\le 1$,
so the set $R$ defined in Theorem \ref{T1.2}b) contains the set
$R$ from \eqref{defR}. So, if $|t|\notin R$, Theorem \ref{T4.1}
applies. We will now further estimate the bound from the statement
of this Theorem. First of all,
\[
\left( \frac{x_{N-1}}{|t|} \right)^m \le
\left( \frac{x_{N-1}}{|t|} \right)^m \left( \frac{|t|}
{x_N} \right)^{m(1-\mu)} \le C_m |t|^{-m\mu}.
\]
As for the second term, we observe that
\begin{align*}
\sum_{n=N+1}^{\infty} D^n \left( \frac{x_{n-1}}{x_n}
\right)^m & \le C_m \sum_{n=N+1}^{\infty} \frac{D^n}{x_n^{m\mu}}\\
& = \frac{C_m D^{N+1}}{x_{N+1}^{m\mu}} \sum_{n=0}^{\infty}
D^n \left( \frac{x_{N+1}}{x_{N+1+n}} \right)^{m\mu} .
\end{align*}
Now for sufficiently large $N$, we have $x_{N+1}/x_{N+1+n}\le 2^{-n}$
(say) for all $n\ge 0$,
so the series converges for large $m$ and the sum may be
estimated by a number that does not depend on $N$. Thus
\[
\sum_{n=N+1}^{\infty} D^n \left( \frac{x_{n-1}}{x_n}
\right)^m \le C_m D^N x_{N+1}^{-m\mu} \le C_m D^N |t|^{-m\mu} .
\]
Finally, $D^N \lesssim x_N \lesssim |t|$, so (i) follows by
taking $m$ large enough.
Part (ii) follows in a similar way from Theorem \ref{T5.1}b),
so we will only sketch the argument. Fix a sufficiently small
$\sigma>0$. Then, for instance,
\[
\frac{x_{N-1}}{x_N^{1-\sigma}} \lesssim x_N^{-\mu + \sigma}
\lesssim \left( \frac{(\ln |t| )^{1+\epsilon}}{|t|} \right)^{\mu-\sigma}.
\]
The last term from the bound of Theorem \ref{T5.1}b) is treated
similarly, and the first term has already been discussed above.
The additional factors $D^N$ and $D^N (\ln x_N)^{1+\epsilon}$
are $O(|t|^{\delta})$
for arbitrary $\delta>0$, so they do not spoil these estimates.
$\square$
\section{Proof of Theorem \ref{T1.1}}
Since, as noted above, part a) is actually a result from
\cite{KLS}, we only need to prove part b).
First of all, absence of point spectrum is easy:
the $g_n$ are bounded,
so \eqref{eqR} shows that for every $k\in(0,\pi)$,
there exists $q>0$ so that $R_n \ge q^n$. But then
\[
\sum_{m=1}^{\infty} R(m)^2 = \sum_{n=1}^{\infty}
R_n^2 (x_n-x_{n-1})
\]
diverges, which implies that there are no $\ell_2$ solutions to
\eqref{se}. Hence $\sigma_{pp}\cap (-2,2)=\emptyset$.
Now as in \cite{KLS}, the
main part of the proof will depend on a general criterion
for absence of absolutely continuous spectrum
from \cite{LS}. Namely, if $I\subset (-2,2)$ is an open interval
and if we can find a sequence $N_m\to\infty$ so that for almost
all $E\in I$ (with respect to Lebesgue measure), $\lim_{m\to\infty}
R(N_m,E)=\infty$, then it will follow that $\sigma_{ac}\cap
I = \emptyset$.
We will again work with $k$ instead of $E$. Fix a compact
subinterval $I$ of $(0,\pi)$. According to what has been said
above, we want to find a sequence $N_m\to\infty$ so that
$R_{N_m}(k)\to\infty$ for almost all $k\in I$.
By \eqref{eqR} and the fact that $R_1=1$,
\[
\ln R_{N+1}(k) = \sum_{n=1}^N X_n(k,\psi_n(k)),
\]
where (writing $u_n(k)=g_n/\sin k$)
\[
X_n(k,\psi)= \frac{1}{2}\ln \left[ 1-u_n(k)\sin\psi
+u_n^2(k)\sin^2 (\psi/2) \right] .
\]
For every $n\in\mathbb N$, we subdivide $I$ into subintervals
$I_0^{(n)}, I_1^{(n)},\ldots, I_{N_n}^{(n)}$, so that for $l>0$,
$\psi_n(k)$ runs over an interval of length $2\pi$ if $k$ varies
through $I_l^{(n)}$. We start this process of subdividing $I$
at the right endpoint of $I$,
so we end up with an interval $I_0^{(n)}$ at the left endpoint of
$I$ which has the property that
$\psi_n(I_0^{(n)})$ is an interval of length less than or equal to
$2\pi$. Since $\psi_n' \sim 2x_n$ by Lemma \ref{L3.1},
we have the estimate $|I_l^{(n)}| \lesssim 1/x_n$. We introduce
\[
\gamma_{n,l} = \frac{1}{| I_l^{(n)}| }
\int_{I_l^{(n)}} X_n(k,\psi_n(k))\, dk
\]
and $Y_n(k) = X_n(k,\psi_n(k)) - \gamma_{n,l}$ ($k\in I_l^{(n)}$).
So, in particular, $\int_{I_l^{(n)}} Y_n(k)\, dk=0$.
Let us now compute the second moments of $Y_n$ with respect to
the probability measure $dP(k)=|I|^{-1}\, dk$ on $I$. We first consider
$EY_mY_n$ with $m<n$. Given any $\epsilon>0$, no matter how small, we
can find an $N_0=N_0(\epsilon)$ so that $x_m/x_n< \epsilon^{n-m}$
if $n>m\ge N_0$. Taking this into account, we find that
\begin{equation}
\label{7.5}
\sum_{1\le m