\documentclass{article}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\newtheorem{Theorem}{Theorem}[section]
\newtheorem{Proposition}[Theorem]{Proposition}
\newtheorem{Lemma}[Theorem]{Lemma}
\newtheorem{Definition}[Theorem]{Definition}
\newtheorem{Corollary}[Theorem]{Corollary}
\begin{document}
\title{Schr\"odinger operators with decaying potentials:
some counterexamples}
\author{Christian Remling}
\maketitle
\noindent
Universit\"at Osnabr\"uck,
Fachbereich Mathematik/Informatik,
49069 Osnabr\"uck, GERMANY
\\[0.2cm]
E-mail: cremling@mathematik.uni-osnabrueck.de\\[0.3cm]
1991 AMS Subject Classification: primary 34L40, 81Q10;
secondary 34L20, 81Q20\\[0.3cm]
\begin{abstract}
Consider the one-dimensional Schr\"odinger equation
$-y''+Vy=Ey$ with potential $|V(x)|\le C(1+x)^{-\alpha}$.
We construct examples
which show that some recently obtained
results on the embedded singular spectrum of the
corresponding operators $H=-d^2/dx^2+V(x)$ are optimal.
\end{abstract}
\section{Introduction}
In this paper, I'm interested in the one-dimensional
Schr\"odinger equation,
\begin{equation}
\label{1.1}
-y''(x)+V(x)y(x)=Ey(x),
\end{equation}
with potentials $V$ bounded by a decaying power:
\begin{equation}
\label{1.2}
|V(x)| \le \frac{C}{(1+x)^{\alpha}} \quad\quad (\alpha > 0).
\end{equation}
More specifically, we'll study the spectral properties
of the associated self-adjoint operators $H_{\beta}=
-\frac{d^2}{dx^2}+V(x)$,
acting on the Hilbert
space $L_2(0,\infty)$. The index $\beta\in [0,\pi)$
refers to the boundary condition
$y(0)\cos\beta + y'(0)\sin\beta =0$. For the general
theory, see, for example,
\cite{WMLN}. These questions are of relevance in
quantum mechanics; for more background information
on this topic
refer to, e.g., \cite{RS}.
Since $V$ tends to zero as $x\to \infty$, it follows
that the essential spectrum satisfies
$\sigma_{ess}=[0,\infty)$ (this is a classical result
going back to Weyl).
However, this doesn't say much about the physics of
the corresponding system since it
doesn't give information on the type of the spectrum
on $(0,\infty)$ (absolutely continuous,
singular continuous or point
spectrum). There has been some progress
on this question recently, and several new positive
results have been obtained. In this paper, we launch
the counterattack: By constructing suitable
examples, we show that these results
are in fact optimal.
We're interested in situations where $\sigma_{ac}=[0,\infty)$
with possibly also some embedded singular spectrum
on $(0,\infty)$. The corresponding
range of exponents in \eqref{1.2} is $1/2<\alpha\le 1$.
If $\alpha>1$ or if only $V(x)=o(x^{-1})$, then
the spectrum is purely absolutely continuous on
$(0,\infty)$. See \cite{Remac} for the
proof of this under the weaker assumption $V(x)=o(x^{-1})$;
if $\alpha>1$ is assumed, the result is classical and
easy to prove. On
the other hand, there are examples of (random) potentials
$V(x)=O(x^{-1/2})$ with purely singular spectrum
\cite{DSS,Simp}, so there need not be any
absolutely continuous spectrum if $\alpha\le 1/2$.
That $\alpha>1/2$ does imply
presence of absolutely continuous spectrum was first
shown in \cite{CK,CKR,Remac} (see also \cite{Kis34,Kis23}
for earlier work in this direction).
Deift and Killip then proved the very satisfactory
result that $V\in L_2$ already suffices
\cite{DK}. Actually, in case $V$ satisfies \eqref{1.2},
one can say still more on the structure of the spectrum.
To formulate this precisely, we first require
a definition. The solutions of \eqref{1.1} are said to be of
WKB form if there is a solution $y(x,E)$
satisfying the asymptotic formula
\begin{equation}
\label{WKB}
\left(
\begin{array}{c}
y(x,E) \\ y'(x,E)
\end{array} \right) =
\left( \begin{array}{c} 1 \\ i\sqrt{E} \end{array} \right)
\exp\left( i\int_0^x \sqrt{E-V(t)}\, dt
\right) + o(1) \quad\quad (x\to\infty).
\end{equation}
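For instance, if $V\equiv 0$, then $y(x,E)=e^{i\sqrt{E}x}$ solves
\eqref{1.1}, and
\[
\left( \begin{array}{c} y(x,E) \\ y'(x,E) \end{array} \right)
= \left( \begin{array}{c} 1 \\ i\sqrt{E} \end{array} \right)
\exp\left( i\int_0^x \sqrt{E}\, dt \right),
\]
so \eqref{WKB} holds with error identically zero.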
Note that in this case, we have control over
{\it all} solutions of \eqref{1.1}, because $\overline{y}$ is
a linearly independent second solution. The so-called WKB
methods give solutions satisfying \eqref{WKB}, provided $V$
tends to zero and doesn't oscillate too much
(see, for instance, \cite{East2}). Of course, this
latter assumption need not be satisfied if we
only suppose \eqref{1.2}, but
it turns out that we still have WKB asymptotics
for ``most'' energies $E$. Namely,
let $S$ denote the exceptional set, i.e.
\begin{equation}
\label{exc}
S=\{ E>0: \text{no solution of \eqref{1.1} satisfies \eqref{WKB}} \}.
\end{equation}
Then we have the following bound on the Hausdorff dimension
of $S$.
\begin{Theorem}[\cite{Remdim}]
\label{T1.1}
Suppose \eqref{1.2} holds. Then $\dim S\le 2(1-\alpha)$.
\end{Theorem}
This strengthens the result from \cite{CK,Remac} that
$S$ is of Lebesgue
measure zero if $\alpha>1/2$. Note that $\sigma_{ac}=[0,\infty)$,
in turn, follows from $|S|=0$
by the general results
of \cite{Simbdd,Stolz}.
In fact, we get the stronger statement that $(0,\infty)$
is an essential support of the absolutely continuous
part of the spectral measure.
It is not, in general,
possible to exclude embedded singular spectrum
(cf.\ \cite{Nab,Simpp,NW}),
but the bound on $\dim S$ leads to the following
restrictions.
\begin{Theorem}[\cite{Remdim}]
\label{T1.5}
Suppose \eqref{1.2} holds. Then:
a) The singular continuous part of the spectral measure
is supported by a set of dimension $\le 2(1-\alpha)$.
b) There is an exceptional set $B\subset [0,\pi)$ of boundary
conditions $\beta$ with $\dim B\le 2(1-\alpha)$, so that
the spectrum of $H_{\beta}$
is {\rm purely} absolutely continuous on
$(0,\infty)$ if $\beta \in [0,\pi)\setminus B$.
\end{Theorem}
Part a) is an immediate corollary to Theorem \ref{T1.1},
because $S$ itself always supports the singular part of
the spectral measure on $(0,\infty)$. For the proof
of part b) (which is based on methods developed in
\cite{dRJLS}), see again \cite{Remdim}.
In all these statements,
the bound $2(1-\alpha)$ gives the correct values in
the borderline cases $\alpha=1/2$ and $\alpha=1$, and
it looks natural. So the obvious guess is that it's
optimal. Our first construction addresses this issue;
it may be
summarized as follows.
\begin{Theorem}
\label{T1.3}
Suppose $\alpha>2/3$. Then there is a potential
$V$ so that \eqref{1.2} holds and
$\dim S=2(1-\alpha)$.
\end{Theorem}
In other words, Theorem \ref{T1.1} is sharp, and,
in fact, the bound $2(1-\alpha)$ is attained.
A variation on the construction
behind Theorem \ref{T1.3} will give an
analogous statement
(= Theorem \ref{T7.1}) which shows that
Theorem \ref{T1.5}b) is optimal, too (again, if $\alpha> 2/3$).
On the other hand, no such claim can be made as to
part a) of Theorem \ref{T1.5}.
The situation is
even worse: It's an open question
if there are potentials satisfying \eqref{1.2} with
$\alpha>1/2$ and non-empty singular continuous spectrum.
In fact, examples with
singular continuous spectrum embedded in the
absolutely continuous spectrum were constructed only
very recently in \cite{Mol,Remsc} (based in part on
ideas from \cite{KLS}), using so-called
sparse potentials. In these examples, the
essential support of the
absolutely continuous part of the spectral
measure does {\it not} have full measure in $\sigma_{ac}$,
in contrast to the situation under consideration here. That
makes it possible to deduce existence of singular
continuous spectrum
by indirect arguments which are not available here.
The assumption $\alpha>2/3$ in Theorem
\ref{T1.3} is of course of a technical character and thus
somewhat annoying. On the other hand,
I'll try to argue below that at least the method
used here cannot work for $1/2<\alpha\le 2/3$.
The borderline case $\alpha=1$ is particularly subtle.
We're now interested in the set
\[
P=\{ E>0: \text{\eqref{1.1} has an $L_2$ solution} \}.
\]
This is the set of positive energies which are eigenvalues
for some boundary condition $\beta$.
Clearly, $P\subset S$, so
$\dim P=0$ by Theorem \ref{T1.1}.
Kiselev, Last, and Simon showed
that much more is true:
\begin{Theorem}[\cite{KLS}]
\label{T1.2}
Suppose \eqref{1.2} holds with $\alpha=1$.
Then $P$ is countable and $\sum_{E\in P} E < \infty$.
\end{Theorem}
We will prove
that Theorem \ref{T1.2} is sharp in the sense that
there are no further asymptotic conditions on
the energies from the set $P$:
\begin{Theorem}
\label{T1.4}
For any sequence $e_n>0$ with $\sum e_n<\infty$,
there are energies $E_n\ge e_n$ and a
potential $V(x)=O((1+x)^{-1})$ so that
$P\supset \{ E_n\}$.
\end{Theorem}
This answers a question asked in
\cite{KLS}.
Earlier, Simon had constructed potentials
with the property that the point
spectrum contains an arbitrary prescribed
sequence $E_n>0$ with $\sum E_n^{1/2}<\infty$
\cite{Simpp}.
One can use a slight variant of this construction
and combine it with an additional argument to
get examples with the following properties:
\begin{Corollary}
\label{C1.1}
There are potentials
$V(x)=O((1+x)^{-1})$ with $\sum_{E\in P} E^p =\infty$
for every $p<1$.
\end{Corollary}
Of course, this is weaker than Theorem \ref{T1.4}.
We have stated it separately, because the proof of
the Corollary based on \cite{Simpp} is elegant and
different from the construction we will
use to prove Theorem \ref{T1.4}.
The potentials we construct here will be obtained
by pasting together carefully chosen periodic
pieces. Everything relies on two nice features
of periodic potentials: First of all,
they are good at localizing
particles even at high energies, and second, there
are efficient tools for an accurate analysis
in this high energy regime. In fact, I used similar ideas
already in \cite{Rempp} in a somewhat different
context. It will also be important that certain
``inverse'' problems can be solved. Namely, it is
possible to prescribe lower bounds on
the gap lengths and on the Lyapunov exponent.
We will attack these problems by reducing them
to a similar problem on the Fourier coefficients
of bounded functions,
and the solution to this problem can be found in
the literature (see Theorem \ref{T3.1} below).
Here's a more detailed overview of what is done
in this paper: The proof of Theorem
\ref{T1.3} is given in Sections 2--6.
In Section 2, we collect
some simple preliminary observations. Then we interrupt
the formal proof; in Section 3, we give a
heuristic discussion, in order to explain and motivate
what happens in the following sections. We also
try to show here that the assumption
$\alpha>2/3$ is inevitable with the method used. Section 3
doesn't contain rigorous arguments and may thus be
skipped by the confident reader. In Section 4, we
prove asymptotic estimates on the trace of the
transfer matrix
(also known as Lyapunov function) for periodic
potentials. Although this subject is classical,
there are several new features in the analysis we
give: We deal with different potentials simultaneously,
and we use recent results from \cite{CK}
on multilinear operators to study, more carefully
than usual, the remainder terms
for potentials that are not smooth. What we state in Theorem \ref{T0.2}
is what is needed later, but perhaps the ideas
used in the proof
are also of some independent interest.
The main part of the proof of Theorem \ref{T1.3} is
then presented in Sections 5 and 6.
We construct the exceptional set
$S$ and discuss its properties in Section 5.
Finally, the analysis of the solutions of \eqref{1.1} for
$E\in S$ given in Section 6 concludes the proof of
Theorem \ref{T1.3}. Section 7 contains the modification
of this construction relevant to Theorem \ref{T1.5}b).
Theorem \ref{T1.4} can now also be proved with the
ideas developed in the preceding sections. This is done
in Section 8. In Section 9, we present the independent
proof of Corollary \ref{C1.1}.
Finally, in the Appendix, we sketch the proof of
the result of \cite{CK} that we use in Section 4.
We do this partly because this result is rather
recent and partly because the treatment of
\cite{CK} simplifies considerably in the special
case needed here.\\[0.2cm]
{\bf Acknowledgment:} I'd like to thank Sasha Kiselev
for most useful notes on the proof of Theorem \ref{TCK}.
I'd like to thank the Heisenberg program of the
Deutsche Forschungsgemeinschaft for financial support.
\section{Preliminaries}
We will work with wavenumbers $k=\sqrt{E}$ instead of
energies $E$. Of course, the transformed set
$\{k>0: k^2\in S\}$ has the same
Hausdorff dimension as $S$. Slightly abusing notation,
we will denote this set
also by $S$.
Let $y(x,k)$ be the solution of \eqref{1.1} with $E=k^2$ with the
initial values $y(0,k)=1, y'(0,k)=0$ (say). It will
be convenient to use modified Pr\"ufer variables
$R(x,k)>0,\varphi(x,k)$ defined by the relations $y=R\sin\varphi,
y'=kR\cos\varphi$ and by demanding that $\varphi$ be
continuous in $x$. The transfer matrix $T(x,t;k)$ is
the $2\times 2$-matrix that takes solution vectors
$\bigl( \begin{smallmatrix} y(s,k) \\ y'(s,k)/k
\end{smallmatrix} \bigr)$
from $s=t$ to $s=x$. In particular,
\[
R(x,k) = R(t,k) \| T(x,t;k)e_{\varphi(t,k)}\| , \quad
e_{\varphi} \equiv
\begin{pmatrix} \sin\varphi \\ \cos\varphi \end{pmatrix}.
\]
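This identity is immediate from the definitions: by the choice of the
Pr\"ufer variables,
\[
\left( \begin{array}{c} y(x,k) \\ y'(x,k)/k \end{array} \right)
= R(x,k)\, e_{\varphi(x,k)},
\]
so $T(x,t;k)$ maps $R(t,k)e_{\varphi(t,k)}$ to $R(x,k)e_{\varphi(x,k)}$,
and taking norms (note that $\|e_{\varphi}\|=1$) gives the formula above.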
Also, if $u,v$ solve \eqref{1.1} and
$u(t,k)=v'(t,k)=1, u'(t,k)=v(t,k)=0$, then
\begin{equation}
\label{2.1}
T(x,t;k)= \begin{pmatrix} u(x,k) & kv(x,k) \\
u'(x,k)/k & v'(x,k) \end{pmatrix}.
\end{equation}
If $V=0$ on an interval $(a,b)$, then the evolution of
$R,\varphi$ is very simple: $R(x,k)\equiv R(a,k)$ and
$\varphi(x,k)=\varphi(a,k) + k(x-a)$ for all
$x\in (a,b)$.
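This follows from the Pr\"ufer equations: inserting $y=R\sin\varphi$,
$y'=kR\cos\varphi$ into \eqref{1.1} with $E=k^2$ and solving for
$\varphi'$ and $R'$ yields
\begin{align*}
\varphi'(x,k) &= k - \frac{V(x)}{k}\, \sin^2\varphi(x,k), \\
\frac{R'(x,k)}{R(x,k)} &= \frac{V(x)}{2k}\, \sin 2\varphi(x,k),
\end{align*}
and for $V=0$ these equations reduce to $\varphi'=k$, $R'=0$.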
As already mentioned in the introduction,
the potential we're after is
built out of periodic pieces. The $n$th piece
has period $g_n^{-1}$ and size $\sim c_ng_n^2$, where
$g_n>0$ is small. The number of periods is equal
to $L_n\in \mathbb N$.
We also need intervals of length $\Delta_n$
with zero potential between the blocks in order to
randomize the phases $\varphi$.
So, $V$ will be of the following form:
Put $a_1=0$ and $b_n=a_n+L_ng_n^{-1},
a_{n+1}=b_n+\Delta_n$, where $L_n\in\mathbb N$,
$\Delta_n \in [0,\pi/2]$, and define
\[
V(x)=\begin{cases}
c_n W_{g_n}(x-a_n) & x\in (a_n,b_n)\\
0 & x\in (b_n,a_{n+1})
\end{cases}.
\]
Here, $W_g$ denotes the rescaled function
$W_g(x)=g^2W(gx)$, and the basic potential
$W$ (as well as the other parameters)
will be chosen later. The following elementary
observation is crucial to our construction.
Namely, the
rescaling $W\to W_g$
has the same effect as replacing $k$ by $k/g$.
More precisely, write $T_W$ for the
transfer matrix associated with
$-y''+Wy=k^2y$. Then we have
\begin{Lemma}
\label{L2.1}
$T_{W_g}(1/g,0;k)=T_W(1,0;k/g)$.
\end{Lemma}
{\it Sketch of the proof.} Introduce the new variable
$t=gx$ in the Schr\"odinger equation
and use \eqref{2.1}. (See also \cite[Lemma 3.1]{Rempp}.)
$\square$
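In more detail, let $\tilde{u},\tilde{v}$ be the solutions of
$-y''+Wy=(k/g)^2y$ with $\tilde{u}(0)=\tilde{v}'(0)=1$,
$\tilde{u}'(0)=\tilde{v}(0)=0$. Then
\[
u(x,k)=\tilde{u}(gx), \qquad v(x,k)=g^{-1}\tilde{v}(gx)
\]
solve $-y''+W_gy=k^2y$ with the same normalization at $x=0$, and
\eqref{2.1} shows that the entries of $T_{W_g}(1/g,0;k)$ coincide
with those of $T_W(1,0;k/g)$; for instance,
$kv(1/g,k)=(k/g)\,\tilde{v}(1)$ and
$u'(1/g,k)/k=\tilde{u}'(1)/(k/g)$.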
Of course, we need a few simple facts about periodic
potentials; a more comprehensive treatment can be found
in \cite{East}. In the sequel,
$T(k)=T_W(1,0;k)$ is short-hand for the transfer
matrix over one period of some (unspecified, but
fixed) potential $W$ with period $1$.
It suffices to study this object since
$T_W(N,0;k)=T(k)^N$ by periodicity. Then,
since $\det T(k)=1$, the eigenvalues
of $T(k)$ can be computed from its trace
\[
D(k)\equiv \text{tr }T(k)=u(1,k)+v'(1,k)
\]
($u,v$ solve $-y''+Wy=k^2y$, and $u(0)=v'(0)=1,
u'(0)=v(0)=0$). Namely, the eigenvalues are $\lambda,\lambda^{-1}$,
where
\begin{equation}
\label{mu}
\lambda(k)=\frac{D(k)}{2}\pm \sqrt{\frac{D(k)^2}{4}-1}.
\end{equation}
Here, we choose the sign so that $|\lambda|\ge 1$. If $|D|>2$,
then $\lambda$ is real and $|\lambda|>1$; if $|D|<2$, then $|\lambda|=1,
\lambda\notin \mathbb{R}$. The set $\{E\in \mathbb{R}: |D(E)|\le 2\}$
(this notation is a little sloppy since we've switched from
$k$ to $E$, but self-explanatory) is
a (possibly infinite) union of closed intervals. These intervals
are called bands. The intervals where $|D|>2$ are called gaps.
The spectrum of the whole-line operator
with potential $W$ is purely absolutely
continuous and consists precisely of the bands. If $k$ is in
a gap, the Lyapunov exponent is positive. So solutions that are
not very close to the decaying solution are
exponentially increasing.
Here's the precise statement we'll use:
\begin{Lemma}
\label{L2.2}
If $|D(k)|>2$, there is an angle $\theta(k)\in [0,\pi)$ so
that for all $n\in\mathbb N$,
\begin{equation}
\label{2.2}
\left\| T(k)^n e_{\varphi}\right\| \ge
|\lambda(k)|^n \sin|\theta(k)-\varphi|.
\end{equation}
\end{Lemma}
Here, we use the $\ell_2$ norm on $\mathbb{C}^2$, and,
as above,
$e_{\varphi}=(\sin\varphi,\cos\varphi)^t$.
{\it Proof.} Fix $k$, and pick $\psi,\theta \in [0,\pi)$
so that $e_{\psi},e_{\theta}$ are eigenvectors
of $T$ corresponding to the (real and distinct!) eigenvalues
$\lambda$ and $\lambda^{-1}$, respectively. Actually,
we may assume $\psi=0$ here without loss of
generality. Then $\theta\not=0$, and a calculation shows
that
\begin{align*}
\|T^n e_{\varphi}\|^2 &= \sin^{-2}\theta \left[
\lambda^{2n} \sin^2(\theta-\varphi) + \lambda^{-2n}
\sin^2\varphi
+2\cos\theta\sin(\theta-\varphi)
\sin\varphi \right] \\
& = \sin^{-2}\theta \left[
\lambda^{-n}\sin\varphi+\lambda^n\sin(\theta-\varphi)
\cos\theta\right]^2
+ \lambda^{2n}\sin^2(\theta-\varphi)\\
& \ge \lambda^{2n}\sin^2(\theta-\varphi),
\end{align*}
as claimed. $\square$
We conclude this section with an asymptotic estimate
of the usual type (compare, e.g., \cite{East,LevSar,PT}).
This result will be used in Section 8, and, incidentally,
it also suffices
to prove Theorem \ref{T1.3} for $\alpha>4/5$. However,
to go further, we'll need the more accurate analysis
of Section 4. Therefore, we will omit the proof of
the following Proposition.
We use the notations $\widehat{W}_n = \int_0^1
W(x) e^{2\pi inx}\, dx$ and $\|W\|_1 =\int_0^1
|W(x)|\, dx$.
\begin{Proposition}
\label{P2.1}
There is a constant $C_0>0$ such that the following
holds. If $W\in L_2(0,1), \int_0^1 W=0$,
$j\ge C_0\|W\|_1$, $j|\widehat{W}_j|^2\ge C_0\|W\|_1^3$, and
\[
|k-j\pi| \le \frac{|\widehat{W}_j|}{10j},
\]
then
\[
|\lambda(k)|\ge 1 + \frac{|\widehat{W}_j|}{10j}.
\]
\end{Proposition}
\section{A guide to the proof}
The set $S$ from \eqref{exc} will be constructed as
a Cantor type set. More precisely, at step $n$,
we'll use some of the gaps of the potential
$c_nW_{g_n}$, and we then let $T$ be the intersection
over all $n$ of the sets thus obtained. There is
definite hope that $S$, as defined in \eqref{exc},
can indeed contain such a set $T$,
because by Lemma \ref{L2.2},
we expect that the solutions of \eqref{1.1} grow
if $E$ is in a gap of $c_nW_{g_n}$ for all $n$.
Now, to analyze this situation quantitatively, we will have to
apply Proposition \ref{P2.1} to potentials of the form
$c_ng_n^2W(g_n x)$; by Lemma \ref{L2.1} we may as
well apply the Proposition to $c_nW(x)$ if we also
replace $k$ by $k/g_n$. In this way,
we see that the gaps are
located at the points $k_j= j\pi g_n$; their
size is of the order $l_n \approx c_n g_n^2 |\widehat{W}_j|$.
We proceed inductively and we use only those gaps
which are contained in a gap that was used in the preceding
step. Let $N_n$ denote the number of new gaps contained
in some fixed gap constructed in the $(n-1)$st step,
and write $P_n\equiv N_1\cdots N_n$ for the total number
of gaps used in step $n$. Because of the condition $\sum_j
|\widehat{W}_j|^2<\infty$, there are restrictions
on the size of $|\widehat{W}_j|$. We will pick a $W$
that satisfies $|\widehat{W}_j|\approx P_n^{-1/2}$ for
the indices $j$ that are used in the $n$th step.
Then $l_n\approx c_ng_n^2 P_n^{-1/2}$. Since adjacent
gaps are separated by a distance of order $\approx g_n$,
we obtain the relation $l_{n-1}\approx N_n g_n =P_n
P_{n-1}^{-1} g_n$. Finally, in order to get a set of
dimension $D$, we should have a scaling of the
type $P_nl_n^D\approx 1$.
Solving these latter three relations for $l_n,g_n,c_n$,
we obtain
\begin{align}
l_n & \approx P_n^{-1/D}, \nonumber\\
g_n & \approx P_n^{-1}P_{n-1}^{1-1/D},\label{3.99} \\
c_n & \approx P_n^{5/2-1/D}P_{n-1}^{2/D-2}\label{3.99a}.
\end{align}
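Indeed, the first formula follows directly from $P_nl_n^D\approx 1$;
inserting it into $l_{n-1}\approx P_nP_{n-1}^{-1}g_n$ gives
\[
g_n \approx P_n^{-1}P_{n-1}\, l_{n-1} \approx P_n^{-1}P_{n-1}^{1-1/D},
\]
and solving $l_n\approx c_ng_n^2P_n^{-1/2}$ for $c_n$ then yields
\[
c_n \approx P_n^{1/2}\, l_n\, g_n^{-2}
\approx P_n^{1/2-1/D}\cdot P_n^2 P_{n-1}^{2/D-2}
= P_n^{5/2-1/D}P_{n-1}^{2/D-2}.
\]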
By Proposition \ref{P2.1} again, the large eigenvalue
$\lambda_n$ of the transfer matrix
is of the order $|\lambda_n| \approx 1 +
c_n|\widehat{W}_j|/j \approx 1+ c_ng_nP_n^{-1/2}$.
By Lemma \ref{L2.2}, we expect that the solution
grows by a factor $\approx |\lambda_n|^{L_n}$ across
the $n$th piece. Thus, to get substantial increase,
we must take
\[
L_n\approx P_n^{1/2}(c_ng_n)^{-1} \approx
(P_n/P_{n-1})^{1/D-1}.
\]
It remains to estimate the rate of decay of $V$ we get with
these parameters: In the $n$th piece, we have that
$x\approx \sum_{m=1}^n L_m/g_m$. If $L_m/g_m$ increases
rapidly, this is $\approx L_n/g_n \approx P_n^{1/D}$.
On the other hand, $|V(x)| \le Cc_ng_n^2 \approx
P_n^{1/2-1/D}$ for these $x$, so $V(x)=O(x^{-\alpha})$ with
$\alpha=1-D/2$ or $D=2(1-\alpha)$, as desired.
We made heavy use of asymptotic expansions
in these heuristic considerations, so we will
certainly need that
$k/g_n \gg \|c_n W\|_1$
(the basic assumption in these expansions).
That is, we need that $c_ng_n \ll 1$, and since
$c_ng_n \approx P_n^{3/2-1/D}P_{n-1}^{1/D-1}$, our
method can, unfortunately, only work if $D<2/3$
(or, equivalently, $\alpha>2/3$).
\section{Asymptotic expansions}
We need asymptotic
formulae more precise than Proposition
\ref{P2.1}. We begin by introducing two maximal
functions which will be used to control the remainders.
For $f,g\in L_1(0,1)$, put
\begin{align*}
M_f(k) &= \max_{0\le b\le 1} \left|
\int_0^b dx\, f(x) e^{2\pi ikx} \right|, \\
M_{f,g}(k) & = \max_{0\le b\le 1} \left|
\int_0^b dx\, f(x) e^{2\pi ikx}\int_0^x dt\, g(t)
e^{-2\pi ikt} \right|.
\end{align*}
Then the results of Christ and Kiselev \cite{CK}
on multilinear operators,
specialized to the case at hand, give the following norm
bounds.
\begin{Theorem}[\cite{CK}]
\label{TCK}
If $p\in (1,2)$ and $1/p+1/q=1$, then
\[
\left( \sum_{n=-\infty}^{\infty} \left( M_f(n)\right)^q
\right)^{1/q}
\le C_p \|f\|_p,\quad
\left( \sum_{n=-\infty}^{\infty} \left( M_{f,g}(n)
\right)^{q/2} \right)^{2/q}
\le C_p \|f\|_p \|g\|_p.
\]
\end{Theorem}
The {\it proof} will be sketched in the Appendix.
Actually, we will only need the weak type bounds that
follow from these estimates. These bounds will let
us prove
\begin{Theorem}
\label{T0.2}
There is a constant $C_0>0$ such that the
following holds. If $W\in L_2(0,1), \int_0^1 W=0$,
$j\ge C_0\|W\|_1$, and
\[
|k-j\pi| \le \frac{|\widehat{W}_j|}{10j},
\]
then
\[
|D(k)| \ge 2 + \frac{|\widehat{W}_j|^2}{70j^2} - R(j),
\]
where, for every $q>2$, $R(j)$ satisfies the weak type estimate
\[
\# \{j\in\mathbb{N} : j\ge C_0 \|W\|_1 \text{ and }
R(j) > \lambda \|W\|_1 j^{-3} \}
\le C_q \lambda^{-q/2} \|W\|_2^q.
\]
\end{Theorem}
{\it Remark.} Note that the constants $C_0,C_q$ are independent
of $W$. This will be important in the application of
Theorem \ref{T0.2}.
{\it Proof.}
The following
expansion holds:
\[
D(k)= 2\cos k + \sum_{n=2}^{\infty} \frac{D_n(k)}{k^n},
\]
where
\begin{multline}
\label{0.10}
D_n(k)=\int_0^1 dt_1\, W(t_1)\int_0^{t_1} dt_2\,
W(t_2) \ldots \int_0^{t_{n-1}} dt_n\, W(t_n)\times\\
\sin k(1+t_n-t_1) \sin k(t_1-t_2) \cdots
\sin k(t_{n-1}-t_n).
\end{multline}
This follows from the corresponding expansions
of the solutions $u,v$ obtained from iterating
the variation of constants formula (see, e.g.,
\cite[Chapter 1]{PT}). The details of these routine
computations can be left to the reader.
Note also that the series for $D$ converges
absolutely and uniformly in $k\ge 0$ (say).
We want to treat $\sum_{n\ge 3} k^{-n}D_n$ as a
remainder. Of course, there's the obvious bound
$(\|W\|_1/k)^3 e^{\|W\|_1/k}$, and, indeed, this estimate
lets one prove Proposition \ref{P2.1}. We give a more
careful analysis in this proof.
It's useful to rewrite the $D_n$'s
using the complex representation
of the sine. We get
\begin{equation}
\label{0.2}
D_n(k)= (2i)^{-n} \sum_{\sigma_1,\ldots,\sigma_n
=\pm 1} \sigma_1 \cdots \sigma_n e^{i\sigma_1 k} S_n(\sigma)
\end{equation}
with
\begin{multline*}
S_n(\sigma)
\equiv \int_0^1 dt_1\, W(t_1)\int_0^{t_1} dt_2\,
W(t_2) \ldots \int_0^{t_{n-1}} dt_n\, W(t_n)\times\\
e^{ik(\sigma_2-\sigma_1)t_1} e^{ik(\sigma_3-\sigma_2)t_2}
\cdots e^{ik(\sigma_1-\sigma_n)t_n}.
\end{multline*}
If $\sigma_1=\ldots =\sigma_n$, then $S_n(\sigma)=
\left( \int_0^1 W\right)^n/n! =0$. On the other hand,
if not all the $\sigma$'s are equal, we can find
$m,l\in \{ 1,\ldots, n\}$
with $\sigma_{m+1}-\sigma_m = 2$ and $\sigma_{l+1}-
\sigma_l = -2$ (here, we have put $\sigma_{n+1}\equiv
\sigma_1$). We have
\begin{multline*}
|S_n(\sigma)| \le
\int dt_1\, |W(t_1)|\ldots \int dt_{m-1}\, |W(t_{m-1})|
\int dt_{m+1}\, |W(t_{m+1})| \ldots\\
\int dt_{l-1}\, |W(t_{l-1})|
\int dt_{l+1}\, |W(t_{l+1})|\ldots \int dt_n \,|W(t_n)| \times\\
\left| \int dt_m\, W(t_m) e^{2ikt_m} \int dt_l \,
W(t_l) e^{-2ikt_l} \right|.
\end{multline*}
The integration is over the region $\{(t_1,\ldots,t_n):
1\ge t_1 \ge \ldots \ge t_n\ge 0\}$. In particular,
in the last integral $t_l$ runs over $[t_{l+1},t_{l-1}]$
(using the obvious convention $t_0=1, t_{n+1}=0$).
The structure of the double integral with respect
to $(t_m,t_l)$ depends on the distance between the
indices $m$ and $l$.
We have to distinguish two cases: If $m-l$ (evaluated
modulo $n$) is not equal to $\pm 1$, then the region of
integration for the $t_m$-integral is $[t_{m+1},t_{m-1}]$,
and thus the double integral
is equal to the product of the two single
integrals. Since clearly
\[
\max_{0\le s,t\le 1}\left| \int_s^t dx\, W(x) e^{2ikx}
\right| \le 2 M_W(k/\pi),
\]
we see that in the case under consideration
\[
|S_n(\sigma)| \le 4 \frac{\|W\|_1^{n-2}}{(n-2)!}
M_W^2(k/\pi).
\]
On the other hand, if $l=m+1$, say,
then the limits of the double integral
are (in self-explanatory notation)
\[
\int_{t_{m+2}}^{t_{m-1}} dt_m\, \int_{t_{m+2}}^{t_m} dt_{l}.
\]
Since
\[
\int_a^b dt\, \int_a^t ds = \int_0^b\int_0^t
-\int_0^a \int_0^t + \int_0^a\int_0^a -
\int_0^b\int_0^a,
\]
we get this time that
\begin{equation}
\label{0.1}
|S_n(\sigma)| \le 2 \frac{\|W\|_1^{n-2}}{(n-2)!}
\left( M_W^2(k/\pi) + M_{W,W}(k/\pi)\right).
\end{equation}
In either case, it is true that $S_n(\sigma)$ can
be estimated by the right-hand side of \eqref{0.1} with
$2$ replaced by $4$. By \eqref{0.2}, $|D_n|$ satisfies
the same inequality.
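The integral identity used above may be verified directly: apply
both sides to the integrand $f(t)g(s)$ and write
$G(t)=\int_0^t g(s)\, ds$; then both sides equal
\[
\int_a^b f(t)\left( G(t)-G(a) \right) dt .
\]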
Since we want to expand $D_n(k)$ around $k=j\pi$,
we also need similar estimates on $dD_n/dk$. We have that
\begin{multline*}
\frac{d}{dk} S_n(\sigma) = i
\int_0^1 dt_1\, W(t_1)\int_0^{t_1} dt_2\,
W(t_2) \ldots \int_0^{t_{n-1}} dt_n\, W(t_n)\times\\
e^{ik(\sigma_2-\sigma_1)t_1} e^{ik(\sigma_3-\sigma_2)t_2}
\cdots e^{ik(\sigma_1-\sigma_n)t_n}
\left( (\sigma_2-\sigma_1)t_1+\ldots + (\sigma_1
-\sigma_n)t_n \right).
\end{multline*}
We pick $m,l$ as above, and
we treat separately the contributions coming
from $2it_m$, $-2it_l$, and those coming from
$\sum_{r\not= m,l} (\sigma_{r+1}-\sigma_r) t_r$.
For these latter terms, it suffices to note that
$\left|\sum_{r\not= m,l} (\sigma_{r+1}-\sigma_r) t_r
\right| \le 2(n-2)$ and then apply the above estimates
again.
Dealing with the other two contributions, we also proceed
as above, but now new maximal
functions enter; for instance, in the case when
$l=m+1$, we use that
\begin{multline*}
\left| \int_{t_{m+2}}^{t_{m-1}} dt_m\, 2it_m W(t_m)
e^{2ikt_m} \int_{t_{m+2}}^{t_m} dt_l \,
W(t_l) e^{-2ikt_l} \right| \le\\
4 \left( M_{tW}(k/\pi)M_W(k/\pi) + M_{tW,W}(k/\pi)\right).
\end{multline*}
We omit the details;
the final result is an inequality of the form
\[
\left| \frac{dS_n(\sigma)}{dk}\right|
\le 8 \frac{\|W\|_1^{n-2}}{(n-3)!} \left(
M_W^2 + M_{tW}M_W + M_{W,W} + M_{tW,W}
+ M_{W,tW} \right)
\]
(where all maximal functions are evaluated at
$k/\pi$). Since the additional term coming from
differentiating $e^{i\sigma_1 k}$ is easily controlled,
it now follows from \eqref{0.2} that also
\[
|D'_n(k)| \le 12 \frac{\|W\|_1^{n-2}}{(n-3)!}
\left( M_W^2 + M_{tW}M_W + M_{W,W} + M_{tW,W}
+ M_{W,tW} \right).
\]
For the second derivative, a crude estimate
will be sufficient. It follows directly from
\eqref{0.10} that
\[
\left| \frac{d^2D_n(k)}{dk^2}\right| \le
4 \frac{\|W\|_1^n}{(n-2)!}.
\]
We now write $k=j\pi \pm \delta$ with $0\le \delta \le \pi/2$
(and, typically, $j$ large). Then our results on $D_n$
from above imply that
\begin{align*}
|D_n(k)| \le & |D_n(j\pi)| + |D'_n(j\pi)|\, \delta
+ \max |D''_n(k)| \cdot \delta^2/2\\
\le & C\frac{\|W\|_1^{n-2}}{(n-2)!}
\left[ M_W^2(j) + M_{W,W}(j)\right] + C
\frac{\|W\|_1^{n-2}}{(n-3)!} \left[
M_W^2(j) +\right. \\ &\left. M_{tW}(j)M_W(j)+
M_{W,W}(j) + M_{tW,W}(j)
+ M_{W,tW}(j) \right] \delta + \\
& C\frac{\|W\|_1^n}{(n-2)!}\, \delta^2.
\end{align*}
Now if $\|W\|_1/j \le 1$, say, then this inequality
leads to
\[
\left| \sum_{n=3}^{\infty} \frac{D_n(k)}{k^n} \right|
\le C M(j) \frac{\|W\|_1}{j^3} + C\frac{\|W\|_1^3}{j^3} \delta^2,
\]
where $M\equiv M_W^2+M_{tW}M_W + M_{W,W}+M_{tW,W}
+M_{W,tW}$. Theorem
\ref{TCK} shows that
for $q>2, 1/p+1/q=1$,
\begin{align*}
\|M\|_{\ell_{q/2}}
& \le \|M_W\|_q^2 + \|M_{tW}\|_q\|M_W\|_q +
\|M_{W,W}\|_{q/2}+\|M_{tW,W}\|_{q/2}
+\|M_{W,tW}\|_{q/2}\\
& \le C_q\left( \|W\|_p^2 + \|tW\|_p^2
\right) \le 2C_q \|W\|_p^2 \le 2C_q\|W\|_2^2.
\end{align*}
This implies the weak type bound
\[
\# \{ j\in\mathbb{N}: M(j)> \lambda\} \le C_q
\lambda^{-q/2}\|W\|_2^q.
\]
Finally, we need to take a closer look at $D_2(k)$.
A calculation gives
\[
D_2(k) = \frac{|\widehat{W}(k/\pi)|^2}{4}\cos k
+ \frac{1}{2}\, \text{Im} \int_0^1 dt_1\, W(t_1)
e^{2ikt_1} \int_0^{t_1}dt_2\, W(t_2) e^{-2ikt_2} \sin k,
\]
where $\widehat{W}(s)= \int_0^1 W(x)e^{2\pi isx}\, dx$.
We again use Taylor expansions about $k=j\pi$ to control
the second term on the right-hand side; also using the fact that
$|\sin k|\le \delta$, we get the bound
$CM(j)\delta+C\|W\|_1^2\delta^2$.
Therefore, putting everything together,
we obtain
\[
D(k)=2\cos k\left( 1+\frac{|\widehat{W}(k/\pi)|^2}{8k^2} \right)
+ O\left(M(j)(\|W\|_1/j^3+ \delta/j^2)+
(\|W\|_1 \delta/j)^2\right).
\]
The constant implicit in the error term $O(\ldots)$ does
not depend on anything. To conclude the proof, we also
expand the leading term:
\begin{align*}
\cos k & = (-1)^j \left( 1- \delta^2/2 + O(\delta^4)\right),\\
\widehat{W}(k/\pi) & = \widehat{W}_j + O(\|W\|_1\delta).
\end{align*}
Using this and noting that, by hypothesis,
$\delta \le |\widehat{W}_j|/(10j)$,
it is now routine, if somewhat tedious, to check that the
assertion holds if $\|W\|_1/j$ is small enough, that is,
if we take $C_0$ sufficiently large. The role
of $R(j)$ is taken by a remainder of the order
$O(M(j)\|W\|_1/j^3)$. $\square$
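{\it Remark.} For orientation, here is a sketch of the arithmetic
behind the constant $70$ (with all remainders absorbed into $R(j)$):
up to such errors, the expansions above give
\[
|D(k)| \ge 2\left( 1-\frac{\delta^2}{2}\right)
\left( 1+\frac{|\widehat{W}_j|^2}{8k^2}\right)
\ge 2 + \frac{|\widehat{W}_j|^2}{j^2}
\left( \frac{1}{4\pi^2}+o(1)\right) - \delta^2,
\]
where $o(1)$ refers to $\|W\|_1/j\to 0$. Since
$\delta\le |\widehat{W}_j|/(10j)$ and
$\frac{1}{4\pi^2}-\frac{1}{100}>\frac{3}{200}>\frac{1}{70}$,
the asserted bound follows if $C_0$ is large enough.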
\section{Construction of the exceptional set}
We now make the program outlined in Section 3 rigorous. We
start by constructing a Cantor type set (which will basically
be the set $S$ from \eqref{exc}) and, simultaneously,
the basic periodic potential $W$.
So, fix $\alpha\in (2/3,1)$, and let $D=2(1-\alpha)$. We
want to construct a potential $V$ of the order
$V(x)=O(x^{-\alpha})$ so that $\dim S=D$.
It will be more convenient to work with still another
parameter $a$ related to $D$ by
\begin{equation}
\label{3.5}
a=\frac{4(1-D)}{2-3D};
\end{equation}
notice that $a>2$. Pick $b\in (1,a)$, and put
\begin{align}
\label{3.6}
g_n & = \exp\left( -a^{n+1}\frac{2a-3}{(2a-4)(a-1)} \right),\\
\label{3.6a}
c_n & = 10\, \exp\left( \frac{a^{n+1}}{a-1}-b^{n+1} \right).
\end{align}
The motivation for these choices comes from the arguments
of Section 3: We make the ansatz $N_n=\exp(a^n)$, so
$P_n \approx \exp(a^{n+1}/(a-1))$, then use \eqref{3.99},
\eqref{3.99a} and finally pick $a$ so that $c_ng_n \ll 1$.
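One can check that \eqref{3.6} is consistent with this ansatz:
$\log P_n=\sum_{m=1}^n a^m\approx a^{n+1}/(a-1)$, and \eqref{3.5}
says that $D=(2a-4)/(3a-4)$, so $1-1/D=-a/(2a-4)$. Hence \eqref{3.99}
gives
\[
\log g_n \approx -\frac{a^{n+1}}{a-1} - \frac{a^{n+1}}{(2a-4)(a-1)}
= -a^{n+1}\, \frac{2a-3}{(2a-4)(a-1)},
\]
in agreement with \eqref{3.6}; similarly, \eqref{3.99a} gives
$\log c_n\approx a^{n+1}/(a-1)$, which is the first term in the
exponent in \eqref{3.6a}.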
The use of the rapidly increasing sequence $\exp(a^n)$
is not a whim but is necessary to be able to satisfy this latter
condition. Finally, the additional parameter $b$ is
introduced for technical reasons; it allows us to
slightly improve the decay rate of the $V$ to be
constructed. In fact, Theorem \ref{T1.3} can be proved
with $b=0$, but in the proof of Theorem \ref{T7.1}
below, it will be essential that we squeeze out
a little more than what Theorem \ref{T1.3} states.
In order to get gaps at the right places (by an
application of Theorem \ref{T0.2}), we need to
make sure that certain Fourier coefficients of
$W$ are large enough. More precisely, we construct
a Cantor type set $\bigcap F_n$, and we simultaneously
try to find a function $W$ satisfying the following
conditions:
\begin{enumerate}
\item Put $F_0=[1,2]$.
\item The set $F_1$ will be a union of intervals
of the form
\[
J_j^{(1)}=\left[ j\pi g_1 - l_1/2, j\pi g_1 + l_1/2
\right],
\]
where
\[
l_1=\frac{\pi}{10}c_1 g_1^2
\exp\left(-\frac{a^{1+1}}
{2(a-1)}\right)
\]
(the motivation for this choice will become clear
shortly -- see Lemma \ref{LL5.1}). In fact, we
use only those $J_j^{(1)}$ which are contained in $F_0$.
For the corresponding indices $j$, we
require that
\[
|\widehat{W}_j| \ge \exp\left(-\frac{a^{1+1}}
{2(a-1)}\right).
\]
Let $F_1$ be the union
of the $J_j^{(1)}$'s with $J_j^{(1)}\subset F_0$.
\item Similarly, $F_2$ is a union of intervals
\[
J_j^{(2)} = \left[ j\pi g_2 -l_2/2, j\pi g_2
+ l_2/2 \right]
\]
with
\[
l_2= \frac{\pi}{10}c_2 g_2^2
\exp\left(-\frac{a^{2+1}}
{2(a-1)}\right).
\]
This time, we use only those $j$ for which $J_j^{(2)}
\subset F_1$,
and we require that then
\[
|\widehat{W}_j| \ge \exp\left(-\frac{a^{2+1}}
{2(a-1)}\right).
\]
\end{enumerate}
Continuing in this way, we obtain a sequence of sets $F_n\subset
F_{n-1}$. Every $F_n$ is a union of intervals $J_j^{(n)}$
of the form
\begin{align}
J_j^{(n)} & = \left[ j\pi g_n -l_n/2, j\pi g_n +l_n/2
\right], \nonumber\\
l_n & =
\frac{\pi}{10}c_n g_n^2
\exp\left(-\frac{a^{n+1}}
{2(a-1)}\right).
\label{33.0}
\end{align}
Actually, we can't be sure at this point that
$F_n\not=\emptyset$ for all $n$. However, it's
easy to repair this: We start out with
a collection of $J_j^{(n_0)}\subset [1,2]$ instead
of $F_0$, where
we take $n_0\in\mathbb N$ large enough, and then
proceed as described above. We will see shortly
that then $F_n\not=\emptyset$ for all $n\ge n_0$.
In any event,
we're also trying to find a function $W$ with the property that
\begin{equation}
\label{3.1}
|\widehat{W}_j| \ge \exp\left(-\frac{a^{n+1}}
{2(a-1)}\right)
\end{equation}
whenever $j$ is such that $J_j^{(n)}\subset F_{n}$.
The existence of such a $W$ will follow from a result
of de Leeuw, Kahane, and Katznelson:
\begin{Theorem}
\label{T3.1}
Suppose $w_j\ge 0$, $\sum w_j^2 <\infty$. Then there
exists a bounded function
$f:[0,1]\to \mathbb{R}$ with $|\widehat{f}_j| \ge w_j$
for all $j$. Moreover, it is possible to choose
$f$ so that $\|f\|_{\infty} \le C\left(\sum w_j^2
\right)^{1/2}$.
\end{Theorem}
See \cite[Section 5.9]{Kah} for the {\it proof.}
The bound on $\|f\|_{\infty}$ is also derived in
the proof given there, although it is then not mentioned
in the theorem of that section. Incidentally, we'll need
this bound only in Section 8.
Before applying Theorem \ref{T3.1},
we can simplify things a little by passing to
subsets of the $F_n$'s which are, in a sense,
more symmetric. To this end, we first observe
that every fixed $J_j^{(n-1)}\subset F_{n-1}$ contains at least
\[
\frac{l_{n-1}}{\pi g_n}-2=
\frac{\pi}{10} c_{n-1}g_{n-1}^2
\exp\left( -\frac{a^n}
{2(a-1)} \right)\frac{1}{\pi g_n} - 2
\]
intervals $J_k^{(n)}$.
A computation using \eqref{3.6}, \eqref{3.6a}
shows that this number is equal to
$\exp(a^n-b^n)-2$. In particular, for large $n$, it is
certainly larger than $(1/2)\exp(a^n-b^n)$, say. We fix
once and for all integers
\begin{equation}
\label{3.3}
N_n \in [\frac{1}{3}\exp(a^n-b^n),
\frac{1}{2}\exp(a^n-b^n)] \cap \mathbb{N} \quad (n\ge n_0+1).
\end{equation}
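For the reader's convenience, here is the computation behind
the count $\exp(a^n-b^n)-2$ mentioned above; each step merely
inserts \eqref{3.6} and \eqref{3.6a}:
\begin{align*}
\ln \frac{l_{n-1}}{\pi g_n}
&= \ln\frac{c_{n-1}}{10} + 2\ln g_{n-1}
- \frac{a^{n}}{2(a-1)} - \ln g_n\\
&= \frac{a^{n}}{a-1}\left( 1 - \frac{2a-3}{a-2} - \frac12
+ \frac{a(2a-3)}{2(a-2)} \right) - b^{n}
= a^{n}-b^{n}.
\end{align*}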
By the above remarks, we can now (inductively) pass to subsets
$G_n\subset F_n\: (n\ge n_0)$ as follows. Put
$G_{n_0}=F_{n_0}$; then $G_{n_0}$ is the union of
certain intervals $J_j^{(n_0)}$. Generally, if $G_{n-1}$
is such a union, pick precisely $N_n$ subintervals
$J_k^{(n)}\subset J_j^{(n-1)}$
for every interval $J_j^{(n-1)}\subset G_{n-1}$,
and let $G_n$ be the union of these $J_k^{(n)}$'s.
Now, if we denote the right-hand side
of \eqref{3.1} by $w_n$ and if $N_{n_0}$ is the number
of intervals contained in $G_{n_0}$, then
\begin{align*}
\sum_{n=n_0}^{\infty}
\sum_{\{ j:J_j^{(n)}\subset G_n \} } w_n^2
&= \sum_{n=n_0}^{\infty} N_{n_0}N_{n_0+1}\cdots
N_n w_n^2\\
&\le C \sum_{n=n_0}^{\infty} 2^{-n} \exp\left( \frac{a^{n+1}}
{a-1}-\frac{b^{n+1}}{b-1}
\right) w_n^2 \\ & = C \sum_{n=n_0}^{\infty}
2^{-n}\exp\left( - \frac{b^{n+1}}{b-1}\right) < \infty.
\end{align*}
So Theorem \ref{T3.1} applies: There is
a bounded function $W$ so that for every
$n\ge n_0$, condition \eqref{3.1} holds
for all $j\in \mathbb N$ with $J_j^{(n)}\subset
G_n$. Since we want to apply Theorem \ref{T0.2}
to (multiples of) $W$ later on, we replace this
$W(x)$ by $W(x)-\int_0^1 W$ to make sure that
$\int_0^1 W=0$. Of course, this
doesn't change $\widehat{W}_j$ for $j\not= 0$.
Let $T=\bigcap_{n\ge n_0} G_n$. In the sequel, we
assume that $n_0=0$ and $G_0=F_0=[1,2]$.
This is done in order to simplify the notation;
the following arguments are of course
also valid in the general case. We summarize the properties of
$T$, and we also add an important additional
observation:
\begin{Lemma}
\label{LL5.1}
$G_n$ is a (disjoint) union of precisely $N_1N_2
\cdots N_n$ intervals $J_j^{(n)}$ of the form
\[
J_j^{(n)} = \left[ j\pi g_n - l_n/2,j\pi g_n + l_n/2
\right].
\]
Every $J_j^{(n)}\subset G_n$ contains exactly
$N_{n+1}$ intervals $J_i^{(n+1)}\subset
G_{n+1}$. If
$J_j^{(n)}\subset G_n$, then \eqref{3.1} holds. Moreover,
if $k\in J_j^{(n)}\subset G_n$, then
\[
\left| \frac{k}{g_n} - j\pi \right|
\le \frac{c_n|\widehat{W}_j|}{10j}.
\]
\end{Lemma}
{\it Proof.} It remains to check the last statement.
So suppose $k\in J_j^{(n)}\subset G_n$. Then,
by \eqref{33.0} and \eqref{3.1},
\[
\left| \frac{k}{g_n} - j\pi \right| \le
\frac{l_n}{2g_n} = \frac{\pi}{20}c_ng_n
\exp \left( -\frac{a^{n+1}}{2(a-1)} \right)
\le \frac{\pi}{20}c_ng_n |\widehat{W}_j|.
\]
Since $j\pi g_n \in G_n \subset F_0=[1,2]$,
we have $g_n\le 2/(\pi j)$, proving the assertion.
$\square$
There's a natural Borel (probability)
measure $\mu$ supported by $T$.
Namely, we define $\mu$ by the following two
properties: $\mu(\mathbb{R}\setminus T)=0$, and for
every $n$, $\mu$ puts
equal weight on the subintervals of $G_n$, that is
\[
\mu(J_j^{(n)}) = \frac{1}{N_1N_2\cdots N_n}
\]
for every $J_j^{(n)}\subset G_n$. It's easy to see
that $\mu$ is indeed well defined by these requirements.
We can now complete our analysis of $T$ by showing that
$T$ itself and certain subsets are not too small.
More precisely, we have:
\begin{Lemma}
\label{L3.1}
If $\mu(M)>0$, then $\dim M\ge D$.
\end{Lemma}
{\it Proof.}
We will prove
the following: For every $\gamma<D$,
\[
\sup\left\{ \frac{\mu(I)}{|I|^{\gamma}} :
I\text{ an interval with } |I|\le\delta \right\}
\to 0 \quad\quad (\delta\to 0);
\]
by the mass distribution principle for Hausdorff measures,
this implies the assertion. So fix $\gamma<D$ and $\delta>0$,
and let $I$ be an interval with
$|I|\le \delta$. Define
$n\in\mathbb N$ by $l_n< |I| \le l_{n-1}$. First, we consider
the case when $l_n < |I| \le \pi g_n$. Then $I$ intersects
at most two of the intervals $J_j^{(n)}$ that build up
$G_n$, so
\[
\frac{\mu(I)}{|I|^{\gamma}} \le \frac{2}
{N_1N_2\cdots N_n} |I|^{-\gamma} \le
\frac{2 l_n^{D-\gamma}}{N_1N_2\cdots N_n l_n^D}.
\]
Next, if $\pi g_n < |I| \le l_{n-1}$, then
$I$ intersects at most $|I|/(\pi g_n)+1 \le 2|I|/(\pi g_n)$
of the intervals $J_j^{(n)}$. Thus
\[
\frac{\mu(I)}{|I|^{\gamma}} \le \frac{2}{\pi g_nN_1N_2\cdots N_n}
|I|^{1-\gamma} \le
\frac{2 l_{n-1}}{\pi N_n g_n}
\frac{l_{n-1}^{D-\gamma}}{N_1N_2\cdots N_{n-1} l_{n-1}^D}.
\]
By \eqref{3.3}, we have that
\[
N_1\cdots N_n \ge C3^{-n}
\exp\left( \frac{a^{n+1}}{a-1} - \frac{b^{n+1}}{b-1} \right),
\]
and a computation using
\eqref{3.5}, \eqref{3.6}, \eqref{3.6a}, and \eqref{33.0} shows that
\[
l_n = \pi \exp \left( -\frac{a^{n+1}}{D(a-1)}
- b^{n+1} \right).
\]
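To spell this computation out:
\[
\ln\frac{l_n}{\pi} = \ln\frac{c_n}{10} + 2\ln g_n
- \frac{a^{n+1}}{2(a-1)}
= -\frac{a^{n+1}(3a-4)}{2(a-1)(a-2)} - b^{n+1},
\]
and solving \eqref{3.5} for $D$ gives $D=(2a-4)/(3a-4)$,
that is, $(3a-4)/(2(a-1)(a-2)) = 1/(D(a-1))$.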
Therefore, $N_1\cdots N_n l_n^D \ge C_1\exp(-C_2b^n)$.
Similarly, one sees that $l_{n-1}/( N_n g_n)\le C_1\exp(C_2b^n)$.
So, since $D-\gamma>0$ by assumption,
we get, in either case, an inequality of the form
\[
\frac{\mu(I)}{|I|^{\gamma}} \le C_1
\exp\left(C_2b^n-\epsilon a^n \right)
\]
with $\epsilon=\epsilon(D,\gamma)>0$. Our claim
follows, because $n$ goes to infinity as $\delta$ tends to
zero. $\square$
\section{Asymptotics of the solutions}
We know from Section 3 that we can expect
things to work out provided nothing special happens,
but there are two kinds of
bad luck we have to be prepared against: First of all, the
Pr\"ufer angle $\varphi$ could be close
to the exceptional phase $\theta$ from Lemma \ref{L2.2},
and second, the error term $R(j)$ from Theorem \ref{T0.2}
could be large.
The first problem is overcome by randomizing the Pr\"ufer
angles with the help of the parameters $\Delta_n$.
More precisely, we show
\begin{Lemma}
\label{L4.1}
Given measurable functions
$\theta_n(k)$,
it is possible to choose $\Delta_n\in [0,\pi/2]$ in such a way
that there is a subset $T_1\subset T$
of measure $\mu(T_1)\ge 1/2$
with the following property: If $k\in T_1$, then
\begin{equation}
\label{4.2}
\left| \varphi(a_{n},k)-\theta_n(k)
\right| \ge \frac{\pi}{8}
\end{equation}
for infinitely many $n$.
\end{Lemma}
{\it Remarks.} 1.\ In this Lemma and its proof, all angles
are evaluated modulo $\pi$.\\
2.\ As is probably already clear from the remarks
preceding the Lemma, $\theta_n$ will eventually
be the exceptional
phase defined by Lemma \ref{L2.2}.
However, the following proof clearly also works for
arbitrary (measurable) functions $\theta_n$.\\
3.\ Since $\varphi(a_{n},k)$ depends on the values
of $V(x)$ for $x\le a_{n}$, it's clear that we
should first fix
$c_n,g_n,L_n,W$, and only then can Lemma \ref{L4.1}
be used to finally pick the $\Delta_n$'s. (On top
of that, the exceptional angle $\theta_n$ from
Lemma \ref{L2.2} of course depends on $c_n,g_n,W$.)
Having said that,
we won't find it necessary to follow this correct order
in the presentation of our arguments, because
that would involve making rather unmotivated choices.
{\it Proof.} We begin by showing that the $\Delta_n$'s
can be (inductively) chosen so that
\begin{equation}
\label{4.1}
\mu\left( \{ k: |\varphi(a_{n},k)-\theta_n(k)|<\pi/8 \}
\right) \le 1/2
\end{equation}
for every $n$.
Since $V=0$ on $(b_{n-1},a_{n})$, we have
$\varphi(a_{n},k)=\varphi(b_{n-1},k)+k \Delta_{n-1}$. So, if we
denote the set defined in \eqref{4.1} by $M(\Delta_{n-1})$, then
\begin{align*}
\int_0^{\pi/2} \mu & (M(\Delta_{n-1}))\, d\Delta_{n-1} \\
& = \int d\mu(k) \int_0^{\pi/2} d\Delta_{n-1}\,
\chi_{(-\pi/8,\pi/8)}
(\varphi(b_{n-1},k)-\theta_n(k)+k\Delta_{n-1})\\
& \le
\int d\mu(k)\, \frac{\pi}{4k} \le \frac{\pi}{4}.
\end{align*}
Hence \eqref{4.1} must indeed hold for some
$\Delta_{n-1}\in [0,\pi/2]$.
Now \eqref{4.1} implies the statement of the Lemma by
an elementary probabilistic argument. Namely,
let $T'$ be the set of $k$'s for which \eqref{4.2}
holds only for finitely many $n$. Also, for $k\in T'$,
let $N(k)$ be the smallest integer with the property
that \eqref{4.2} fails for all $n\ge N(k)$. Then
the sets $T'_n\equiv \{ k\in T': N(k)\le n \}$ increase to
$T'$, thus $\mu(T')=\lim \mu(T'_n)$. But \eqref{4.1} in
particular implies that $\mu(T'_n)\le 1/2$ for every
$n$, so the proof is complete. $\square$
Next, we show that the error term $R(j)$ from Theorem
\ref{T0.2} can cause problems only on a set of $\mu$
measure zero. To formulate this precisely, we need
some notation: Of course, eventually, we want to get
information on the trace of the transfer matrix associated
with the potential $c_nW_{g_n}$. By Lemma \ref{L2.1}, we
can instead apply Theorem \ref{T0.2} to $c_nW$ if we also
replace $k$ by $k/g_n$. We thus denote the trace and the largest
eigenvalue (in absolute value) of the transfer matrix
associated with $c_nW$ by $D_n$ and $\lambda_n$,
respectively. Our goal is to establish the following
estimate for $k\in T$:
\begin{equation}
\label{4.10}
|\lambda_n(k/g_n)| \ge 1 + \frac{1}{6} c_ng_n
\exp\left(-\frac{a^{n+1}}{2(a-1)}\right).
\end{equation}
Indeed, we have
\begin{Lemma}
\label{L4.2}
Let $T_0=\{ k\in T: \text{\eqref{4.10} fails for
infinitely many }n\in\mathbb N \}$.
Then $\mu(T_0)=0$.
\end{Lemma}
{\it Proof.} In view of \eqref{mu},
inequality \eqref{4.10} is implied by
\begin{equation}
\label{4.11}
|D_n(k/g_n)|\ge 2 + \left( \frac{c_ng_n}{6} \right)^2
\exp\left( -\frac{a^{n+1}}{a-1} \right),
\end{equation}
so it suffices to show that for $\mu$-almost all
$k$, this latter condition holds for all but finitely
many $n$. Now if $k\in T$, then, for every $n$,
there's a unique $j=j(n,k)$ so that $k\in J_j^{(n)}$.
Moreover, $1\le j\pi g_n\le 2$; in particular,
$j\ge 1/(\pi g_n)$, and this is bigger than
$C_0 c_n \|W\|_1$ for $n$ large enough, because
$c_ng_n\to 0$. Using this and invoking Lemma \ref{LL5.1},
we see that for $n\ge n_0$,
Theorem \ref{T0.2} applies to $k/g_n$ and
$c_nW$. Here, $n_0$ is independent of $k\in T$. Thus
\begin{align}
|D_n(k/g_n)| & \ge 2+ \frac{c_n^2|\widehat{W}_j|^2}
{70j^2} - R_n(j) \nonumber\\
& \ge 2 + \frac{\pi^2}{280} c_n^2g_n^2
\exp\left(-\frac{a^{n+1}}{a-1}\right) - R_n(j).
\label{4.12}
\end{align}
To pass to the second line, we used \eqref{3.1} and
the fact that $j\le 2/(\pi g_n)$.
The remainder $R_n$ satisfies, for every $q>2$, the following
weak type estimate. Here, we absorb some irrelevant constants
into $C_q$, so the $C_q$ below
is of course not the same as the one
from Theorem \ref{T0.2}; in particular, it now depends
on $W$ (but not on $n$!).
\begin{equation}
\label{4.14}
\# \{j\in \mathbb N: 1\le j\pi g_n\le 2
\text{ and } R_n(j)> \lambda c_ng_n^3\}
\le C_q \lambda^{-q/2} c_n^q.
\end{equation}
We can now again reformulate the claim. Namely, let
\[
S_n=\{j\in \mathbb N: 1\le j\pi g_n\le 2
\text{ and } R_n(j) > \delta c_n^2 g_n^2
\exp(-a^{n+1}/(a-1))\}
\]
with $\delta=\pi^2/280-1/36$ (this is positive);
then if $k\in J_j^{(n)}$ with $j\notin S_n$,
then \eqref{4.12} implies \eqref{4.11}. In other words,
it suffices to prove that for $\mu$-almost all $k$,
there are only finitely many $n$ so that
\[
k\in \bigcup_{j\in S_n} J_j^{(n)}.
\]
But by definition of $\mu$,
\[
\mu\left( \bigcup_{j\in S_n} J_j^{(n)} \right)
\le \frac{|S_n|}{N_1\cdots N_n},
\]
and \eqref{4.14} with
\[
\lambda=\delta\frac{c_n}{g_n} \exp\left(
-\frac{a^{n+1}}{a-1}\right)
\]
says that
\[
|S_n| \le C_q \delta^{-q/2} (c_ng_n)^{q/2}
\exp\left( \frac{q}{2}\frac{a^{n+1}}{a-1} \right).
\]
Using this and inserting the definitions
\eqref{3.6}, \eqref{3.6a}, and
\eqref{3.3}, we finally arrive at
\begin{equation}
\label{4.15}
\mu\left( \bigcup_{j\in S_n} J_j^{(n)} \right) \le
C'_q \exp \left[ \frac{a^{n+1}}{a-1}
\left( \frac{q}{2}\frac{2a-5}{2a-4} - 1 \right)
+ Cb^n \right].
\end{equation}
We can now take $q=4a/(2a-1)$, say (obviously,
this is bigger than $2$, as required). Then the right-hand
side of \eqref{4.15} is summable, and hence an application
of the Borel-Cantelli Lemma completes the proof.
$\square$
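To see why this choice of $q$ works, note that
\[
\frac{q}{2}\,\frac{2a-5}{2a-4} - 1
= \frac{2a}{2a-1}\cdot\frac{2a-5}{2a-4} - 1
= \frac{2a(2a-5)-(2a-1)(2a-4)}{(2a-1)(2a-4)}
= \frac{-4}{(2a-1)(2a-4)} < 0,
\]
so the first term in the exponent in \eqref{4.15} is a negative
multiple of $a^{n+1}$, which dominates the term $Cb^n$
because $b<a$.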
We are now in a position to show that the potential we've
constructed (we still have to choose the $L_n$'s, though)
has the properties stated in Theorem \ref{T1.3}.
We begin by proving that the set $S$ defined in
\eqref{exc} contains $T_1\setminus T_0$. By Lemmas
\ref{L3.1}, \ref{L4.1}, and \ref{L4.2}, it must then indeed be
true that $\dim S\ge D$.
So fix $k\in T_1\setminus T_0$, and let $n$ be such
that \eqref{4.2} and \eqref{4.10} hold. There are
infinitely many such $n$'s. We want to show that the
solution of \eqref{1.1} grows on the interval $(a_n,b_n)$
(which corresponds to the $n$th step of the construction;
please consult Section 2 again for this and other
definitions). The Pr\"ufer radius $R$ satisfies
\[
R(b_n,k)=R(a_n,k)\, \left\|\left(T_{c_nW_{g_n}}(1/g_n,0;k)
\right)^{L_n}
e_{\varphi(a_n,k)}\right\|,
\]
so Lemmas \ref{L2.1} and \ref{L2.2} together with
\eqref{4.2} and \eqref{4.10} now show that
\[
R(b_n,k) \ge R(a_n,k) \sin (\pi/8) \left(
1 + \frac{1}{6} c_ng_n
\exp\left( -\frac{a^{n+1}}{2(a-1)} \right) \right)^{L_n},
\]
or, taking logarithms,
\[
\ln (R(b_n,k)/R(a_n,k)) \ge C L_n c_ng_n
\exp\left( -\frac{a^{n+1}}{2(a-1)} \right)
+ \ln (\sin(\pi/8)).
\]
This suggests that we take
\begin{equation}
\label{4.16}
L_n = \left[ A\frac{1}{c_ng_n}
\exp\left(\frac{a^{n+1}}{2(a-1)} \right)
\right],
\end{equation}
where $[x]$ denotes the largest integer $\le x$. By taking
the constant $A$ in this definition large enough, we can
achieve that
$\ln (R(b_n)/R(a_n)) \ge \ln 2$, say. Hence
\begin{equation}
\label{4.3}
\limsup_{n\to\infty} \frac{R(b_n,k)}{R(a_n,k)} \ge 2
\end{equation}
for all $k\in T_1\setminus T_0$.
On the other hand, it's easy to see
that WKB asymptotics would imply that $R(x,k)$ tends to
a positive limit as $x\to\infty$, and this is incompatible
with \eqref{4.3}. Thus $S\supset T_1\setminus T_0$,
as claimed.
It remains to check that the $V$ constructed above has
the right rate of decay, but this is easy: Clearly, if
$a_{n}\le x\le b_{n}$, then $|V(x)|\le c_ng_n^2\|W\|_{\infty}$,
and inserting the definitions \eqref{3.6}, \eqref{3.6a},
and \eqref{4.16} yields a bound of the form
\begin{equation}
\label{decayV}
|V(x)| \le C x^{-\alpha}
\exp\left( -c(\ln x)^{1-\epsilon} \right)
\end{equation}
with $c,\epsilon>0$; moreover, we can get
arbitrarily small $\epsilon$ by taking $b$ sufficiently
close to $a$.
I don't think this improvement of $V(x)=O(x^{-\alpha})$
is particularly
interesting. We use Hausdorff dimensions to measure
the size of the set $S$, so it's not natural to use a
scale finer than the power scale to classify the potentials.
The real point is the following:
As we show in the next section, we can sacrifice
a bit of the decay rate of $V$ to get more detailed
information on the solutions. Then, of course, it will
be most useful to start out with a bound better
than $O(x^{-\alpha})$.
\section{Embedded eigenvalues}
In this section, we show that it's also possible
to get $\dim P=2(1-\alpha)$.
Here, $P$ is the set
of positive energies (or wavenumbers, depending on
the context) with $L_2$ solutions, as defined
in the Introduction. Recall also that $P\subset S$,
so the following result is, in a sense, a strengthening
of Theorem \ref{T1.3}.
\begin{Theorem}
\label{T7.1}
Suppose $\alpha>2/3$. Then there is a potential
so that \eqref{1.2} holds and $\dim P=2(1-\alpha)$.
\end{Theorem}
To explain the significance of this variation on
Theorem \ref{T1.3}, we combine it with a result of
del Rio, Jitomirskaya, Last, and Simon:
\begin{Theorem}{\bf (= \cite[Theorem 5.1]{dRJLS})}
Let $B=\{\beta\in [0,\pi): \sigma_{pp}(H_{\beta})
\cap (0,\infty)\not=\emptyset \}$.
Then $\dim B = \dim P$.
\end{Theorem}
In other words, Theorem \ref{T7.1} now shows that
Theorem \ref{T1.5}b) is optimal
(for $\alpha\ge 2/3$).
{\it Proof of Theorem \ref{T7.1}.}
We use the results and notations of the preceding
sections.
The basic idea of the modification is as follows.
We will try to get rapidly increasing solutions
on the set $T$, and then prove that there are also
decaying solutions which will eventually turn out
to be square integrable.
To this end, we modify the construction at two places.
The $L_n$'s will be taken somewhat larger, and the
$\Delta_n$'s will be picked such that eventually the Pr\"ufer
angle is {\it never} very close to the phase
$\theta$ corresponding to the decaying direction.
We discuss this latter point first. For technical
reasons, it will be necessary to work with two
linearly independent solutions $y_1, y_2$ in this proof. We can
take, let's say, $y_1(0)=y'_2(0)=1, y'_1(0)=
y_2(0)=0$. Denote the corresponding Pr\"ufer angles
by $\varphi_1$ and $\varphi_2$, respectively.
The following statement will replace
Lemma \ref{L4.1}.
\begin{Lemma}
\label{L7.1}
Given measurable functions
$\theta_n(k)$, it
is possible to choose $\Delta_n\in [0,\pi/2]$ such
that the following holds on a set $T_1\subset T$
of measure $\mu(T_1)=1$. If $k\in T_1$, then there
is an $n_0=n_0(k)$ so that
\[
|\varphi_i(a_{n},k)-\theta_n(k)| \ge 1/n^2
\]
for all $n\ge n_0$ and for $i=1,2$.
\end{Lemma}
{\it Proof.} Let
\[
M_n^{(i)} = \{k\in T: |\varphi_i(a_{n},k)-\theta_n(k)|
< 1/n^2 \},
\]
$M_n=M_n^{(1)}\cup M_n^{(2)}$, and argue as in the
first part of the proof of Lemma \ref{L4.1}; one sees
that the $\Delta_n$'s can inductively be picked so that
$\mu(M_n) \le 8/(\pi n^2)$. This is summable, so
the claim follows by the Borel-Cantelli Lemma.
$\square$
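For completeness, the averaging step (exactly as in the first
part of the proof of Lemma \ref{L4.1}, with $k\ge 1$ on $T$ and
windows of length $2/n^2$ for each of $i=1,2$) reads:
\[
\int_0^{\pi/2} \mu(M_n)\, d\Delta_{n-1}
\le \sum_{i=1,2} \int d\mu(k)\, \frac{2/n^2}{k}
\le \frac{4}{n^2},
\]
so some $\Delta_{n-1}\in[0,\pi/2]$ satisfies
$\mu(M_n)\le (4/n^2)/(\pi/2)=8/(\pi n^2)$.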
To go from increasing solutions to decaying solutions,
we will need the following tool.
\begin{Proposition}{\bf (= \cite[Theorem 8.1]{LS})}
\label{P7.1}
Let $B_n\in \mathbb R^{2\times 2}$ with
$\det B_n=1$, and put $T_n=B_n B_{n-1}\cdots B_1$.
Suppose that
\[
\sum_{n=1}^{\infty} \frac{\|B_{n+1}\|^2}{\|T_n\|^2}
< \infty.
\]
Then there is a $u\in \mathbb R^2, \|u\|= 1$ so that
\begin{equation}
\label{estP7.1}
\|T_n u\|^2 \le 8 \left[ \|T_n\|^{-2}
+ \|T_n\|^2 \left( \sum_{m=n}^{\infty} \frac{\|B_{m+1}\|^2}
{\|T_m\|^2} \right)^2 \right].
\end{equation}
Moreover, if $v\in \mathbb R^2$ is linearly independent
of $u$, then
\[
\inf_{n\in\mathbb N} \frac{\|T_n v\|}{\|T_n\|} >0.
\]
\end{Proposition}
See \cite{LS} for the {\it proof}.
The ``moreover'' part is not explicitly
stated in \cite{LS}, but is established in the
proof given there.
In this section, it will be necessary to analyze the
behavior of the solutions at the points $x_{n,j}\equiv
a_n + j/g_n$. Lemma \ref{L7.1}, however, gives control
on the phases $\varphi$ only at the points $x_{n,0}$.
We will therefore need the following extension of
Lemma \ref{L2.2}. (We drop $k$ in the notation.)
\begin{Lemma}
\label{L7.2}
In the situation of Lemma \ref{L2.2}, we also have
that
\[
\|T^{m+n}e_{\varphi}\| \ge \|T^m e_{\varphi}\|\cdot
|\lambda|^n \sin|\theta-\varphi|
\]
for all $m,n\in\mathbb N_0$.
\end{Lemma}
Clearly, in the special case $m=0$, we recover
Lemma \ref{L2.2}.
{\it Proof.}
We use the notation from the proof of Lemma \ref{L2.2}.
Of course, we can again assume that
$\psi=0$. In fact, we can further reduce the general
case to the following situation:
$\lambda>1$, $0<\theta\le
\pi/2$, and
$0< \varphi < \pi$ (the case $\varphi=0$ is of course
trivial).
We define $\omega\in (0,2\pi)$ by
$T^me_{\varphi}=\|T^me_{\varphi}\|e_{\omega}$. We must then
show that $\|T^ne_{\omega}\|\ge \lambda^n \sin|\theta-
\varphi|$. Intuitively,
this holds because the position of $\omega$ is not
worse than that of the original phase $\varphi$.
More precisely, we have the following:
If $0<\varphi<\theta$, then $0<\omega
<\varphi$, and if $\theta<\varphi<\pi$, then
$\varphi<\omega<\pi$. In other words, $T^me_{\varphi}$
approaches the subspace corresponding to the
large eigenvalue $\lambda$. Since this is geometrically
obvious, we don't give the formal verification
(which is carried out using the explicit form of $T$).
Now if $0<\omega\le \pi/2 +\theta$, then
$\sin|\theta - \omega|\ge \sin|\theta-\varphi|$
by the discussion above,
so applying Lemma \ref{L2.2} with $\omega$ taking
the role of $\varphi$ gives the assertion in this case.
But if $\pi/2 +\theta<\omega<\pi$, things are even
better, because then $\|T^ne_{\omega}\|\ge \lambda^n$.
To prove this, it suffices to consider $n=1$ because
by the above remarks again, under repeated application of
$T$, the phase will remain in the interval
$(\pi/2+\theta,\pi)$. By direct computation
as in the proof of Lemma \ref{L2.2},
we have that
\begin{equation}
\label{7.1}
\|Te_{\omega}\|^2=\frac{\lambda^2}{\sin^2\theta}
f(\lambda^{-2}),
\end{equation}
where
\[
f(x)\equiv x^2\sin^2\omega + \sin^2(\omega-\theta)
-2x\cos\theta\sin\omega\sin(\omega-\theta).
\]
The parabola $f$ has its minimum at
\[
x_0= \frac{\cos\theta\sin(\omega-\theta)}{\sin\omega}.
\]
Since $0<\theta<\pi/2$ and $\pi/2+\theta<\omega<\pi$,
we see that
\[
x_0=\cos^2\theta + \sin\theta\cos\theta|\cot\omega| \ge
\cos^2\theta+\sin\theta\cos\theta|\cot(\pi/2+\theta)| = 1.
\]
By assumption $\lambda^{-2}\le 1$, so we get from \eqref{7.1}
\[
\|Te_{\omega}\|^2\ge\frac{\lambda^2}{\sin^2\theta}
f(1)= \lambda^2,
\]
as claimed. $\square$
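The identity $f(1)=\sin^2\theta$ used in the last step is a
routine trigonometric computation:
\begin{align*}
f(1) &= \sin^2\omega + \sin^2(\omega-\theta)
- 2\cos\theta \sin\omega \sin(\omega-\theta)\\
&= \sin^2\omega + \sin(\omega-\theta)
\bigl( \sin(\omega-\theta) - 2\cos\theta\sin\omega \bigr)\\
&= \sin^2\omega - \sin(\omega-\theta)\sin(\omega+\theta)
= \sin^2\omega - \left( \sin^2\omega - \sin^2\theta \right)
= \sin^2\theta .
\end{align*}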
With these preparations out of the way, we can now
begin the more careful analysis of the solutions, as
outlined above. To this end, fix $k\in T_1\setminus T_0$,
where $T_0,T_1$ are the sets defined in Lemmas
\ref{L4.2} and \ref{L7.1}, respectively. (Although
we're about to change $\Delta_n,L_n$, we may
still apply Lemma \ref{L4.2}, because the set $T_0$
defined there is independent of these parameters.)
By these Lemmas together with Lemma \ref{L3.1},
$\dim T_1\setminus T_0 \ge D$.
Recall that $x_{n,j}
=a_n+j/g_n$, and abbreviate the corresponding
transfer matrix by $T_{n,j}\equiv
T(x_{n,j},0;k)$. Our first aim is to show that
Proposition \ref{P7.1} applies to this situation.
More precisely, we claim that for appropriate
$L_n$'s (see \eqref{defL} below),
\[
\sum_{n=1}^{\infty}\sum_{j=0}^{L_n}
\frac{\|T(x_{n,j+1},x_{n,j};k)\|^2}{\|T_{n,j}\|^2}
<\infty
\]
(here, we set $x_{n,L_n+1}=x_{n+1,0}$):
First of all, we observe that the sequence of integrals
\[
\int_{x_{n,j}}^{x_{n,j+1}}|V(x)|\,dx =
c_ng_n \|W\|_1
\]
is bounded. Thus also $\|T(x_{n,j+1},x_{n,j};k)\|
\le C$ by a Gronwall estimate. Moreover, writing
$R_{n,j}=R(x_{n,j},k)$ for the Pr\"ufer radius
(corresponding to either $y_1$ or $y_2$)
at $x_{n,j}$, we clearly have that $\|T_{n,j}\|\ge R_{n,j}$,
so it suffices to prove that $\sum_{n,j} R_{n,j}^{-2}
<\infty$.
Now for $n$ big enough and $l\ge j$, we can proceed as in the
preceding section to estimate $R_{n,l}/R_{n,j}$. Namely,
we have that $R_{n,l}=R_{n,0}\|T^l e_{\varphi}\|$, where
$T\equiv T_{c_nW}(k/g_n)$ and $\varphi\equiv \varphi(a_n,k)$.
Of course, the same relation holds if $l$ is replaced by
$j$. Thus $R_{n,l}/R_{n,j} = \|T^{l-j+j}e_{\varphi}\|/
\|T^je_{\varphi}\|$, and we can now invoke Lemma \ref{L7.2}
together with Lemma \ref{L7.1} and inequality \eqref{4.10}.
We get
\begin{equation}
\label{7.2}
\ln (R_{n,l}/R_{n,j}) \ge C(l-j)
c_ng_n \exp \left( -\frac{a^{n+1}}{2(a-1)}\right)
-2\ln n\quad\quad(C>0).
\end{equation}
The last term comes from the
bound $1/n^2$
on $\sin|\theta-\varphi|$. In this section, we take
\begin{equation}
\label{defL}
L_n= \left[ A^n \frac{1}{c_ng_n}
\exp \left( \frac{a^{n+1}}{2(a-1)}\right) \right],
\end{equation}
where $A$ is a constant with $A>a$.
Then \eqref{7.2},
when specialized to
$j=0,l=L_n$, says that
\begin{equation}
\label{7.3}
\ln (R_{n+1,0}/R_{n,0}) \ge CA^n.
\end{equation}
Using \eqref{7.2}, \eqref{7.3}, it's now easy to
obtain bounds on $R^{-2}$ which show that indeed
$\sum_{n,j} R_{n,j}^{-2}
<\infty$.
We omit the details, because a similar
analysis will be carried out below.
We now want to show that the solution $y$ obtained
from the vector $u$ from Proposition \ref{P7.1} as
$(y(x),y'(x)/k)^t=T(x,0;k)u$ is square integrable.
So we must analyze the right-hand side of \eqref{estP7.1}.
To begin with, we notice that by the last statement
of Proposition \ref{P7.1}, we must have an estimate
of the type
$R_{n,j}\ge C\|T_{n,j}\|$ with $C>0$ for at least one
of the two solutions $y_1,y_2$ (recall that $R$ involved
a choice between $y_1,y_2$, even though this has not
been made explicit in the notation). Fix such a solution once and
for all. Since the reverse inequality $R\le \|T\|$ is
trivially true, we may replace $\|T\|$ by $R$ whenever
it occurs on the right-hand side of \eqref{estP7.1}.
Finally, we can of course again estimate $\|B_{m+1}\|$
by a constant. Thus, we can work with
\[
\|T_{n,j}u\|^2 \le CR_{n,j}^{-2}\left[
1+ \left( \sum_{x_{m,l}\ge x_{n,j}}
\frac{R_{n,j}^2}{R_{m,l}^2}
\right)^2 \right].
\]
We break this up according as $m=n$, $m=n+1$, or
$m\ge n+2$:
\[
\|T_{n,j}u\|^2 \le CR_{n,j}^{-2}\left[
1+ \left( \sum_{l=j}^{L_n}
\frac{R_{n,j}^2}{R_{n,l}^2}+
\sum_{l=0}^{L_{n+1}}
\frac{R_{n,j}^2}{R_{n+1,l}^2}+ \sum_{m=n+2}^{\infty}
\sum_{l=0}^{L_m}\frac{R_{n,j}^2}{R_{m,l}^2}\right)^2
\right],
\]
and we call these three sums
$\Sigma_1$, $\Sigma_2$, and $\Sigma_3$,
respectively. As for $\Sigma_{1,2}$, we use the
crude estimates $\Sigma_1\le Cn^4L_n$, $\Sigma_2 \le
C n^4 L_{n+1}$ which follow at once from
\eqref{7.2} and also \eqref{7.3} in the case
of $\Sigma_2$.
To analyze $\Sigma_3$,
we write it in the form
\[
\Sigma_3 = \sum_{m=n+2}^{\infty}\sum_{l=0}^{L_m}
\frac{R_{n,j}^2}{R_{n+1,0}^2}\frac{R_{m,0}^2}{R_{m,l}^2}
\frac{R_{n+1,0}^2}{R_{m,0}^2}.
\]
Noting that $R_{n+1,0}=R_{n,L_n}$, we can again use
\eqref{7.2} to bound
the first two ratios by $Cn^4$ and $Cm^4$,
respectively. The last ratio is
controlled by iterating \eqref{7.3}. This gives
a bound of the type $\exp(-CA^m)$. Hence
\[
\Sigma_3\le C_1n^4\sum_{m=n+2}^{\infty}m^4L_m
\exp(-C_2A^m).
\]
Collecting the bounds just proved and using a similar
estimate on $R_{n,j}^{-2}$, we get
\[
\|T_{n,j}u\|^2 \le C_1n^4\exp(-C_2A^n)
\left( n^4L_{n+1}^2 +n^4 \sum_{m=n+2}^{\infty}
m^4L_m \exp(-C_2A^m) \right)^2,
\]
with positive constants $C_1,C_2$.
In view of the definitions \eqref{3.6}, \eqref{3.6a},
and \eqref{defL}, it's now clear that
the small factors $\exp(-C_2A^n)$,
$\exp(-C_2A^m)$ dominate everything else. Straightforward
manipulations (details are left to the reader) finally
give an estimate of the form $\|T_{n,j}u\|^2 \le C
\exp(-\delta A^n)$, with $\delta>0$.
It now follows easily that the solution $y$ defined
as the first component of $T(x)u$ is in $L_2$.
Namely, since $\int_{x_{n,j}}^{x_{n,j+1}}|V|$ is
bounded, a Gronwall type estimate lets us extend
the inequality $\|T(x)u\|^2 \le C
\exp(-\delta A^n)$ to all of $x_{n,j}\le x \le x_{n,j+1}$
(the constant $C$ here might be larger than the original
one). Thus
\[
\int_0^{\infty}|y(x)|^2\, dx \le C
\sum_{n=1}^{\infty}\left(\Delta_n +L_n
g_n^{-1}\right) \exp(-\delta A^n),
\]
and this is finite since $\Delta_n+L_ng_n^{-1}
\le \exp(Ca^n)$.
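The last bound follows by inserting \eqref{3.6}, \eqref{3.6a},
and \eqref{defL}: every exponent occurring in
$\ln (L_ng_n^{-1})$ is $O(a^{n+1})$, while $\Delta_n\le\pi/2$.
Since $A>a$, the series then indeed converges:
\[
\sum_{n=1}^{\infty} \exp\left( Ca^n - \delta A^n \right)
< \infty.
\]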
Finally, we have to estimate $V$. We proceed as in
the last part of Section 6. Instead of \eqref{decayV},
we obtain this time that $|V(x)|\le Cx^{-\alpha}$ if
$a_{n}\le x\le b_{n}$: the additional factor $A^n$ from
\eqref{defL} is absorbed by the factor $e^{-b^{n+1}}$
from \eqref{3.6a}, which was built into the construction
for precisely this purpose. This completes the proof
of Theorem \ref{T7.1}. $\square$
\section{Dense point spectrum}
We now prove Corollary \ref{C1.1}. Our starting point
is the following variant of a construction of Simon
\cite{Simpp}.
\begin{Theorem}
\label{T9.1}
Given $k_m^{(n)}>0\: (n\in\mathbb N, m=1,\ldots,
N_n)$, put
\[
f_n(x;\theta)= \sum_{m=1}^{N_n} k_m^{(n)}
\sin\left( 2k_m^{(n)}x + \theta_m^{(n)}\right).
\]
Then there are cut-offs $x_n\ge 0, x_n\to\infty$
so that for arbitrary
$\theta_m^{(n)}\in [0,2\pi )$, the Schr\"odinger equation
\eqref{1.1} with potential
\[
V(x)= \frac{4}{1+x} \sum_{n=1}^{\infty} f_n(x;\theta)
\chi_{(x_n,\infty)}(x)
\]
has an $L_2$ solution for all $E\in \left\{\left( k_m^{(n)}
\right)^2 \right\}$.
\end{Theorem}
We refer to \cite{Simpp} for the {\it proof},
but a few comments may be helpful. First of all,
note that the sum defining $V(x)$ is finite for each
fixed $x$, so convergence is not an issue.
Then, in \cite{Simpp}, the $k$'s are not grouped
together as in the definition of the functions $f_n$ above,
but that doesn't make a real difference in the proof.
The basic idea is that $V_N$, defined as the finite
sum $4/(1+x)\sum_{n=1}^N \cdots$, would generate
$L_2$ solutions at the $k$'s occurring in the sum
by the standard theory of
von Neumann-Wigner potentials \cite{East2}.
Then, with $x_{N+1}$ chosen
sufficiently large, one shows that things change
only a little at the energies considered so far when
passing from $V_N$ to $V_{N+1}$.
In \cite{Simpp}, Simon uses the $\theta$'s to control
the initial phases of the $L_2$ solutions. Here, we
trade this additional information for bounds on $|V|$,
thus obtaining an example that proves Corollary
\ref{C1.1}. We specialize to the situation where
\[
k_m^{(n)} = \epsilon_n \left( 1+\frac{m}{N_n}\right),
\quad\quad m=1,\ldots, N_n,
\]
with $\epsilon_n>0, N_n\in\mathbb N$.
One could of
course treat other examples as well, but I don't see
how to obtain Theorem \ref{T1.4} in full generality
with the method of this section. The crucial property
of the choice above is that $f_n$ becomes a periodic
function with not too large period. To see that
something of this sort is necessary to get reasonable
bounds on $f_n$, consider the extreme case when the
$k_m^{(n)}$ ($m=1,\ldots,N_n$) are rationally independent:
Then $\|f_n\|_{\infty}=\sum_m k_m^{(n)}$, independently
of the $\theta$'s, and this is far too big for our
purposes.
We will use the fact that random trigonometric
polynomials are, up to logarithmic (in the degree)
corrections, bounded pointwise by their $L_2$ norm.
In the case at hand, this result takes the following
form:
\begin{Proposition}
\label{P9.1}
There exist $\theta_m^{(n)}\in [0,2\pi)$ so that
\[
\|f_n(\cdot;\theta)\|_{\infty} \le
4 \left[ \epsilon_n^2 N_n \ln (32\pi N_n^2)\right]^{1/2}.
\]
\end{Proposition}
Although this is merely an adaptation of classical
methods to the situation under consideration here, we will
give the full proof below.
However, let us first show how the Proposition can be
used to prove Corollary \ref{C1.1}.
To this end,
we let $\epsilon_n=2^{-n^2-n/2}, N_n=2^{2n^2}$. Then
\[
\sum_{m=1}^{N_n} \left( k_m^{(n)} \right)^{2-1/n}
\ge \epsilon_n^{2-1/n} N_n = 2^{1/2},
\]
so $E_{m,n}\equiv \left( k_m^{(n)} \right)^2$ is
not in $\ell_p$ for any $p<1$. On the other hand,
with $\theta$'s picked according to Proposition \ref{P9.1},
we have
\[
\|f_n\|_{\infty} \le Cn2^{-n/2},
\]
which is summable in $n$; hence the $V$ from Theorem \ref{T9.1}
satisfies $V(x)=O((1+x)^{-1})$, as required.
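As a sanity check on the exponent arithmetic (a numerical aside, not part of the proof), one can verify in Python that with $\epsilon_n=2^{-n^2-n/2}$ and $N_n=2^{2n^2}$ the lower bound $\epsilon_n^{2-1/n}N_n$ equals $2^{1/2}$ for every $n$, and that the bound of Proposition \ref{P9.1} is indeed $O(n2^{-n/2})$:

```python
import math

lowers, bounds = [], []
for n in range(1, 6):
    eps = 2.0 ** (-n * n - n / 2)     # epsilon_n = 2^(-n^2 - n/2)
    N = 2 ** (2 * n * n)              # N_n = 2^(2 n^2)
    # each k_m >= eps, so sum_m (k_m)^(2 - 1/n) >= eps^(2 - 1/n) * N = 2^(1/2)
    lowers.append(eps ** (2 - 1 / n) * N)
    # Proposition bound: eps^2 * N = 2^(-n) and ln(32 pi N^2) = O(n^2),
    # so the bound decays like n * 2^(-n/2)
    bounds.append(4 * math.sqrt(eps ** 2 * N * math.log(32 * math.pi * N ** 2)))

print(lowers)   # every entry equals sqrt(2) up to rounding
print(bounds)
```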
{\it Proof of Proposition \ref{P9.1}.}
We basically
follow the development of \cite[Chapter 6]{Kah}.
Of course, $n$ is fixed, so we can drop this
index in the notation. We view the $\theta_m$
($m=1,\ldots,N$) as independent, identically
distributed random variables with uniform
distribution, and we show that the assertion
in fact holds with large probability. The function
$f$ is periodic in $x$ with period $\pi N/\epsilon$, so
\[
M(\theta) \equiv \sup_x \left| f(x;\theta)\right|
= \max_{0\le x\le \pi N/\epsilon}
\left| f(x;\theta)\right|.
\]
Pick $x_0$ so that $|f(x_0)|=M$. Notice that
\[
k_m=\frac{2\epsilon}{\pi N}\int_0^{\pi N/\epsilon}
f(x)\sin(2k_mx+\theta_m)\, dx \le 2M,
\]
hence
\[
\left| f'(x)\right| = 2\left|
\sum_{m=1}^N k_m^2 \cos (2k_mx+\theta_m)\right|
\le 8 \epsilon N M.
\]
It follows that
$|f(x)|\ge M- 8 \epsilon NM|x-x_0|$, so $|f(x)|\ge M/2$
as soon as $|x-x_0|\le 1/(16\epsilon N)$.
In particular, there's an interval $I=I(\theta)\subset
[0,\pi N/\epsilon]$ of
length
\[
|I(\theta)| \ge \min \left\{ \frac{1}{8 \epsilon N},
\frac{\pi N}{\epsilon} \right\}=
\frac{1}{8 \epsilon N}
\]
so that $|f(x)|\ge M/2$ for all $x\in I$.
We can now estimate the quantity $E(e^{\lambda M/2})$,
where $E(\cdots)$ denotes expectation with respect
to the probability measure introduced above, and
$\lambda>0$ will be chosen later. Namely, also using
independence and the computation
\begin{align*}
\frac{1}{2\pi}\int_0^{2\pi} e^{a\sin(\alpha +\theta)}
\, d\theta & = \sum_{n=0}^{\infty} \frac{a^n}{n!}\,
\frac{1}{2\pi} \int_0^{2\pi} \sin^n\theta\, d\theta\\
& =\sum_{n=0}^{\infty} \frac{a^{2n}}{(2n)!}
4^{-n} \binom{2n}{n} \, \le \, e^{a^2/4},
\end{align*}
we get
\begin{align*}
\frac{1}{8\epsilon N}E\left(e^{\lambda M/2}\right) & \le
E\left( |I| e^{\lambda M/2}\right)
\le E \left( \int_{I(\theta)}
\left( e^{\lambda f(x;\theta)} +
e^{-\lambda f(x;\theta)} \right)\, dx \right) \\
&\le \int_0^{\pi N/\epsilon} E\left( e^{\lambda f(x)}+
e^{-\lambda f(x)} \right) \, dx
\le \frac{2\pi N}{\epsilon}\, e^{\lambda^2\epsilon^2 N}.
\end{align*}
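The single-phase inequality used in this computation, $\frac{1}{2\pi}\int_0^{2\pi}e^{a\sin(\alpha+\theta)}\,d\theta\le e^{a^2/4}$, is easy to check numerically; the left-hand side is the modified Bessel function $I_0(a)$. The following Python snippet is only an illustration and plays no role in the proof.

```python
import math

def avg_exp_sin(a, K=20000):
    # (1/(2 pi)) * integral_0^{2 pi} exp(a sin(theta)) d(theta),
    # computed by the midpoint rule; this equals I_0(a)
    return sum(math.exp(a * math.sin((j + 0.5) * 2 * math.pi / K))
               for j in range(K)) / K

for a in [0.5, 1.0, 2.0, 4.0]:
    lhs, rhs = avg_exp_sin(a), math.exp(a * a / 4)
    print(a, lhs <= rhs)   # the inequality holds for each a
```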
We can rewrite the estimate above in the form
\[
E\left( \exp \left(\frac{\lambda}{2}
\left[ M -
2\lambda\epsilon^2N-\frac{2}{\lambda}\ln (32\pi N^2)
\right]\right)\right) \le \frac{1}{2}
\]
and deduce that with probability at least $1/2$,
\[
M \le 2\lambda\epsilon^2N+\frac{2}{\lambda}\ln (32\pi N^2).
\]
With $\lambda=\epsilon^{-1} N^{-1/2} (\ln (32\pi N^2))^{1/2}$,
we obtain the claim. $\square$
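As an informal check of the Proposition (again a numerical aside with illustrative parameters, not part of the argument), one can sample random phases and verify that $\sup_x|f(x;\theta)|$ stays below $4[\epsilon^2N\ln(32\pi N^2)]^{1/2}$ in well over half of the trials:

```python
import math
import random

def sup_f(ks, thetas, period, grid=2000):
    # approximate sup_x |f(x; theta)| over one period on a grid
    return max(abs(sum(k * math.sin(2 * k * (period * i / grid) + t)
                       for k, t in zip(ks, thetas)))
               for i in range(grid + 1))

eps, N = 1.0, 32                      # illustrative values only
ks = [eps * (1 + m / N) for m in range(1, N + 1)]
period = math.pi * N / eps
bound = 4 * math.sqrt(eps ** 2 * N * math.log(32 * math.pi * N ** 2))

random.seed(1)
trials = 40
good = sum(sup_f(ks, [random.uniform(0, 2 * math.pi) for _ in ks], period) <= bound
           for _ in range(trials))
print(good / trials)                  # fraction of successful trials
```

In this small experiment the success rate is far above the $1/2$ guaranteed by the proof; the probabilistic argument is of course not sharp in this respect, since it only needs to produce one good choice of phases.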
\begin{appendix}
\section{Norm bounds}
In this appendix, we sketch the proof of
Theorem \ref{TCK}. We follow \cite{CK} and
private notes of Kiselev. First of all,
the bound on $M_f$ follows from the fact
that the Fourier transform itself is bounded as a map
from $L_p(0,1)$ to $\ell_{q}$ if $1\le p \le 2$
and $1/p+1/q=1$. Now such norm bounds automatically
carry over to the corresponding maximal functions,
provided that $p