\documentclass{article}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\newtheorem{Theorem}{Theorem}[section]
\newtheorem{Proposition}[Theorem]{Proposition}
\newtheorem{Lemma}[Theorem]{Lemma}
\newtheorem{Definition}[Theorem]{Definition}
\newtheorem{Corollary}[Theorem]{Corollary}
\begin{document}
\title{Finite gap potentials and WKB asymptotics for one-dimensional Schr\"odinger operators}
\author{Thomas Kriecherbauer$^1$ and Christian Remling$^2$}
\date{February 26, 2001}
\maketitle
\begin{center} (to appear in {\it Commun.\ Math.\ Phys.}) \end{center}
\vspace{0.5cm}
\noindent 1.\ Universit\"at M\"unchen, Mathematisches Institut, Theresienstr.\ 39, 80333 M\"unchen, GERMANY\\
E-mail: tkriech@rz.mathematik.uni-muenchen.de\\[0.1cm]
2.\ Universit\"at Osnabr\"uck, Fachbereich Mathematik/Informatik, 49069 Osnabr\"uck, GERMANY\\
E-mail: cremling@mathematik.uni-osnabrueck.de\\[0.3cm]
2000 AMS Subject Classification: primary 34L40, 81Q10, secondary 30F99 \\[0.3cm]
Key words: Schr\"odinger operator, finite gap potential, singular spectrum, Jacobi inversion problem\\[0.3cm]
\begin{abstract}
Consider the Schr\"odinger operator $H=-d^2/dx^2+V(x)$ with power-decaying potential $V(x)=O(x^{-\alpha})$. We prove that a previously obtained dimensional bound on exceptional sets of the WKB method is sharp in its whole range of validity. The construction relies on pointwise bounds on finite gap potentials. These bounds are obtained by an analysis of the Jacobi inversion problem on hyperelliptic Riemann surfaces.
\end{abstract}
\section{Introduction}
We are interested in one-dimensional Schr\"odinger equations, \begin{equation} \label{se} -y''(x)+V(x)y(x)=Ey(x), \end{equation} and the spectra of the corresponding self-adjoint operators $H_{\beta}=-d^2/dx^2 + V(x)$ on $L_2(0,\infty)$, say. The index $\beta\in [0,\pi)$ refers to the boundary condition $y(0)\cos\beta + y'(0)\sin\beta = 0$. The spectral properties of the operators $H_{\beta}$ give information on the large time behavior of the quantum mechanical system described by \eqref{se}. In this paper, we will present an alternate approach to an earlier result of one of us \cite{Remex}. With this new approach, we can remove a technical condition and thus prove that a previously obtained bound on the embedded singular spectrum of $H_{\beta}$ is sharp in its whole range of validity. We will describe this result shortly; let us first point out that the new idea of this paper is to use finite gap potentials in the construction of \cite{Remex}. The main difficulty is to obtain good pointwise bounds on these potentials. A substantial part of this paper is devoted to this problem. More specifically, we will have to study in some detail the Jacobi inversion problem on hyperelliptic Riemann surfaces. The result of this analysis is formulated as Theorem \ref{T3.2}.
Actually, our proof gives more than stated: We obtain a whole sequence of good pointwise approximations (where, very roughly, ``good'' means better than expected, due to cancellations) to finite gap potentials. While our motivation for proving Theorem \ref{T3.2} is to provide tools for the proof of Theorem \ref{T1.1} below, this discussion is perhaps of independent interest. Let us now return to \eqref{se}; suppose that the potential $V$ is bounded by a decaying power, that is, \begin{equation} \label{hyp} |V(x)| \le \frac{C}{(1+x)^{\alpha}} \quad\quad (\alpha > 0). \end{equation} Then, if $\alpha > 1/2$, the operators $H_{\beta}$ have absolutely continuous spectrum essentially supported by $(0,\infty)$, as was first proved in \cite{CK,Remac}. Embedded singular spectrum can occur if $\alpha \le 1$ (see \cite{Nab,Simpp,vNW}), but there are restrictions on the dimension of the singular part of the spectral measure. This is intimately related to the problem of solving \eqref{se} asymptotically (for large $x$). We say that a solution $y(x,E)$ of the Schr\"odinger equation satisfies the WKB asymptotic formulae if \begin{equation} \label{WKB} \left(\begin{array}{c} y(x,E) \\ y'(x,E) \end{array} \right) = \left( \begin{array}{c} 1 \\ i\sqrt{E} \end{array} \right) \exp\left( i\int_0^x \sqrt{E-V(t)}\, dt \right) + o(1) \quad\quad (x\to\infty). \end{equation} It is well known that there exist solutions of \eqref{se} satisfying \eqref{WKB} for all $E>0$ if the potential $V$ decays and is slowly varying in a suitable sense (see, for instance, \cite[Chapter 2]{East1}). Obviously, this latter assumption need not hold if $V$ only satisfies \eqref{hyp}. Nevertheless, recent work \cite{CK,CK1,CK2,Remac,Remdim} has shown that \eqref{WKB} continues to hold off a small exceptional set of energies $E$ as long as $\alpha>1/2$. Call this exceptional set $S$; in other words, \begin{equation} \label{defS} S=\{ E>0: \text{ No solution of \eqref{se} satisfies \eqref{WKB}} \} . \end{equation} General criteria \cite{GP,Sbdd,Stolz} show that if there is some embedded singular spectrum on $(0,\infty)$, then the corresponding parts of the spectral measures are supported by $S$. In other words, if $\rho^{(\beta)}$ denotes the spectral measure of $H_{\beta}$, then $\rho_{sing}^{(\beta)}((0,\infty)\setminus S)=0$ for all $\beta$. Therefore, it is interesting to study $S$ in detail. We know from \cite{CK,Remac} that $S$ is of Lebesgue measure zero if $\alpha>1/2$; this was subsequently strengthened in \cite{Remdim} where it was proved that the Hausdorff dimension of $S$ satisfies $\dim S\le 2(1-\alpha)$. Formally, this result is valid for all $\alpha\in\mathbb R$ (if one defines $\dim\emptyset=-\infty$), but it gives nontrivial information only if $1/2<\alpha\le 1$. We will show that this bound is sharp and is even attained for suitable potentials: \begin{Theorem} \label{T1.1} For every $\alpha\in (1/2,1]$, there exist potentials $V(x)$ satisfying \eqref{hyp}, so that $\dim S= 2(1-\alpha)$. \end{Theorem} If $\alpha\notin (1/2, 1]$, the whole picture is different. More precisely, if $\alpha\le 1/2$, then $S$ can have full Lebesgue measure in $(0,\infty)$ \cite{KLS,KotU,Sim82} and the spectrum can be purely singular. On the other hand, it is easy to prove that $S=\emptyset$ if $\alpha>1$ (see, e.g., \cite{East1}). In \cite{Remex}, Theorem \ref{T1.1} was proved for $\alpha >2/3$. Things get more difficult as $\alpha$ approaches $1/2$. 
In particular, we really need the full force of Theorem \ref{T3.2} in that the exponent $N$ there gets larger and larger as $\alpha$ decreases to $1/2$. Actually, here, too, we show more than stated: For any given function $\epsilon(x)$ with $\epsilon(x)\to 0$ as $x\to\infty$ (no matter how slowly), we can construct a potential $V(x)=O(x^{-\alpha-\epsilon(x)})$, so that $\dim S=2(1-\alpha)$. There are extensions of the results quoted above to far more general settings. Deift and Killip \cite{DK} have proved that there is absolutely continuous spectrum essentially supported by $(0,\infty)$ already if $V\in L_1+L_2$; very recently, Killip has obtained even stronger results in this direction \cite{Kil}. WKB asymptotics off exceptional sets have been established by Christ and Kiselev \cite{CK1,CK2} under very general conditions, including $V\in L_1+L_p$ for some $p<2$ (but not in the borderline case $p=2$, which remains open). A major open question in this context is Simon's problem no.\ 7 \cite{Sim21}: Are there potentials satisfying \eqref{hyp} with $\alpha>1/2$, so that for some boundary condition $\beta$, the operator $H_{\beta}$ has some singular {\it continuous} spectrum?
We organize this paper as follows. In Sect.\ 2, we discuss the construction of the so-called finite gap potentials, that is, of quasi-periodic potentials with finitely many prescribed gaps in the spectrum. Since this material is rather classical, we concentrate on those aspects of the theory that are needed later. The following section introduces the problem of obtaining pointwise bounds on finite gap potentials. We state our main result on finite gap potentials (Theorem \ref{T3.2}) and discuss some general features of this result. The proof is given in Sect.\ 4, 5, 6; this analysis is perhaps the central part of this paper. It depends on a study of the Jacobi inversion problem in cases where a large number of small gaps is present. A major role will be played by a graphical representation of the terms of a perturbation series, which we introduce in Sect.\ 5. With Theorem \ref{T3.2} as new input, we can then obtain Theorem \ref{T1.1}, relying mainly on the ideas already contained in \cite{Remex}. This is done in Sect.\ 7. In fact, with our new approach, the treatment becomes more transparent.
{\bf Acknowledgment:} C.R.\ acknowledges financial support by the Heisenberg program of the Deutsche Forschungsgemeinschaft.
\section{Finite gap potentials}
In this section, we will briefly review the construction and some results on finite gap potentials. We will more or less follow the representation given in \cite{McK}. For further information on this many-faceted topic (for example, the connections to equations of the KdV hierarchy), see \cite{GRT,GesWei}. The needed facts from the theory of compact Riemann surfaces can be found in \cite{FarKra,Spr}. Let energies $E_0<E_1<\ldots<E_{2g}$ be given. Of course, this representation refers to the coordinate maps $\widehat{z} \mapsto z$ discussed above. The Abel-Jacobi map $\alpha$ sends positive divisors of degree $g$ (that is, unordered collections of $g$ points from $S$) to the Jacobi variety of $S$, which is the complex torus equal to $\mathbb C^g$ modulo the period lattice of the holomorphic differentials. This map is onto; in other words, the Jacobi inversion problem can be solved. We will need the Abel-Jacobi map only for divisors of the form $(\widehat{\mu}_1,\ldots,\widehat{\mu}_g)$ with $\mu_i\in [E_{2i-1},E_{2i}]$.
The Abel-Jacobi map is then given by \begin{equation} \label{ajm} \alpha_i(\widehat{\mu}_1,\ldots,\widehat{\mu}_g) = 2\pi \sum_{j=1}^g \int_{\widehat{E}_{2j-1}}^{\widehat{\mu}_j} \omega_i \mod 2\pi; \end{equation} here, we take paths of integration whose projections lie entirely in the corresponding gaps $[E_{2j-1},E_{2j}]$. It follows from classical theorems of Abel and Jacobi \cite[Chapter 10]{Spr} that $\alpha=(\alpha_1,\ldots,\alpha_g)$ is a bijection from the set of divisors specified above onto the real part of the Jacobi variety $\mathbb T^g = [0,2\pi)^g$. Alternately, this fact may be verified directly, using a representation of the Abel-Jacobi map that will be derived below (see eq.\ \eqref{abeljac}). Actually, \eqref{ajm} differs from the standard definition of the Abel-Jacobi map by an additive constant vector and the factor $2\pi$; the choice \eqref{ajm} is more convenient here. Now the stage has been set for the actual construction of the finite gap potentials. Consider the following linear flow on $\mathbb T^g$: $\phi_x \alpha_0= \alpha_0+ \nu x$, where the frequency vector $\nu$ is given by \[ \nu_j = 2\pi \text{ res} \left. \left( (-z)^{1/2}\omega_j \right) \right|_{z=\infty}. \] Using the coordinate $\zeta= (-z)^{-1/2}$ at $z=\infty$, we can easily evaluate the residue to obtain $\nu_j=4\pi c_j$, where $c_j$ is the normalization constant of the polynomial $p_j$ (see \eqref{omega}). Now pull back the flow $\phi_x$ to the set of divisors $(\widehat{\mu}_1,\ldots,\widehat{\mu}_g)$, using the Abel-Jacobi map. In other words, define the functions $\widehat{\mu}_j(x)\in S$ ($\mu_j(x)\in [E_{2j-1},E_{2j}]$) by requiring that \[ \alpha(\widehat{\mu}_1(x),\ldots,\widehat{\mu}_g(x)) = \alpha_0 + \nu x . \] Next, introduce a potential $V$ by the following trace formula: \begin{equation} \label{trace} V_{\alpha_0}(x) = E_0 + \sum_{n=1}^g (E_{2n-1}+E_{2n}-2\mu_n(x)). \end{equation} One can then show that this family of potentials solves the inverse problem stated at the beginning of this section. Namely, the operators $H= -d^2/dx^2 +V_{\alpha_0}(x)$ on $L_2(\mathbb R)$ have purely absolutely continuous spectrum equal to the set given in \eqref{spectrum}. This follows from the following representation of the diagonal of the Green function of $H$: \begin{equation} \label{Green} G(x,x;z) = \frac{1}{2R(z)} \prod_{n=1}^g \left( \mu_n(x)-z\right) . \end{equation} This important formula is derived with the aid of the so-called Baker-Akhieser function, which gives explicit expressions for the solutions $y$ of the DE $Hy=zy$. See \cite{McK} for the details. Actually, the right-hand side of \eqref{Green} defines a meromorphic function on $S$ (with simple poles precisely at the finite branch points) for every fixed $x\in\mathbb R$. The Green function, however, depends on $z\in\mathbb C$; therefore, we must complement \eqref{Green} by recalling that $\text{Im }G(z) \text{ Im }z >0$ for $\text{Im }z\not= 0$. It is useful (and probably also more natural) to interpret the above recipe in a slightly different way. Namely, define $f:\mathbb T^g\to \mathbb R$ implicitly by the trace formula: \begin{equation} \label{defff} f(\beta)=E_0 + \sum_{n=1}^g (E_{2n-1}+E_{2n}-2\mu_n), \end{equation} where $\alpha(\widehat{\mu}_1,\ldots,\widehat{\mu}_g)=\beta$. Then the finite gap potential $V_{\alpha}(x)$ is obtained by evaluating $f$ along the trajectory of $\alpha$ under the flow $\phi_x \alpha= \alpha+ \nu x$. So, if $\nu$ is known, then $V_{\alpha}$ is computed by inverting the Abel-Jacobi map. 
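To fix ideas, it may help to spell out the simplest instance $g=1$ of this recipe; nothing here goes beyond \eqref{trace}, \eqref{Green}, and the residue computation above, specialized to a single gap. The divisor consists of one point $\widehat{\mu}_1$ with $\mu_1\in[E_1,E_2]$, the flow on the circle $\mathbb T^1$ is $\phi_x\alpha_0=\alpha_0+4\pi c_1 x$, and
\[
V_{\alpha_0}(x) = E_0+E_1+E_2-2\mu_1(x), \qquad
G(x,x;z) = \frac{\mu_1(x)-z}{2R(z)} ,
\]
where $R(z)^2=(E_0-z)(E_1-z)(E_2-z)$ and the branch of $R$ is fixed by the condition on $\text{Im }G$ just mentioned. Even in this simplest case, computing $\mu_1(x)$ amounts to inverting the Abel-Jacobi map, which is now a single elliptic integral.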
We remark parenthetically that there is an ``explicit'' solution to this problem which uses Riemann theta functions \cite{GRT,GesWei,Mum}, but these formulae are not of much use here.
\section{Pointwise estimates on finite gap potentials}
Since $\mu_j\in [E_{2j-1},E_{2j}]$, the trace formula \eqref{trace} immediately implies the following bound on $V_{\alpha}(x)$: \begin{equation} \label{l1bd} \sup_{x\in\mathbb R} \left|V_{\alpha}(x)-E_0\right| \le \sum_{n=1}^g (E_{2n}-E_{2n-1}). \end{equation} In other words, $\|V_{\alpha}-E_0\|_{\infty}$ is bounded by the $\ell_1$-norm of the sequence of the gap lengths $E_{2n}-E_{2n-1}$. It is also obvious that nothing more can be said in general: Indeed, if the components of the frequency vector $\nu$ are rationally independent, every trajectory $\{ \phi_x\alpha: x\in \mathbb R \}$ is dense in the torus $\mathbb T^g$, and thus the $L_{\infty}$-norm of $V_{\alpha}(x)-E_0$ is equal to the maximum of $f-E_0$ over the torus, so \eqref{l1bd} holds with equality in this case. However, there still is hope that \eqref{l1bd} {\it can} be improved if the supremum is only taken over a bounded (but large) interval $0\le x\le L$. Then the problem is to choose a trajectory whose initial piece avoids those points of the torus where $|f|$ is large. Our next major goal is to confirm this hope. This will occupy us for the following four sections. It will be convenient to use the centers and the half-lengths of the gaps as new parameters. So, define \[ m_n = \frac{E_{2n-1}+E_{2n}}{2},\quad l_n=\frac{E_{2n}-E_{2n-1}}{2}. \] The finite gap potentials that are needed in the construction underlying Theorem \ref{T1.1} have gaps which are small compared to the bands $[E_{2n},E_{2n+1}]$. Therefore, from now on we concentrate on this situation. To make this condition precise, we introduce \[ l=\max_{n=1,\ldots,g} l_n, \quad d=\min_{n=2,\ldots,g} (m_n-m_{n-1}); \] our condition on the parameters of the construction will be that $(l/d)\ln g$ is sufficiently small. The following theorem is our main result on finite gap potentials. It says that the family of finite gap potentials $\{ V_{\alpha}: \alpha\in\mathbb T^g \}$ contains functions which are, over long intervals, ``almost'' bounded by the $\ell_2$-norm of the gap lengths (rather than the $\ell_1$-norm). According to the above remarks, we now use the following parameters to describe finite gap potentials: $g\in\mathbb N$ ($g\ge 2$) is the number of gaps; $E_0<m_1<m_2<\ldots<m_g$ and $l_1,\ldots,l_g>0$ describe the locations and the lengths of the gaps, respectively. We require that the gaps do not touch or overlap. Clearly, this amounts to demanding that $E_0<m_1-l_1$ and $m_n+l_n<m_{n+1}-l_{n+1}$ for $n=1,\ldots,g-1$.
\begin{Theorem} \label{T3.2}
Let $C_1,C_2>0$ be constants so that $C_1\le m_1-E_0$, $m_g-E_0\le C_2$, and let $N\in \mathbb N_0$. Then there exists a constant $C$, depending only on $C_1,C_2$, and $N$ (but not on the parameters of the finite gap potentials), such that the following holds. For every $L\ge 1$, there exists an $\alpha\in \mathbb T^g$ so that \[ \sup_{0\le x\le L}\left|f(\alpha+\nu x)-\widehat{f}_0\right| \le C \left[ g^{1/2}l \left(\ln (gL)\right)^{1/2} + gl (ld^{-1}\ln g)^{N+1} \right] , \] where $f$ was defined in \eqref{defff} and \[ \widehat{f}_0 = \int_{\mathbb T^g} f(\beta) \, \frac{d\beta}{(2\pi)^g}. \] \end{Theorem}
Recall from Sect.\ 2 that $f(\alpha +\nu x)$ is just the finite gap potential $V_{\alpha}(x)$. In the proof of Theorem \ref{T3.2}, we will in fact show that the assertion holds with large probability if $\alpha\in \mathbb T^g$ is chosen at random.
The first term of the bound is the $\ell_2$-norm of the gap lengths (as promised), times a logarithmic factor. Of course, the point is that the increase in $L$ is slow, so we can still take a relatively large $L$. Note, however, that we no longer get an improvement over the trivial bound $gl$ if $L$ is of the order $e^g$. In the application of Theorem \ref{T3.2} in this paper, we will have $L \le g^{\gamma}$, and then Theorem \ref{T3.2} indeed gives a good bound. From a theoretical point of view, a particularly neat situation arises when the flow $\phi_x$ and thus also the finite gap potentials are periodic with period $p$. In that case, one can take $L=p$ to obtain a bound which is valid for all $x\in\mathbb R$. This remark is not as academic as it may seem, because one can show, using topological arguments, that in situations with small gaps one can get a periodic $\phi_x$ by slightly moving the centers $m_n$. The period will be of the order $p\approx d^{-1}$. See also \cite[Appendix C.2]{DKV} for statements of this type. The second term of the above bound contains the $\ell_1$-norm $gl$, but multiplied by an arbitrarily high power of $ld^{-1} \ln g$. So Theorem \ref{T3.2} is interesting only if this combination is small, but this is the case in our construction for proving Theorem \ref{T1.1}. What exactly ``small'' means obviously depends on $C$ and thus on $C_1,C_2,N$, but on nothing else. This will be very important in the proof of Theorem \ref{T1.1}, where we will apply Theorem \ref{T3.2} to a whole sequence of finite gap potentials. We would like to emphasize the fact that we do not subtract $E_0$ from $V_{\alpha}(x)$ (which is perhaps the constant that comes to mind first), but rather the average of $f$ over the real part of the Jacobi variety. This may be viewed as a renormalization, due to higher order terms. Indeed, $E_0$ is the limiting value of $f$ at $l=0$; now the Theorem says that the zeroth Fourier coefficient $\widehat{f_0}$ (which contains also terms which are of higher order in the small parameter $l/d$) gives a better constant approximation to $V_{\alpha}(x)$. This remark is actually true for approximations by trigonometric polynomials of arbitrarily high degree; we will comment on this point again after having discussed the proof of Theorem \ref{T3.2}. We will give this proof in the following three sections. This is the plan of attack: We will first solve the Jacobi inversion problem up to order $N$ in the small parameter $ld^{-1}\ln g$. (As explained above, the problem of computing finite gap potentials basically is the Jacobi inversion problem, that is, the problem of inverting the Abel-Jacobi map.) This will be done by expanding in Fourier and Taylor series and solving the equations by iteration. The expressions obtained in this way rapidly get out of hand as $N$ increases. However, things become surprisingly transparent if a graphical representation of the perturbation series is introduced. This will be developed in Sect.\ 5, after having discussed some preparatory material in Sect.\ 4. Then, in Sect.\ 6, we extend classical methods, due to Salem and Zygmund \cite{SZ}, for bounding random trigonometric polynomials to finish the proof. In its original version, this argument shows, for example, that for a random choice of signs, $p(x)=\sum_{n=1}^N \pm a_n \cos nx$ is almost bounded by the $\ell_2$-norm of its coefficients: $\|p\|_{\infty} \le C \|a\|_2 (\ln N)^{1/2}$. Theorem \ref{T3.2} can perhaps be viewed as a nonlinear version of this result. 
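It may be worth recalling, in rough outline, the mechanism behind bounds of this kind, since it is also the way Lemma \ref{L5.1} below will be used: for each fixed $x$, the expectation of $e^{\lambda p(x)}$ over the random signs equals
\[
\prod_{n=1}^N \cosh\left(\lambda a_n\cos nx\right) \le e^{\lambda^2\|a\|_2^2/2} \qquad (\lambda\in\mathbb R);
\]
a Chebyshev estimate, a union bound over sufficiently many values of $x$, and optimization over $\lambda$ then produce the factor $(\ln N)^{1/2}$.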
\section{Proof of Theorem \ref{T3.2}: Basic estimates} We want to analyze the Abel-Jacobi map \eqref{ajm}. We can parametrize the divisors $(\widehat{\mu}_1,\ldots, \widehat{\mu}_g)$ (as usual, $\mu_j\in [E_{2j-1},E_{2j}]$) by the points $(\psi_1,\ldots,\psi_g)$ of another copy of the torus $\mathbb T^g = [0,2\pi)^g$ (not to be confused with the real part of the Jacobi variety) as follows: Write \begin{align} \label{subst1} \mu_j & = m_j - l_j \cos\psi_j, \\ \label{subst2} R(\mu_j) & = R_j(\mu_j) i l_j \sin\psi_j, \end{align} where $R_j(z)= R(z)/\sqrt{(E_{2j-1}-z)(E_{2j}-z)}$. This definition is not yet complete since the sign of $R_j(\mu_j)$ on the right-hand side of \eqref{subst2} also needs to be specified. Note that $iR_j(\mu)$ is real and non-zero for $E_{2j-1}<\mu < E_{2j}$. Therefore, it makes sense to require that $iR_j(\mu)$ be positive for odd $j$ and negative for even $j$ and $\mu$ as above. So, for a given $\psi_j\in [0,2\pi)$, eq.\ \eqref{subst1} tells us what the projection $\mu_j$ of $\widehat{\mu}_j$ is, while \eqref{subst2} determines the sheet on which $\widehat{\mu}_j$ lies. In particular, if $\psi_j + \psi'_j = 2 \pi$, then the corresponding points $\widehat{\mu}_j, \widehat{\mu}'_j \in S$ have the same projections but lie on different sheets. The substitution \eqref{subst1}, \eqref{subst2} allows us to write integrals involving the $\omega_j$ in a particularly convenient way. Indeed, recalling \eqref{omega}, we see that the normalization condition \eqref{normal} now takes the form \begin{equation} \label{normal'} 2 \int_0^{\pi} \left. \frac{p_j(\mu)}{iR_n(\mu)} \right|_{\mu=m_n-l_n\cos\psi} \, d\psi = \delta_{jn} . \end{equation} Similarly, the Abel-Jacobi map, now viewed as a map from $\mathbb T^g$ to $\mathbb T^g$ (but still denoted by $\alpha$), can be written as \begin{equation} \label{abeljac} \alpha_j(\psi_1,\ldots,\psi_g) = 2\pi \sum_{n=1}^g \int_0^{\psi_n} \left. \frac{p_j(\mu)}{iR_n(\mu)} \right|_{\mu=m_n-l_n\cos\psi} \, d\psi. \end{equation} Finally, the function $f$ from the trace formula for $V$ (see \eqref{defff}) takes the following form when expressed in terms of the new variables: \begin{equation} \label{f} f=E_0+2\sum_{n=1}^g l_n \cos\psi_n. \end{equation} As already discussed, Theorem \ref{T3.2} is vacuous if $ld^{-1} \ln g$ is not small (if $ld^{-1} \ln g \ge \epsilon$, just take $C=4\epsilon^{-N-1}$). Thus only the case where \begin{equation} \label{epsilon} ld^{-1} \ln g < \epsilon \end{equation} needs proof; here, $\epsilon>0$ can be chosen according to our needs and may depend on $C_1,C_2$, and $N$ (but on nothing else). In addition to the hypotheses of Theorem \ref{T3.2}, we will therefore assume \eqref{epsilon} with a sufficiently small $\epsilon$ from now on. In particular, the reader should keep in mind that \eqref{epsilon} with a suitable $\epsilon=\epsilon(C_1,C_2,N)$ as well as the hypotheses of Theorem \ref{T3.2} are (tacit) assumptions in all lemmas of Sect.\ 4--6. {\it Notational remark.} In the sequel, we will use the following conventions. A ``constant'' (usually denoted by $C$) is a number that only depends on $C_1,C_2$, and $N$. In particular, the constants which are implicit in the Landau notation $O(\cdots)$ may only depend on $C_1$, $C_2$, and $N$. We will sometimes write $a\lesssim b$ instead of $a\le Cb$ (or $a=O(b)$); here, $C$ is a constant in the sense just explained. Similarly, $a\approx b$ is short-hand for two-sided estimates. 
Finally, the value of $C$ may change from one formula to the next, so there is nothing wrong with an inequality like $C+1 \le C$ (to give a blatant example). Assuming \eqref{epsilon}, we can analyze \eqref{normal'}, \eqref{abeljac} in some detail by using Taylor expansions. The following lemma will get us started. \begin{Lemma} \label{L3.1} For all $j,n=1,\ldots,g$, the function $p_j(z)/R_n(z)$ is holomorphic in a neighborhood of $[E_{2n-1},E_{2n}]$, and for all $s\in\mathbb N_0$, \[ \max_{z\in[E_{2n-1},E_{2n}]} \left| \frac{d^s}{dz^s} \frac{p_j(z)}{R_n(z)} \right | \le \left(\frac{C}{d}\right)^s \, \frac{Cs!}{d^{-1}|m_j-m_n|+1} . \] Moreover, \[ c_j = \frac{(m_j-E_0)^{1/2}}{2\pi} \left( 1+ O((l/d)^2\ln g) \right). \] \end{Lemma} {\it Proof.} The first assertion is obvious; in fact, it holds on any simply connected neighborhood of $[E_{2n-1},E_{2n}]$ that avoids the other branch points. Thus, for $z\in [E_{2n-1},E_{2n}]$, we can use the Cauchy formula to represent the derivatives: \begin{equation} \label{cauchy} f^{(s)}(z) = \frac{s!}{2\pi i} \int_K \frac{f(\zeta)}{(\zeta-z)^{s+1}}\, d\zeta . \end{equation} Here, we integrate over the contour $K=\{ \zeta=m_n+(d/2)e^{i\varphi}: 0\le\varphi\le 2\pi \}$ in counter-clockwise direction. Note that $K$ is well separated from all gaps $[E_{2i-1},E_{2i}]$. In particular, if $l/d$ is sufficiently small (for instance, $l/d \le 1/4$ will do), then $|\zeta-z| \gtrsim d$ for all $\zeta\in K$, $z\in[E_{2n-1},E_{2n}]$, so \eqref{cauchy} implies that \begin{equation} \label{taylor} \max_{z\in[E_{2n-1},E_{2n}]} \left| f^{(s)}(z) \right| \le C(C/d)^s s! \, \max_{\zeta\in K} \left| f(\zeta) \right| . \end{equation} We want to apply this to $f=p_j/R_n$, so we need to estimate $p_j$ and $R_n$: We have that for $\zeta\in K$, \begin{align*} \left| R_n(\zeta)\right| & = |\zeta-E_0|^{1/2} \left(\prod_{i\not=n} |m_i-l_i-\zeta|\, |m_i+l_i-\zeta| \right)^{1/2}\\ & \gtrsim \prod_{i\not=n} |m_i-\zeta| \left( 1+ O\left( \frac{l^2}{(m_i-m_n)^2} \right) \right) . \end{align*} Here we used the fact that $m_n-E_0 \approx 1$ by the hypotheses of Theorem \ref{T3.2}. Similarly, since the unknown zeros $\lambda_i^{(j)}$ of $p_j$ satisfy $\lambda_i^{(j)}\in (E_{2i-1},E_{2i})$ and since $c_j>0$, we obtain \[ \left| p_j(\zeta)\right| = c_j \prod_{i\not=j} \left| \lambda_i^{(j)} -\zeta \right| = c_j \prod_{i\not=j} \left| m_i - \zeta \right| \left( 1+ O\left( \frac{l}{|m_i-m_n|+d} \right) \right). \] Now $|m_i-m_j|\ge d|i-j|$, so taking logarithms, we see that for small $ld^{-1}\ln g$, \begin{align*} \prod_{i\not=j} \left( 1+O\left( \frac{l}{|m_i-m_n|+d} \right) \right) & = 1+O(ld^{-1}\ln g),\\ \prod_{i\not=n} \left( 1+O\left( \frac{l^2}{(m_i-m_n)^2} \right) \right) & = 1 + O((l/d)^2). \end{align*} Estimates of this type will be used quite often in the sequel. Combining the bounds just proved, we get \[ \left|\frac{p_j}{R_n}(\zeta)\right| \le c_j\, \frac{C}{d^{-1}|m_j-m_n|+1}, \] and the claim on the derivatives of $p_j/R_n$ would follow with \eqref{taylor} if we knew already the asserted formula for the $c_j$'s. So, it only remains to prove the estimate on $c_j$ stated in Lemma \ref{L3.1}. Taylor's theorem with remainder gives \begin{multline*} \left. \frac{p_j(\mu)}{R_n(\mu)}\right|_{\mu=m_n-l_n\cos\psi}= \frac{p_j(m_n)}{R_n(m_n)}- \frac{d}{dz} \left(\frac{p_j}{R_n}\right)(m_n)\, \, l_n\cos\psi +\\ O\left( c_j\, \frac{(l/d)^2}{d^{-1}|m_j-m_n|+1}\right). \end{multline*} Plug this into \eqref{normal'}. The first order term integrates to zero. 
Also, \[ R_n(m_n) = -i \sqrt{m_n-E_0} (1+O((l/d)^2)) \prod_{i\not=n} (m_i-m_n), \] so we obtain \begin{multline} \label{1.1} \frac{2\pi c_j}{\sqrt{m_n-E_0}} \frac{\prod_{i\not=j} (\lambda_i^{(j)}-m_n)}{\prod_{i\not=n} (m_i-m_n)}(1+O((l/d)^2)) + \\ O\left( c_j\, \frac{(l/d)^2}{d^{-1}|m_j-m_n|+1}\right) = \delta_{jn}. \end{multline} For $j\not=n$, \eqref{1.1} leads to \[ \frac{\lambda_n^{(j)}-m_n}{m_j-m_n}(1+O(ld^{-1}\ln g)) = O\left( \frac{l^2/d}{|m_j-m_n|} \right), \] thus $\lambda_n^{(j)}=m_n+O(l^2/d)$. Using this in \eqref{1.1} with $j=n$, we finally obtain \[ \frac{2\pi c_n}{\sqrt{m_n-E_0}} \prod_{i\not=n} \left( 1+O\left( \frac{l^2d^{-1}}{|m_i-m_n|}\right)\right) = 1 + O\left( c_n\, (l/d)^2\right), \] and the lemma follows. $\square$ We now expand the integrands of the Abel-Jacobi map \eqref{abeljac} in a Fourier series. This, and not a Taylor series, is the appropriate choice here, because it gives the correct ``renormalized'' constant term immediately, without contributions from higher order terms. So write \begin{equation} \label{fourier} \left. \frac{2\pi p_j(\mu)}{iR_n(\mu)}\right|_{\mu=m_n-l_n\cos\psi} = \sum_{m\in\mathbb Z} a_m(j,n) e^{im\psi}. \end{equation} Since the left-hand side is in $C^{\infty}(\mathbb T)$ as a function of $\psi$, this expansion converges uniformly. Moreover, \begin{equation} \label{amjn} a_m(j,n)= \int_0^{2\pi}\left. \frac{p_j(\mu)}{iR_n(\mu)}\right|_{\mu=m_n-l_n\cos\psi} e^{-im\psi}\, d\psi, \end{equation} and, as a consequence, $a_0(j,n)=\delta_{jn}$ (by \eqref{normal'}). \begin{Lemma} \label{L4.1} \[ \left| a_m(j,n)\right| \le \frac{(Cl/d)^{|m|}}{d^{-1}|m_j-m_n|+1} \] \end{Lemma} {\it Proof.} This is trivially satisfied if $m=0$, so we suppose that $m\not= 0$. Then, by Taylor's theorem and Lemma \ref{L3.1}, \[ \frac{p_j}{R_n}(m_n-l_n\cos\psi) = \sum_{k=0}^{|m|-1} b_k(j,n) (-l_n\cos\psi)^k + \rho_{|m|}(\psi), \] where the remainder satisfies the estimate \[ \left| \rho_{|m|}(\psi) \right| \le \frac{(Cl/d)^{|m|}}{d^{-1}|m_j-m_n|+1}. \] Since $\int_0^{2\pi} \cos^k \psi\, e^{-im\psi} \, d\psi =0$ for $|m|>k$, the claim now follows from \eqref{amjn}. $\square$ Using $a_0(j,n)=\delta_{jn}$, we can now plug \eqref{fourier} into \eqref{abeljac} to write the Abel-Jacobi map in the form \begin{equation} \label{4.3} \alpha_j = \psi_j + {\sum_{m\in\mathbb Z}}' \sum_{n=1}^g \frac{a_m(j,n)}{im} (e^{im\psi_n}-1). \end{equation} Here and in the sequel, the prime at the sum sign indicates omission of the term with $m=0$. To obtain \eqref{4.3}, we have integrated \eqref{fourier} term by term, which is allowed because of the uniform convergence. We want to solve the system of equations \eqref{4.3} for $\psi_1,\ldots,\psi_g$. It is useful to separate the leading term, which, due to the smallness of the $a_m(j,n)$'s expressed by Lemma \ref{L4.1}, is $\alpha_j$. So, introduce $\theta_j$ by writing $\psi_j= \alpha_j+\theta_j$; then \eqref{4.3} becomes \begin{equation} \label{1.4} \theta_j + {\sum_{m}}' \sum_{n=1}^g \frac{a_m(j,n)}{im} (e^{im\alpha_n} e^{im\theta_n}-1 ) = 0, \end{equation} and these equations must now be solved for the $\theta_j$'s. Actually, we will compute the $\theta_j$'s only up to an error of order $O((ld^{-1}\ln g)^{N+1})$. Note that by Lemma \ref{L4.1} and \eqref{1.4}, \begin{equation} \label{1.5} |\theta_j| \le 2 {\sum_{m}}' (Cl/d)^{|m|} \sum_{n=1}^g \frac{1}{d^{-1}|m_j-m_n|+1} \lesssim ld^{-1}\ln g, \end{equation} since $d^{-1}|m_j-m_n|\ge |j-n|$. 
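Here, and in many similar estimates below, the logarithmic factor comes from the resulting elementary bound
\[
\sum_{n=1}^g \frac{1}{d^{-1}|m_j-m_n|+1} \le \sum_{n=1}^g \frac{1}{|j-n|+1}
\le 2\sum_{k=1}^{g}\frac{1}{k} \le 2(1+\ln g) \lesssim \ln g
\]
(recall that $g\ge 2$, so that $\ln g$ is bounded away from zero).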
Now we keep only those terms of \eqref{1.4} which are of order $\le N$ in the small parameter $ld^{-1}\ln g$, and we iterate these new equations $N$ times. The following lemma justifies this procedure; we get indeed a good approximation to $\theta_j$. \begin{Lemma} \label{L4.2} Define $\theta_j^{(0)}=0$, \begin{align} \theta_j^{(s+1)}=& -{\sum_{|m|\le N}}' \sum_{n=1}^g \frac{a_m(j,n)}{im}\, ( e^{im\alpha_n} - 1) \nonumber\\ \label{rectheta} & - {\sum_{|m|\le N}}' \sum_{n=1}^g \frac{a_m(j,n)}{im}\, e^{im\alpha_n} \sum_{t=1}^{N-|m|} \frac{ (im\theta_n^{(s)})^t}{t!}, \end{align} $s=0,1,\ldots,N-1$. Then $\left| \theta_j^{(N)} - \theta_j \right| \le C (ld^{-1}\ln g)^{N+1}$. \end{Lemma} {\it Proof.} We will prove by induction that \begin{equation} \label{1.6} \left| \theta_j^{(s)} - \theta_j \right| \le C (ld^{-1}\ln g)^{s+1} \end{equation} for $s=0,1,\ldots,N$. For $s=0$, this is just \eqref{1.5}. Now assume \eqref{1.6} holds for some $s\ge 0$. We claim that then \begin{equation} \label{4.1} \theta_{j}^{(s+1)} = -{\sum_{m\in\mathbb Z}}' \sum_{n=1}^g \frac{a_m(j,n)}{im} \left( e^{im\alpha_n} e^{im\theta_n^{(s)}}-1 \right) + O((ld^{-1}\ln g)^{N+1}). \end{equation} Indeed, comparison with \eqref{rectheta} shows that the error from \eqref{4.1}, which we want to bound by $C(ld^{-1}\ln g)^{N+1}$, is equal to \begin{multline*} {\sum_{|m|\le N}}' \sum_{n=1}^g \frac{a_m(j,n)}{im} \, e^{im\alpha_n} \sum_{t=N+1-|m|}^{\infty} \frac{(im\theta_n^{(s)})^t}{t!} +\\ \sum_{|m|>N} \sum_{n=1}^g \frac{a_m(j,n)}{im} \left( e^{im\alpha_n} e^{im\theta_n^{(s)}}-1 \right). \end{multline*} The induction hypothesis implies that \[ \left| \theta_n^{(s)} \right| \le \left| \theta_n^{(s)} - \theta_n \right| + \left| \theta_n \right| = O(ld^{-1}\ln g), \] so, by Lemma \ref{L4.1}, the first contribution to the error is bounded by a constant times \begin{multline*} {\sum_{|m|\le N}}' \sum_{n=1}^g \frac{(l/d)^{|m|}}{d^{-1}|m_j-m_n|+1}\, (ld^{-1}\ln g)^{N+1-|m|} \\ \lesssim {\sum_{|m|\le N}}' (l/d)^{N+1} (\ln g)^{N+2-|m|} \lesssim (ld^{-1}\ln g)^{N+1}, \end{multline*} as desired. Similarly, the second contribution to the error term can be estimated by \[ \sum_{|m|>N} \sum_{n=1}^g \frac{(Cl/d)^{|m|}}{d^{-1}|m_j-m_n|+1} \lesssim \ln g \sum_{|m|>N} (Cl/d)^{|m|} \lesssim (l/d)^{N+1} \ln g. \] This concludes the proof of \eqref{4.1}. Adding \eqref{4.1} and \eqref{1.4}, we obtain \[ \theta_j^{(s+1)} = \theta_j -{\sum_{m\in\mathbb Z}}' \sum_{n=1}^g \frac{a_m(j,n)}{im}\, e^{im\alpha_n} \left( e^{im\theta_n^{(s)}}-e^{im\theta_n} \right) + O((ld^{-1}\ln g)^{N+1}). \] Lemma \ref{L4.1} together with \[ \left| e^{im\theta_n^{(s)}}-e^{im\theta_n} \right| \lesssim |m| (ld^{-1}\ln g)^{s+1}, \] which follows from the induction hypothesis, now yield the induction statement \eqref{1.6} for $s+1$. $\square$ \section{The Feynman rules} We now introduce, as announced above, a graphical representation of the terms obtained from the recursion \eqref{rectheta}. We have $\theta_j^{(0)}=0$ and \[ \theta_j^{(1)}= -{\sum_{|m|\le N}}' \sum_{n=1}^g \frac{a_m(j,n)}{im} (e^{im\alpha_n}-1). \] This latter expression can be represented by the following graph:\\[0.5cm] \setlength{\unitlength}{1cm} \begin{picture}(6,1) \put(1,0.5){\circle*{0.2}} \put(1,0.5){\line(1,0){2.9}} \put(1,0.5){\vector(1,0){1.6}} \put(4,0.5){\circle{0.3}} \put(0.8,0.8){$j$} \put(4.2,0.8){$n$} \put(2.4,0.2){$m$} \end{picture} \\[0.5cm] Here is the recipe to recover $\theta_j^{(1)}$ from this graph: Associate the factor $a_m(j,n)$ with the edge $m$ with vertices $j$ and $n$. 
The circled vertex $n$ contributes a factor $e^{im\alpha_n}-1$, where $m$ is the parameter of the incoming edge. Finally, multiply by $i/m$ and sum over $m=\pm 1,\ldots,\pm N$ and $n=1,\ldots, g$. These rules, suitably generalized, also work for larger values of $s$. At first sight, the formula for $\theta_j^{(2)}$ looks considerably more complicated than that for $\theta_j^{(1)}$ because now the second line of \eqref{rectheta} also contributes. However, it is not hard to convince oneself that $\theta_j^{(2)}$ can actually be computed by evaluating the following graphs. \\[0.5cm] \begin{picture}(10,6) \put(1,5){\circle*{0.2}} \put(1,5){\line(1,0){1.9}} \put(1,5){\vector(1,0){1.2}} \put(3,5){\circle{0.3}} \put(0.8,5.2){$j$} \put(3.1,5.2){$n$} \put(1.9,4.6){$m$} \put(3.9,5){$+$} \put(5,5){\circle*{0.2}} \put(5,5){\line(1,0){2}} \put(5,5){\vector(1,0){1.1}} \put(7,5){\circle*{0.2}} \put(7,5){\line(1,0){1.9}} \put(7,5){\vector(1,0){1.2}} \put(9,5){\circle{0.3}} \put(4.8,5.2){$j$} \put(6.9,5.2){$n_1$} \put(5.9,4.6){$m_1$} \put(9.1,5.2){$n_2$} \put(7.9,4.6){$m_2$} \put(9.9,5){$+$} \put(1,2){\circle*{0.2}} \put(1,2){\line(1,0){2}} \put(1,2){\vector(1,0){1}} \put(3,2){\circle*{0.2}} \put(0.8,2.2){$j$} \put(1.9,1.6){$m_1$} \put(2.9,2.2){$n_1$} \put(3,2){\line(2,1){1.9}} \put(3,2){\vector(2,1){1}} \put(3,2){\line(2,-1){1.9}} \put(3,2){\vector(2,-1){1}} \put(5,3){\circle{0.3}} \put(5,1){\circle{0.3}} \put(4,2.2){$m_2$} \put(4,1.1){$m_3$} \put(5.1,3.2){$n_2$} \put(5.1,0.6){$n_3$} \put(6,2){$+\quad\quad \cdots$} \end{picture} \\[0.5cm] More precisely, there are $N$ such graphs; they have the common property that every edge except the first one emanates from the second vertex. Again, edges contribute factors of the form $a_m(n,n')$, and for each vertex $\not=j$, there is a factor $e^{im\alpha_n}$ ($e^{im\alpha_n}-1$ if the vertex is marked by a circle). Then one has to multiply by a factor that depends on the edge indices $m_i$ and also on the graph and finally sum over all parameters except $j$. (Explicitly, this factor is \[ i(-1)^{E+1}\frac{m_1^{E-2}}{(E-1)!} \prod_{k=2}^E m_k^{-1}, \] where $E$ is the number of edges.) We are now ready to formulate the rules for computing $e^{i(\alpha_j+\theta_j)}$ from graphs of this type. The quantity $e^{i(\alpha_j+\theta_j)}=e^{i\psi_j}$ is of especial interest here because the function $f$ from \eqref{f} depends on exactly this combination.\\ \begin{center} {\it Feynman rules for $e^{i(\alpha_j+\theta_j)}$} \end{center} \vspace{0.2cm} \begin{enumerate} \item Draw all directed trees with at most $N$ edges. By a ``directed tree'', we mean a connected graph with the property that there is precisely one vertex with only outgoing edges, while for every other vertex, there is exactly one incoming edge. The vertices without outgoing edges are called final vertices (the trivial graph consisting of just one vertex is excluded in this definition); they are marked by circles. Formally, such a graph (with $E$ edges, say) may be represented by $E+1$ symbols $V_1,\ldots,V_{E+1}$ (``vertices'') and a collection of $E$ ordered pairs $(V_i,V_j)$ with $i\not= j$ (``edges''). Two graphs are equal if there is a bijection from one set of vertices to the other which preserves the edges. 
The figure below illustrates the case $N=3$.\\[0.5cm] \setlength{\unitlength}{0.7cm} \begin{picture}(15,3) \put(1,3){\circle*{0.2}} \put(3,3){\circle*{0.2}} \put(3,3){\line(1,0){1.9}} \put(3,3){\vector(1,0){1}} \put(5,3){\circle{0.3}} \put(6,3){\circle*{0.2}} \put(6,3){\line(1,0){1.4}} \put(6,3){\vector(1,0){0.8}} \put(7.5,3){\circle*{0.2}} \put(7.5,3){\line(1,0){1.4}} \put(7.5,3){\vector(1,0){0.8}} \put(9,3){\circle{0.3}} \put(10,3){\circle*{0.2}} \put(10,3){\line(1,0){1.4}} \put(10,3){\vector(1,0){0.8}} \put(11.5,3){\circle*{0.2}} \put(11.5,3){\line(1,0){1.4}} \put(11.5,3){\vector(1,0){0.8}} \put(13,3){\circle*{0.2}} \put(13,3){\line(1,0){1.4}} \put(13,3){\vector(1,0){0.8}} \put(14.5,3){\circle{0.3}} \put(0,1){\circle*{0.2}} \put(0,1){\line(2,1){1.9}} \put(0,1){\line(2,-1){1.9}} \put(0,1){\vector(2,1){1}} \put(0,1){\vector(2,-1){1}} \put(2,2){\circle{0.3}} \put(2,0){\circle{0.3}} \put(3,1){\circle*{0.2}} \put(3,1){\line(2,1){1.9}} \put(3,1){\line(2,-1){1.9}} \put(3,1){\vector(2,1){1}} \put(3,1){\vector(2,-1){1}} \put(5,2){\circle{0.3}} \put(5,0){\circle{0.3}} \put(3,1){\line(1,0){1.9}} \put(3,1){\vector(1,0){1}} \put(5,1){\circle{0.3}} \put(6,1){\circle*{0.2}} \put(6,1){\line(3,2){1.4}} \put(6,1){\line(3,-2){1.4}} \put(6,1){\vector(3,2){0.8}} \put(6,1){\vector(3,-2){0.8}} \put(7.5,2){\circle{0.3}} \put(7.5,0){\circle*{0.2}} \put(7.5,0){\line(1,0){1.4}} \put(7.5,0){\vector(1,0){0.8}} \put(9,0){\circle{0.3}} \put(10,1){\circle*{0.2}} \put(10,1){\line(1,0){2}} \put(10,1){\vector(1,0){1}} \put(12,1){\circle*{0.2}} \put(12,1){\line(2,1){1.9}} \put(12,1){\vector(2,1){1}} \put(12,1){\line(2,-1){1.9}} \put(12,1){\vector(2,-1){1}} \put(14,2){\circle{0.3}} \put(14,0){\circle{0.3}} \end{picture} \vspace{0.4cm} \item For every graph, label the (unique) vertex without incoming edge $j$. Then, attach the indices $n_1,\ldots, n_E$ to the remaining vertices, and label the edges $m_1,\ldots , m_E$. It is of no significance how the indices $n_1,\ldots, n_E$ and $m_1,\ldots, m_E$ are assigned to the vertices and edges, respectively, but once a graph has been labeled, this particular labeling is fixed once and for all. \item These labeled graphs are translated into formulae as follows. An edge labeled $m$ pointing from vertex $n$ to vertex $n'$ stands for a factor $a_m(n,n')$. A non-final vertex with index $n\not=j$ contributes $e^{im\alpha_n}$, where $m$ is the (unique) incoming edge. In case $n$ is a final vertex, the rule is similar except that the factor now is $e^{im\alpha_n}-1$. The vertex $j$ always carries the factor $e^{i\alpha_j}$. Finally, the result is multiplied by a number $c_G(m_1,\ldots,m_E)$ which depends on the graph and the edge indices (and $N$, but this is fixed throughout). In principle, $c_G$ can be computed, as the discussion below will show, but we do not need to know the precise values of the $c_G$'s here. \item Sum over $m_i\not= 0$, $\sum_{i=1}^E |m_i|\le N$, and $n_i=1,\ldots, g$. Finally, sum over all graphs. \end{enumerate} Carrying out these instructions produces a (complicated) function of the $\alpha_n$'s. The claim is that up to an error $O((ld^{-1} \ln g)^{N+1})$, this function coincides with $e^{i(\alpha_j+\theta_j)}$. We will now prove this assertion, which is the central result of this section. This proof, though not really difficult, is not easy to formulate; to get a feeling for the underlying principles, it is advisable to try things out by iterating \eqref{rectheta} a few times and drawing some pictures. Our verbal description will thus be somewhat sketchy. 
The strategy of the proof, however, is straightforward. First of all, we show that the approximations $\theta_j^{(s)}$ from \eqref{rectheta} admit a representation by diagrams. We have demonstrated this already for $s=1,2$, and the general case is hardly more difficult. Then, we use this knowledge to formulate similar rules for $e^{i(\alpha_j+\theta_j)}$. Finally, terms of order $(ld^{-1}\ln g)^{N+1}$ or higher can be dropped on the way. So, our first claim is the following statement: The $\theta_j^{(s)}$ from Lemma \ref{L4.2} can be calculated by evaluating certain graphs according to similar rules like the ones given above. There are a number of differences: All graphs have exactly one edge emanating from $j$, there is no factor $e^{i\alpha_j}$ attached to $j$, the factors $c_G$ are different, the $m_i$'s are summed over the range $|m_i|\le N$, $m_i\not= 0$, and there may be graphs with more than $N$ edges. In fact, we know this already for $s=1,2$, and the proof of the general case is by induction on $s$. By its definition, $\theta_j^{(s+1)}$ is obtained by inserting $\theta_n^{(s)}$ on the right-hand side of \eqref{rectheta}. By induction hypothesis, $\theta_n^{(s)}$ is a sum of many terms each of which corresponds to a graph with certain parameters. We now multiply out the right-hand side of \eqref{rectheta} and only then take the various sums. To prove our claim, it suffices to make the following observations: graphs are multiplied together by attaching them to one another at the ``initial'' vertex $j$. Similarly, multiplying a graph by $a_m(j,n)e^{im\alpha_n}$, as in the second line of \eqref{rectheta}, amounts to attaching this graph to the single-edge graph in such a way that the final vertex of this single-edge graph and the initial vertex of the other graph combine to one new vertex. Also, we may restrict ourselves to graphs with at most $N$ edges and to parameters $m_i$ with $\sum_{i=1}^E |m_i|\le N$. Indeed, since each edge carries a factor $a_m(n,n')$, Lemma \ref{L4.1} implies that the omitted contributions are $O((ld^{-1}\ln g)^{N+1})$. Here, the logarithmic factors come from the denominators of the bound of Lemma \ref{L4.1}, when the vertex indices are summed over. Note also in this context that summing over the $m_i$'s is never dangerous because the restrictions $|m_i|\le N$ imply that there is an a priori bound (depending on $N$ only) on the number of summands. Next, we have that \[ e^{i(\alpha_j+\theta_j)} = e^{i\alpha_j}\sum_{t=0}^N \frac{(i\tilde{\theta}_j^{(N)})^t}{t!} + O((ld^{-1}\ln g)^{N+1}). \] The tilde on the right-hand side indicates the omission of higher order terms, as discussed in the preceding paragraph. Again, the task is to multiply this out. The $\tilde{\theta}_j^{(N)}$ have graphical representations, as we have just seen, and the above remarks about multiplying together different graphs are still relevant here. The asserted rules follow from this. The additional factor $e^{i\alpha_j}$ has simply been attached to the vertex $j$. Note also that the same graph may arise many times when the process of multiplying out is performed, but then we can simply combine these contributions to a single one. This will only affect the numbers $c_G(m_1,\ldots,m_E)$. \section{Bounds along a random trajectory} This last part of the proof of Theorem \ref{T3.2} deals with the problem of bounding $f(\alpha)-\widehat{f}_0$ along trajectories $\alpha=\alpha_0+\nu x$, given the information obtained in the preceding section. 
First of all, recall from \eqref{f} that \begin{equation} \label{f0} f(\alpha)= E_0+2\sum_{j=1}^g l_j\cos\psi_j =E_0 +2 \text{ Re }\sum_{j=1}^g l_j e^{i(\alpha_j+\theta_j)}. \end{equation} We will first convince ourselves that $f$ is of the form \begin{equation} \label{ff} f(\alpha)=\widehat{f}_0 + \sum\nolimits_{|m|_1\le N+1}' b(m)\sin (m\cdot\alpha+\varphi_m) +O(lg(ld^{-1}\ln g)^{N+1}). \end{equation} We use a slightly different notation in this section in that now $m=(m_1,\ldots,m_g)$ with $m_i\in\mathbb Z$. Also, $|m|_1=\sum |m_i|$ and $m\cdot\alpha = \sum m_i\alpha_i$; finally, the prime at the sum sign now means omission of the summand with $m=(0,\ldots,0)$. To prove \eqref{ff}, use \eqref{f0} and think of the exponentials $e^{i(\alpha_j+\theta_j)}$ as being evaluated according to the Feynman rules. Then $\alpha$-dependent factors come in only through the vertices of the graphs; more precisely, vertices contribute factors of the form $e^{im\alpha_n}$ (or $e^{im\alpha_n}-1$ if the vertex is final), where $m$ is the index of the incoming edge. The vertex $j$ always contributes a factor $e^{i\alpha_j}$, so each graph is a sum of $\alpha$-independent factors times an exponential of the form $\exp(i(\alpha_j +\sum m_i\alpha_{n_i}))$. Since rule 4 imposes the restriction $\sum |m_i| \le N$, a rearrangement of terms gives \eqref{ff}, as asserted. Clearly, this argument has not only established \eqref{ff}, but it has also indicated how the coefficients $b(m)$ can be computed, at least in principle, using the graphs introduced in Sect.\ 5. This will become very important in a moment. (Just proving \eqref{ff} is easy and does not require the Feynman rules.) To prove Theorem \ref{T3.2}, we need to estimate the second term on the right-hand side of \eqref{ff}. Call this sum $f_N(\alpha)$. The main step will be to prove the following estimate. Given Lemma \ref{L5.1}, we will then be able to apply the methods of \cite{SZ}. \begin{Lemma} \label{L5.1} There is a constant $C$, so that for every $\lambda\in\mathbb R$, \[ \int_{\mathbb T^g} e^{\lambda f_N(\alpha)}\, \frac{d\alpha}{(2\pi)^g} \le e^{C\lambda^2 l^2g}. \] \end{Lemma} {\it Remark.} Our ``definition'' of $f_N$ is not quite complete, since \eqref{ff} does {\it not} uniquely determine $f_N$, given $f$. Lemma \ref{L5.1} really asserts that for some fixed choice of $f_N$, consistent with \eqref{ff}, the stated estimate holds. More precisely, $f_N$ is obtained by going from \eqref{f0} to \eqref{ff} in exactly the way described above. The following proof will also clarify this. {\it Proof.} We will further decompose $f_N$ and then analyze the individual terms separately. To this end, we first introduce equivalence classes of indices $m$. Namely, we say that $m$ and $m'$ are equivalent if they have the same non-zero entries, taking the order into account. To put this into more formal language, write \[ m=(0,\ldots,0,k_1,0,\ldots,0,k_2,0,\ldots,0,k_r,0,\ldots,0), \] with $r\in\mathbb N$ and $k_i\not= 0$ for all $i=1,\ldots,r$. Then $m$ and $m'$ are equivalent precisely if $r=r'$ and $k_i=k'_i$ for all $i$. This definition may not look very useful at first sight, but recall that $N$ (which bounds the $\ell_1$-norm of $m$) is fixed while $g$ (which is the length of the vectors $m$) is typically large, so the vectors $m$ indeed have only relatively few non-zero entries. The number of equivalence classes in the set of indices $\{ m\in\mathbb Z^g : |m|_1 \le N+1 \}$ only depends on $N$, but not on $g$. 
(Note, however, that the cardinality of the equivalence classes themselves does go to infinity as $g$ increases.) Now fix an equivalence class $(m_0)$ and consider \[ \sum_{m\in(m_0)} b(m)\sin(m\cdot\alpha+\varphi_m). \] Denote the positions of the non-zero entries $k_i$ of $m\in (m_0)$ by $n_i$. Then, if we vary the $n_i$'s (respecting the obvious restrictions $1\le n_10$, we can write this inequality in the form \[ E\left( \exp \left( \frac{\lambda}{2} \left[ M - 2C\lambda l^2 g - \frac{2}{\lambda} \ln (4gL) \right] \right) \right) \le \frac{1}{2} , \] with $E(\cdots)$ denoting the expectation taken with respect to the probability measure $(2\pi)^{-g}d\alpha$ on the torus $\mathbb T^g$. By a Chebyshev estimate, the inequality \[ M(\alpha) \le 2C\lambda gl^2 + \frac{2}{\lambda} \ln (4gL) \] holds with probability $\ge 1/2$. The parameter $\lambda>0$ is still at our disposal, the optimal choice being \[ \lambda= \left( \frac{\ln (4gL)}{Cgl^2} \right)^{1/2} . \] Then the bound becomes \[ M(\alpha) \le 4C^{1/2} g^{1/2}l \left( \ln (4gL) \right)^{1/2}, \] and this holds for $\alpha$'s from a set of $(2\pi)^{-g}d\alpha$ measure at least $1/2$. The proof of Theorem \ref{T3.2} is complete. $\square$ Moreover, by re-examining the reasoning of this section, we see that we can also prove the more general result already mentioned. We can obtain a whole series of pointwise approximations to $V_{\alpha}(x)$. More specifically, the difference between $f(\phi_x\alpha)$ and those terms of $\sum b(m)\sin(m\cdot \phi_x\alpha+\varphi_m)$ for which $|m|_1\le M$ is \[ \lesssim g^{1/2}l (ld^{-1}\ln g)^M \left( \ln (gL) \right)^{1/2} + gl (ld^{-1}\ln g)^{N+1} \] for suitable $\alpha$. In other words, the Fourier series of $f$ (viewed as a function on the Jacobi variety), up to some order, gives a very good pointwise approximation to $V_{\alpha}(x)$ with positive probability (in fact, with as large probability as we please) if $\alpha$ is chosen at random. With $M=0$, Theorem \ref{T3.2} is recovered. We do not need these more refined statements to prove Theorem \ref{T1.1}. \section{Proof of Theorem \ref{T1.1}} The basic idea of the construction of \cite{Remex} was to glue together suitably chosen periodic potentials. In this paper, we will instead use finite gap potentials with gaps of equal length. Roughly speaking, the construction runs as follows. We will choose the first finite gap potential $V_1$ so that all gaps lie in, let us say, $[1,2]$. $V_2$ will have much smaller gaps; also, these new gaps will be contained in the gaps of $V_1$. If we continue in this way, the intersection over all $n$ of the unions of the gaps of $V_n$ will be a Cantor type set whose dimension is easily controlled, provided there is an appropriate scaling. Moreover, the set $S$ defined in \eqref{defS} will contain this Cantor type set because if the energy $E$ is in a gap of $V_n$, the solutions to the Schr\"odinger equation are on average exponentially increasing or decreasing and hence do not satisfy \eqref{WKB}. Of course, we must also take care of the required decay of $V(x)$, that is, $V_n$ must be sufficiently small for large $n$. The bounds on $V_n$ will be established with the aid of Theorem \ref{T3.2}. We start by investigating the solutions of \eqref{se} for finite gap potentials $V$ and energies $E$ which lie in some gap of $V$. \begin{Lemma} \label{L6.1} Let $V(x)$ be a finite gap potential whose parameters satisfy the assumptions of Theorem \ref{T3.2}. 
Then there exists an $\epsilon=\epsilon(C_1,C_2,N)>0$ and a constant $C=C(C_1,C_2,N)$, such that for $ld^{-1}\ln g<\epsilon$, the following holds. If $|E-m_n| \le l_n/2$ for some $n\in \{ 1,\ldots, g\}$, then there is a solution $y(x)$ of the Schr\"odinger equation \eqref{se} with $y(x_0)=1$ for some $x_0\in [0,1]$ and \[ \int_{x_0}^{\infty} |y(x)|^2 \, dx \le C/l_n . \] \end{Lemma} {\it Remark.} This statement cannot, in general, hold with a fixed, prescribed $x_0$ because the decaying solution has zeros. Roughly speaking, the lemma says that there is a solution which has some decay over intervals of length $\gg l_n^{-1}$. {\it Proof.} Our starting point is the following formula (see, for example, \cite[Chapter 9]{CL}): \begin{equation} \label{m} \int_x^{\infty} |f(t,z)|^2 \, dt = \frac{\text{Im }m_x(z)}{\text{Im }z}. \end{equation} Here, $m_x$ is the $m$-function of $-d^2/dt^2 + V(t)$ on $[x,\infty)$ with Dirichlet boundary conditions at $t=x$. More specifically, let $u,v$ be the solutions of $-y''+Vy=zy$ with the initial values $u(x,z)=v'(x,z)=1$, $u'(x,z)=v(x,z)=0$ and write \[ f(t,z) = u(t,z)+m_x(z) v(t,z); \] then $m_x(z)$ is defined by requiring that $f\in L_2(x,\infty)$. The Green function of the whole line problem is related to the $m$-function by $G(x,x;z)=(m_x^{-}(z)-m_x(z))^{-1}$, where $m_x^{-}$ is the $m$-function of the operator on $L_2(-\infty,x)$ (see again \cite{CL}). Since the imaginary parts of $m_x^{-}$ and $m_x$ have opposite signs, the right-hand side of \eqref{m} is less than $-\text{Im }G(x,x;z)^{-1}/\text{Im }z$. So, if we use \eqref{Green} and abbreviate $\prod (\mu_j(x)-z) = U_x(z)$, then \eqref{m} becomes \begin{equation} \label{m1} \int_x^{\infty} |f(t,z)|^2 \, dt < -\frac{2}{\text{Im }z} \, \text{Im }\frac{R(z)}{U_x(z)} . \end{equation} Here, the sign of $R(z)$ is determined by the fact that $\text{Im }G(x,x;z)$ has the same sign as $\text{Im }z$ for $\text{Im }z\not= 0$ (compare the discussion following \eqref{Green}). Now let $E$ be as in the hypothesis, and put $z=E+i\delta$ with $\delta>0$. By slightly changing $x$ if necessary, we may assume that $\mu_n(x)\not= E$. Then $E$ is not in the spectrum of the operator on $L_2(x,\infty)$ with Dirichlet boundary conditions. This is so simply because $\mu_n(x)$ is the only eigenvalue in the gap $(E_{2n-1},E_{2n})$. Thus $m_x(z)$ and $R(z)/U_x(z)$ are holomorphic in a neighborhood of $z=E$. For this latter function, this may of course be seen by direct inspection. Moreover, $R(E)/U_x(E)$ is real. Therefore, the right-hand side of \eqref{m1} converges to $-2(R/U_x)'(E)$ as $\delta\to 0+$, while the function $f(t,E+i\delta)$ tends to $f(t,E)=u(t,E)+m_x(E)v(t,E)$. Fatou's Lemma together with \eqref{m1} imply \[ \int_x^{\infty} |f(t,E)|^2\, dt \le -2 \, \frac{d}{dz} \left( \frac{R}{U_x} \right) (E) . \] We have that $f(x,E)=1$, so it remains to evaluate $(R/U_x)'(E)$. To this end, note that \[ \left( \ln \frac{R}{U_x} \right)'(E) = \frac{1}{2(E-E_0)} +\sum_{j=1}^g \left( \frac{1}{\mu_j(x)-E} - \frac{1}{2(E_{2j-1}-E)} - \frac{1}{2(E_{2j}-E)} \right). \] Estimating as in the proof of Lemma \ref{L3.1}, we see that \[ \sum_{j\not= n} \left| \frac{1}{\mu_j(x)-E} - \frac{1}{2(E_{2j-1}-E)} - \frac{1}{2(E_{2j}-E)} \right| \lesssim ld^{-2} . \] Furthermore, the term with $j=n$ can be bounded by $C/|\mu_n(x)-E|$. Since this is $\gtrsim l_n^{-1}$ which is much larger than $ld^{-2}$, we can in fact estimate the whole logarithmic derivative by $C/|\mu_n(x)-E|$. 
Finally, similar arguments show that $|(R/U_x)(E)| \lesssim l_n/|\mu_n(x)-E|$, so we conclude that \[ \int_x^{\infty} |f(t,E)|^2\, dt \le \frac{Cl_n}{(\mu_n(x)-E)^2} . \] The proof is finished by observing that $\mu_n(x)$ moves by an amount $\gtrsim l_n$ if $x$ varies over an interval of length one. Indeed, if we again use the variables $\psi_j$ (see \eqref{subst1}, \eqref{subst2}), then the $\psi_j$'s evolve according to the differential equations \[ \frac{d\psi_n}{dx} = \frac{2i R_n(m_n-l_n\cos\psi_n)} {\prod_{j\not= n} (m_j-m_n-l_j\cos\psi_j+l_n\cos\psi_n)}, \] and the right-hand sides are $\approx 1$, independently of the positions of the $\psi_j(x)$'s. $\square$
Now let $a_1=0$, $a_{n+1}=a_n+L_n$, where $L_n>0$ will be chosen later. Then $V$ will be of the form \[ V(x) = \sum_{n=1}^{\infty}\chi_{(a_n,a_{n+1})}(x) V_n(x-a_n); \] the building blocks $V_n$ are finite gap potentials. We now pick these $V_n$'s. We basically keep the notation of the preceding sections, except that there is now an additional index $n$. The gaps of $V_n$ are taken to be of equal length $l_n$, and $g_n$ denotes the number of gaps of $V_n$. Let $\alpha\in (1/2,1)$ be the exponent from \eqref{hyp} (if $\alpha=1$, there is nothing to prove). We abbreviate $2(1-\alpha)=D$, so $D\in (0,1)$, and $D$ is the dimension the set $S$ from \eqref{defS} must have. Fix a number $a>(1-D)^{-1}$ and put \[ l_n = \exp(-a^n). \] A Cantor type set with $g_n$ intervals of length $l_n$ as its $n$th approximation has dimension $D$ if there is a scaling of the type $g_nl_n^D\sim 1$. This suggests taking $g_n\sim \exp(Da^n)$, but for technical reasons, the actual definition is slightly different. First of all, choose a sequence $\epsilon_n>0$ which tends to zero, but so slowly that $\epsilon_na^n- \epsilon_{n-1}a^{n-1}\to \infty$ and $a^n\exp(-\epsilon_n a^n) \to 0$. (In fact, we could take $\epsilon_n=q^n$ with $a^{-1}(1-D)^{-1}$). Once $E_0^{(n)}$ has been picked, $G_n$ can again be defined as in \eqref{gn}, but with the shifted centers $E_0^{(n)}+m_n(k)$ taking the role of $m_n(k)$. This two-step procedure (first choose $m_n(k)$'s, then shift by an appropriate $E_0^{(n)}$ to make $\widehat{f}_0 = 0$) can now be used to pick the $m_n(k)$'s and $E_0^{(n)}$ (inductively) for all $n\ge n_0$. Note that the construction ensures that $G_n\subset G_{n-1}$. We must still choose, for every $n\ge n_0$, a particular potential from the corresponding family $V_{\alpha_0}$ of finite gap potentials. Fortunately, this choice is easy: We fix once and for all a sufficiently large $N\in\mathbb N$ (where ``sufficiently large'' will be made precise at the end of the proof) and then simply take a $V_n$ that satisfies the conclusion of Theorem \ref{T3.2} for $L=L_n$. Note also that the assumptions of Theorem \ref{T3.2} on the location of the gaps (that is, $C_1 \le m_n(k)-E_0^{(n)}\le C_2$ for all $k=1,\ldots,g_n$) hold with $n$-independent constants $C_1,C_2>0$ because $E_0^{(n)}$ lies in a small interval centered at zero while all gaps are in $[1,2]$. Therefore, the constant $C$ from the statement of Theorem \ref{T3.2} is also independent of $n$. Finally, for $n (1+\delta)^{-1}$. Now routine estimates show that if $\delta>0$ was chosen sufficiently small, then \eqref{lsg} implies that \[ \int_{x_0^{(n)}}^{a_{n+1}} \left|f_n(x)\right|^2 \, dx \ge C_0 L_n = AC_0/l_n. \] This inequality contradicts \eqref{contra} if $A$ is sufficiently large, so $T\subset S$, as claimed. The next step is to prove that $\dim T = D$.
To this end, we introduce a Borel measure $\mu$ that reflects the self-similar scaling structure of $T$. More specifically, $\mu$ gives equal weight to the intervals of $G_n$ for every $n$: $\mu(I_n)= g_n^{-1}$ if $I_n$ is one of the intervals $[m_n(k)-l_n/2, m_n(k)+l_n/2]$. Moreover, we also demand that $\mu$ be supported by $T$: $\mu(\mathbb R \setminus T)=0$. It is not hard to show (for instance, by considering approximations $\mu_n$ supported by $G_n$) that there indeed exists a unique Borel (probability) measure $\mu$ satisfying these requirements. We will now establish the following property of the generalized derivatives of $\mu$: For every fixed $\gamma < D$, we have that \begin{equation} \label{deri} \lim_{\delta\to 0+} \sup_{|I|\le \delta} \frac{\mu(I)}{|I|^{\gamma}} =0 . \end{equation} The supremum is over all intervals $I\subset\mathbb R$ of length at most $\delta$. If \eqref{deri} holds, then, by general facts on Hausdorff measures \cite[Section 3.4, Theorem 67]{Rog}, $\mu$ gives zero weight to sets of dimension strictly less than $D$, and therefore $\dim T\ge D$, as desired. The converse inequality $\dim T\le D$ does not need explicit proof (although that would actually be easy to do) because we know that always $\dim S\le 2(1-\alpha)$ (this is the result whose optimality we are about to prove), and thus $\dim T\le\dim S\le D$ will follow automatically once we have established that $V(x)=O(x^{-\alpha})$. So let us prove \eqref{deri}: Fix $\gamma$, and let $I$ be an interval with $|I|\le\delta$, where $\delta>0$ is small. Then, define $n\in\mathbb N$ by requiring that $l_n<|I|\le l_{n-1}$. Clearly, $n$ is large if $\delta$ is small. We first treat the case when $|I| \le d_n$. Recall that $d_n$ is the minimal distance between adjacent gaps of $V_n$. So the above assumption implies that $I$ intersects at most two of the intervals that build up $G_n$. Each of these intervals has measure $g_n^{-1}$, hence \[ \frac{\mu(I)}{|I|^{\gamma}} \le \frac{2}{g_n |I|^{\gamma}} \le \frac{2}{g_n l_n^{\gamma}}. \] On the other hand, if $|I|> d_n$, then the number of subintervals of $G_n$ intersecting $I$ is $\le 3|I|/d_n$, thus in this case, \[ \frac{\mu(I)}{|I|^{\gamma}} \le \frac{3|I|^{1-\gamma}}{d_ng_n} \le 3\, \frac{g_{n-1}l_{n-1}}{d_ng_n}\, \frac{1}{g_{n-1}l_{n-1}^{\gamma}} . \] Now $g_nl_n^{\gamma} \gtrsim \exp( \sigma a^n)$, where $\sigma>0$ depends on $\gamma$, and $g_{n-1}l_{n-1}/(d_n g_n) \lesssim \exp(\epsilon_n a^n)$; indeed, relations of this type motivated our definition of $g_n$ and $d_n$. Since, as noted above, $n\to\infty$ as $\delta\to 0+$, \eqref{deri} now follows. It remains to show that $V$ satisfies the bound \eqref{hyp}. So, let $x\in (a_n,a_{n+1})$ with large $n$. Recall that $\widehat{f}_0=0$, where $f$ is the function from the trace formula for $V_n$. Theorem \ref{T3.2} therefore implies that \[ \left|V(x) \right| \le C \left[ g_n^{1/2} l_n \left(\ln (g_nL_n) \right)^{1/2} + g_nl_n (l_nd_n^{-1}\ln g_n)^{N+1} \right] \] for these $x$. On the other hand, \[ x \le a_{n+1} =\sum_{m=1}^n L_m \lesssim \sum_{m=1}^n l_m^{-1} \lesssim l_n^{-1}, \] and \eqref{hyp} indeed follows, provided we took \[ N+1\ge \frac{1-\alpha}{2\alpha - 1}\, \frac{a}{a-1}. \] (As expected, $N\to\infty$ as $\alpha\to 1/2+$.) $\square$ Actually, doing these final estimates carefully, we obtain a stronger bound of the form $x^{-\alpha-\epsilon_n/2}(\ln x)^{1/2}$. 
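{\it Remark.} To make the condition on $N$ concrete, consider for instance $\alpha=3/4$ (these particular values serve only as an illustration). Then $D=2(1-\alpha)=1/2$, any $a>(1-D)^{-1}=2$ is admissible, and with $a=3$ the requirement becomes \[ N+1\ge \frac{1-\alpha}{2\alpha -1}\, \frac{a}{a-1} = \frac{1/4}{1/2}\cdot\frac{3}{2} = \frac{3}{4}, \] so that already $N=1$ suffices; only as $\alpha$ approaches $1/2$ does the factor $(2\alpha -1)^{-1}$ force large values of $N$.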
The strengthening of Theorem \ref{T1.1} mentioned in Sect.\ 1 follows from this stronger bound by taking a sequence $\epsilon_n$ that tends to zero sufficiently slowly.
\begin{thebibliography}{99}
\bibitem{CK} M.\ Christ and A.\ Kiselev, Absolutely continuous spectrum for one-dimensional Schr\"odinger operators with slowly decaying potentials: some optimal results, J.\ Amer.\ Math.\ Soc.\ {\bf 11} (1998), 771--797.
\bibitem{CK1} M.\ Christ and A.\ Kiselev, WKB asymptotic behavior of almost all generalized eigenfunctions for one-dimensional Schr\"odinger operators with slowly decaying potentials, preprint (2000).
\bibitem{CK2} M.\ Christ and A.\ Kiselev, WKB and spectral analysis of one-dimensional Schr\"odinger operators whose potentials have slowly decaying derivatives, preprint (2000).
\bibitem{CL} E.A.\ Coddington and N.\ Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, New York, 1955.
\bibitem{DK} P.\ Deift and R.\ Killip, On the absolutely continuous spectrum of one-dimensional Schr\"odinger operators with square summable potentials, Commun.\ Math.\ Phys.\ {\bf 203} (1999), 341--347.
\bibitem{DKV} P.\ Deift, T.\ Kriecherbauer, and S.\ Venakides, Forced lattice vibrations II, Commun.\ Pure Appl.\ Math.\ {\bf 48} (1995), 1251--1298.
%\bibitem{East} M.S.P.\ Eastham, The Spectral Theory of Periodic Differential Equations, Scottish Academic Press, London, 1973.
\bibitem{East1} M.S.P.\ Eastham, The Asymptotic Solution of Linear Differential Systems, London Math.\ Soc.\ Monographs New Series {\bf 4}, Clarendon Press, Oxford, 1989.
\bibitem{FarKra} H.M.\ Farkas and I.\ Kra, Riemann Surfaces, Springer, New York, 1980.
\bibitem{GRT} F.\ Gesztesy, R.\ Ratnaseelan, and G.\ Teschl, The KdV hierarchy and associated trace formulas, in I.\ Gohberg (ed.) et al., Oper.\ Theory: Advances and Applications {\bf 87} (1996), 125--163.
\bibitem{GesWei} F.\ Gesztesy and R.\ Weikard, Spectral deformations and soliton equations, in W.F.\ Ames (ed.) et al., Differential equations with applications to mathematical physics, 101--139, Academic Press, Boston, 1993.
\bibitem{GP} D.J.\ Gilbert and D.B.\ Pearson, On subordinacy and analysis of the spectrum of one-dimensional Schr\"odinger operators, J.\ Math.\ Anal.\ Appl.\ {\bf 128} (1987), 30--56.
%\bibitem{Hoch} H.\ Hochstadt, Function-theoretic properties of the discriminant of Hill's equation, Math.\ Z.\ {\bf 82} (1963), 237--242.
\bibitem{Kah} J.-P.\ Kahane, Some Random Series of Functions, Cambridge University Press, Cambridge, 1985.
\bibitem{Kil} R.\ Killip, Perturbations of one-dimensional Schr\"odinger operators preserving the absolutely continuous spectrum, Ph.D.\ thesis, Caltech, 2000; electronically available at {\tt http://www.ma.utexas.edu/mp\_arc-bin/mpa?yn=00-326}.
\bibitem{KLS} A.\ Kiselev, Y.\ Last, and B.\ Simon, Modified Pr\"ufer and EFGP transforms and the spectral analysis of one-dimensional Schr\"odinger operators, Commun.\ Math.\ Phys.\ {\bf 194} (1998), 1--45.
\bibitem{KotU} S.\ Kotani and N.\ Ushiroya, One-dimensional Schr\"odinger operators with random decaying potentials, Commun.\ Math.\ Phys.\ {\bf 115} (1988), 247--266.
\bibitem{McK} H.P.\ McKean, Variation on a theme of Jacobi, Commun.\ Pure Appl.\ Math.\ {\bf 38} (1985), 669--678.
%\bibitem{McKM} H.P.\ McKean and P.\ van Moerbeke, The spectrum of Hill's equation, Inv.\ Math.\ {\bf 30} (1975), 217--274.
\bibitem{Mum} D.\ Mumford, Tata Lectures on Theta 2, Birkh\"auser-Verlag, Basel, 1984.
\bibitem{Nab} S.N.\ Naboko, Dense point spectra of Schr\"odinger and Dirac operators, Theor.\ and Math.\ Phys.\ {\bf 68} (1986), 646--653.
\bibitem{Remac} C.\ Remling, The absolutely continuous spectrum of one-dimensional Schr\"odinger operators with decaying potentials, Commun.\ Math.\ Phys.\ {\bf 193} (1998), 151--170.
\bibitem{Remdim} C.\ Remling, Bounds on embedded singular spectrum for one-dimensional Schr\"odinger operators, Proc.\ Amer.\ Math.\ Soc.\ {\bf 128} (2000), 161--171.
\bibitem{Remex} C.\ Remling, Schr\"odinger operators with decaying potentials: some counterexamples, Duke Math.\ J.\ {\bf 105} (2000), 463--496.
\bibitem{Rog} C.A.\ Rogers, Hausdorff Measures, Cambridge University Press, Cambridge, 1970.
\bibitem{SZ} R.\ Salem and A.\ Zygmund, Some properties of trigonometric series whose terms have random signs, Acta Math.\ {\bf 91} (1954), 245--301.
\bibitem{Sim82} B.\ Simon, Some Jacobi matrices with decaying potential and dense point spectrum, Commun.\ Math.\ Phys.\ {\bf 87} (1982), 253--258.
\bibitem{Sbdd} B.\ Simon, Bounded eigenfunctions and absolutely continuous spectra for one-dimensional Schr\"odinger operators, Proc.\ Amer.\ Math.\ Soc.\ {\bf 124} (1996), 3361--3369.
\bibitem{Simpp} B.\ Simon, Some Schr\"odinger operators with dense point spectrum, Proc.\ Amer.\ Math.\ Soc.\ {\bf 125} (1997), 203--208.
\bibitem{Sim21} B.\ Simon, Schr\"odinger operators in the twenty-first century, in A.\ Fokas et al.\ (eds.), Mathematical Physics 2000, pp.\ 283--288, Imperial College Press, London, 2000.
\bibitem{Spr} G.\ Springer, Introduction to Riemann Surfaces, Addison-Wesley, Massachusetts, 1957.
\bibitem{Stolz} G.\ Stolz, Bounded solutions and absolute continuity of Sturm-Liouville operators, J.\ Math.\ Anal.\ Appl.\ {\bf 169} (1992), 210--228.
\bibitem{vNW} J.\ von Neumann and E.\ Wigner, \"Uber merkw\"urdige diskrete Eigenwerte, Phys.\ Z.\ {\bf 30} (1929), 465--467.
\end{thebibliography}
\end{document}
---------------0107090255950--