Content-Type: multipart/mixed; boundary="-------------0307211319805" This is a multi-part message in MIME format. ---------------0307211319805 Content-Type: text/plain; name="03-341.keywords" Content-Transfer-Encoding: 7bit Content-Disposition: attachment; filename="03-341.keywords" ratio asymptotics, orthogonal polynomials, Jacobi matrices ---------------0307211319805 Content-Type: application/x-tex; name="simon_ratios.TEX" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="simon_ratios.TEX" \documentclass[reqno,centertags, 12pt]{amsart} \usepackage{amsmath,amsthm,amscd,amssymb} \usepackage{latexsym} %\usepackage[notref,notcite]{showkeys} %\usepackage{showkeys} \sloppy %%%%%%%%%%%%% fonts/sets %%%%%%%%%%%%%%%%%%%%%%% \newcommand{\bbN}{{\mathbb{N}}} \newcommand{\bbR}{{\mathbb{R}}} \newcommand{\bbD}{{\mathbb{D}}} \newcommand{\bbP}{{\mathbb{P}}} \newcommand{\bbZ}{{\mathbb{Z}}} \newcommand{\bbC}{{\mathbb{C}}} \newcommand{\bbQ}{{\mathbb{Q}}} \newcommand{\bbT}{{\mathbb{T}}} \newcommand{\calH}{{\mathcal H}} %%%%%%%%%%%%%%%%%% abbreviations %%%%%%%%%%%%%%%%%%%%%%%% \newcommand{\dott}{\,\cdot\,} \newcommand{\no}{\nonumber} \newcommand{\lb}{\label} \newcommand{\f}{\frac} \newcommand{\ul}{\underline} \newcommand{\ol}{\overline} \newcommand{\ti}{\tilde } \newcommand{\wti}{\widetilde } \newcommand{\Oh}{O} \newcommand{\oh}{o} \newcommand{\marginlabel}[1]{\mbox{}\marginpar{\raggedleft\hspace{0pt}#1}} \newcommand{\tr}{\text{\rm{Tr}}} \newcommand{\dist}{\text{\rm{dist}}} \newcommand{\loc}{\text{\rm{loc}}} \newcommand{\spec}{\text{\rm{spectrum}}} \newcommand{\rank}{\text{\rm{rank}}} \newcommand{\ran}{\text{\rm{ran}}} \newcommand{\dom}{\text{\rm{dom}}} \newcommand{\ess}{\text{\rm{ess}}} \newcommand{\ac}{\text{\rm{ac}}} \newcommand{\s}{\text{\rm{s}}} \newcommand{\sing}{\text{\rm{sc}}} \newcommand{\pp}{\text{\rm{pp}}} \newcommand{\supp}{\text{\rm{supp}}} \newcommand{\AC}{\text{\rm{AC}}} \newcommand{\bi}{\bibitem} \newcommand{\hatt}{\widehat} \newcommand{\beq}{\begin{equation}} \newcommand{\eeq}{\end{equation}} \newcommand{\ba}{\begin{align}} \newcommand{\ea}{\end{align}} \newcommand{\veps}{\varepsilon} %\newcommand{\Ima}{\operatorname{Im}} %\newcommand{\Real}{\operatorname{Re}} %\newcommand{\diam}{\operatorname{diam}} % use \hat in subscripts % and upperlimits of int. 
%%%%%%%%%%%%% marginal warnings %%%%%%%%%%%%%%%% % ON: \newcommand{\TK}{{\marginpar{x-ref?}}} % OFF: %\newcommand{\TK}{} % % Rowan's unspaced list % \newcounter{smalllist} \newenvironment{SL}{\begin{list}{{\rm\roman{smalllist})}}{% \setlength{\topsep}{0mm}\setlength{\parsep}{0mm}\setlength{\itemsep}{0mm}% \setlength{\labelwidth}{2em}\setlength{\leftmargin}{2em}\usecounter{smalllist}% }}{\end{list}} % %smaller \bigtimes % \newcommand{\bigtimes}{\mathop{\mathchoice% {\smash{\vcenter{\hbox{\LARGE$\times$}}}\vphantom{\prod}}% {\smash{\vcenter{\hbox{\Large$\times$}}}\vphantom{\prod}}% {\times}% {\times}% }\displaylimits} %%%%%%%%%%%%%%%%%%%%%% renewed commands %%%%%%%%%%%%%%% %\renewcommand{\Re}{\text{\rm Re}} %\renewcommand{\Im}{\text{\rm Im}} %%%%%%%%%%%%%%%%%%%%%% operators %%%%%%%%%%%%%%%%%%%%%% \DeclareMathOperator{\Real}{Re} \DeclareMathOperator{\Ima}{Im} \DeclareMathOperator{\diam}{diam} \DeclareMathOperator*{\slim}{s-lim} \DeclareMathOperator*{\wlim}{w-lim} \DeclareMathOperator*{\simlim}{\sim} \DeclareMathOperator*{\eqlim}{=} \DeclareMathOperator*{\arrow}{\rightarrow} \allowdisplaybreaks \numberwithin{equation}{section} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%% end of definitions %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newtheorem{theorem}{Theorem}[section] %\newtheorem*{t0}{Theorem} \newtheorem*{TA}{Theorem A} \newtheorem*{TB}{Theorem B} \newtheorem*{t1}{Theorem 1} \newtheorem*{t2}{Theorem 2} \newtheorem*{NC2.16}{Nevai Conjecture 2.16} \newtheorem*{NC2.17}{Nevai Conjecture 2.17} \newtheorem*{WNC2.17}{Weaker Nevai Conjecture 2.17} \newtheorem*{ct2}{Corollary to Theorem 2} \newtheorem*{TP}{Theorem of Poincar\'e} \newtheorem*{t3}{Theorem 3} \newtheorem*{t4}{Theorem 4} \newtheorem*{t5}{Theorem 5} %\newtheorem*{c4}{Corollary 4} %\newtheorem*{p2.1}{Proposition 2.1} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} %\newtheorem{hypothesis}[theorem]{Hypothesis} %\theoremstyle{hypothesis} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{xca}[theorem]{Exercise} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} % Absolute value notation \newcommand{\abs}[1]{\lvert#1\rvert} \begin{document} \title[Ratio Asymptotics and Weak Asymptotic Measures] {Ratio Asymptotics and Weak Asymptotic Measures for Orthogonal Polynomials \\on the Real Line} \author{Barry Simon} \thanks{$^1$ Mathematics 253-37, California Institute of Technology, Pasadena, CA 91125. E-mail: bsimon@caltech.edu. Supported in part by NSF grant DMS-0140592} %\thanks{{\it To be submitted to IMRN}} \date{July 3, 2003} \begin{abstract} We study ratio asymptotics, that is, existence of the limit of $P_{n+1}(z)/P_n(z)$ ($P_n =$ monic orthogonal polynomial) and the existence of weak limits of $p_n^2 \, d\mu$ ($p_n =P_n/\|P_n\|$) as $n\to\infty$ for orthogonal polynomials on the real line. We show existence of ratio asymptotics at a single $z_0$ with $\Ima (z_0)\neq 0$ implies $d\mu$ is in a Nevai class (i.e., $a_n\to a$ and $b_n \to b$ where $a_n,b_n$ are the off-diagonal and diagonal Jacobi parameters). For $\mu$'s with bounded support, we prove $p_n^2\, d\mu$ has a weak limit if and only if $\lim b_n$, $\lim a_{2n}$, and $\lim a_{2n+1}$ all exist. In both cases, we write down the limits explicitly. 
\end{abstract} \maketitle \section{Introduction} \lb{s1} In \cite{Khr}, Khrushchev asked two questions about orthogonal polynomials on the unit circle \cite{GBk,Sib,Szb} and found the following remarkable theorems in terms of the monic orthogonal polynomials, $\Phi_n$; the orthonormal polynomials, $\varphi_n = \Phi_n/\|\Phi_n\|_{L^2}$; and the Verblunsky coefficients, $\alpha_n =-\ol{\Phi_{n+1}(0)}$. \begin{TA} $\Phi_{n+1}^*(z)/\Phi_n^*(z)$ has a limit uniformly in $z$ over compact subsets of $\bbD$ if and only if either \begin{SL} \item[{\rm{(i)}}] For $\ell=1,2,\dots$, $\lim_{n\to\infty} \alpha_{n+\ell} \alpha_n =0$, or \item[{\rm{(ii)}}] There is $a\in (0,1]$ and $\lambda\in\partial\bbD$ so that $\lim_{n\to\infty} \abs{\alpha_n} =a$, $\lim_{n\to\infty} \bar\alpha_{n+1} \alpha_n =a^2 \lambda$. \end{SL} \end{TA} \begin{TB} $\abs{\varphi_n}^2 \, d\mu$ has a weak limit if and only if either \begin{SL} \item[{\rm{(i)}}] For $\ell=1,2,\dots$, $\lim_{n\to\infty} \alpha_{n+\ell} \alpha_n =0$, or \item[{\rm{(ii)}}] There exists $a,a'\in (0,1]$, $\lambda\in\partial\bbD$, and integers $k\geq 1$ and $\ell\in \{0,1,\dots, k-1\}$ so that \begin{gather*} \lim_{n\to\infty} \, \abs{\alpha_{2nk+\ell+j}} = \begin{cases} a &\text{if } j=0 \\ a' &\text{if } j=k \\ 0 &\text{if } j=1, \dots, k-1, k+1, \dots, 2k-1 \end{cases} \\ \\ \lim_{n\to\infty}\, \bar\alpha_{2nk+\ell} \alpha_{2nk+k+\ell} = aa'\lambda \end{gather*} \end{SL} \end{TB} Khrushchev \cite{Khr} also describes explicitly the limits in both cases. Our goal in this paper is to find the analogs of these theorems for orthogonal polynomials on the real line. The answers and proofs are much simpler --- the methods of Khrushchev which depend heavily on Schur functions do not seem to extend, nor does mapping bounded intervals on $\bbR$ to $\partial\bbD$ (as in Szeg\H{o} \cite[Sect. 11.5]{Szb}) seem to allow direct transfer. Before stating our results, let us set up notation. Given a measure $d\mu$ on $\bbR$ with $\int x^{2n} \, d\mu <\infty$ for all $n$, we let $P_n(x)$ be the monic orthogonal and $p_n(x)$ the orthonormal polynomials. To define them, we suppose henceforth that $d\mu$ is nontrivial, that is, not supported on a finite set, and we will also assume throughout that $\mu(\bbR)=1$. Thus $P_n$ is determined by $P_n(x) =x^n+$ lower order and $\int x^j P_n(x)\, d\mu(x)=0$ for $j=0,1,\dots, n-1$. $p_n = P_n /\|P_n\|$ where $\|\cdot\|$ is the $L^2 (\bbR,d\mu)$ norm. It is well-known \cite{Szb} that the $P_n$'s obey a three-term recursion relation. There are $b_j\in\bbR$ and $a_j\in (0,\infty)$ so that \begin{equation} \lb{1.1} x P_n(x) = P_{n+1}(x) + b_{n+1} P_n (x) + a_n^2 P_{n-1}(x) \end{equation} Our indexing of $b_1, b_2, \dots$ and $a_1, a_2, \dots$ is not common --- often the labelling starts at $b_0$ and $a_0$. We take this convention from \cite{KS} for reasons explained there. 
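For example, for the Chebyshev weight of the second kind, $d\mu(x) = \f{2}{\pi}\, \sqrt{1-x^2}\, \chi_{[-1,1]}(x)\, dx$, one has $P_n(x)=2^{-n}U_n(x)$ ($U_n$ = Chebyshev polynomial of the second kind) and \eqref{1.1} holds with $b_n\equiv 0$ and $a_n\equiv \f12$; for the Chebyshev weight of the first kind, $d\mu(x)=\pi^{-1}(1-x^2)^{-1/2}\chi_{[-1,1]}(x)\, dx$, one has $b_n\equiv 0$, $a_1=\f{1}{\sqrt{2}}$, and $a_n=\f12$ for $n\geq 2$. In our indexing, $a_1$ is the parameter coupling $p_0$ and $p_1$.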
\eqref{1.1} implies inductively that \begin{equation} \lb{1.2} \|P_n\|=a_n\dots a_1 \end{equation} and then that the $p_n$ obey the recursion relation \begin{equation} \lb{1.3} x p_n(x) =a_{n+1} p_{n+1}(x) + b_{n+1} p_n(x) + a_n p_{n-1}(x) \end{equation} In turn, \eqref{1.3} suggests we study the Jacobi matrix \begin{equation} \lb{1.4} J= \begin{pmatrix} b_1 & a_1 & 0 & 0 & \cdots \\ a_1 & b_2 & a_2 & 0 & \cdots \\ 0 & a_2 & b_3 & a_3 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} \end{equation} Since $\{p_n\}_{n=0}^\infty$ is an orthonormal set, \[ U:\sum_{j=1}^N v_jp_{j-1}\to \begin{pmatrix} v_1 \\ v_2 \\ \vdots \end{pmatrix} \] is a unitary map of the closed span, $S$, of the $p$'s to $\ell^2 (\bbZ_+)$ ($\bbZ_+ = \{1,2,\dots\}$) and for $v\in S_0 =$ span of $p$'s, we have $U^{-1} JUv = \text{(multiplication by $x$) } v$. In case the moment problem is determinate \cite{AkhB,S270}, $J$ is selfadjoint, and $d\mu$ is just the spectral measure for $J$ and vector $\delta =(1,0, \dots)$. We can now state our main results. \begin{t1}[$\equiv$ Theorems~\ref{T2.1} and \ref{T2.2}] Suppose that at a single $z_0\in\bbC\backslash\bbR$, we have \begin{equation} \lb{1.5} \lim_{n\to\infty} \, \f{P_{n+1}(z)}{P_n(z)} = f(z) \end{equation} for $z=z_0$. Then, for some $a\in [0,\infty)$ and $b\in\bbR$, \begin{equation} \lb{1.6} \lim_{n\to\infty}\, a_n =a \qquad \lim_{n\to\infty}\, b_n =b \end{equation} Conversely, if \eqref{1.6} holds and $\spec(J)$ is the spectrum of the operator $J$, then \eqref{1.5} holds for all $z\in\bbC\backslash\spec(J)$ and \begin{equation} \lb{1.7} f(z) = \f{(z-b) + \sqrt{(z-b)^2 -4a^2}}{2} \end{equation} where the branch of the square root is taken with $\sqrt{\cdots} = z+O(1/z)$ near $z=\infty$. \end{t1} \begin{t2}[$\equiv$ Theorems~\ref{T3.1} and \ref{T3.2}]\lb{t2} Let \begin{equation} \lb{1.8} d\mu_n = p_n^2(x) \, d\mu(x) \end{equation} Suppose that for $\ell=1,2$, and $4$, $\lim_{n\to\infty} \int x^\ell\, d\mu_n$ exists. Then for some $a,c\in [0,\infty)$ and $b\in\bbR$, we have \begin{equation} \lb{1.9} \lim_{n\to\infty} \, b_n =b \qquad \lim_{n\to\infty}\, a_{2n}=a \qquad \lim_{n\to\infty} \, a_{2n+1}=c \end{equation} Conversely, if \eqref{1.9} holds, the $d\mu_n$ have supports lying in a fixed bounded interval and there is a measure $d\rho_{b;a,c}$ so that for any continuous $f$ on $\bbR$ {\rm{(}}including $f(x)=x^\ell${\rm{)}}, we have \begin{equation} \lb{1.10} \int f(x)\, d\mu_n (x) \to \int f(x)\, d\rho(x) \end{equation} $d\rho$ is a function of $a,b,c$ only, and if $d\rho_{b;a,c}=d\rho_{b';a',c'}$, we have $b=b'$ and either $a=a'$, $c=c'$ or $a=c'$, $c=a'$. \end{t2} Theorem~1 is proven in Section~\ref{s2} and Theorem~2 in Section~\ref{s3}. $d\rho$ is calculated in Section~\ref{s5}. Theorems~1 and 2 seem to be optimal in that the two pieces of real data \eqref{1.6} (i.e., $a$ and $b$) correspond to one complex number $f(z_0)$, while the three moments in Theorem~2 correspond to the three real numbers $a$, $b$, and $c$. There is previous work of Nevai \cite{Nev79} on the subjects of Theorem~1 and Theorem~2. He proved that \eqref{1.6} implies \eqref{1.5} with $f$ given by \eqref{1.7}, and conversely proved that if \eqref{1.5} holds for all $z\in\bbC\backslash\bbR$ with $f(z)$ given by \eqref{1.7}, then \eqref{1.6} holds. He did not get a result depending on a single $z_0$ nor, more importantly, did he show that the functions in \eqref{1.7} are the only possible limits in \eqref{1.5}.
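To make \eqref{1.7} concrete, consider the simplest case $a=\f12$, $b=0$: there \eqref{1.7} is the inverse Joukowski map \[ f(z) = \f{z+\sqrt{z^2-1}}{2} \] which takes $\bbC\backslash [-1,1]$ conformally onto $\{w\mid \abs{w}>\tfrac12\}$, and the limit measure in \eqref{1.10} (with $a=c=\f12$) is the arcsine distribution $\pi^{-1}(1-x^2)^{-1/2}\chi_{[-1,1]}(x)\, dx$; see \eqref{5.20} below.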
Nevai \cite{Nev79} also proved that if $a_n\to a$ and $b_n\to b$, then $d\mu_n$ has a weak limit (he wrote down the explicit form of $d\rho_{b;a,a}$ for $a=c$, as we will in Section~\ref{s5}). In \cite{Nev89}, Nevai made a conjecture closely related to a special case of Theorem~2. Namely, \begin{NC2.16}[\cite{Nev89}]\lb{NC2.16} If \eqref{1.10} holds for all bounded uniformly continuous functions on $\bbR$ with $d\rho (x)=\pi^{-1} \chi_{[-1,1]}(x) (1-x^2)^{-1/2}\, dx$, then $a_n\to \f12$ and $b_n\to 0$. \end{NC2.16} \begin{ct2} If one supposes $\supp (d\mu)$ is bounded, then Nevai's conjecture holds. \end{ct2} \begin{proof} $x^\ell\restriction\supp (d\mu)$ is bounded, so $\int x^\ell \, d\mu_n$ converges for $\ell =1,2,4$ to the same limit as for $a_n \equiv \f12$ and $b_n\equiv 0$. Uniqueness of the limit (and the fact that $a=c$) completes the proof. \end{proof} Related to this is \begin{NC2.17}[\cite{Nev89}]\lb{NC2.17} If for some $A$, we have $\int_A^\infty d\mu_n\to 0$, then for every $\veps >0$, $[A+\veps, \infty)\cap\supp (d\mu)$ is finite. \end{NC2.17} We mention \begin{WNC2.17}\lb{WNC2.17} If for some $A>0$, $\mu_n (\{x\mid \abs{x}>A\})\to 0$, then $\supp (d\mu)$ is bounded. \end{WNC2.17} Clearly, Nevai Conjecture~2.17 implies the weaker version. The point of this is that a positive solution of the Weaker Nevai Conjecture~2.17 plus the results of this paper would imply Nevai Conjecture~2.16. \smallskip It is a pleasure to thank Rowan Killip and Paul Nevai for cogent comments. \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Ratio Asymptotics} \lb{s2}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The two main theorems on limits of $P_{n+1}(x)/P_n(x)$ are as follows: \begin{theorem}\lb{T2.1} Suppose $a_n\to a\in [0,\infty)$ and $b_n\to b\in\bbR$. Then for all $z\in \bbC\backslash\spec(J)$, we have that \begin{equation} \lb{2.1} \lim_{n\to\infty}\, \f{P_{n+1}(z)}{P_n(z)} = \f{(z-b)+\sqrt{(z-b)^2-4a^2}\,}{2} \end{equation} \end{theorem} {\it Remarks.} 1. In \eqref{2.1}, we take the branch of the square root with $\sqrt{\cdots}\sim z$ for $\abs{z}$ large, that is, as $z\to\infty$. \smallskip 2. For $z\notin\bbR$, $P_n$ is nonzero for all $n$. \eqref{2.1} for $z_0\in\bbR\backslash\spec(J)$ includes the fact that for $z_0$ fixed, $P_n(z_0)\neq 0$ for all large $n$. \smallskip 3. One can also show that for $z\in\spec(J)\backslash [b-2a, b+2a]$ (so that $z$ is an eigenvalue of $J$), \begin{equation} \lb{2.2} \lim_{n\to\infty}\, \f{P_{n+1}(z)}{P_n(z)} = \f{(z-b)- \sqrt{(z-b)^2-4a^2}\,}{2} \end{equation} \begin{theorem}\lb{T2.2} Suppose for one $z_0$ with $\Ima z_0\neq 0$, $\lim_{n\to\infty} P_{n+1}(z_0) /P_n(z_0)$ exists {\rm{(}}and is finite{\rm{)}}. Then there exist $a\in [0,\infty)$ and $b\in\bbR$ so that $a_n\to a$, $b_n\to b$, and \eqref{2.1} holds. In particular, the only functions that can occur as ratio asymptotics are the ones in \eqref{2.1}. \end{theorem} Theorem~\ref{T2.1} is not new. In this generality, it is due to Nevai \cite{Nev79}, who also proved a converse; namely, he showed that if \eqref{2.1} holds for all $z\in\bbC\backslash\bbR$, then $a_n\to a$ and $b_n\to b$. But we will sketch two proofs of Theorem~\ref{T2.1} for the reader's convenience. One uses transfer matrices and the other, operator theory.
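Before turning to the proofs, here is a case where \eqref{2.1} can be checked by hand. For $a_n\equiv\f12$, $b_n\equiv 0$ (the Chebyshev weight of the second kind), $P_n(x)=2^{-n}U_n(x)$ with $U_n(\cos\theta)=\sin((n+1)\theta)/\sin\theta$, so writing $z=\f12 (w+w^{-1})$ with $\abs{w}>1$, we have $U_n(z)=(w^{n+1}-w^{-n-1})/(w-w^{-1})$ and \[ \f{P_{n+1}(z)}{P_n(z)} = \f{U_{n+1}(z)}{2U_n(z)} \to \f{w}{2} = \f{z+\sqrt{z^2-1}}{2} \] which is the right side of \eqref{2.1} with $a=\f12$, $b=0$.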
As a preliminary, we need: \begin{proposition}\lb{P2.3} Let $\{x_{j,n}\}_{j=1}^n$ be the zeros of $P_n(x)$ with \begin{equation} \lb{2.2a} x_{1,n} < x_{2,n} < \cdots < x_{n,n} \end{equation} Then \begin{SL} \item[{\rm{(i)}}] \begin{equation} \lb{2.3} -\f{P_{n-1}(z)}{P_n(z)} = \sum_{j=1}^n \, \f{\alpha_{j,n}}{x_{j,n}-z} \end{equation} where $\alpha_{j,n}>0$ and \begin{equation} \lb{2.4} \sum_{j=1}^n \alpha_{j,n} =1 \end{equation} \item[{\rm{(ii)}}] If $\Ima z>0$, \begin{equation} \lb{2.5} 0 < -\Ima \biggl( \f{P_{n-1}(z)}{P_n(z)}\biggr) \leq \f{1}{\Ima z} \end{equation} and \begin{equation} \lb{2.5b} \biggl| \f{P_{n-1}(z)}{P_n(z)}\biggr| \leq \f{1}{\Ima z} \end{equation} \item[{\rm{(iii)}}] If $\Ima z >0$, \begin{equation} \lb{2.5a} \Ima \biggl( \f{P_{n+1}}{P_n}\biggr) \geq \Ima z \end{equation} \end{SL} \end{proposition} \begin{proof} (i) Since $P_n$ is monic, $P_n(z) = \prod_{j=1}^n (z-x_{j,n})$. Since $P_{n-1}/P_n$ has simple poles and goes to zero at infinity, \eqref{2.3} holds for some $\alpha_{j,n}$. Multiplying by $x_{j,n} - z$ and taking $z$ to $x_{j,n}$, we find \begin{equation} \lb{2.6} \alpha_{j,n} = \f{\prod_{\ell=1}^{n-1} (x_{j,n} - x_{\ell,n-1})}{\prod_{\ell=1;\, \ell\neq j}^n (x_{j,n}-x_{\ell,n})} \end{equation} For $j=n$, all factors on the right are positive. Since the zeros of $P_{n-1}$ and $P_n$ interlace, as we decrease $j$ by one, the numerator and the denominator each pick up a minus sign, and these cancel, proving $\alpha_{j,n}>0$. The left side of \eqref{2.3} is $-z^{-1} +O(z^{-2})$ as $z\to\infty$ since $P_n$ is monic. The right side is $-z^{-1} (\sum_{j=1}^n \alpha_{j,n}) + O(z^{-2})$, so \eqref{2.4} holds. \smallskip (ii) This follows from \eqref{2.3} and \eqref{2.4} if one notes that for any $x\in\bbR$ and $z$ with $\Ima z>0$, \[ 0 < \Ima \biggl( \f{1}{x-z}\biggr) \leq \f{1}{\abs{x-z}} \leq \f{1}{\Ima z} \] \smallskip (iii) This follows immediately from \begin{equation} \lb{2.7} \f{P_{n+1}(z)}{P_n(z)} = z-b_{n+1} - a_n^2 \, \f{P_{n-1}(z)}{P_n(z)} \end{equation} Since $\Ima (P_{n-1}/P_n) <0$, \eqref{2.7} implies \eqref{2.5a}. \end{proof} \begin{proof}[Proof of Theorem~\ref{T2.2}] By replacing $x$ by $(x-\Real z_0)/\Ima z_0$ (i.e., translating and scaling the measure), we can suppose for notational simplicity that $z_0=i$. Let \begin{equation} \lb{2.8} \alpha=\lim_{n\to\infty} \, \f{P_{n+1}(i)}{P_n(i)} \end{equation} By \eqref{2.5a}, $\Ima \alpha \geq 1$ so $-\Ima (\alpha^{-1}) = \Ima \alpha/\abs{\alpha}^2 >0$. Taking imaginary parts of \eqref{2.7}, we see that \begin{equation} \lb{2.9} a_n^2 = \f{[\Ima (P_{n+1}/P_n)-1]}{\Ima (-P_{n-1}/P_n)} \to \f{(\Ima \alpha -1)}{\Ima (-\alpha^{-1})} \equiv a^2 \geq 0 \end{equation} proving $a_n$ has a limit. Taking real parts of \eqref{2.7} shows \begin{equation} \lb{2.10} b_{n+1} = -a_n^2 \Real \biggl( \f{P_{n-1}}{P_n}\biggr) - \Real \biggl( \f{P_{n+1}}{P_n}\biggr) \to -a^2 \Real (\alpha^{-1}) - \Real (\alpha) \equiv b \end{equation} \end{proof} Our first proof of Theorem~\ref{T2.1} is a simple consequence of the following theorem of Poincar\'e (see \cite{Gelf,Poin,Sib} for proofs): \begin{TP} Let $u_j\in\bbC$ solve the $n$-th order difference equation \begin{equation} \lb{2.10a} u_{n+j} = a_{j,1} u_{n+j-1} + a_{j,2} u_{n+j-2} + \cdots + a_{j,n} u_j \end{equation} for $j=1,2,\dots$. Suppose \begin{SL} \item[{\rm{(i)}}] $a_{j,n}\neq 0$ for all $j$. \item[{\rm{(ii)}}] $\lim_{j\to\infty} a_{j,\ell}=A_\ell$ exists for $\ell =1, \dots, n$.
\end{SL} Let $\lambda_1, \dots, \lambda_n$ be the solutions of \begin{equation} \lb{2.11} A_n + A_{n-1} \lambda + \cdots + A_1\lambda^{n-1} =\lambda^n \end{equation} Suppose $\{\lambda_j\}_{j=1}^n$ are distinct, and for $j\neq k$, $\abs{\lambda_j}\neq \abs{\lambda_k}$. Then, if $u$ is not identically zero, we have for some $k$ that \begin{equation} \lb{2.12} \lim_{j\to\infty}\, \f{u_{j+1}}{u_j} =\lambda_k \end{equation} \end{TP} \begin{proof}[First Proof of Theorem~\ref{T2.1}] \eqref{1.1} is exactly of the form \eqref{2.10a}. The equation \eqref{2.11} becomes \begin{equation} \lb{2.14} \lambda^2 = (z-b)\lambda -a^2 \end{equation} whose solutions are \begin{equation} \lb{2.15} \lambda_\pm = \f{(z-b) \pm \sqrt{(z-b)^2 -4a^2}\,}{2} \end{equation} To prove \eqref{2.1}, we must show that if $z\notin [b-2a, b+2a]$, then $\abs{\lambda_+}\neq \abs{\lambda_-}$ and then identify which root is taken by the ratio. If $\alpha$ and $\beta$ are complex numbers, $\abs{\alpha+\beta}=\abs{\alpha-\beta}$ if and only if $\alpha$ and $\beta$ are orthogonal as vectors in $\bbC=\bbR^2$, if and only if $\beta = ic\alpha$ for $c\in\bbR$, if and only if $\beta^2 =-c^2 \alpha^2$. Taking $\alpha=(z-b)$ and $\beta = \sqrt{(z-b)^2 -4a^2}$, we see $\abs{\lambda_+}=\abs{\lambda_-}$ if and only if \[ -c^2 (z-b)^2 =(z-b)^2 -4a^2 \] or \begin{equation} \lb{2.16} z-b = \pm \,\f{2a}{\sqrt{1+c^2}\,} \end{equation} for $c\in\bbR$. \eqref{2.16} holds if and only if $z\in [b-2a,b+2a]$. Since $\abs{\lambda_+}\neq\abs{\lambda_-}$, $\lambda_\pm(z)$ are analytic in $\bbC\backslash [b-2a,b+2a]$ (as is obvious from \eqref{2.15}). Since $\lambda_+\lambda_-=a^2$, we have $\abs{\lambda_+} \,\abs{\lambda_-}=a^2$. Since $\lambda_+ =z+O(1/z)$ as $\abs{z}\to\infty$, $\abs{\lambda_+}>a$ for $\abs{z}$ large, and since $\abs{\lambda_+}\,\abs{\lambda_-}=a^2$ together with $\abs{\lambda_+}\neq \abs{\lambda_-}$ implies $\abs{\lambda_\pm}\neq a$, we see \begin{equation} \lb{2.17} \abs{\lambda_+} >a \qquad \abs{\lambda_-} < a \qquad \text{all } z\in\bbC\backslash [b-2a, b+2a] \end{equation} Thus for all $z\in\bbC\backslash [b-2a, b+2a]$, Poincar\'e's theorem applies and $P_{n+1}/P_n$ has a limit $\ell(z)$ where for each $z$, $\ell(z)=\lambda_+$ or $\ell(z)=\lambda_-$. By \eqref{2.5b}, $P_n/P_{n+1}$ is a normal family on $\bbC\backslash\bbR$, so $\ell(z)$ is analytic on $\bbC\backslash\bbR$. By \eqref{2.5a}, $\abs{\ell(z)} \geq \abs{\Ima z}$, so for $\abs{z}$ large, $\ell(z) =\lambda_+(z)$ and thus, by analyticity, $\ell(z) =\lambda_+(z)$ for $z\in\bbC\backslash \bbR$. This establishes \eqref{2.1} there. For $z\in\bbR\backslash [b-2a, b+2a]$, \[ \lim_{n\to\infty} \, \f{p_{n+1}(z)}{p_n(z)} = \lim_{n\to\infty} \, \f{1}{a} \, \f{P_{n+1}(z)}{P_n(z)} = \f{\lambda_\pm (z)}{a} \] If the limit is $\lambda_-(z)$, $\lim \abs{p_{n+1}/p_n}<1$, so $p_n\in\ell_2$ and $z\in\spec(J)$. Conversely, if it is $\lambda_+$, $\lim\abs{p_{n+1}/p_n}>1$, so $p_n\notin\ell_2$. Thus \eqref{2.1} holds for $z\in\bbR\backslash\spec(J)$ and \eqref{2.2} holds for $z\in\spec (J)\backslash [b-2a, b+2a]$. \end{proof} \begin{proof}[Second Proof of Theorem~\ref{T2.1}] Let $J^{(n)}$ be the $n\times n$ matrix obtained from the first $n$ rows and columns of $J$.
As is well-known, \[ \det (z-J^{(n)})=P_n(z) \] Thus, by Cramer's rule, \begin{equation} \lb{2.18} \f{P_{n-1}(z)}{P_n(z)} = (z-J^{(n)})_{nn}^{-1} = (z-\ti J^{(n)})_{11}^{-1} \end{equation} where $\ti J_{ij}^{(n)}= J_{n+1-i,\, n+1-j}^{(n)}$, that is, \begin{equation} \lb{2.19} \ti J^{(n)} = \begin{pmatrix} b_n & a_{n-1} & 0 & \dots &\dots \\ a_{n-1} & b_{n-1} & a_{n-2} & \dots & \dots \\ 0 & a_{n-2} & b_{n-2} & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & a_1 & b_1 \end{pmatrix} \end{equation} If $\ti J^{(\infty)}$ is the constant Jacobi matrix $a_n\equiv a$, $b_n\equiv b$, it is clear that, as operators on $\ell^2 (\bbZ_+)$, $\ti J^{(n)}\to\ti J^{(\infty)}$ strongly. It follows that as operators on $\ell^2 (\bbZ_+)$, $(\ti J^{(n)}-z)^{-1} \to (\ti J^{(\infty)}-z)^{-1}$ strongly for $\Ima z\neq 0$. Thus \begin{equation} \lb{2.20} \f{P_{n-1}(z)}{P_n(z)} = (z-\ti J^{(n)})_{11}^{-1} \to (z-\ti J^{(\infty)})_{11}^{-1} \end{equation} Let $w$ solve \begin{equation} \lb{2.21} a(w+w^{-1})+b =z \end{equation} with $\abs{w}<1$. Let $u_n =w^{n-1}$. Thus $u\in \ell_2$ and \[ (z-\ti J^{(\infty)})u = (aw^{-1})\delta_1 \] so \begin{equation} \lb{2.22} (z-\ti J^{(\infty)})_{11}^{-1} =a^{-1}w \end{equation} Thus, by \eqref{2.20}, $P_{n-1}(z)/P_n(z)\to a^{-1}w$, so $P_{n+1}(z)/P_n(z)\to aw^{-1}$, and solving \eqref{2.21}, we see that $aw^{-1}=$ RHS of \eqref{2.1}. This proves \eqref{2.1} on $\bbC \backslash\bbR$. On $\bbR\backslash\spec(J)$, one shows the eigenvalues of $\ti J^{(n)}$ (which are the zeros of $P_n(z)$) outside $[b-2a,b+2a]$ converge to the eigenvalues of $J$ so for $z\in\bbR\backslash\spec(J)$, $(\ti J^{(n)}-z)^{-1}\to (\ti J^{(\infty)} -z)^{-1}$ strongly also. \end{proof} \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Weak Asymptotic Limits} \lb{s3}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Let $d\mu_n =p_n^2 \, d\mu$. In this section, the main theorems are \begin{theorem}\lb{T3.1} Suppose \begin{equation} \lb{3.1} b_n\to b \qquad a_{2n}\to a \qquad a_{2n+1}\to c \end{equation} for $b\in\bbR$, $a,c\in [0,\infty)$. Then as $n\to\infty$, $d\mu_n$ has a weak limit $d\rho_{b;a,c} (x)$, and for every $\ell$, \begin{equation} \lb{3.2} \int x^\ell\, d\mu_n \to \int x^\ell\, d\rho_{b;a,c}(x) \end{equation} \end{theorem} {\it Remarks.} 1. The hypotheses imply that $d\mu$ is supported in $[\inf b_n -2\sup (a_n), \sup b_n + 2\sup (a_n)]$, which is bounded, so weak convergence is equivalent to convergence of the moments. \smallskip 2. We will see below that $d\rho_{b';a',c'}=d\rho_{b;a,c}$ implies $b'=b$ and either $a'=a$, $c'=c$, or $a'=c$, $c'=a$. \smallskip 3. We will discuss the form of $d\rho_{b;a,c}$ in Section~\ref{s5}. \begin{theorem}\lb{T3.2} Suppose for $\ell=1,2$, and $4$, \begin{equation} \lb{3.3x} \lim_{n\to\infty} \, \int x^\ell d\mu_n =A_\ell \end{equation} Then for some $a,b,c$, \eqref{3.1} holds. Moreover, $A_1, A_2, A_4$ determine $b$, $a+c$, $\abs{a-c}$ {\rm{(}}i.e., they determine $b$ and the unordered pair $(a,c)${\rm{)}}. \end{theorem} {\it Remarks.} 1. We will see $A_2 <\infty$ implies $\sup(\abs{b_n}+\abs{a_n})<\infty$. \smallskip 2. The final assertion proves the second remark after Theorem~\ref{T3.1}. \smallskip Our proofs will depend on a graphical representation of $\int x^\ell\, d\mu_n$. Consider the lattice $\{0,1,2,\dots\}$. We will consider a random walk on this lattice where at each step, one either stays at the site one is at or one jumps by a single site. Paths have unnormalized weights, given by products over the steps: $b_{k+1}$ if one stays at site $k$, and $a_{k+1}$ if one moves from $k$ to $k+1$ or from $k+1$ to $k$.
To be more precise, a path is a sequence $\rho_0, \rho_1, \dots, \rho_\ell\in\{0,1,2,\dots\}$ so that $\abs{\rho_m - \rho_{m-1}}\leq 1$; its weight is \begin{equation} \lb{3.3} W(\rho) =\prod_{j=0}^{\ell-1} w(\rho_j, \rho_{j+1}) \end{equation} where \begin{equation} \lb{3.4} w(\rho_j, \rho_{j+1}) = \begin{cases} b_{k+1} &\text{if } \rho_{j+1}=\rho_j = k \\ a_{k+1} &\text{if } \rho_{j+1}=\rho_j + 1 =k+1 \\ a_k &\text{if } \rho_{j+1}=\rho_j -1 =k-1 \end{cases} \end{equation} Here is the key tool: \begin{proposition}\lb{P3.3} \begin{equation} \lb{3.5} \int x^\ell \, d\mu_n =\sum_{\rho\in Q_{n,\ell}} W(\rho) \end{equation} where $Q_{n,\ell}$ is the set of all paths of length $\ell$ with $\rho_0 =\rho_\ell =n$. \end{proposition} \begin{proof} Since \[ xp_n = a_{n+1} p_{n+1} + b_{n+1} p_n + a_n p_{n-1} \] we see immediately that, by induction in $j$, \begin{equation} \lb{3.6} x^j p_n =\sum c_{j,m,n}p_m \end{equation} where \[ c_{j,m,n} =\sum_{\rho\in Q_{n,m,j}} W(\rho) \] and $Q_{n,m,j}$ is all paths of length $j$ with $\rho_0 =n$ and $\rho_j =m$. \eqref{3.5} follows since $\int x^\ell \, d\mu_n =\langle p_n, x^\ell p_n\rangle =c_{\ell,n,n}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{T3.1}] Under hypothesis \eqref{3.1}, $J$ is bounded, so $d\mu$ has a bounded support, so weak convergence is equivalent to \eqref{3.2}. By Proposition~\ref{P3.3}, $\int x^\ell\, d\mu_n$ is a finite sum over paths. This representation shows that if $a_n$, $b_n$, and $\ti a_n, \ti b_n$ are two sets of Jacobi parameters and $\lim_{n\to\infty} \abs{a_n -\ti a_n} + \abs{b_n -\ti b_n}=0$, then $\abs{\int x^\ell d\mu_n - \int x^\ell d\ti\mu_n}\to 0$. Thus we need only prove \eqref{3.2} for $b_n\equiv b$, $a_{2n}\equiv a$, $a_{2n+1}\equiv c$. Fix $\ell$. So long as $\ell <2n$, there is a one-one correspondence between paths $\rho\in Q_{n,\ell}$ and $\rho\in Q_{n+1,\ell}$ by $U=TS$, \begin{align*} T(\rho)_j &= \rho_j +1 \\ S(\rho)_j &= n-(\rho_j -n) \end{align*} $S$ reflects the path in $n$, $T$ translates by $1$. $\ell <2n$ is needed to assure paths do not get mapped into ones that have $\rho_j <0$, which is forbidden (and that the inverse does not do this), showing $U$ is a bijection of $Q_{n,\ell}$ and $Q_{n+1,\ell}$. The key point is that $W(U\rho) = W(\rho)$, for if $\rho_j =\rho_{j+1}$, the weight is always $b$, while both $S$ and $T$ interchange links with weight $a$ and links with weight $c$, so their composition preserves the weight of each link. It follows that if $b_n\equiv b$, $a_{2n}\equiv a$, $a_{2n+1}\equiv c$, then $\int x^\ell \, d\mu_n$ is independent of $n$ once $\ell <2n$, so the limit exists. Since the moments converge and nonnegative Hankel matrices converge to nonnegative Hankel matrices, the limiting moments are the moments of a measure, which is the weak limit of the $d\mu_n$. \end{proof} \begin{proof}[Proof of Theorem~\ref{T3.2}] $\int xp_n^2\, d\mu = b_{n+1}$, so \eqref{3.3x} for $\ell=1$ implies $b_{n+1}\to A_1 \equiv b$. Let $d\ti\mu (x)=d\mu (x+b)$. The Jacobi parameters of $\ti\mu$ are given by \begin{equation} \lb{3.7} \ti a_n =a_n \qquad \ti b_n =b_n-b \end{equation} Moreover, \begin{equation} \lb{3.8} \int x^\ell \, d\mu_n =\sum_{j=0}^\ell \binom{\ell}{j} (b)^{\ell-j} \int x^j\, d\ti\mu_n \end{equation} and \begin{equation} \lb{3.9} \int x^\ell\, d\ti\mu_n =\sum_{j=0}^\ell \binom{\ell}{j} (-b)^{\ell-j} \int x^j \, d\mu_n \end{equation} Since $\ti b_n\to 0$ and every path of odd length $\ell$ has a $\ti b_k$ factor in it, $\int x^\ell \, d\ti\mu_n\to 0$ for all odd $\ell$. Thus, \eqref{3.8} implies $\int x^3\, d\mu_n$ converges and then \eqref{3.9} implies that $\int x^4\, d\ti\mu_n$ converges. Thus, without loss, we suppose $A_1=0$ and $b_n\to 0$.
If $b_n\to 0$, any path with $\rho_j =\rho_{j+1}$ contributes zero in the limit; so we can restrict to paths with $\abs{\rho_{j+1} -\rho_j}=1$. Thus, looking at the two such paths with $\rho_2 =\rho_0 =n$ (up then down, and down then up), \begin{equation} \lb{3.10} \lim_{n\to\infty}\, (a_{n+1}^2 + a_n^2) =A_2 \end{equation} Looking at paths with $\rho_0 =\rho_4=n$, all those with $\rho_2=n$ contribute $(\int x^2 d\mu_n)^2$, so \begin{equation} \lb{3.11} \lim_{n\to\infty}\, (a_{n+2}^2 a_{n+1}^2 + a_n^2 a_{n-1}^2) = A_4 - A_2^2 \end{equation} Thus, using $(x-y)^2 =(x+y)^2 -4xy$, \begin{equation} \lb{3.12} \lim_{n\to\infty} \, \bigl[(a_{n+2}^2 - a_{n+1}^2)^2 + (a_n^2 - a_{n-1}^2)^2\bigr] = 6A_2^2 -4A_4 \end{equation} Suppose $a_n$ has a limit point, $a$, that is, $a_{n(j)}\to a$ as $j\to\infty$ for a subsequence. Define $c=\sqrt{A_2 -a^2}$. By \eqref{3.10}, for any $\ell=0, \pm 1, \pm 2, \dots$, \[ a_{n(j)+\ell}\to \begin{cases} a & \ell \text{ even} \\ c & \ell \text{ odd} \end{cases} \] In particular, by \eqref{3.12}, \begin{equation} \lb{3.13} \abs{a^2 -c^2} = \sqrt{3A_2^2 - 2A_4} \end{equation} Since also \begin{equation} \lb{3.14} a^2 + c^2 = A_2 \end{equation} there are at most two solutions of \eqref{3.13}, \eqref{3.14}: \begin{align} a^2 &= \tfrac12\, \biggl[A_2 + \sqrt{3A_2^2 - 2A_4}\,\biggr] \lb{3.15} \\ c^2 &= \tfrac12\, \biggl[A_2 - \sqrt{3A_2^2 - 2A_4}\,\biggr] \lb{3.16} \end{align} and the one with $a,c$ reversed. Thus the right sides of \eqref{3.15} and \eqref{3.16} are the only limit points of $a_n^2$. The lemma below completes the proof. \end{proof} \begin{lemma}\lb{L3.4} Let $x_n$ be a sequence so that for some $\alpha,\beta\in\bbR$, \begin{align} \lim_{n\to\infty}\, x_n + x_{n+1} &= \alpha+\beta \lb{3.17} \\ \lim_{n\to\infty}\, \abs{x_n - x_{n+1}} &= \abs{\alpha -\beta} \lb{3.18} \end{align} Then either \[ \lim_n \, x_{2n}=\alpha \qquad \lim_n\, x_{2n+1} =\beta \] or \[ \lim_n\, x_{2n}=\beta \qquad \lim_n \, x_{2n+1}=\alpha \] \end{lemma} \begin{proof} By replacing $x_n$ by $x_n-\f12 (\alpha+\beta)$, we can suppose $\alpha = -\beta \geq 0$. If $\alpha = \beta =0$, the result is trivial, so suppose $\alpha =-\beta >0$. Pick $N$ so that for $n >N$, \begin{align*} \abs{x_n + x_{n+1}} &< \alpha \\ \abs{x_n - x_{n+1}} &> \alpha \end{align*} Thus, since $\abs{x_n -x_{n+1}} > \abs{x_n + x_{n+1}}$, $x_n$ and $x_{n+1}$ have opposite signs for all $n>N$, that is, for $n>N$, either $(-1)^n x_n>0$ or $(-1)^{n+1} x_n >0$. Since, by \eqref{3.17} and \eqref{3.18}, $\abs{x_n}\to\alpha$, the only allowed limit points are $\pm\alpha$; so if $(-1)^n x_n >0$, then $x_{2n}\to \alpha$ and $x_{2n+1}\to\beta =-\alpha$, and if $(-1)^{n+1} x_n >0$, then $x_{2n}\to \beta$ and $x_{2n+1} \to \alpha$. \end{proof} \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Ratio Asymptotics for $p_{n+1}/p_n$} \lb{s4}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
In this section and the next, we discuss two further issues related to our results: the existence of $\lim p_{n+1}/p_n$ ($p_n$ rather than $P_n$), and the calculation of the measures $d\rho_{b;a,c}$ of \eqref{3.2} (already well known if $a=c$). Let \begin{equation} \lb{4.1} R_n(z) = \f{P_{n+1}(z)}{P_n(z)} \qquad r_n(z) = \f{p_{n+1}(z)}{p_n(z)} \end{equation} Since $p_n =(a_1 \dots a_n)^{-1} P_n$, \begin{equation} \lb{4.2} r_n(z) = a_{n+1}^{-1} R_n(z) \end{equation} so we immediately see from Theorem~\ref{T2.1} that if $a_n\to a\neq 0$ and $b_n\to b$, \begin{equation} \lb{4.3} \lim_{n\to\infty}\, r_n(z) = \f{(z-b) + \sqrt{(z-b)^2 - 4a^2}\,}{2a} \end{equation} We want to address a converse.
One problem we find is that while $\abs{R_n(z)}\geq\abs{\Ima z}$ (by \eqref{2.5a}), without an a priori upper bound on $a_n$, we do not have a bound for $\abs{r_n (z)}$, so it is not obvious that existence of the limit implies $a_n$ is bounded. \begin{example}\lb{E4.1} Let $a_n =e^{n!}$ and let $b_n$ be an arbitrary bounded sequence, especially one without a limit. Then, by \eqref{1.3}, \begin{align} r_n(z) &= \f{z-b_{n+1}}{a_{n+1}} - \f{a_n}{a_{n+1}}\, [r_{n-1}(z)]^{-1} \notag \\ &= \f{z-b_{n+1}}{a_{n+1}} - \f{a_n^2}{a_{n+1}}\, [R_{n-1}(z)]^{-1} \lb{4.4} \end{align} where the second equality uses \eqref{4.2}. By \eqref{2.5b}, \begin{equation} \lb{4.5} \abs{r_n(z)} \leq \f{\abs{z-b_{n+1}}}{a_{n+1}} + \f{a_n^2}{a_{n+1}}\, \abs{\Ima z}^{-1} \to 0 \end{equation} since $a_n$ is chosen so $a_n^2/a_{n+1}\to 0$. Thus for this example, $r_n(z)\to 0$. \qed \end{example} Because of this example, we will need to suppose that if $\lim r_n(z)$ exists, it has nonzero imaginary part. Here is a result that requires two points rather than one, with some extra conditions: \begin{theorem}\lb{T4.2} Suppose $\sup_n a_n <\infty$. Suppose $z_1, z_2$ are in $\{z\mid \Ima z>0\}$ and let \begin{equation} \lb{4.6} \lim_{n\to\infty} \, r_n (z_j) =\lambda_j \end{equation} with \begin{alignat*}{2} &\text{\rm{(a)}} \qquad && \Ima\lambda_j >0 \\ \intertext{and either} &\text{\rm{(b1)}} \qquad && \f{\Ima \lambda_1}{\Ima z_1} \neq \f{\Ima \lambda_2}{\Ima z_2} \\ \intertext{or} &\text{\rm{(b2)}} \qquad && \f{\Ima (\lambda_1^{-1})}{\Ima z_1} \neq \f{\Ima (\lambda_2^{-1})}{\Ima z_2} \end{alignat*} Then $a_n\to a\neq 0$ and $b_n\to b$. \end{theorem} \begin{proof} By \eqref{1.3}, \begin{equation} \lb{4.7} a_{n+1} r_n(z) = (z-b_{n+1}) - a_n [r_{n-1}(z)]^{-1} \end{equation} Since $\Ima (-[r_{n-1}(z)]^{-1}) >0$, \eqref{4.7} implies \[ a_{n+1} \Ima r_n (z_j)\geq \Ima z_j \] which implies \[ \liminf a_n \geq \f{\Ima z_j}{\Ima \lambda_j} >0 \] so the $a$'s are bounded above and below. Let $(a,c)$ be a limit point of $(a_{n+1}, a_n)$. By \eqref{4.7}, \begin{equation} \lb{4.8} a\Ima \lambda_j = \Ima z_j + c(\Ima (-\lambda_j)^{-1}) \end{equation} If $\Ima \lambda_1/\Ima z_1\neq \Ima\lambda_2/\Ima z_2$, \eqref{4.8} implies \[ a\biggl[ \f{\Ima \lambda_1}{\Ima z_1} - \f{\Ima\lambda_2}{\Ima z_2}\biggr] = c\biggl[ \f{\Ima(-\lambda_1)^{-1}}{\Ima z_1} - \f{\Ima (-\lambda_2)^{-1}}{\Ima z_2} \biggr] \] so we can solve for $a$ as a multiple of $c$, and then for $c$ in \eqref{4.8} for $j=1$. If $\Ima (\lambda_1)^{-1}/\Ima z_1\neq \Ima (\lambda_2)^{-1}/\Ima z_2$, we solve for $c$ as a multiple of $a$. Either way, we see \eqref{4.8} has a unique solution for $(a,c)$ so $(a_{n+1}, a_n)\to (a,c)$. But then $(a_{n+2}, a_{n+1})\to (a,c)$ so $a=c$ and $a_n\to a\neq 0$. This implies $\lim R_n (z_1)=\lim a_{n+1} r_n (z_1)$ exists. So, by Theorem~\ref{T2.2}, $b_n\to b$. \end{proof} We have a second remark about the existence of $\lim_{n\to\infty} r_n(z)$. In the OPUC case, existence of $\lim_{n\to\infty}\varphi_{n+1}^*(z)/\varphi_n^*(z)$ for all $z\in\bbD$ implies the same for $\Phi_{n+1}^*(z)/\Phi_n^*(z)$ (and, by taking $\alpha_j=0$ if $j\neq n^2$, $\alpha_{n^2}=\f12$, not conversely). For $\varphi_{n+1}^*(0)/\varphi_n^*(0)=\rho_n^{-1}$, so existence of the $\varphi$ ratio limit at $z=0$ implies $\rho_n\to \rho_\infty$, and then, since $\Phi_{n+1}^*(z)/\Phi_n^*(z)=\rho_n\, \varphi_{n+1}^*(z)/\varphi_n^*(z)$, we get the $\Phi_n^*$ ratio limits. The same is true here, but alas, the analog of $z=0$ for OPUC is $z=\infty$ here. The following captures the idea, without the need for hypotheses (b) of Theorem~\ref{T4.2}.
\begin{theorem} \lb{T4.3} Suppose $a_n$ and $\abs{b_n}$ are bounded and $r_n(z)$ converges to a nonzero limit as $n\to \infty$ for all $z$ in a small neighborhood of $z_0\in\bbC\backslash \bbR$. Then $a_n\to a$ and $b_n\to b$ for some $a\neq 0$, $b\in\bbR$. \end{theorem} \begin{proof} By \eqref{4.4}, \begin{equation} \lb{4.9} r_n(z) = \f{z}{a_{n+1}} -\f{b_{n+1}}{a_{n+1}} + O\biggl(\f{1}{z}\biggr) \end{equation} Thus \begin{equation} \lb{4.9a} r_n (z)^{-1} = \f{a_{n+1}}{z} + \f{a_{n+1}b_{n+1}}{z^2} + O\biggl( \f{1}{z^3}\biggr) \end{equation} If $\supp (d\mu)\subset [-c,c]$ (take $c=\sup_n \abs{b_n} + 2\sup \abs{a_n}$), then for $\rho >c$, by the Cauchy theorem and the fact that $r_n$ has its zeros in $[-c,c]$, $\f{1}{2\pi i} \oint_{\abs{z}=\rho} r_n(z)^{-1} z^\ell \, dz$ is $\rho$ independent for every $\ell\in\bbZ$ and $\rho >c$. Taking $\rho$ to infinity, we get, by \eqref{4.9a}, \begin{align} a_{n+1} &= \f{1}{2\pi i} \oint_{\abs{z}=c+1} r_n(z)^{-1} \, dz \lb{4.10} \\ a_{n+1}b_{n+1} &=\f{1}{2\pi i} \oint_{\abs{z}=c+1} r_n(z)^{-1} z\, dz \lb{4.11} \end{align} Thus uniform convergence of $r_n(z)^{-1}$ to a limit on $\abs{z}=c+1$ implies convergence of $a_n$ and of $a_nb_n$, and hence of $b_n$ (the limit of $a_n$ is nonzero by the argument at the start of the proof of Theorem~\ref{T4.2}). Therefore, we are reduced to showing that convergence of $r_n^{-1}$ on a single compact subset of $\bbC\backslash\bbR$ implies convergence on all compact subsets of $\bbC\backslash [-c,c]$. By \eqref{2.3} and \eqref{2.4} and $x_{i,n}\in [-c,c]$, we have \[ \abs{R_n(z)^{-1}} \leq \sup_{x\in [-c,c]} \, \abs{z-x}^{-1} \] and thus, by \eqref{4.2}, \begin{equation} \lb{4.12} \abs{r_n(z)^{-1}} \leq \bigl[\sup_n \, \abs{a_n}\bigr] \sup_{x\in [-c,c]} \, \abs{z-x}^{-1} \end{equation} Thus convergence of $r_n(z)$ to a nonzero limit in a neighborhood of $z_0$ implies, by Vitali's theorem and \eqref{4.12}, uniform convergence of $r_n(z)^{-1}$ on compact sets in $\bbC\backslash [-c,c]$. \end{proof} \bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Calculation of $d\rho_{b;a,c}(x)$} \lb{s5}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
In this section, we compute the weak limit $d\rho_{b;a,c}$ of $p_n^2 \, d\mu$ for $\mu$, the measure corresponding to the Jacobi matrix $a_{2n} = c$, $a_{2n+1}=a$, $b_n=b$. We will also find $d\mu_{b;a,c}$, the measure associated to the Jacobi matrix \begin{equation} \lb{5.1} J = \begin{pmatrix} b & a \\ a & b & c \\ {} & c & b & a \\ {} & {} & a & b & c \\ {} & {} & {} & \ddots & \ddots & \ddots \end{pmatrix} \end{equation} We begin by computing \begin{equation} \lb{5.2} f(z;b;a,c) = \int \f{d\mu_{b;a,c}(x)}{z-x} \end{equation} and \begin{equation} \lb{5.3} G(z;b;a,c) = \int \f{d\rho_{b;a,c}(x)}{z-x} \end{equation} This calculation is not unrelated to calculations in Khrushchev \cite{Khr} for the measure associated to a period $2$ Verblunsky coefficient. As he does, we could ask for the Jacobi coefficients for the measure $d\rho$ and show they converge exponentially fast to those for $d\mu$ (with a need to interchange $a$ and $c$ depending on the sign of $a-c$). We begin with a result about a finite Jacobi matrix \begin{equation} \lb{5.4} J^{[1,n]} = \begin{pmatrix} b_1 & a_1 \\ a_1 & b_2 & a_2 \\ {} & \ddots & \ddots & \ddots \\ {} & {} & \ddots & \ddots & \ddots \\ {} & {} & {} & \ddots & \ddots & a_{n-1} \\ {} & {} & {} & {} & a_{n-1} & b_n \end{pmatrix} \end{equation} We let $J^{[j,k]}$ for $1\leq j \leq k\leq n$ denote the $(k-j+1) \times (k-j+1)$ matrix we get by keeping rows and columns between $j$ and $k$ (inclusive).
We refer to the row number of $J^{[j,k]}$ as $j,j+1, \dots$ so, for example, $(J^{[j,k]})_{jj}=b_j$. Here is a key formula that appears in Gesztesy-Simon \cite{GS}, although closely related formulae have appeared elsewhere; in particular, the $k=1,n$ results go back to Jacobi: \begin{proposition}\lb{P5.1} For $2\leq k\leq n-1$, \begin{equation} \lb{5.5} \begin{split} [& (z-J^{[1,n]})^{-1}]_{kk} \\ &= 1\big/\{z-b_k -a_{k-1}^2 [(z-J^{[1,k-1]})^{-1}]_{k-1,k-1} - a_k^2 [(z-J^{[k+1,n]})^{-1}]_{k+1, k+1}\} \end{split} \end{equation} For $k=1$, \begin{equation} \lb{5.6} [(z-J^{[1,n]})^{-1}]_{11} = 1\big/\{z-b_1 - a_1^2 [(z-J^{[2,n]})^{-1}]_{22}\} \end{equation} \end{proposition} \begin{proof} Let $2\leq k\leq n-1$. Here is a proof that is more direct than that in \cite{GS}, although the essence is the same. If row and column $k$ are removed, the resulting matrix is $J^{[1,k-1]}\oplus J^{[k+1,n]}$ so, by Cramer's rule, \begin{equation} \lb{5.7} [(z-J^{[1,n]})^{-1}]_{kk} = \f{\det(z-J^{[1,k-1]})\det (z-J^{[k+1,n]})}{\det (z-J^{[1,n]})} \end{equation} Expanding $\det (z-J^{[1,n]})$ in minors in row $k$, \begin{equation} \lb{5.8} \det (z-J^{[1,n]})=(z-b_k) d_1 - a_{k-1}^2 d_2 - a_k^2 d_3 \end{equation} where \begin{align} d_1 &= \det (z-J^{[1,k-1]})\det(z-J^{[k+1,n]}) \lb{5.9} \\ d_2 &= \det (z-J^{[1,k-2]})\det(z-J^{[k+1,n]}) \lb{5.10} \\ d_3 &= \det (z-J^{[1,k-1]})\det(z-J^{[k+2,n]}) \lb{5.11} \end{align} (where $\det (z-J^{[1,0]})$ and $\det (z-J^{[n+1,n]})$, which occur if $k=2$ or $k=n-1$, are interpreted as $1$). Finally, note that, by Cramer's rule again, \begin{align} [(z-J^{[1,k-1]})^{-1}]_{k-1,k-1} &=\f{d_2}{d_1} \lb{5.12} \\ [(z-J^{[k+1,n]})^{-1}]_{k+1,k+1} &= \f{d_3}{d_1} \lb{5.13} \end{align} \eqref{5.7}--\eqref{5.13} imply \eqref{5.5}. \eqref{5.6} is proven in a similar way (but, e.g., the analog of \eqref{5.8} has only two terms). \end{proof} \begin{corollary}\lb{C7.2} For the functions $f$ and $G$ of \eqref{5.2}/\eqref{5.3}, we have that for $z\in\bbC\backslash\bbR$, \begin{equation} \lb{5.14} [G(z;b;a,c)]^{-1} = z-b-c^2 f(z;b;a,c) - a^2 f(z;b;c,a) \end{equation} \end{corollary} {\it Remark.} This formula makes it evident once again that $G$ is symmetric in $a$ and $c$. \begin{proof} Let $J$ be given by \eqref{5.1}. On $\ell^2 (\bbZ_+)$, $J^{[1,n]}\oplus 0$ converges strongly to $J$. Thus \begin{equation} \lb{5.15} f(z;b;a,c) = [(z-J)^{-1}]_{11} = \lim_{n\to\infty} \, [(z-J^{[1,n]})^{-1}]_{11} \end{equation} Moreover, \begin{equation} \lb{5.16} \int \f{p_{k-1}(x)^2 \, d\mu_{b;a,c}(x)}{z-x} = [(z-J)^{-1}]_{kk}= \lim_{n\to\infty} \, [(z-J^{[1,n]})^{-1}]_{kk} \end{equation} so \begin{equation} \lb{5.17} G(z;b;a,c) = \lim_{k\to\infty} \, \bigl[\, \lim_{n\to\infty} [(z-J^{[1,n]})^{-1}]_{kk}\bigr] \end{equation} \eqref{5.14} follows by using \eqref{5.5}, \eqref{5.15}, and \eqref{5.17} together with the structure of $J$ (e.g., $J^{[1+2\ell, n+2\ell]}=J^{[1,n]}$ and $J^{[1+2\ell +1, n+2\ell +1]}$ is $\ti J^{[1,n]}$ for $\ti J$, the matrix with $a$ and $c$ reversed). \end{proof} {\it Remark.} The limit of \eqref{5.5} as $n\to\infty$ is a precise analog of Khrushchev's formula \cite{Kh2000} that for the unit circle case, the Schur function of $\abs{\varphi_n}^2\, d\mu$ is $b_nf_n$. It would be interesting to see if one could translate our proof here to a proof of Khrushchev's formula using the CMV matrix \cite{CMV,Sib} in place of the Jacobi matrix.
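As an illustration of \eqref{5.5}, take $n=3$ and $k=2$: then \eqref{5.5} reads \[ [(z-J^{[1,3]})^{-1}]_{22} = \f{1}{z-b_2 - \f{a_1^2}{z-b_1} - \f{a_2^2}{z-b_3}} \] which one can also check directly from Cramer's rule for the $3\times 3$ matrix $z-J^{[1,3]}$.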
\begin{corollary}\lb{C7.3} \begin{equation} \lb{5.15x} [f(z;b;a,c)]^{-1} = z-b-a^2 f(z;b;c,a) \end{equation} \end{corollary} \begin{proof} This is identical to the last proof using \eqref{5.6} in place of \eqref{5.5}. Of course, this is a special case of the well-known Stieltjes continued fraction expansion for the Stieltjes transform of the measure associated to a Jacobi matrix. \end{proof} Henceforth, for simplicity, we take $b=0$. Since $G(z;b;a,c)=G(z-b;0;a,c)$, and similarly for $f$, it is easy to go from this case to the case of general $b$. As a warmup, consider the case $a=c$. Then \eqref{5.15x} becomes \begin{equation} \lb{5.16x} 1=f (z-a^2 f) \end{equation} which is solved by \begin{equation} \lb{5.17x} f(z;0;a,a) = \f{z-\sqrt{z^2-4a^2}\,}{2a^2} \end{equation} where the branch of the square root is taken with $\sqrt{\cdots}=z+O(1/z)$ as $\abs{z}\to\infty$, consistent with $f(z)= 1/z + O(z^{-2})$. By \eqref{5.14}, we get \begin{equation} \lb{5.18} G(z;0;a,a) = \f{1}{\sqrt{z^2 -4a^2}\,} \end{equation} $\lim_{\veps\downarrow 0} \Ima G(x+i\veps;0;a,a)$ is only nonzero for $x\in [-2a,2a]$, so the inversion formula \begin{equation} \lb{5.19} d\rho_{b;a,c}(x) = -\f{1}{\pi} \lim_{\veps\downarrow 0} \, \Ima G(x+i\veps; b;a,c)\, dx \end{equation} implies the well-known \begin{equation} \lb{5.20} d\rho_{b;a,a}=\f{1}{\pi\sqrt{4a^2 - (x-b)^2}\,} \, \chi_{[b-2a, b+2a]}(x)\, dx \end{equation} consistent with Nevai's conjecture. Here is the main result of this section: \begin{theorem}\lb{T5.4} Define \begin{equation} \lb{5.21} I(z;a,c) = (c^2 - a^2 + z^2)^2 -4z^2 c^2 \end{equation} Then \begin{alignat}{2} &\text{\rm{(i)}} \qquad f(z;0;a,c) &&= \f{c^2 - a^2 + z^2 -\sqrt{I(z;a,c)}\,}{2c^2 z} \lb{5.22} \\ &\text{\rm{(ii)}} \qquad G(z;0;a,c) &&= \f{2z}{\sqrt{I(z;a,c)} + \sqrt{I(z;c,a)}\,} \lb{5.23} \\ &\text{\rm{(iii)}} \qquad d\rho_{b;a,c}(x) &&= [\chi_{J_1}(x) + \chi_{J_2}(x)] w(x)\, dx \lb{5.24} \end{alignat} where \begin{equation} \lb{5.25} J_1 = [b+\abs{c-a}, b+a+c] \qquad J_2 = [b-a-c, b-\abs{c-a}] \end{equation} and for $x\in J_1 \cup J_2$, \begin{equation} \lb{5.26} w(x) = \f{2\abs{x-b}}{\pi}\, \bigl[\sqrt{-I(x-b;a,c)} + \sqrt{-I(x-b;c,a)}\,\bigr]^{-1} \end{equation} \end{theorem} \begin{proof} (i) Iterating \eqref{5.15x} once, we get a quadratic equation for $f(z;0;a,c)$ whose solution (the one with $f=1/z + O(1/z^2)$ at infinity) is \eqref{5.22}. \smallskip (ii) Follows from \eqref{5.14} and \eqref{5.22} (note that for $a=c$, since $I(z;a,a)=z^2(z^2-4a^2)$, this reduces to \eqref{5.18}). \smallskip (iii) $I(z;a,c)=0$ if and only if $c^2 - a^2 + z^2 =\pm 2zc$ if and only if $(z\pm c)^2 =a^2$ if and only if $z=\pm a \pm c$ (independent $\pm$), which shows that (for $b=0$) $\sqrt{I}$ has a branch cut on $J_1 \cup J_2$ and $G$ is purely imaginary there. \eqref{5.19} completes the proof. \end{proof} We note there is another proof of the theorem using Floquet theory and the theory of periodic whole-line Jacobi matrices. \medskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{thebibliography}{100}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\smallskip
%
\bi{AkhB} N. I. Akhiezer, \textit{The Classical Moment Problem and Some Related Questions in Analysis}, Hafner, New York, 1965; Russian original, 1961.
%
\bi{CMV} M. J. Cantero, L. Moral, and L. Vel\'azquez, {\it Five-diagonal matrices and zeros of orthogonal polynomials on the unit circle}, Linear Algebra Appl. {\bf 362} (2003), 29--56.
%
\bi{Gelf} A. O. Gel'fond, \textit{Calculus of Finite Differences}, International Monographs on Advanced Mathematics and Physics, Hindustan Publishing, Delhi, 1971.
%
\bi{GBk} Ya. L.
Geronimus, \textit{Orthogonal Polynomials: Estimates, Asymptotic Formulas, and Series of Polynomials Orthogonal on the Unit Circle and on an Interval}, Consultants Bureau, New York, 1961.
%
\bi{GS} F. Gesztesy and B. Simon, {\it $m$-functions and inverse spectral analysis for finite and semi-infinite Jacobi matrices}, J. d'Analyse Math. {\bf 73} (1997), 267--297.
%
\bi{Kh2000} S. Khrushchev, {\it Schur's algorithm, orthogonal polynomials, and convergence of Wall's continued fractions in $L^2 (\bbT)$}, J. Approx. Theory {\bf 108} (2001), 161--248.
%
\bi{Khr} S. Khrushchev, {\it Classification theorems for general orthogonal polynomials on the unit circle}, J. Approx. Theory {\bf 116} (2002), 268--342.
%
\bi{KS} R. Killip and B. Simon, {\it Sum rules for Jacobi matrices and their applications to spectral theory}, to appear in Ann. of Math.
%
\bi{Nev79} P. Nevai, {\it Orthogonal polynomials}, Mem. Amer. Math. Soc. {\bf 18} (1979), no. 213, 185 pp.
%
\bi{Nev89} P. Nevai, {\it Research problems in orthogonal polynomials}, in ``Approximation Theory VI, Vol. II'' (College Station, TX, 1989), pp. 449--489, Academic Press, Boston, MA, 1989.
%
\bi{Poin} H. Poincar\'e, {\it Sur les \'equations lin\'eaires aux diff\'erentielles ordinaires et aux diff\'erences finies}, Amer. J. Math. {\bf 7} (1885), 203--258.
%
\bi{S270} B. Simon, {\it The classical moment problem as a self-adjoint finite difference operator}, Adv. in Math. {\bf 137} (1998), 82--203.
%
\bi{Sib} B. Simon, \textit{Orthogonal Polynomials on the Unit Circle}, AMS Colloquium Publications Series, expected 2004.
%
\bi{Szb} G. Szeg\H{o}, \textit{Orthogonal Polynomials}, Amer. Math. Soc. Colloq. Publ., Vol. 23, American Mathematical Society, Providence, R.I., 1939; 3rd edition, 1967.
\end{thebibliography}
\end{document}
---------------0307211319805--