%\documentclass{gen-j-l}
\documentclass[12pt]{article}
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amssymb}
\usepackage{latexsym}
\usepackage[latin1]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{boxedminipage}
\usepackage{multicol}
\usepackage{graphicx}
\usepackage{euscript}
\usepackage{pstricks}
\usepackage{pst-node}
\newcommand{\discr}{\operatorname{discr}}
\newcommand{\tr}{\operatorname{tr}}
\newtheorem{thm}{Theorem}[section]
\newtheorem{Proposition}[thm]{Proposition}
\theoremstyle{definition}
\newtheorem{definition}[thm]{Definition}
\newtheorem{example}[thm]{Example}
\newtheorem{xca}[thm]{Exercise}
\theoremstyle{remark}
\newtheorem{remark}[thm]{Remark}
\numberwithin{equation}{section}
% Absolute value notation
\newcommand{\abs}[1]{\lvert#1\rvert}
% Blank box placeholder for figures (to avoid requiring any
% particular graphics capabilities for printing this document).
\newcommand{\blankbox}[2]{%
  \parbox{\columnwidth}{\centering
    % Set fboxsep to 0 so that the actual size of the box will match the
    % given measurements more closely.
    \setlength{\fboxsep}{0pt}%
    \fbox{\raisebox{0pt}[#2]{\hspace{#1}}}%
  }%
}
\title{Jacobi matrices arising in the spectral phase transition phenomena: asymptotics of generalized eigenvectors in the ``double root'' case}
\author{J.Janas, S.Naboko and E.Sheronova}
%\thanks{J.J. and S.N. were supported by INTAS 05-1000008-7883}
%\address{Institute of Mathematics, PAN, ul.Sw.Tomasza 30, 31-027 Krakow, POLAND}
%\email{najanas@cyf-kr.edu.pl}
%\author{S.Naboko}
%\thanks{S.N. was supported in part by the grant RFBR-06-01-00249.}
%\address{Department of Math.Physics, Institute of Physics (NIIF),
%St.Petersburg State University, Ulianovskaia 1, 198504 St.Petergof,
%St.Petersburg, RUSSIA}
%\email{naboko@snoopy.phys.spbu.ru}
%\author{E.Sheronova}
%\address{Structural Dynamics and Coupled Systems Department, ONERA, BP 72-29, avenue de la Division Leclerc, 92322 Ch\^atillon Cedex, FRANCE}
%\email{sheronov@onera.fr}
%\date{today}
%\keywords{Generalized eigenvector, Jacobi matrix, asymptotic
%behavior of solutions, spectrum, subordinacy theory, WKB
%asymptotics}
%\subjclass{[2000] 39A10, 47B25}
\begin{document}
\date{}
\maketitle
\begin{abstract}
This paper is concerned with the asymptotic behavior of generalized eigenvectors of a class of Hermitian Jacobi matrices $J$ in the critical case. The latter means that the ratio $q_n/\lambda_n$ formed by the diagonal entries $q_n$ of $J$ and its subdiagonal entries $\lambda_n$ has the limit $\pm2$. In other words, the limit transfer matrix as $n\to\infty$ contains a Jordan box (a double root in terms of Birkhoff-Adams theory). This is the situation where the asymptotic Levinson theorem does not apply and one has to elaborate more specialized methods of asymptotic analysis.
It should be mentioned that the critical case corresponds exactly to the spectral phase transition phenomena, where the spectral structure changes dramatically (from discrete spectrum to purely absolutely continuous spectrum) whenever the parameters in the matrix entries cross singular surfaces \cite{JN02}. The Jordan box is the limit transfer matrix for all values of the spectral parameter $\lambda$ simultaneously; it describes the ``moment'' of the spectral phase transition. An application to the case $\lambda_n = n^{\alpha}(1+r_n)$, $q_n=-2n^{\alpha}(1+p_n)$ with small perturbations $r_n$, $p_n$ and $\alpha\in(0,1]$ is studied.
\end{abstract}
\section{Introduction}
In the last ten years several papers have appeared devoted to the spectral analysis of unbounded, self-adjoint Jacobi matrices \cite{BdMNS06}, \cite{DN06}, \cite{D92}, \cite{D04}, \cite{DP02}, \cite{DJMP04}, \cite{DP02a}, \cite{GBV04}, \cite{JM03}, \cite{JN01}, \cite{JN02}, \cite{JN04}, \cite{JNS04}, \cite{JL99}, \cite{LNS03}, \cite{M03}, \cite{S07}, \cite{SW05}, \cite{Si07}, \cite{St94}. Given a sequence $\{\lambda_n\}$ of positive numbers and a sequence $\{q_n\}$ of real numbers, the Jacobi operator $J$ is defined in $l^2=l^2(\mathbb{N})$ by
\begin{equation}\label{PervIntrZero} \begin{split} (Ju)_n &= \lambda_{n-1}u_{n-1}+q_nu_n+\lambda_n u_{n+1},\qquad n>1\\ (Ju)_1 &= q_1u_1+\lambda_1 u_2.\\ \end{split} \end{equation}
More precisely, on its maximal domain $J$ is always symmetric and sometimes selfadjoint. In the case when the limits (as $n$ tends to infinity) of $q_n/\lambda_n$ and $\lambda_{n-1}/\lambda_n$ exist, or when the sequences $\{\lambda_n\}$ and (or) $\{q_n\}$ are periodically perturbed, the spectral analysis of $J$ has been partially carried out in \cite{JM03}, \cite{JN04}, \cite{JNS04}. This analysis was based on the Gilbert-Pearson subordinacy theory \cite{GP87} (in the discrete version due to Khan and Pearson \cite{KP92}), combined with various discrete versions of the Levinson theorem, see \cite{E89}, \cite{JM03}, \cite{R99}, \cite{S04}, \cite{S07}. An especially interesting situation appears if $\lim\limits_n q_n/\lambda_n =\pm 2$, which corresponds to the phase transition phenomena. If $|\lim\limits_n q_n/\lambda_n| < 2$, then (under some regularity assumptions on $q_n$, $\lambda_n$) the spectrum of $J$ is absolutely continuous, and when $\lim\limits_n |q_n|/\lambda_n > 2$ the spectrum of $J$ is discrete \cite{JN02}. In this work we consider a special class of sequences $\{\lambda_n\}$ and $\{q_n\}$ given by
\begin{equation}\label{dvaIntr} \lambda_n=n^{\alpha}(1+x_n),\qquad q_n=-2n^{\alpha}(1+y_n), \end{equation}
where $\alpha\in(0,1]$ and $n^{\alpha/2}x_n$, $n^{\alpha/2}y_n$ belong to $l^1$, the standard space of summable sequences. This corresponds to the critical situation (double root) where ${\lim\limits_{n\to\infty}q_n/\lambda_n=-2}$ (the most difficult case for investigation). Generally speaking, formulae (\ref{dvaIntr}) represent a very special case of the critical situation mentioned above. However, the aim of our paper is to demonstrate a new technique for treating the spectral phase transition point, and the Jacobi matrix (\ref{dvaIntr}) is one of the simplest non-trivial models satisfying this aim. Note that for $q_n=2n^{\alpha}(1+y_n)$ one can make a change of variables (a diagonal unitary transformation) reducing this case to the situation described above. The critical case corresponds to the situation where the limit of the transfer matrix for the recurrence equation (\ref{PervIntrZero}) is given by a Jordan box.
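
For the reader's orientation we include a minimal numerical illustration of this degeneration (a sketch only, not part of the proofs; it assumes the unperturbed entries $\lambda_n=n^{\alpha}$, $q_n=-2n^{\alpha}$ with the example values $\alpha=0.6$, $\lambda=1$, and the standard transfer matrices of the three-term recurrence, written out explicitly in Section 2):
\begin{verbatim}
# Sketch: with the unperturbed entries lambda_n = n^alpha, q_n = -2 n^alpha,
# the transfer matrices B_n(lambda) of the three-term recurrence tend to
# [[0,1],[-1,2]], whose double eigenvalue 1 signals the Jordan box,
# and this happens for every value of the spectral parameter lambda.
import numpy as np

alpha, lam = 0.6, 1.0            # example exponent and spectral parameter

def B(n):
    lam_n, lam_nm1, q_n = n**alpha, (n - 1)**alpha, -2.0 * n**alpha
    return np.array([[0.0, 1.0],
                     [-lam_nm1 / lam_n, (lam - q_n) / lam_n]])

for n in (10, 1000, 100000):
    print(n, np.round(B(n), 5).tolist(), np.round(np.linalg.eigvals(B(n)), 4))
# As n grows both eigenvalues of B_n(lambda) approach the common value 1.
\end{verbatim}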
In other words, the Jordan box means the appearance of an irregular singular point with a double root of the characteristic equation related to (\ref{PervIntrZero}), in the sense of Birkhoff-Adams theory \cite{A28}, \cite{B11}. Recall that the case $\lambda_n=n+a$, $q_n=-2n$, related to birth and death processes \cite{KM58}, was already studied in \cite{JN01}. However, even in that paper the spectral analysis of $J$ was carried out only partially. The reason was the difficulty of studying the asymptotic behavior of generalized eigenvectors of $J$, i.e., the solutions of the infinite system of equations
\begin{equation}\label{pervIntr} (Ju)_n=\lambda u_n, \qquad n=2,3,\ldots \end{equation}
for $\lambda> -1$. Later this problem was solved in the unpublished work \cite{S01}. It turns out that the analysis of the asymptotic behavior of solutions of (\ref{pervIntr}) depends on the sign of $\lambda$. Namely, for $\lambda<0$ a so-called ``Ansatz'' idea is used, while for $\lambda>0$ a combination of the WKB approach with a detailed analysis of products of the transfer matrices is applied (see Section 2 for details). In order to avoid some cumbersome formulae we shall present our results only for $\alpha\in(\frac{1}{2},\frac{2}{3})$. However, the methods we propose work for arbitrary $\alpha\in(0,1]$ (see the comments below). Note that the case $\alpha=1$ can easily be deduced from the Birkhoff-Adams theory \cite{E99}. Unfortunately, this theory does not apply for $\alpha<1$. We think that the ideas used in this work can also be efficient for other sequences $\{\lambda_n\}$, $\{q_n\}$ corresponding to the critical case $\lim\limits_n q_n/\lambda_n=\pm2$. Finally, we mention yet another approach to the critical case, given by the first named author in \cite{JJ}. That approach relies on some ideas found by W.Kelley in \cite{K94}.\\
The paper consists of four further sections. {\bf Section 2} contains the necessary notions and notation and explains the WKB approach to the asymptotic analysis of solutions of (\ref{pervIntr}). {\bf Section 3} presents asymptotic formulae for a basis of solutions of (\ref{pervIntr}) with $\lambda>0$ (the hyperbolic case). In turn, {\bf Section 4} does the same for $\lambda<0$ (the elliptic case). The {\bf last Section} contains applications of the asymptotic results to the spectral analysis of $J$.
\section{Preliminaries}
First note that the operator $J$ defined by the sequences $\lambda_n=n^{\alpha}(1+x_n)$ and $q_n=-2n^{\alpha}(1+y_n)$, $\alpha\in(0,1]$, is self-adjoint provided it acts on the maximal domain $D(J):=\{f\in l^2 :\{(Jf)_n\}\in l^2\}$. This follows from the Carleman condition $\sum_k\frac{1}{\lambda_k}=+\infty$, \cite{YUM65}. As usual we rewrite the system (\ref{pervIntr}) in the form
\begin{equation}\label{vtorSect2} \vec u_{n+1}=B_n(\lambda)\vec u_n, \end{equation}
where $ \vec u_n=\begin{pmatrix} u_{n-1}\\u_n\end{pmatrix}$ and $B_n(\lambda)=\begin{pmatrix}0&1\\-\frac{\lambda_{n-1}}{\lambda_n}&\frac{\lambda-q_n}{\lambda_n}\end{pmatrix}$. The matrix $B_n(\lambda)$ is called the transfer matrix of $J$. Therefore the asymptotic behavior of solutions of (\ref{pervIntr}) is equivalent to the asymptotic behavior of arbitrarily long products of the $B_k(\lambda)$'s. This idea has frequently been used in many works on spectral properties of Jacobi operators. However, we start with another approach to the problem of asymptotic behavior. Our approach is based on applying the WKB asymptotic formula for solutions of a suitable differential equation related to (\ref{pervIntr}).
It should be mentioned that the idea of replacing a difference relation by a suitable continuous differential equation has already been used in the theory of orthogonal polynomials. We describe below how to find this differential equation. Dividing (\ref{pervIntr}) by $n^{\alpha}$ and disregarding the lower order terms
$$ \frac{x_nu_{n+1}}{n^{\alpha}}, \quad \frac{x_{n-1}u_{n-1}}{n^{\alpha}}, \quad \frac{y_n u_n}{n^{\alpha}} \quad \text{and} \quad O\left(\frac{1}{n^2}\right)u_{n-1}, $$
we rewrite (\ref{pervIntr}) approximately as
\begin{equation}\label{tretSect2} u_{n+1}+u_{n-1}-2u_n-\frac{\alpha}{n}u_{n-1}-\frac{\lambda}{n^{\alpha}}u_n\approx 0 \end{equation}
for large $n$. Denoting $\Delta f(n):=f(n+1)-f(n)$ and replacing $u_n$ by a continuous function $u(n)$ for $n\in \mathbb{R}^+$ we have
$$\Delta^2u(n-1)+\frac{\alpha}{n}\Delta u(n-1)-\left(\frac{\lambda}{n^{\alpha}}+\frac{\alpha}{n}\right)u(n)\approx 0.$$
Replacing $\Delta^2u(n-1)$ and $\Delta u(n-1)$ by $u''(n)$ and $u'(n)$ respectively, we obtain
\begin{equation}\label{chetSect2} u''(n)+\frac{\alpha}{n}u'(n)-\left(\frac{\lambda}{n^{\alpha}}+\frac{\alpha}{n}\right)u(n)\approx 0 \end{equation}
for $n\gg 1$. The change of $u(n)$ to $n^{-\alpha/2}v(n)$ allows us to rewrite (\ref{chetSect2}) as
$$ v''(n)+\left[\frac{\alpha}{2}\left(\frac{\alpha}{2}+1\right)\frac{1}{n^2} -\frac{\alpha^2}{2n^2} -\frac{\alpha}{n} -\frac{\lambda}{n^{\alpha}}\right]v(n)\approx 0. $$
Finally, the above heuristics leads to the equation
\begin{equation}\label{pjatSect2} v''(n)-\left(\frac{\lambda}{n^{\alpha}}+\frac{\alpha}{n}\right)v(n)=0. \end{equation}
Denote $Q(n):=\frac{\lambda}{n^{\alpha}}+\frac{\alpha}{n}$. Applying to (\ref{pjatSect2}) the standard WKB formula \cite{O97} we find that (\ref{pjatSect2}) has a basis of linearly independent solutions $v_{\pm}(\cdot)$ with asymptotics given by
$$v_{\pm}(n)\sim Q(n)^{-\frac{1}{4}}\exp\left[\pm\int_1^n Q(t)^{\frac{1}{2}}dt\right], \quad n\to\infty.$$
Therefore one could make a reasonable ``Ansatz'' for the asymptotic formula for solutions of (\ref{pervIntr}),
\begin{multline}\label{shestSect2} u_n=n^{-\frac{\alpha}{2}}v(n)\sim n^{ -\alpha/4}\exp\left[\pm\int_1^n \left(\frac{\lambda}{t^{\alpha}} +\frac{\alpha}{t}\right)^{1/2}dt\right] \sim \\ \sim n^{ -\alpha/4}\exp\left(\pm\left[a_1n^{1-\alpha/2} +\lambda^{-\frac{1}{2}}n^{\alpha/2} -\int_1^n \left(a_2t^{\frac{3\alpha}{2}-2} +O\left(t^{\frac{5\alpha}{2}-3}\right)\right)dt\right]\right) \end{multline}
where $a_1=\sqrt{\lambda}\left(1-\frac{\alpha}{2}\right)^{-1}$, $a_2=\alpha^2/(8\lambda^{3/2})$. In particular, for $\alpha<2/3$ both terms $a_2t^{\frac{3\alpha}{2}-2}$ and $O\left(t^{\frac{5\alpha}{2}-3}\right)$ are integrable over $(t_0, +\infty)$ for any $t_0>0$. This fact is one of the reasons for imposing the additional condition $\alpha<2/3$. As we shall see below (Section 3, Theorem 3.2), formula (\ref{shestSect2}) is not correct. This phenomenon is due to the inaccuracy of the approximation of the difference equation by the continuous differential equation. Nevertheless, the essential part of (\ref{shestSect2}) remains valid! For example, the leading term of (\ref{shestSect2}), given by
\begin{equation}\label{semSect2} n^{ -\alpha/4}\exp\left(\pm\sqrt{\lambda}\left(1-\frac{\alpha}{2}\right)^{-1}n^{1-\alpha/2}\right), \end{equation}
is correct. As mentioned above, if $\alpha=1$ the asymptotic behavior of solutions of (\ref{pervIntr}) can be deduced by applying the classical result of Birkhoff-Adams (\cite{E99}, Th.8.36).
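
For orientation, a minimal numerical sketch (assuming the unperturbed entries $x_n=y_n=0$ and the example values $\alpha=0.6$, $\lambda=1$) confirming the leading exponential rate in (\ref{semSect2}):
\begin{verbatim}
# Sketch: the generic solution of
#   lambda_{n-1} u_{n-1} + q_n u_n + lambda_n u_{n+1} = lambda u_n
# with lambda_n = n^alpha, q_n = -2 n^alpha grows at the leading WKB rate
# n^{-alpha/4} exp(rho n^{1-alpha/2}),   rho = sqrt(lambda)/(1-alpha/2).
import numpy as np

alpha, lam = 0.6, 1.0
beta = 1.0 - alpha / 2.0
rho = np.sqrt(lam) / beta

def step(n, u_prev, u_cur):
    lam_n, lam_nm1, q_n = n**alpha, (n - 1)**alpha, -2.0 * n**alpha
    return ((lam - q_n) * u_cur - lam_nm1 * u_prev) / lam_n

u_prev, u_cur, log_scale = 1.0, 1.0, 0.0       # generic initial data
for n in range(2, 200001):
    u_prev, u_cur = u_cur, step(n, u_prev, u_cur)
    if abs(u_cur) > 1e100:                     # renormalize to avoid overflow
        s = abs(u_cur)
        u_prev, u_cur = u_prev / s, u_cur / s
        log_scale += np.log(s)
    if n in (10**3, 10**4, 10**5, 2 * 10**5):
        ratio = (log_scale + np.log(abs(u_cur))) / (rho * n**beta)
        print(n, round(ratio, 4))
# The printed ratios slowly approach 1, i.e. log|u_n| ~ rho n^{1-alpha/2}.
\end{verbatim}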
Deducing the asymptotics from the Birkhoff-Adams theory is no longer possible for $\alpha\in(0,1)$, as one can easily verify by checking the assumptions of Th.8.36 in \cite{E99}.
\section{Case of positive $\lambda$ and $\alpha\in(\frac{1}{2},\frac{2}{3})$: hyperbolic situation.}
In what follows, to avoid tedious notation, $l^1$ will also denote the space of vector or matrix sequences whose norms form summable sequences; we hope that this will not lead to misunderstanding. Since
\begin{equation}{\label{3.1}} \lambda_n=n^{\alpha}(1+x_n),\qquad q_n=-2n^{\alpha}(1+y_n), \end{equation}
where $n^{\alpha/2}x_n$ and $n^{\alpha/2}y_n$ belong to $l^1$, it follows that
\begin{equation*} \frac{\lambda_{n-1}}{\lambda_{n}}=1-\frac{\alpha}{n}+r_n^{(1)}, \qquad \frac{\lambda}{\lambda_n}=\frac{\lambda}{n^{\alpha}}+r_n^{(2)}, \end{equation*}
and
\begin{equation*} \frac{q_n}{\lambda_n}=-2+r_n^{(3)}, \qquad \text{where}\qquad \{r_n^{(i)}n^{\alpha/2}\}\in l^1, \qquad i=1,2,3. \end{equation*}
Therefore the transfer matrix
\begin{equation}{\label{3.2}} B_n(\lambda)=\begin{pmatrix} 0&1\\-1&2 \end{pmatrix}+\frac{1}{n^{\alpha}}\begin{pmatrix}0&0\\\phi_n&\lambda \end{pmatrix}+R_n \end{equation}
with $\phi_n=\alpha n^{\alpha -1}$ and $\|R_n\|=O\left(|x_{n-1}|+|x_n|+|y_n|\right)+O\left(\frac{1}{n^2}\right)$, as $n\to\infty$. The leading term of $B_n(\lambda)$,\\
\begin{center}the matrix \,$\begin{pmatrix} 0&1\\-1&2 \end{pmatrix}$,\, is similar to the matrix \,$\begin{pmatrix} 1&1\\0&1 \end{pmatrix}$.\\ \end{center}
The Jordan box appearing in the leading term ($\lim\limits_{n\to\infty}B_n(\lambda)$) is the main difficulty of the analysis. As we tried to explain in the {\bf Introduction}, the following sequences are essential below:
\begin{equation}{\label{3.3}} \begin{split} z_k &:= k^{-\frac{\alpha}{4}}\exp(\rho k^{\beta}) \\ \tilde z_k &:= k^{-\frac{\alpha}{4}}\exp(-\rho k^{\beta})\\ \end{split} \end{equation}
where $\rho=(1-\frac{\alpha}{2})^{-1}\sqrt{\lambda}$, $\beta=1-\frac{\alpha}{2}$ ($\sqrt{\lambda}>0$). Below we shall find an asymptotic formula for the generalized eigenvectors for the above fixed $\lambda >0$. This formula will be computed in four steps.\\
{\bf Step 1. Introducing the Ansatz}\\
Consider the matrices $S_k$ given by
\begin{equation}{\label{3.4}} S_k= \begin{pmatrix} \tilde z_{k-1}&z_{k-1}\\ \tilde z_k&z_k \end{pmatrix} \end{equation}
The matrix $S_k$ appears naturally for the system (\ref{pervIntr}); this will become clear later. It is obvious that
$$ \prod_{k=2}^n B_k(\lambda)=S_{n+1}\left\{ \prod_{k=2}^n (S_{k+1}^{-1}B_k(\lambda)S_k)\right\}S_2^{-1}. $$
This allows us to reduce the product of the transfer matrices $B_k(\lambda)$ to the product of the matrices $(S_{k+1}^{-1}B_k(\lambda)S_k)$, which may be much simpler due to the proper choice of the matrices $S_k$ in (\ref{3.4}).
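
Before stating the precise expansion we give a minimal numerical check of this reduction (a sketch, assuming the unperturbed entries $x_n=y_n=0$ and the example values $\alpha=0.6$, $\lambda=1$): the diagonal entries of $S_{k+1}^{-1}B_k(\lambda)S_k$ tend to $1$, the off-diagonal ones carry the rapidly growing (respectively decaying) factors $\psi_k^{\pm2}$, and after removing those factors all four corrections are of the size $a_k$ described in Proposition \ref{theorem3.1} and Step 2 below.
\begin{verbatim}
# Sketch: with z_k, ztilde_k from (3.3) and S_k from (3.4), the entries of
# S_{k+1}^{-1} B_k(lambda) S_k have the structure of Proposition 3.1.
# Unperturbed entries lambda_n = n^alpha, q_n = -2 n^alpha are assumed.
import numpy as np

alpha, lam = 0.6, 1.0
beta = 1.0 - alpha / 2.0
rho = np.sqrt(lam) / beta

z    = lambda k: k**(-alpha / 4.0) * np.exp( rho * k**beta)   # z_k
zt   = lambda k: k**(-alpha / 4.0) * np.exp(-rho * k**beta)   # tilde z_k
psi2 = lambda k: np.exp(2.0 * rho * k**beta)                  # psi_k^2

def B(n):                             # transfer matrix, unperturbed entries
    lam_n, lam_nm1, q_n = n**alpha, (n - 1)**alpha, -2.0 * n**alpha
    return np.array([[0.0, 1.0], [-lam_nm1 / lam_n, (lam - q_n) / lam_n]])

def S(k):                             # the matrix (3.4)
    return np.array([[zt(k - 1), z(k - 1)], [zt(k), z(k)]])

def inv2(M):                          # explicit 2x2 inverse (avoids bad scaling)
    d = M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]
    return np.array([[M[1, 1], -M[0, 1]], [-M[1, 0], M[0, 0]]]) / d

for k in (10, 100, 1000):
    A = inv2(S(k + 1)) @ B(k) @ S(k)
    a_k = -alpha / (2.0 * np.sqrt(lam)) * k**(alpha / 2.0 - 1.0) \
          + lam**1.5 / 24.0 * k**(-1.5 * alpha)
    print(k, round(A[0, 0] - 1.0, 5), round(A[0, 1] / psi2(k), 5),
          round(A[1, 0] * psi2(k), 5), round(A[1, 1] - 1.0, 5),
          "a_k =", round(a_k, 5))
# As k grows, the four rescaled corrections approach a_k, a_k, -a_k, -a_k,
# in accordance with Proposition 3.1.
\end{verbatim}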
\begin{Proposition}\label{theorem3.1} In the above notation we have, for $\alpha\in(\frac{1}{2},\frac{2}{3})$:
\begin{equation*} S^{-1}_{k+1}B_k(\lambda)S_k=(A_{ij}(k))+R_k^{(1)}, \end{equation*}
where $ R_k^{(1)}= S^{-1}_{k+1} R_k S_k $, $\psi_k=e^{\rho k^{\beta}}$, and for $k$ large enough the matrix elements $A_{ij}(k)$ satisfy the relations
\begin{equation*} \begin{split} A_{11}(k) &= 1-\frac{\alpha}{2\sqrt{\lambda}}k^{\frac{\alpha}{2}-1} +\frac{(\sqrt{\lambda})^3}{4!}k^{-\frac{3\alpha}{2}}+O\left(k^{ -\frac{2+\alpha}{2}}\right)\\ A_{12}(k) &= \psi^2_k\left(-\frac{\alpha}{2\sqrt{\lambda}}k^{\frac{\alpha}{2}-1} +\frac{(\sqrt{\lambda})^3}{4!}k^{-\frac{3\alpha}{2}}+O\left(k^{ -\frac{2+\alpha}{2}}\right)\right)\\ A_{21}(k) &= \psi^{-2}_k\left(\frac{\alpha}{2\sqrt{\lambda}}k^{\frac{\alpha}{2}-1} -\frac{(\sqrt{\lambda})^3}{4!}k^{-\frac{3\alpha}{2}}+O\left(k^{ -\frac{2+\alpha}{2}}\right)\right)\\ A_{22}(k) &= 1+\frac{\alpha}{2\sqrt{\lambda}}k^{\frac{\alpha}{2}-1} -\frac{(\sqrt{\lambda})^3}{4!}k^{-\frac{3\alpha}{2}}+O\left(k^{ -\frac{2+\alpha}{2}}\right)\\ \end{split} \end{equation*}
\end{Proposition}
\begin{proof} Using (\ref{3.3}) we have
\begin{multline*} S^{-1}_{k+1}B_k(\lambda)S_k = (\det S_{k+1})^{-1} \begin{pmatrix}z_{k+1}&-z_k\\ -\tilde z_{k+1}&\tilde z_k\end{pmatrix} \left[\begin{pmatrix}0&1\\-1&2\end{pmatrix} +\right.\\+\left.\frac{1}{k^{\alpha}}\begin{pmatrix}0&0\\ \phi_k&\lambda\end{pmatrix} +R_k\right] \begin{pmatrix}\tilde z_{k-1}&z_{k-1}\\ \tilde z_k&z_k\end{pmatrix} = (\det S_{k+1}\cdot k^{\alpha})^{-1}\begin{pmatrix}z_{k+1}&-z_k\\ -\tilde z_{k+1}&\tilde z_k\end{pmatrix} \\ \left[k^{\alpha} \begin{pmatrix}0&1\\-1&2\end{pmatrix} + \begin{pmatrix}0&0\\ \phi_k&\lambda\end{pmatrix} + k^{\alpha} R_k\right]\cdot \begin{pmatrix}\tilde z_{k-1}&z_{k-1}\\ \tilde z_k&z_k\end{pmatrix} \end{multline*}
Denote
$$ (a_{ij}(k)) := \begin{pmatrix}z_{k+1}&-z_k\\ -\tilde z_{k+1}&\tilde z_k\end{pmatrix}\begin{pmatrix}0&1\\-1&2\end{pmatrix} \begin{pmatrix}\tilde z_{k-1}&z_{k-1}\\ \tilde z_k&z_k\end{pmatrix} $$
Then a tedious but straightforward computation, based on the expansion $e^x=\sum\limits_{j=0}^5\frac{x^j}{j!}+O(x^6)$ up to the fifth order term, shows that
\begin{multline*} a_{11}(k)=k^{-\alpha}\left[2\sqrt{\lambda}+\lambda k^{-\frac{\alpha}{2}} +\frac{(\sqrt{\lambda})^3}{3}k^{-\alpha} +\frac{\lambda^2}{12}k^{-\frac{3\alpha}{2}} +\right.\\+\left.\frac{2(\sqrt{\lambda})^5}{5!}k^{-2\alpha} +O\left(k^{-\frac{2+\alpha}{2}}\right)\right] \end{multline*}
$$ a_{12}(k)=k^{-\alpha}\psi^2_k\left[\lambda k^{-\frac{\alpha}{2}} +\frac{\lambda^2}{12}k^{-\frac{3\alpha}{2}} -\frac{\alpha\sqrt{\lambda}}{k}+O\left(k^{-\frac{2+\alpha}{2}}\right)\right] $$
$$ a_{21}(k)=k^{-\alpha}\psi^{-2}_k\left[-\lambda k^{-\frac{\alpha}{2}} -\frac{\lambda^2}{12}k^{-\frac{3\alpha}{2}} -\frac{\alpha\sqrt{\lambda}}{k}+O\left(k^{-\frac{2+\alpha}{2}}\right)\right] $$
\begin{multline*} a_{22}(k)=k^{-\alpha}\left[2\sqrt{\lambda}-\lambda k^{-\frac{\alpha}{2}} +\frac{(\sqrt{\lambda})^3}{3}k^{-\alpha} -\frac{\lambda^2}{12}k^{-\frac{3\alpha}{2}} +\right.\\+\left.\frac{2(\sqrt{\lambda})^5}{5!}k^{-2\alpha} +O\left(k^{-\frac{2+\alpha}{2}}\right)\right].
\end{multline*}
Again, a direct calculation leads to the formula
\begin{multline}{\label{3.5}} (\det S_{k+1}\cdot k^{\alpha})^{-1}=\\= \frac{1}{2\sqrt{\lambda}}\left[1+\frac{\alpha}{2k} -\frac{\lambda}{3!}k^{-\alpha}-\frac{\lambda^2}{5!}k^{-2\alpha} +\frac{\lambda^2}{36}k^{-2\alpha}\right]+O(k^{-\frac{2+\alpha}{2}}) \end{multline}
Let
$$ (b_{ij}(k)):= \begin{pmatrix}z_{k+1}&-z_k\\ -\tilde z_{k+1}&\tilde z_k\end{pmatrix}\begin{pmatrix}0&0\\ \phi_k&\lambda\end{pmatrix} \begin{pmatrix}\tilde z_{k-1}&z_{k-1}\\ \tilde z_k&z_k\end{pmatrix} $$
We have (using the definitions of $z_k$, $\tilde z_k$)
$$ b_{11}(k)=-\lambda z_k\tilde z_k-z_k\tilde z_{k-1}\phi_k =-\lambda k^{-\frac{\alpha}{2}} -\alpha k^{\frac{\alpha}{2}-1}-\frac{\alpha\sqrt{\lambda}}{k} +O\left(\frac{1}{k^{1+\frac{\alpha}{2}}}\right)$$
$$ b_{12}(k)=-\lambda z_k^2-z_k z_{k-1}\phi_k =-\psi^2_k\left[\lambda k^{-\frac{\alpha}{2}} +\alpha k^{\frac{\alpha}{2}-1}-\frac{\alpha\sqrt{\lambda}}{k} +O\left(\frac{1}{k^{1+\frac{\alpha}{2}}}\right)\right]$$
$$ b_{21}(k)=\psi^{-2}_k\left[\lambda k^{-\frac{\alpha}{2}} +\alpha k^{\frac{\alpha}{2}-1}+\frac{\alpha\sqrt{\lambda}}{k} +O\left(\frac{1}{k^{1+\frac{\alpha}{2}}}\right)\right]$$
$$ b_{22}(k)=\lambda k^{-\frac{\alpha}{2}} +\alpha k^{\frac{\alpha}{2}-1}-\frac{\alpha\sqrt{\lambda}}{k} +O\left(\frac{1}{k^{1+\frac{\alpha}{2}}}\right)$$
Combining the above equalities (for $a_{ij}(k)$ and $b_{ij}(k)$) we obtain
\begin{multline*} k^{\alpha}a_{11}(k)+b_{11}(k)=2\sqrt{\lambda}\left[1+\frac{\lambda}{6}k^{-\alpha} -\frac{\alpha}{2\sqrt{\lambda}}k^{\frac{\alpha}{2}-1} -\frac{\alpha}{2k} +\right.\\+\left.\frac{(\sqrt{\lambda})^3}{4!}k^{-\frac{3\alpha}{2}} +\frac{\lambda^2}{5!}k^{-2\alpha}\right] +O\left(k^{-\frac{2+\alpha}{2}}\right) \end{multline*}
$$ k^{\alpha}a_{12}(k)+b_{12}(k)=\psi^2_k\left[-\alpha k^{\frac{\alpha}{2}-1} +\frac{\lambda^2}{12}k^{-\frac{3\alpha}{2}} +O\left(k^{-\frac{2+\alpha}{2}}\right)\right] $$
$$ k^{\alpha}a_{21}(k)+b_{21}(k)=\psi^{-2}_k\left[\alpha k^{\frac{\alpha}{2}-1} -\frac{\lambda^2}{12}k^{-\frac{3\alpha}{2}} +O\left(k^{-\frac{2+\alpha}{2}}\right)\right] $$
\begin{multline*} k^{\alpha}a_{22}(k)+b_{22}(k)=2\sqrt{\lambda}\left[1+\frac{\lambda}{6}k^{-\alpha} +\frac{\alpha}{2\sqrt{\lambda}}k^{\frac{\alpha}{2}-1} -\frac{\alpha}{2k} -\right.\\-\left.\frac{(\sqrt{\lambda})^3}{4!}k^{-\frac{3\alpha}{2}} +\frac{\lambda^2}{5!}k^{-2\alpha}\right] +O\left(k^{-\frac{2+\alpha}{2}}\right) \end{multline*}
Finally, using (\ref{3.5}) and the above four equalities we verify the thesis of Proposition \ref{theorem3.1}. The proof is complete. \end{proof}
{\bf Step 2. Estimate of the remainder: reducing to $l^1$ error terms.}\\
Observe that $S^{-1}_{k+1}B_k(\lambda)S_k$ has the form
$$ I+\begin{pmatrix}a_k&\psi^2_ka_k\\-\psi_k^{-2}a_k&-a_k\end{pmatrix}+R_k^{(2)}, $$
$$\text{where}\qquad a_k:=-\frac{\alpha}{2\sqrt{\lambda}}k^{\frac{\alpha}{2}-1} +\frac{(\sqrt{\lambda})^3}{4!}k^{-\frac{3\alpha}{2}},$$
$$R_k^{(2)}:=R_k^{(1)}+\begin{pmatrix}O(k^{-1-\frac{\alpha}{2}})&\psi_k^2 O(k^{-1-\frac{\alpha}{2}})\\ \psi_k^{-2} O(k^{-1-\frac{\alpha}{2}})&O(k^{-1-\frac{\alpha}{2}})\end{pmatrix}+O(k^{-\frac{2+\alpha}{2}})$$
as $k\to\infty$.
Write $ R_k^{(1)}:=(r_{ij}^{(1)}(k)).$ \, Due to the definitions we check that
\begin{equation*} \begin{split} r_{11}^{(1)}(k) &= O\left(k^{\frac{\alpha}{2}}\|R_k\|\right)\\ r_{12}^{(1)}(k) &= \psi^2_k O\left(k^{\frac{\alpha}{2}}\|R_k\|\right)\\ r_{21}^{(1)}(k) &= \psi^{-2}_k O\left(k^{\frac{\alpha}{2}}\|R_k\|\right)\\ r_{22}^{(1)}(k) &= O\left(k^{\frac{\alpha}{2}}\|R_k\|\right)\\ \end{split} \end{equation*}
Therefore, for $r_{ij}^{(2)}(k)$ we have:
\begin{equation*} \begin{split} r_{11}^{(2)}(k) &= \left(O\left(k^{-1-\frac{\alpha}{2}}\right) +O\left(k^{\frac{\alpha}{2}}\|R_k\|\right)\right)\in l^1\\ r_{12}^{(2)}(k) &= \psi^2_k \left(O\left(k^{-1-\frac{\alpha}{2}}\right) +O\left(k^{\frac{\alpha}{2}}\|R_k\|\right)\right)\\ r_{21}^{(2)}(k) &= \psi^{-2}_k\left(O\left(k^{-1-\frac{\alpha}{2}}\right)+ O\left(k^{\frac{\alpha}{2}}\|R_k\|\right)\right)\\ r_{22}^{(2)}(k) &= \left(O\left(k^{-1-\frac{\alpha}{2}}\right) +O\left(k^{\frac{\alpha}{2}}\|R_k\|\right)\right)\in l^1\\ \end{split} \end{equation*}
Note that the elements $a_k\psi_k^2$ and possibly $r_{12}^{(2)}(k)$ grow to infinity as $k\to\infty$, and this creates a serious problem in the analysis of the product
$$ \prod_k (S_{k+1}^{-1}B_k(\lambda)S_k). $$
We try to kill this growth by finding suitable diagonal matrices
$$ X_k=\begin{pmatrix}x_k&0\\0&y_k\end{pmatrix}$$
such that $ X_{k+1}^{-1}S_{k+1}^{-1}B_k(\lambda)S_kX_k$ (the reasoning behind the appearance of the matrices $X_{k+1}^{-1}$ and $X_k$ is similar to the one for $S_k$) will be a bounded sequence. The right choice of $X_k$ is given by
$$X_k:=\begin{pmatrix}1&0\\0&\psi_k^{-2}\end{pmatrix}.$$
This choice of $X_k$ is determined by the factorization:
$$ \begin{pmatrix}1&\psi_k^{2}\\-\psi_k^{-2}&-1\end{pmatrix}= \begin{pmatrix}1&0\\0&\psi_k^{-2}\end{pmatrix} \begin{pmatrix}1&1\\-1&-1\end{pmatrix} \begin{pmatrix}1&0\\0&\psi_k^{-2}\end{pmatrix}^{-1}$$
It follows that
\begin{equation}{\label{3.6}} X_{k+1}^{-1}a_k\begin{pmatrix}1&\psi_k^2\\-\psi_k^{-2}&-1\end{pmatrix}X_k= a_k\begin{pmatrix}1&1\\-\left(\frac{\psi_{k+1}}{\psi_k}\right)^2 &-\left(\frac{\psi_{k+1}}{\psi_k}\right)^2\end{pmatrix} \end{equation}
and
$$ R_k^{(3)}:=X_{k+1}^{-1}R_k^{(2)}X_k =\begin{pmatrix}r_{11}^{(2)}(k)& r_{12}^{(2)}(k)\psi_k^{-2} \\ \psi_{k+1}^{2}r_{21}^{(2)}(k)& r_{22}^{(2)}(k)\left(\frac{\psi_{k+1}}{\psi_k}\right)^2\end{pmatrix}\in l^1 $$
In this way we have proved that
\begin{multline}{\label{3.7}} X_{k+1}^{-1}S_{k+1}^{-1}B_k(\lambda)S_kX_k =\\= \begin{pmatrix}1&0\\0& \left(\frac{\psi_{k+1}}{\psi_k}\right)^2\end{pmatrix} +a_k\begin{pmatrix}1&1\\ -\left(\frac{\psi_{k+1}}{\psi_k}\right)^2& -\left(\frac{\psi_{k+1}}{\psi_k}\right)^2 \end{pmatrix} +R_k^{(3)} \end{multline}
where $\|R_k^{(3)}\|\in l^1$.\\
{\bf Step 3. Asymptotics of solutions for an auxiliary linear system.}\\
Denote $p_k:=\left(\frac{\psi_{k+1}}{\psi_k}\right)^2-1$, where $p_k\sim 2\sqrt{\lambda}k^{-\frac{\alpha}{2}}$ as $k\to\infty$. Then we can rewrite (\ref{3.7}) as follows:
\begin{equation}{\label{3.8}} X_{k+1}^{-1}S_{k+1}^{-1}B_k(\lambda)S_kX_k = I+ p_kV(k)+R_k^{(3)}, \end{equation}
where
$$ V(k):=\begin{pmatrix}a_kp_k^{-1}&a_kp_k^{-1}\\ -\left(\frac{\psi_{k+1}}{\psi_k}\right)^2a_kp_k^{-1}& -\left(\frac{\psi_{k+1}}{\psi_k}\right)^2a_kp_k^{-1}+1\end{pmatrix}.$$
Note that $a_kp_k^{-1}\sim\frac{\alpha}{4\lambda}k^{\alpha-1}$, $k\to\infty$.
Therefore, the original system of equations can be written as:
\begin{equation}\label{++} \vec u(n+1)= S_{n+1}X_{n+1}\left\{\prod_{k=2}^n\left(I+p_kV(k)+R_k^{(3)}\right)\right\}X_2^{-1}S_2^{-1}\vec u_2 \end{equation}
Consider the auxiliary linear system:
\begin{equation}\label{3.9} \vec w(n+1)=\left(I+p_nV(n)+R_n^{(3)}\right)\vec w(n) \end{equation}
Observe that the sequence $\{V(n)\}$ is of bounded variation, i.e.
$$\sum\limits_k\|V(k+1)-V(k)\|<+\infty$$
(This fact can be verified using the definition of $V(k)$; namely, both $a_kp_k^{-1}$ and $\frac{\psi_{k+1}}{\psi_k}$ are of bounded variation.) Let $\sigma(V(k))=\{\mu_1(k),\mu_2(k)\}$ be the spectrum of $V(k)$, i.e.
$$\mu_1(k)=\frac{\tr V(k)-\sqrt{\discr V(k)}}{2},$$
$$\mu_2(k)=\frac{\tr V(k)+\sqrt{\discr V(k)}}{2},$$
where $\discr V:=(\tr V)^2-4\det V$ is the discriminant of $V$. Hence
\begin{equation}\label{3.10} \mu_1(k)=a_kp_k^{-1}+O\left(\left(\frac{a_k}{p_k}\right)^2\right) \end{equation}
\begin{equation}\label{3.11} \mu_2(k)=1-\frac{a_k}{p_k}(1+p_k)+O\left(\left(\frac{a_k}{p_k}\right)^2\right) \end{equation}
(by the definition of $V(k)$). Since $p_k\sim 2\sqrt{\lambda}k^{-\frac{\alpha}{2}}$, $k\to\infty$, using (\ref{3.10}) and (\ref{3.11}) we have
\begin{equation}{\label{3.12}} p_k\mu_1(k)=a_k+O\left(\frac{1}{k^{2-3\alpha/2}}\right) \end{equation}
\begin{equation}\label{3.13} p_k\mu_2(k)=p_k-a_k(1+p_k)+O\left(\frac{1}{k^{2-3\alpha/2}}\right) \end{equation}
Due to our assumption $\alpha <\frac{2}{3}$, all $O\left(\frac{1}{k^{2-3\alpha/2}}\right)$ terms in the above equations are summable. Note that $V(n)\xrightarrow[n\to\infty]{}V_{\infty}=\begin{pmatrix}0&0\\0&1\end{pmatrix}$. Trivially,
$$ V_{\infty}\vec e_1=0\vec e_1 \qquad \text{and}\qquad V_{\infty}\vec e_2=\vec e_2,$$
where $\vec e_1=(1,0)$ and $\vec e_2=(0,1)$. Applying Theorem 1.7(b) of \cite{JM03} we obtain a basis $\vec w_s$, $s=1,2$, of solutions of (\ref{3.9}) having the asymptotic form:
\begin{equation}\label{3.14} \vec w_1(n)=\left\{\prod_{k=2}^{n-1}(1+p_k\mu_1(k))\right\}(\vec e_1+o(1)) \end{equation}
\begin{equation}\label{3.15} \vec w_2(n)=\left\{\prod_{k=2}^{n-1}(1+p_k\mu_2(k))\right\}(\vec e_2+o(1)) \end{equation}
Using (\ref{3.12}) and (\ref{3.13}) we find
\begin{equation}\label{3.16} \prod_{k=2}^{n-1}(1+p_k\mu_1(k))=F(n)\prod_2^{n-1}(1+a_k) \end{equation}
\begin{equation}\label{3.17} \prod_{k=2}^{n-1}(1+p_k\mu_2(k))=\left\{\prod_2^{n-1}\left(\frac{\psi_{k+1}}{\psi_k}\right)^2(1-a_k)\right\}\cdot G(n) \end{equation}
where $F(n)$, $G(n)$ converge to some positive constants. Formally speaking, in formulae (\ref{3.14}), (\ref{3.15}) one should begin the products not from $k=2$ but from some $k=k_0\gg 1$, in order to avoid any ``occasional'' zeros among the product factors. Let us ignore this inessential ``problem'' to avoid new tedious notation.\\
{\bf Step 4.
Returning to the original linear system: obtaining the asymptotics of the solutions.}\\
Combining (\ref{3.14}), (\ref{3.15}), (\ref{3.16}) and (\ref{3.17}) we find (see (\ref{++})) that there exists a basis $\vec u_s(n)$ of solutions of the original system given by
\begin{multline}\label{*} \vec u_1(n+1)=S_{n+1}X_{n+1}F(n)\left\{\prod_2^{n-1}(1+a_k)\right\}(\vec e_1+o(1)) =\\= \tilde F(n)\exp\left(\sum_2^{n-1}a_k\right)\left[\begin{pmatrix}\tilde z_n\\ \tilde z_{n+1}\end{pmatrix}+\begin{pmatrix}\tilde z_n&z_n\psi_{n+1}^{-2}\\ \tilde z_{n+1}& z_{n+1}\psi_{n+1}^{-2}\end{pmatrix}o(1)\right]=\\= \tilde F(n)\exp\left(\sum_2^{n-1}a_k\right) \tilde z_{n+1}\left[\begin{pmatrix}1\\1\end{pmatrix}+o(1)\right], \end{multline}
for some $\tilde F(n)$ converging to a positive constant $F$. Similarly, for the second solution $\vec u_2(\cdot)$ we have
\begin{multline}\label{**} \vec u_2(n+1)=\left(\frac{\psi_n}{\psi_2}\right)^2\left\{\prod_2^{n-1}(1-a_k)\right\}G(n) \begin{pmatrix}\tilde z_n&z_n\psi_{n+1}^{-2}\\ \tilde z_{n+1}& z_{n+1}\psi_{n+1}^{-2}\end{pmatrix}\cdot\\\cdot\left[\vec e_2+o(1)\right]= \tilde G(n)\left(\frac{\psi_n}{\psi_2}\right)^2\exp\left(-\sum_2^{n-1}a_k\right)\tilde z_{n}\left[\begin{pmatrix}1\\1\end{pmatrix}+o(1)\right], \end{multline}
for some $\tilde G(n)$ converging to a positive constant. Using the Euler summation formula we can rewrite (\ref{*}) and (\ref{**}) as:
\begin{multline} \vec u_1(n+1)=F_1(n)n^{-\alpha/4}\exp\left[-\rho n^{\beta}- \frac{n^{\alpha/2}}{\sqrt{\lambda}} +\frac{(\sqrt{\lambda})^3}{4!(1- \frac{3\alpha}{2})}n^{1-\frac{3\alpha}{2}}\right]\cdot\\\cdot\left[\begin{pmatrix}1\\1\end{pmatrix}+o(1)\right] \end{multline}
\begin{multline} \vec u_2(n+1)=G_1(n)n^{-\alpha/4}\exp\left[\rho n^{\beta}+ \frac{n^{\alpha/2}}{\sqrt{\lambda}} -\frac{(\sqrt{\lambda})^3}{4!(1- \frac{3\alpha}{2})}n^{1-\frac{3\alpha}{2}}\right]\cdot\\\cdot\left[\begin{pmatrix}1\\1\end{pmatrix}+o(1)\right] \end{multline}
where $F_1(n)$, $G_1(n)$ converge to positive constants. Summing up, we have proved
\begin{thm}\label{Theorem 3.2} Let $\lambda_n=n^{\alpha}(1+x_n)$ and $q_n=-2n^{\alpha}(1+y_n)$, where $\alpha\in(\frac{1}{2},\frac{2}{3})$, and $\{x_nn^{\alpha/2}\}$ and $\{y_nn^{\alpha/2}\}$ belong to $l^1$. Fix $\lambda>0$. Then the system of equations
$$\lambda_{n-1}u_{n-1}+q_nu_n+\lambda_nu_{n+1}=\lambda u_n,\quad n>1$$
has two linearly independent solutions $u_1(\cdot)$ and $u_2(\cdot)$ with asymptotics given by
$$u_1(n)\sim n^{-\alpha/4}\exp\left[-\rho n^{1-\frac{\alpha}{2}}-\frac{1}{\sqrt\lambda}n^{\alpha/2}+\eta n^{1-\frac{3\alpha}{2}}\right]\left(1+o(1)\right),$$
$$u_2(n)\sim n^{-\alpha/4}\exp\left[\rho n^{1-\frac{\alpha}{2}}+\frac{1}{\sqrt\lambda}n^{\alpha/2}-\eta n^{1-\frac{3\alpha}{2}}\right]\left(1+o(1)\right),$$
with ${\rho=\sqrt\lambda\left(1-\frac{\alpha}{2}\right)^{-1}}$ and ${\eta=\lambda^{3/2}\left[4!(1-\frac{3\alpha}{2})\right]^{-1}}$. \end{thm}
\begin{remark} One can formulate extensions of the above formulae to the whole interval $\alpha\in (0,1)$, but their form becomes more cumbersome as $\alpha$ tends to zero or one. Recall that the aim of the paper is just to demonstrate a new technique in the critical situation (the Jordan box).\\
Note that the asymptotic formulae of Theorem \ref{Theorem 3.2} collapse as $\alpha$ approaches $2/3$. Therefore, in order to preserve this form of the asymptotics, our condition $\alpha<2/3$ is essential. For other regions of $\alpha$ our approach still works but gives another form of asymptotics.
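
For illustration, a minimal numerical check of the refined exponent of Theorem \ref{Theorem 3.2} (a sketch, assuming the unperturbed entries $x_n=y_n=0$ and the example values $\alpha=0.6$, $\lambda=1$): after subtracting the leading term $\rho n^{1-\alpha/2}$, the logarithm of the dominant solution follows the correction $\frac{1}{\sqrt\lambda}n^{\alpha/2}-\eta n^{1-\frac{3\alpha}{2}}-\frac{\alpha}{4}\log n$ up to a nearly constant shift.
\begin{verbatim}
# Sketch: the dominant solution of the unperturbed recurrence follows the
# refined asymptotics u_2(n) of Theorem 3.2 (up to a constant prefactor).
import numpy as np

alpha, lam = 0.6, 1.0
beta = 1.0 - alpha / 2.0
rho = np.sqrt(lam) / beta
eta = lam**1.5 / (24.0 * (1.0 - 1.5 * alpha))   # eta from Theorem 3.2

def step(n, u_prev, u_cur):        # lambda_n = n^alpha, q_n = -2 n^alpha
    lam_n, lam_nm1, q_n = n**alpha, (n - 1)**alpha, -2.0 * n**alpha
    return ((lam - q_n) * u_cur - lam_nm1 * u_prev) / lam_n

u_prev, u_cur, log_scale = 1.0, 1.0, 0.0
for n in range(2, 100001):
    u_prev, u_cur = u_cur, step(n, u_prev, u_cur)
    if abs(u_cur) > 1e100:                      # renormalize to avoid overflow
        s = abs(u_cur)
        u_prev, u_cur = u_prev / s, u_cur / s
        log_scale += np.log(s)
    if n in (10**3, 10**4, 10**5):
        m = n + 1                               # u_cur is the (n+1)-st term
        measured = log_scale + np.log(abs(u_cur)) - rho * m**beta
        predicted = m**(alpha / 2.0) / np.sqrt(lam) \
                    - eta * m**(1.0 - 1.5 * alpha) - (alpha / 4.0) * np.log(m)
        print(m, round(measured, 3), round(predicted, 3),
              round(measured - predicted, 3))
# The last column stays nearly constant while the first two change by tens
# of units, i.e. the solution follows u_2(n) from Theorem 3.2.
\end{verbatim}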
\end{remark}
\section{The case of negative $\lambda$: elliptic situation.}
It turns out that the analysis of the asymptotic behavior of solutions of (\ref{pervIntr}) for $\lambda <0$ can be done in a similar way. The assumptions on $x_n$ and $y_n$ are the same as for $\lambda>0$. The reasoning presented in the first two steps of the proof of Theorem \ref{Theorem 3.2} remains unchanged. On the other hand, the matrix $V(k)$ in equation (\ref{3.8}) now has complex entries and the scalars $p_k$ are complex as well. Therefore, formally we cannot use the above-mentioned Theorem 1.7 from \cite{JM03}, because this theorem concerns only real valued matrices $V(k)$ and real sequences $p_k$. One can extend Theorem 1.7 to the complex case and then verify that the matrices $V(k)$ from equation (\ref{3.8}) satisfy the assumptions of this extension. We do not want to use this approach here, for two reasons. Firstly, it would require presenting the analysis of ``the dichotomy condition'' from the discrete variant of the Levinson theorem (see \cite{JM03}). Secondly (a more essential reason), the ``Ansatz'' approach we will use below for negative $\lambda$ may be of interest in itself, as an alternative method in the asymptotic analysis of generalized eigenvectors of Jacobi matrices. In what follows we assume that
\begin{equation}{\label{4.1}} \{x_nn^{\alpha/2}\} \text{ and } \{y_nn^{\alpha/2}\} \text{ belong to } l^1. \end{equation}
The idea of the proof is based on the right ``Ansatz'' for the asymptotic form of solutions of (\ref{pervIntr}). This approach has been successfully used in \cite{JN01} (for negative $\lambda$ and $\alpha=1$) and in \cite{DN06} for a different model. Of course, the form of the Ansatz we make below is inspired by Theorem \ref{Theorem 3.2} and the WKB approach (see Section 2).
\begin{thm}\label{theorem4.1} Let $\alpha \in(\frac{1}{2},\frac{2}{3})$. Suppose that $\lambda _n$ and $q_n$ are given by $\lambda _n=n^{\alpha}(1+x_n)$, $q_n=-2n^{\alpha}(1+y_n)$. If $x_n$ and $y_n$ satisfy (\ref{4.1}), then for any $\lambda <0$ there are two linearly independent solutions $\vec{u}_{\pm}(n)$ of
\begin{equation}\label{4.2} \vec{u}(n+1)=B_n(\lambda)\vec u(n) \end{equation}
with asymptotics given by
$$ u_{\pm}(n)=n^{-\alpha/4}\exp\left[\pm i (Dn^{1-\alpha/2}+ En^{\alpha/2}+Fn^{1-3\alpha/2})\right](1+o(1)) $$
as $n\to\infty$, where
$$ \vec u(n):=\begin{pmatrix}u(n-1)\\u(n)\end{pmatrix}, $$
$$ D:=\sqrt{-\lambda}(1-\alpha/2)^{-1},\quad E:=-(\sqrt{-\lambda})^{-1}, \quad F:=\frac{(\sqrt{-\lambda})^3}{24}(1-3\alpha/2)^{-1}.$$
\end{thm}
\begin{proof} We make the Ansatz (its summation form is convenient for the calculations below):
$$ z_n=n^{\gamma}\exp i\left[\sum\limits_1^n(Ak^{\delta}+Bk^{\epsilon}+Ck^{\theta})\right], $$
where $-1\le\theta<\epsilon<\delta<0$, and $A$, $B$, $C$, $\gamma $ are some real numbers. Define the matrix corresponding to the Ansatz
$$ S_n=\begin{pmatrix}\bar {z}_{n-1}& z_{n-1}\\\bar z_n & z_n\end{pmatrix}, $$
where $\bar z_n$ denotes, as usual, the complex conjugate of $z_n$. We want to choose $A$, $B$, $C$, $\gamma$, $\epsilon$, $\delta$, $\theta$ such that:
\begin{equation}\label{4.3} S_{n+1}^{-1}B_n(\lambda)S_n=I+R_n \end{equation}
for some matrices $R_n$ with $\{\Vert R_n\Vert \}\in l^1$. The reason for the appearance of the product $S_{n+1}^{-1}B_n(\lambda)S_n$ is the same as in Section 3.
It follows that an arbitrary solution of (\ref{4.2}) has the form
$$ \vec u_{n+1}=S_{n+1}\vec w_n, $$
where $\vec w_n$ is a sequence of vectors which tends to a non-zero vector. Therefore the form of the asymptotics of $\vec u_{n}$ will be determined by the matrix $S_{n}$, i.e. by the parameters $A$, $B$, $C$, $\gamma$, $\delta$, $\epsilon$, $\theta$. In what follows we will see that
$$A=\pm\sqrt{-\lambda},\quad B=\mp\frac{\alpha}{2\sqrt{-\lambda}},\quad C=\pm\frac{(\sqrt{-\lambda})^3}{24},$$
$$\delta=-\frac{\alpha}{2},\quad \epsilon=\frac{\alpha}{2}-1,\quad \theta=-\frac{3\alpha}{2},\quad \gamma=-\frac{\alpha}{4},$$
where all the signs should be chosen consistently. Note that the choice of the parameters $\gamma$, $\delta$, $\epsilon$, $\theta$ can easily be deduced from the WKB asymptotic formula. However, we prefer to derive these values from explicit calculations based on the cancellation of terms.
\begin{remark} Note that the parameters $\gamma$, $D$ and $E$ from the asymptotic formula in Theorem \ref{theorem4.1} coincide with the ones appearing in (\ref{shestSect2}) after the formal substitution ${\sqrt{\lambda}=i\sqrt{-\lambda}}$. However, the ``$F$-term'' is different here; see the discussion of this situation in Section 2. \end{remark}
Denote, for fixed $\lambda<0$,
$$ \varphi(n):=1-\lambda_{n-1}\lambda_n^{-1},\quad \psi(n):=\lambda\lambda_n^{-1}+2(1+y_n)(1+x_n)^{-1}-2,$$
where both sequences are real. Then
$$ B_n(\lambda)=\begin{pmatrix}0&1\\-1&2\end{pmatrix}+\begin{pmatrix}0&0\\\varphi(n)&\psi(n)\end{pmatrix}$$
We have
\begin{equation}\label{4.5} S_{n+1}^{-1}B_n(\lambda)S_n=(\det S_{n+1})^{-1}\left[\begin{pmatrix}\rho_n&\eta_n\\-\bar\eta_n&-\bar\rho_n\end{pmatrix} +\begin{pmatrix}s_n&t_n\\-\bar t_n&-\bar s_n\end{pmatrix}\right] \end{equation}
where
\begin{equation*} \begin{split} \rho_n &:= \vert z_n\vert^2(\bar z_{n-1}({\bar z_n})^{-1}+z_{n+1}z_n^{-1}-2),\\ \eta _n &:= z_n^2(z_{n-1}z_n^{-1}+z_{n+1}z_n^{-1}-2),\\ s_n &:= \vert z_n\vert^2(-\psi(n)-\varphi(n)\bar z_{n-1}(\bar z_{n})^{-1}),\\ t_n &:= z_n^2(-\psi(n)-\varphi(n)z_{n-1}z_n^{-1}).\\ \end{split} \end{equation*}
{\bf Step 1. Calculation of the off-diagonal term}\\
Below we shall estimate the off-diagonal element $(\det S_{n+1})^{-1}(\eta_n+t_n)$ of \linebreak $S_{n+1}^{-1}B_n(\lambda)S_n$.
We compute
\begin{multline}\label{4.6} z_{n-1}z_n^{-1}=\left(1-\frac{\gamma}{n}+O\left(\frac{1}{n^2}\right)\right)\left[1-i\Big(An^{\delta}+Bn^{\epsilon}+Cn^{\theta}\Big) -\right.\\-\frac{1}{2}\Big(An^{\delta}+Bn^{\epsilon}+Cn^{\theta}\Big)^2+\frac{i}{3!}\Big(An^{\delta}+Bn^{\epsilon}+Cn^{\theta}\Big)^3 +\\+\left.\frac{1}{4!}\Big(An^{\delta}+Bn^{\epsilon}+Cn^{\theta}\Big)^4+O\left(n^{5\delta}\right)\right], \end{multline}
\begin{multline}\label{4.7} z_{n+1}z_n^{-1}=\left(1+\frac{\gamma}{n}+O\left(\frac{1}{n^2}\right)\right)\left[\vphantom{\frac{1}{2}}1+i\Big(A(n+1)^{\delta}+ B(n+1)^{\epsilon}+ \right.\\+C(n+1)^{\theta}\Big)- \frac{1}{2}\Big(A(n+1)^{\delta}+B(n+1)^{\epsilon}+C(n+1)^{\theta}\Big)^2 -\frac{i}{3!}\Big(A(n+1)^{\delta}+B(n+1)^{\epsilon}+\\\left.+C(n+1)^{\theta}\Big)^3 +\frac{1}{4!}\Big(A(n+1)^{\delta}+B(n+1)^{\epsilon}+C(n+1)^{\theta}\Big)^4+O\left(n^{5\delta}\right)\right] \end{multline}
Hence, using (\ref{4.6}), (\ref{4.7}) and the form of $\lambda_n$, we have (after straightforward calculations)
\begin{multline*} \eta_n+t_n=z_n^2\Bigg\{-2 + \left(1-\frac{\gamma}{n}+O\left(\frac{1}{n^2}\right)\right) \bigg[1-i\Big(An^{\delta}+Bn^{\epsilon}+Cn^{\theta}\Big)-\\ -\frac{1}{2}\Big(An^{\delta}+Bn^{\epsilon}+Cn^{\theta}\Big)^2 +\frac{i}{3!}\Big(An^{\delta}+Bn^{\epsilon}+Cn^{\theta}\Big)^3 +\frac{1}{4!}\Big(An^{\delta}+Bn^{\epsilon}+Cn^{\theta}\Big)^4+\\+O\left(n^{5\delta}\right)\bigg]+ \left(1+\frac{\gamma}{n}+O\left(\frac{1}{n^2}\right)\right)\bigg[1+i\Big(A(n+1)^{\delta}+ B(n+1)^{\epsilon}+ C(n+1)^{\theta}\Big) -\\- \frac{1}{2}\Big(A(n+1)^{\delta}+B(n+1)^{\epsilon}+C(n+1)^{\theta}\Big)^2 -\frac{i}{3!}\Big(A(n+1)^{\delta}+B(n+1)^{\epsilon}+\\+ C(n+1)^{\theta}\Big)^3 +\frac{1}{4!}\Big(A(n+1)^{\delta}+B(n+1)^{\epsilon}+C(n+1)^{\theta}\Big)^4+O\left(n^{5\delta}\right)\bigg]-\\ -\left(\frac{\alpha}{n}+O\left(\frac{1}{n^2}\right)+O\left(|x_{n-1}|+|x_n|\right)\right)\bigg[1 -iAn^{\delta}+O\left(\frac{1}{n^2}\right) +O\left(n^{\epsilon}\right)\bigg] -\\-\bigg[\frac{\lambda}{n^{\alpha}} +O\left(|y_n|+|x_n|\right)\bigg]\Bigg\} \end{multline*}
Elementary calculations show that the last expression is equal to
\begin{multline*} \eta_n+t_n=z_n^2\left\{O\left(\frac{1}{n^2}\right) + O\left(n^{5\delta}\right) +O\left(|x_{n+1}|+|x_n|+|y_n|\right) +O\left(\frac{1}{n^{1-\epsilon}}\right)\right.+\\ +O\left(\frac{1}{n^{1-2\delta}}\right) +O\left(n^{2\epsilon}\right) +O\left(n^{3\delta+\epsilon}\right) +\frac{2i\gamma A}{n^{1-\delta}} +\frac{iA\delta}{n^{1-\delta}} -A^2n^{2\delta}-\\\left. -2ABn^{\delta+\epsilon} -2ACn^{\delta+\theta} +\frac{1}{12}A^4n^{4\delta} -\frac{\alpha}{n} +\frac{iA\alpha}{n^{1-\delta}} -\frac{\lambda}{n^{\alpha}}\right\} \end{multline*}
Thus
\begin{multline*} \eta_n+t_n=z_n^2\left\{ O\left(\frac{1}{n^2}\right) +O\left(n^{5\delta}\right) +O\left(|x_{n+1}|+|x_n|+|y_n|\right) +O\left(\frac{1}{n^{1-\epsilon}}\right)\right.+\\ +O\left(\frac{1}{n^{1-2\delta}}\right) +O\left(n^{2\epsilon}\right) +O\left(n^{3\delta+\epsilon}\right) -\left(A^2n^{2\delta}+\frac{\lambda}{n^{\alpha}}\right)+\\\left. +\left(\frac{1}{12}A^4n^{4\delta} -2ACn^{\delta+\theta}\right) -\left(\frac{\alpha}{n}+2ABn^{\delta+\epsilon}\right) +\frac{iA}{n^{1-\delta}}\left(\delta+2\gamma+\alpha\right)\right\}. \end{multline*}
The grouping into pairs above (of terms presumably of the same order) was based on the values of the parameters $\alpha$, $\delta$, $\theta$ and $\epsilon$ expected from the WKB approach. Now we impose the condition that all four brackets in the formula above vanish separately.
This immediately gives the values of the parameters:
$$\text{From the 1st bracket}\quad\delta=-\frac{\alpha}{2}\quad\text{and}\quad A=\pm\sqrt{-\lambda},$$
$$\text{From the 2nd bracket}\quad\theta=-\frac{3\alpha}{2}\quad\text{and}\quad C=\pm\frac{(\sqrt{-\lambda})^3}{24},$$
$$\text{From the 3rd bracket}\quad\epsilon=\frac{\alpha}{2}-1\quad\text{and}\quad B=\mp\frac{\alpha}{2\sqrt{-\lambda}},$$
$$\text{From the 4th bracket}\quad\gamma=-\frac{\alpha}{4}.$$
Hence, substituting the values of the powers $\delta$, $\theta$ and $\epsilon$, we obtain
\begin{multline*} \vert\eta_n+t_n\vert=\vert z_n\vert^2\left\{O\left(\frac{1}{n^2}\right) +O\left(n^{-\frac{5\alpha}{2}}\right) +O\left(|x_{n+1}|+|x_n|+|y_n|\right)\right.+\\\left. +O\left(\frac{1}{n^{2-\frac{\alpha}{2}}}\right) +O\left(\frac{1}{n^{1+\alpha}}\right) +O\left(\frac{1}{n^{2-\alpha}}\right)\right\}=\\=\vert z_n\vert^2\left\{ O\left(n^{-\frac{5\alpha}{2}}\right) +O\left(|x_{n+1}|+|x_n|+|y_n|\right)\right\}. \end{multline*}
{\bf Step 2. Calculation of the determinant.}\\
Explicit calculation of $\det S_{n+1}$ gives
\begin{multline*} \det S_{n+1}=\bar{z}_nz_{n+1}-z_n\bar{z}_{n+1}=2i\Im( \bar{z}_nz_{n+1})=\\=2in^{-\frac{\alpha}{2}}\left(1+O\left(\frac{1}{n^2}\right)\right)\sin\left(A(n+1)^{\delta}+B(n+1)^{\epsilon} +C(n+1)^{\theta}\right)=\\= \pm n^{-\alpha}2i\sqrt{-\lambda}\left(1+O\left(\frac{1}{n^{1-\alpha}}\right)\right), \end{multline*}
since $\alpha>1/2$. Therefore
\begin{equation}\label{detobr} (\det S_{n+1})^{-1}=\mp\frac{in^{\alpha}}{2\sqrt{-\lambda}}\left(1+O\left(\frac{1}{n^{1-\alpha}}\right)\right), \end{equation}
which gives an extra factor of order $n^{\alpha}$ in formula (\ref{4.5}). Therefore by (\ref{detobr}) one gets
\begin{multline}\label{4.9} \vert(\det S_{n+1})^{-1}(\eta_n+t_n)\vert= O\left(n^{\alpha}\right) |z_n|^2\left\{ O\left(n^{-\frac{5\alpha}{2}}\right) +\right.\\\left.+O\left(|x_{n+1}|+|x_n|+|y_n|\right)\vphantom{n^{-\frac{5\alpha}{2}}}\right\} =O\left(n^{\frac{\alpha}{2}}\right)\left\{ O\left(n^{-\frac{5\alpha}{2}}\right) +O\left(|x_{n+1}|+|x_n|+|y_n|\right)\right\}\\=O\left(n^{-2\alpha}\right) +O\left(n^{\frac{\alpha}{2}}(|x_{n+1}|+|x_n|+|y_n|)\right), \end{multline}
since $z_n^2=O\left(n^{-\frac{\alpha}{2}}\right)$ due to $\gamma=-\alpha/4$. Thanks to $\alpha>\frac{1}{2}$ and the conditions (\ref{4.1}), the right-hand side of (\ref{4.9}) belongs to $l^1$. Now it is clear that the off-diagonal elements of $S_{n+1}^{-1}B_n(\lambda)S_n$ are summable under the conditions of Theorem \ref{theorem4.1}.\\
{\bf Step 3. Estimate of the diagonal elements.}\\
Concerning the diagonal element $(\det S_{n+1})^{-1}(\rho_n+s_n)$, note that
\begin{multline}\label{4.10} \vert\rho_n+s_n-\det S_{n+1}\vert=\\=\vert z_n\vert^2\Big\vert\bar z_{n-1}(\bar z_n)^{-1}+z_{n+1}z_n^{-1}-2- \psi(n)- \varphi(n)\bar z_{n-1}(\bar z_n)^{-1}-(z_{n+1}z_n^{-1}-\bar z_{n+1}(\bar z_n)^{-1})\Big\vert \\=\vert z_n\vert^2\Big\vert \bar z_{n-1}(\bar z_n)^{-1}+\bar z_{n+1}(\bar z_n)^{-1}-2-\psi(n)-\varphi(n)\bar z_{n-1}(\bar z_n)^{-1}\Big\vert=\\ = \vert z_n\vert^2\Big\vert( z_{n-1} z_n^{-1}+z_{n+1}z_n^{-1}-2-\psi(n)-\varphi(n) z_{n-1} z_n^{-1})\Big\vert = \vert\eta_n+t_n\vert \end{multline}
Combining (\ref{4.10}) and (\ref{4.9}) we conclude the proof of (\ref{4.3}) and of the statement of the Theorem. It suffices to note that the estimate of the second diagonal element (in formula (\ref{4.5})) follows from the estimate of the first one, because $\det S_{n+1}$ is purely imaginary.
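
For illustration, a minimal numerical check of (\ref{4.3}) (a sketch, assuming the unperturbed entries $x_n=y_n=0$, the example values $\alpha=0.6$, $\lambda=-1$, and the upper choice of signs):
\begin{verbatim}
# Sketch: with the Ansatz parameters A, B, C, gamma, delta, eps, theta found
# above, the matrices S_{n+1}^{-1} B_n(lambda) S_n are close to the identity,
# and the deviations R_n decay as n grows; unperturbed entries are assumed.
import numpy as np

alpha, lam = 0.6, -1.0
sq = np.sqrt(-lam)
A, Bc, C = sq, -alpha / (2.0 * sq), sq**3 / 24.0   # upper choice of signs
delta, eps = -alpha / 2.0, alpha / 2.0 - 1.0
theta, gamma = -1.5 * alpha, -alpha / 4.0

def phase(n):        # sum_{k=1}^{n} (A k^delta + B k^eps + C k^theta)
    k = np.arange(1, n + 1, dtype=float)
    return np.sum(A * k**delta + Bc * k**eps + C * k**theta)

def z(n):
    return n**gamma * np.exp(1j * phase(n))

def S(n):            # the Ansatz matrix S_n
    return np.array([[np.conj(z(n - 1)), z(n - 1)], [np.conj(z(n)), z(n)]])

def B(n):            # transfer matrix with unperturbed entries x_n = y_n = 0
    lam_n, lam_nm1, q_n = n**alpha, (n - 1)**alpha, -2.0 * n**alpha
    return np.array([[0.0, 1.0], [-lam_nm1 / lam_n, (lam - q_n) / lam_n]])

for n in (10, 100, 1000, 10000):
    R = np.linalg.solve(S(n + 1), B(n) @ S(n)) - np.eye(2)
    print(n, float(np.max(np.abs(R))))
# The printed deviations decrease to zero at a summable rate, in line with
# the claim (4.3) and the estimate (4.9).
\end{verbatim}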
\end{proof}
\section{An application to a class of Jacobi matrices}
We mentioned in the Introduction the class of Jacobi matrices (studied in \cite{JN01}) given by:
$$\lambda_n=n+a,\qquad q_n=-2n.$$
Below we also consider a much more general class of Jacobi matrices related to the ones from the theory of birth and death processes \cite{JN01}, \cite{KM58}. The entries of such matrices must satisfy the identity:
\begin{equation}\label{odinSect5} q_n+\lambda_{n-1}+\lambda_n=0,\quad n\ge1 \end{equation}
To be precise, the right-hand side in formula (\ref{odinSect5}) should be equal to $1$, but a standard shift of the spectral parameter replaces $1$ by zero. If $\lambda_n=n^{\alpha}$, $\alpha\in(0,1)$, then using (\ref{odinSect5}) we find $ q_n=-2n^{\alpha}\left(1-\frac{\alpha}{2n}+O\left(\frac{1}{n^2}\right)\right)$ for large $n$. It follows that ${y_n=-\frac{\alpha}{2n}+O\left(\frac{1}{n^2}\right)}$ does not satisfy our assumption ${\{n^{\alpha/2}y_n\}\in l^1}$. Therefore, to avoid extra tedious calculations, we modify the above definitions slightly in order to obtain the cancellation of terms. Using the technique of the present paper one could treat the above-mentioned model without any corrections, but this would force us to work out the asymptotic formulae in more detail. Recall that the aim of our paper is just to demonstrate the technique in the critical case.\\
Let ${\lambda_k=k^{\alpha}(1+r_k)}$, ${\alpha\in(\frac{1}{2},\frac{2}{3})}$, ${r_k=O\left(\frac{1}{k^{1+x}}\right)}$, $x>\alpha/2$ ($ r_k\ne -1$ for any $k$). We also require that $\lambda_k>0$ for any $k$. Define the diagonal $q_k$ by:
\begin{equation}\label{triSect5} q_k+\lambda_k+\lambda_{k-1}=-\frac{\alpha}{k^{1-\alpha}}+d_kk^{\alpha}, \end{equation}
for an arbitrary real sequence $d_k$ satisfying the condition $d_k=O\left(k^{-1-x}\right)$, ${x>\alpha/2}$. Thus
$$q_k=-2k^{\alpha}\left[1+\frac{1}{2}(r_k+r_{k-1}-d_k)-\alpha r_{k-1}k^{-1}+O\left(k^{-2}\right)\right]$$
Note that the new sequence
$$y_k:=\frac{1}{2}(r_k+r_{k-1}-d_k)-\alpha r_{k-1}k^{-1}+O\left(k^{-2}\right) $$
fulfills the assumption $\{k^{\alpha/2}y_k\}\in l^1$. Without loss of generality we assume that all the $\lambda_k$ are positive. Therefore our asymptotic formulae are applicable. Consequently, we obtain the following spectral picture of the Jacobi matrix $J_0$ defined by the entries given in formula (\ref{triSect5}).
\begin{thm}\label{proposition5.1} The half-line $(-\infty,0)$ is contained in the purely absolutely continuous spectrum of $J_0$ and its local multiplicity is equal to one for a.e. ${\lambda\in(-\infty,0)}$. The spectrum of $J_0$ in the interval $(0,+\infty)$ is discrete and finite. Moreover, the number of eigenvalues of $J_0$ is less than or equal to $N$ provided that (see (\ref{triSect5})) $\alpha k^{-1}-d_k\ge0$ for $k>N$, and the corresponding eigenvectors decay exponentially. \end{thm}
\begin{proof} Fix $\lambda<0$. Applying Theorem \ref{theorem4.1} we know that for any solution $\vec u(n)$ of the system (\ref{4.2}) we have the estimate
$$\| \vec u(n)\|^2\le Cn^{-\alpha/2}\le C_1/\lambda_n $$
for some constants $C$, $C_1$ (depending on $\lambda$) and all $n$. Applying a standard result \cite{YUM65}, \cite{JN99} (the generalized Behncke-Stolz lemma) we conclude that $\lambda$ belongs to the support of the spectral measure of $J_0$, and so ${(-\infty,0)\subset\sigma_{ac}(J_0)}$. Moreover, the spectrum on the interval $(-\infty,0)$ is purely absolutely continuous and its local multiplicity is equal to one (i.e. non-zero) a.e. with respect to the Lebesgue measure.
The last result follows from the Gilbert-Pearson subordinacy theory \cite{KP92}. Concerning the point spectrum of $J_0$, note that it may appear only on the semi-axis $\lambda\ge0$. Actually, by the subordinacy theory the positive spectrum is pure point. Moreover, using the technique of the paper \cite{S07} one can prove its discreteness. However, in our special case the discreteness can be proved easily. Indeed, for any $f\in D(J_0)$ we have ($f_0:=0$)
$$(J_0f,f)=-\sum_{k=1}^{\infty}k^{\alpha}(\alpha k^{-1}-d_k)\vert f_k\vert^2-\sum_{k=0}^{\infty}\lambda_k\vert f_k-f_{k+1}\vert^2\le0. $$
Therefore (by the Glazman lemma \cite{AG93})
\begin{equation}\label{svoistvoPjat} \#\{\lambda\in\sigma_p(J_0)\}\le N, \end{equation}
where $N$ has been chosen to satisfy the inequality $\alpha k^{-1}-d_k\ge0$ for any $k>N$. The final statement of Theorem \ref{proposition5.1} follows from the asymptotic formulae of Theorem \ref{Theorem 3.2}, which give the precise form of the eigenvector asymptotics. \end{proof}
\begin{remark} Another approach to a similar class of Jacobi matrices, based on a generalization of the ideas of W.Kelley \cite{K94} (whose paper concerns the ``double root'' (= Jordan box) case for Jacobi matrices whose entries are rational functions of $n$), was given by the first named author in \cite{JJ}. Our approach and the one of the paper \cite{JJ} are complementary and seem to have different areas of application. \end{remark}
\begin{remark} If the choice of the right-hand side terms in (\ref{triSect5}) (and therefore the choice of the first few values of $q_k$ and $\lambda_k$) is such that for some integer $N$
$$ \sum_{k=1}^N k^{\alpha}(-\alpha k^{-1}+d_k)>\lambda_N+\lambda_1= N^{\alpha} (1+r_N)+\lambda_1,$$
then ${\sigma_p(J_0)\ne\varnothing }$. In fact, let $\tilde f:=(1,\ldots,1,0,0,\ldots)$, where only the first $N$ coordinates are equal to 1. Then
$$ (J_0\tilde f,\tilde f)=\left(\sum_{k=1}^N k^{\alpha}(-\alpha k^{-1}+d_k)-N^{\alpha} (1+r_N)-\lambda_1\right)>0.$$
\end{remark}
J.J. and S.N. were supported by INTAS 05-1000008-7883; S.N. was supported in part by the grant RFBR-06-01-00249. The authors would like to express their gratitude to S.Simonov for his attentive reading of the manuscript and useful remarks. J.J. and S.N. are thankful to Lund Technical University, where part of this work was done, for its hospitality.
\bibliographystyle{amsplain}
\begin{thebibliography}{10}
\bibitem{A28} C.R.Adams, On the irregular cases of the linear ordinary difference equation, {\it Trans. Amer. Math. Soc. 30 (1928), no. 3, 507--541}.
\bibitem{AG93} N.I.Akhiezer, I.M.Glazman, Theory of linear operators in Hilbert space. {\it Translated from the Russian and with a preface by Merlynd Nestell. Reprint of the 1961 and 1963 translations. Two volumes bound as one. Dover Publications, Inc., New York, 1993}
\bibitem{YUM65} Ju.M.Berezanskii, Expansions in Eigenfunctions of Selfadjoint Operators, {\it Translated from the Russian by R.Bolstein, J.M.Danskin, J.Rovnyak and L.Shulman. Translations of Mathematical Monographs, Vol. 17, American Mathematical Society, Providence, R.I., 1968, ix+809 pp.}
\bibitem{B11} G.D.Birkhoff, General theory of linear difference equations, {\it Trans. Amer. Math. Soc. 12 (1911), no. 2, 243--284}
\bibitem{BdMNS06} A.Boutet de Monvel, S.Naboko and L.O.Silva, The asymptotic behavior of eigenvalues of a modified Jaynes-Cummings model, {\it Asymptot. Anal. 47 (2006), no.
3-4, 291--315}
\bibitem{DN06} D.Damanik and S.Naboko, A First-Order Phase Transition in a Class of Unbounded Jacobi Matrices: Critical Coupling ({\it to appear in J. Approx. Theory, 2006})
\bibitem{D92} J.Dombrowski, Absolutely continuous measures for systems of orthogonal polynomials with unbounded recurrence coefficients, {\it Constr. Approx. 8 (1992), no. 2, 161-167}
\bibitem{D04} J.Dombrowski, Eigenvalues and spectral gaps related to periodic perturbations of Jacobi matrices, {\it Spectral methods for operators of mathematical physics, 91-100, Oper. Theory Adv. Appl., 2004}
\bibitem{DP02} J.Dombrowski and S.Pedersen, Absolute continuity for unbounded Jacobi matrices with constant row sum, {\it J. Math. Anal. Appl. 267 (2002), 695-713}
\bibitem{DJMP04} J.Dombrowski, J.Janas, M.Moszynski and S.Pedersen, Spectral gaps resulting from periodic perturbations of a class of Jacobi operators, {\it Constr. Approx. 20 (2004), no. 4, 585-601}
\bibitem{DP02a} J.Dombrowski and S.Pedersen, Spectral transition parameters for a class of Jacobi matrices, {\it Studia Math. 152 (2002), no. 3, 217-229}
\bibitem{E89} M.S.P.Eastham, The asymptotic solution of linear differential systems. Applications of the Levinson theorem, {\it London Mathematical Society Monographs. New Series, 4. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1989}
\bibitem{E99} S.Elaydi, An Introduction to Difference Equations, {\it Springer-Verlag, New York, 2nd ed., 1999}
\bibitem{GBV04} J.S.Geronimo, O.Bruno and W.Van Assche, WKB and turning point theory for second-order difference equations, {\it Spectral methods for operators of mathematical physics, 101-138, Oper. Theory Adv. Appl., 2004}
\bibitem{GP87} D.J.Gilbert and D.B.Pearson, On subordinacy and analysis of the spectrum of one-dimensional Schr\"odinger operators, {\it J. Math. Anal. Appl. 128 (1987), no. 1, 30-56}
\bibitem{JJ} J.Janas, The asymptotic analysis of generalized eigenvectors of some Jacobi operators. Jordan box case, {\it J. Difference Equ. Appl. 12 (2006), no. 6, 597-618}
\bibitem{JM03} J.Janas and M.Moszynski, Spectral properties of Jacobi matrices by asymptotic analysis, {\it J. Approx. Theory 120 (2003), 309-336}
\bibitem{JN99} J.Janas and S.Naboko, Jacobi matrices with power-like weights---grouping in blocks approach, {\it J. Funct. Anal. 166 (1999), no. 2, 218--243}.
\bibitem{JN01} J.Janas and S.Naboko, Spectral properties of selfadjoint Jacobi matrices coming from birth and death processes, {\it Operator Theory: Advances and Applications, Birkh\"auser-Verlag, 127 (2001), 387--397}
\bibitem{JN02} J.Janas and S.Naboko, Spectral Analysis of Selfadjoint Jacobi Matrices with Periodically Modulated Entries, {\it J. Funct. Anal. 191 (2002), 318--342}
\bibitem{JN04} J.Janas and S.Naboko, Infinite Jacobi matrices with unbounded entries: asymptotics of eigenvalues and the transformation operator approach, {\it SIAM J. Math. Anal. 36, No. 2 (2004), 643-658}
\bibitem{JNS04} J.Janas, S.Naboko and G.Stolz, Spectral theory for a class of periodically perturbed unbounded Jacobi matrices: elementary methods, {\it Journal of Computational and Applied Mathematics {\bf 171}, no. 1--2 (2004), 265--276}
\bibitem{JL99} S.Jitomirskaja and Y.Last, Power-law subordinacy and singular spectra. I. Half-line operators, {\it Acta Math. 183 (1999), 171-189}
\bibitem{KM58} S.Karlin and J.McGregor, Linear growth, birth and death processes, {\it J. Math. Mech.
Vol. 7, No. 4 (1958), 643-662}
\bibitem{K94} W.Kelley, Asymptotic Analysis of Solutions in the ``Double Root'' Case, {\it Computers Math. Applic. 28, No. 1-3 (1994), 167--173}
\bibitem{KP92} S.Khan and D.B.Pearson, Subordinacy and spectral theory for infinite matrices, {\it Helv. Phys. Acta 65 (1992), 505--527}
\bibitem{LNS03} A.Laptev, S.Naboko, and O.Safronov, On new relations between spectral properties of Jacobi matrices and their coefficients, {\it Comm. Math. Phys. 241 (2003), no. 1, 91--110}
\bibitem{M03} M.Moszynski, Spectral properties of some Jacobi matrices with double weights, {\it J. Math. Anal. Appl. 280 (2003), 400-412}
\bibitem{O97} F.W.J.Olver, Asymptotics and special functions, {\it Reprint of the 1974 original [Academic Press, New York]. AKP Classics. A.K. Peters, Ltd., Wellesley, MA, 1997}
\bibitem{R99} Ch.Remling, Spectral analysis of high-order differential operators. II. Fourth-order equations, {\it J. London Math. Soc. (2) 59 (1999), no. 1, 188-206}
\bibitem{S01} E.Sheronova, Asymptotics of eigenvectors of Jacobi matrices for a model of spectral phase transition, ({\it M.S. Thesis, 2001})
\bibitem{S04} L.O.Silva, Uniform Levinson Type Theorems for Discrete Linear Systems, {\it Spectral methods for operators of mathematical physics, Oper. Theory Adv. Appl., Vol. 154 (2004), 203-218}
\bibitem{S07} L.O.Silva, Uniform and Smooth Benzaid-Lutz Type Theorem and Applications to Jacobi Matrices, ({\it to appear in Operator Theory: Advances and Applications, 2007})
\bibitem{SW05} L.O.Silva and R.Weder, On the Two Spectra Inverse Problem for Semi-Infinite Jacobi Matrices, ({\it preprint, 2005})
\bibitem{Si07} S.Simonov, An example of spectral phase transition phenomenon in a class of Jacobi matrices with periodically modulated weights, ({\it to appear in Operator Theory: Advances and Applications, 2007})
\bibitem{St94} G.Stolz, Spectral theory for slowly oscillating potentials. I. Jacobi matrices, {\it Manuscripta Math. 84 (1994), no. 3-4, 245--260}
\end{thebibliography}
\end{document}