\font\bigtenrm=cmr5 scaled 2075
\def\R{\hbox{I\kern-.1667em R}}
\def\C{\hbox{C \kern-0.9em $\mid$}}
\baselineskip=0.75cm plus 5pt minus 5pt
\hsize=16.5cm
\vsize=24.0cm
\centerline{\bf CHEBYSHEV'S RECURSION --- SOME ANALYTICAL, COMPUTATIONAL \/}
\centerline{\bf AND APPLIED ASPECTS \/}
\vskip 1cm
{
\bigtenrm
S.A.~Korzh ${}^1$,~I.E.~Ov\^carenko ${}^1$ and
Roman Ugrinovsky ${}^2$
${}^1$ { Department of Mechanics and Mathematics,
Kharkov State University,
310077 Kharkov, Ukraine }
${}^2$ { Institute of Applied Physics,
Russian Academy of Sciences,
46 Uljanov Str.,
603600 Nizhny Novgorod, Russia
tel:~(8312) 38-42-81, fax:~(8312) 36-72-07, telex:~151129~FIZIK~SU,
E-mail:~roman@hydro.nnov.su }
\vskip 1.5cm
\beginsection{ Summary}
We consider polynomials orthogonal with respect to some scalar product on
spaces of polynomials on the real line and unit circle. A basic problem
in the constructive theory of such polynomials is to determine their
three-term recurrence relations. Depending on what is known about
the measure
corresponding to a given scalar product, there are different ways to proceed.
If, as is typical in applications, one knows the measure only through its
moment information, the appropriate procedure is an algorithm that goes
back to Chebyshev. The algorithm in effect implements the nonlinear map
from the given moments or modified moments to the desired recurrence
coefficients. We study some algebraic and computational structures
connected with scalar products on spaces of polynomials on the real line and
unit circle and Chebyshev's algorithm. To illustrate up-to-date versions
of the Chebyshev algorithm, various applied problems are considered.
\vskip 1.0cm
{\it Subject Classifications: AMS(MOS): Primary 65D15; Secondary
33A65; CR:G1.2 }
{\it Running title: Chebyshev's recursion --- some aspects }
\vskip 1.0cm
\beginsection{ Contents }
\item{1.} Introduction.
\item{2.} Scalar products on polynomial space.
\item{2.1.} Orthogonal systems of polynomials.
\item{2.2.} Recurrence relations for the $< , >$-orthogonal system of
polynomials.
\item{2.3.} The Chebyshev matrix, modified Chebyshev algorithm,
relations between two sets of orthogonal polynomials for different
scalar products.
\item{2.4.} Algebraic definition of the scalar product.
\item{2.5.} $``\alpha''$-definition as multiplication by function.
\item{3.} Scalar product on the space of trigonometric polynomials.
\item{4.} Test of the positive definiteness of the Hankel and Toeplitz
quadratic forms.
\item{5.} The symmetric and Hermitian forms technique.
\item{5.1.} Test for the number of different real roots of a real polynomial.
Borchardt's criterion.
\item{5.2.} Separation of real roots of the real polynomials.
\item{6.} Test of function range.
\item{6.1.} Univariate functions; discretization of the estimation problem.
\item{6.2.} Case of two variables.
\item{6.3.} The Hausdorff differences and non-negative test.
\item{7.} Moment method for a Schr\"odinger equation and the modified Chebyshev
algorithm.
\item{8.} Hilbert spaces of entire functions and the modified Chebyshev
algorithm.
\item{9.} References.
\beginsection{ 1. Introduction}
In this article, we suggest numerical procedures based on
the consistent application of a number of fundamental propositions
of the moment problem and the theory of orthogonal polynomials.
We do not seek to compare the efficiency of these procedures
with that of other possible procedures in solving the same problems.
We were simply interested in the fact that these procedures, based on
fundamental theoretical propositions, were efficient to some extent.
This efficiency may be suggestive of some unknown potential
that the methods of the moment problem may possess,
and may serve to clarify its essence.
In 1859 P.L.~Chebyshev proposed an effective technique to construct
the system of polynomials orthogonal with respect to some measure
on the real line from its power moments [2].
The Chebyshev algorithm actually realizes a transition from
one system of polynomials with three-term recurrence relations
to the other system of polynomials orthogonal with respect to some
given measure on~$\R ^1$.
It turned out that the Chebyshev idea is applicable to different
problems. Algorithms based on this idea have been used for
applications in numerical analysis and in theoretical physics
and chemistry with great effect ([6,21]).
In the following we will give a brief survey on the constructive
theory of orthogonal polynomials and consider some analytical,
computational and applied procedures connected with the scalar
product~$< , >$ on the space~$\{ {\cal P\/}_{R^1} \}$ of polynomials on
the real line~$\R ^1$ and the
Chebyshev algorithm. The scalar product $< , >$ is assumed to have
the property~$<\lambda P,Q>=<P,\lambda Q>$, and it is not
necessarily definite.
We also study the scalar product on the space~$\{ {\cal P\/}_C \}$ of
trigonometric polynomials and the corresponding modifications of the
Chebyshev algorithm.
To illustrate up-to-date variants of the Chebyshev technique
(~(m)-T-recursion and its modifications~), we consider some applied
problems such as the test of positive definiteness of Hankel and
Toeplitz quadratic forms and the calculation of arbitrary-order
determinants of Hankel -- Hadamard and Toeplitz.
The (m)-T-recursion, i.e. the Chebyshev recursion
modification, and the classical moment theory [1] enable us to
construct effective algorithms for checking positivity of
functions of one and several variables on given sets from their moments.
We also consider the application of the modified Chebyshev technique to
the method of symmetric and Hermitian forms in the
theory of the separation of roots of algebraic equations
expounded in [13].
To apply the Chebyshev idea to another circle of problems, we
consider the moment method in the study of the Schr\"odinger equation and
construct an algorithm based on this technique to estimate the eigenvalue
boundaries and the solutions of the Schr\"odinger equation with a
special kind of potential.
Some results of this work have been given in [16,17,19]
and have been announced in [11].
\beginsection{ 2. Scalar products on polynomial space}
\proclaim{2.1}. {Orthogonal systems of polynomials }
Let~$\{ {\cal P\/}_{R^1} \}$ be the linear space of polynomials on the real
line $\R ^1$,~$< , >$ denote a scalar product on the
space~$\{ {\cal P\/}_{R^1} \}$, which is real for real polynomials and
has the property~$$
<\lambda P(\lambda),Q(\lambda)>=<P(\lambda),\lambda Q(\lambda)>
. \eqno (A)$$
The scalar product $< , >$ is not assumed to be definite. It is
assumed
below that all
scalar products considered on the space~$\{ {\cal P\/}_{R^1} \}$
have the property (A). We also assume that the scalar product~$< , >$ is
nontrivial, i.e., all the Hankel -- Hadamard determinants
$$H_{<,>}(n)=\det\left| \, \matrix{
{<1,1>}\hfill&{<\lambda,1>}\hfill&\ldots&{<\lambda^n,1>}\hfill\cr
{<1,\lambda>}\hfill&{<\lambda,\lambda>}\hfill&\ldots&{<\lambda
^n,\lambda>}\hfill\cr
\vdots&\vdots&\ddots&\vdots\cr
{<1,\lambda ^n>}\hfill&{<\lambda,\lambda ^n>}\hfill&\ldots&{<\lambda
^n,\lambda ^n>}\hfill\cr
}\right|\neq0 .\eqno(2.1)$$
Letting
$$s_k=<\lambda^k,1>$$
we obtain the
traditional notation for Hankel -- Hadamard determinants ~$H_{<,>}(n)$:
$$H_{<,>}(n)=\det\|s_{i+j}\|_{i,j=0}^n .$$
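For small~$n$ these determinants can be evaluated directly from the moment sequence~$\{s_k\}$. The following sketch (Python with exact rational arithmetic; the function name and the choice of Hilbert-matrix test moments are ours, not part of the original text) illustrates the definition $H_{<,>}(n)=\det\|s_{i+j}\|_{i,j=0}^n$:

```python
from fractions import Fraction

def hankel_determinant(s, n):
    """det ||s_{i+j}||_{i,j=0..n}; requires the moments s[0..2n]."""
    m = [[Fraction(s[i + j]) for j in range(n + 1)] for i in range(n + 1)]
    det = Fraction(1)
    for c in range(n + 1):
        # choose a pivot row; the determinant is 0 if the column is zero
        p = next((r for r in range(c, n + 1) if m[r][c] != 0), None)
        if p is None:
            return Fraction(0)
        if p != c:
            m[c], m[p] = m[p], m[c]
            det = -det
        det *= m[c][c]
        for r in range(c + 1, n + 1):
            f = m[r][c] / m[c][c]
            for j in range(c, n + 1):
                m[r][j] -= f * m[c][j]
    return det

# Moments of Lebesgue measure on [0,1]: s_k = 1/(k+1) (the Hilbert matrix)
s = [Fraction(1, k + 1) for k in range(7)]
print([hankel_determinant(s, n) for n in range(3)])  # H(0), H(1), H(2) = 1, 1/12, 1/2160
```

As expected for a positive measure, all the determinants come out positive; this direct approach, however, is practical only for very small~$n$, which is exactly the difficulty that the Chebyshev recursion below avoids.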
For the scalar product considered there exists an $< , >$-orthogonal system
of
polynomials, i.e., a system of polynomials
${\{P_n(\lambda)\}}_{n=0}^\infty$, such that
{\vskip 0.5cm
\vbox{\hbox{\qquad { 1)} $\deg P_n(\lambda)=n$}
\hbox{\qquad { 2)} the highest-degree coefficient is positive}
\hbox{\qquad { 3)} $<P_n(\lambda),P_m(\lambda)>=0,$ if~ $ n\neq m .$}}
Two systems of the $< , >$-orthogonal polynomials are in use:
\qquad a) the set of monic orthogonal polynomials
$$\eqalign{ P_n(\lambda)&=
{1\over H_{n-1}} \det \left| \, \matrix{
{<1,1>}&{<\lambda,1>}&\ldots&{<\lambda^n,1>}\cr
{<1,\lambda>}&{<\lambda,\lambda>}&\ldots&{<\lambda ^n,\lambda>}\cr
\vdots&\vdots&\ddots&\vdots\cr
{<1,\lambda^{n-1}>}&{<\lambda,\lambda^{n-1}>}&\ldots&{<\lambda^n,\lambda^{n-1}
>}\cr
{1}&{\lambda}&\ldots&{\lambda^n}\cr
}\right|\cr
&=\lambda^n+\hbox{lower degree terms}}.\eqno(2.2)$$
\qquad b) in the case of a definite scalar product the orthonormal system
of the
$< , >$-orthogonal polynomials is of special interest. This system is
defined by the conditions~$<\widetilde P_n,\widetilde P_n>=1,\ n=0,~1,~\dots$
and
$$ \widetilde P_n(\lambda)=
{1\over \sqrt{H_{n-1} H_n} } \det \left| \, \matrix{
{<1,1>}&{<\lambda,1>}&\ldots&{<\lambda^n,1>}\cr
{<1,\lambda>}&{<\lambda,\lambda>}&\ldots&{<\lambda ^n,\lambda>}\cr
\vdots&\vdots&\ddots&\vdots\cr
{<1,\lambda^{n-1}>}&{<\lambda,\lambda^{n-1}>}&\ldots&{<\lambda^n,\lambda^{n-1}
>}\cr
{1}&{\lambda}&\ldots&{\lambda^n}\cr
}\right| .\eqno(2.3)$$
The relations (2.2),~(2.3) are fundamental in the constructive theory of
orthogonal polynomials. However, direct calculations of the orthogonal
polynomials by these determinant formulas are difficult and even impossible
for large~$n$.
\vskip 2cm
\proclaim{2.2}. {Recurrence relations for the $< , >$-orthogonal system of
polynomials}
The set of monic $< , >$-orthogonal polynomials~%
${\{P_n(\lambda)\}}_{n=0}^\infty$ satisfies the well-known three-term
recurrence relation
$$P_{n+1}(\lambda)=(\lambda-a_n)P_n(\lambda)-b_nP_{n-1}(\lambda),\eqno(2.4)$$
$$P_{-1}(\lambda)\equiv0 ,\qquad P_{0}(\lambda)\equiv1.$$
Note that in fact~$b_0$ does not take part in the recurrence relation~(2.4),
but it is advisable to take $b_0=<1,1>.$
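The recurrence~(2.4) already gives a numerically practical way to evaluate the polynomials, in contrast to the determinant formulas~(2.2)--(2.3). A minimal sketch (Python; the function name and the monic-Chebyshev test coefficients are ours, used only as an illustration):

```python
def monic_poly_values(a, b, n, lam):
    """Evaluate P_0..P_n at lam via P_{k+1} = (lam - a_k) P_k - b_k P_{k-1},
    with P_{-1} = 0 and P_0 = 1 (so b_0 is never used by the recursion)."""
    p_prev, p = 0.0, 1.0
    vals = [p]
    for k in range(n):
        p_prev, p = p, (lam - a[k]) * p - b[k] * p_prev
        vals.append(p)
    return vals

# Monic Chebyshev polynomials of the first kind: a_k = 0, b_1 = 1/2, b_k = 1/4
a = [0.0] * 4
b = [1.0, 0.5, 0.25, 0.25]
print(monic_poly_values(a, b, 4, 1.0))  # [1.0, 1.0, 0.5, 0.25, 0.125]
```

The evaluation costs $O(n)$ operations per point, and no determinants appear at any stage.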
Furthermore, in order to establish this relation, we expand
$\lambda P_{n-1}(\lambda)$
as a linear combination of the polynomials~$\{ P_n(\lambda)\}:$
$$\lambda P_{n-1}(\lambda)=\sum_{k=0}^n \alpha_{n,k}P_k(\lambda).
\eqno(2.5)$$
The $< , >$-orthogonality properties show that
$$<\lambda P_{n-1}(\lambda),P_k(\lambda)>=h_k\alpha_{n,k} ,$$
where $$h_k={H_k \over H_{k-1}} .$$
However
$$<\lambda P_{n-1}(\lambda),P_k(\lambda)>=
<P_{n-1}(\lambda),\lambda P_k(\lambda)> ,$$
which shows that $\alpha_{n,k}$ is equal to zero if $k<n-2$. Hence
$$\lambda P_{n-1}(\lambda)=P_n(\lambda)+\alpha_{n,n-1}P_{n-1}(\lambda)+
\alpha_{n,n-2}P_{n-2}(\lambda) ,
\eqno(2.6)$$ and
$$h_{n-1}\alpha_{n,{n-1}}=<\lambda P_{n-1}(\lambda),P_{n-1}(\lambda)> ,$$
$$h_{n-2}\alpha_{n,{n-2}}=<\lambda P_{n-1}(\lambda),P_{n-2}(\lambda)>=
<\lambda P_{n-2}(\lambda),P_{n-1}(\lambda)> .$$
In view of (2.6), we get $$h_{n-2}\alpha_{n,{n-2}}=h_{n-1} .$$
Introducing the notations $$a_{n-1}=\alpha_{n,n-1} ,
\qquad b_{n-1}=\alpha_{n,n-2} ,$$
we get (2.4) with
$$a_n={<\lambda P_n(\lambda),P_n(\lambda)> \over <P_n(\lambda),P_n(\lambda)>}
,\eqno(2.7)$$
$$b_n={<P_n(\lambda),P_n(\lambda)> \over <P_{n-1}(\lambda),P_{n-1}(\lambda)>}
.\eqno(2.8)$$
From the relation~(2.8) it follows that
$$<P_n(\lambda),P_n(\lambda)>=b_0 b_1 \ldots b_n \eqno(2.9)$$
and
$$H_n=b_0^{n+1} b_1^n \ldots b_{n-1}^2 b_n . \eqno(2.10)$$
Note that according to~(2.10), two scalar products $< , >_1$ and
$< , >_2$ on the polynomial space~$\{ {\cal P\/}_{R^1} \}$ have the same
Hankel -- Hadamard determinants if and only if the sets of
coefficients~$\{ b_k \}$ in the recurrence relations~(2.4) corresponding to
these scalar products coincide.
\vskip 0.34cm
The following theorem due to Favard holds:
\proclaim{ Theorem 2.1}. {The scalar product $< , >$ on the polynomial
space~$\{ {\cal P\/}_{R^1} \}$ is definite,
$$\forall P \in \{ {\cal P\/}_{R^1} \} \quad <P,P> \geq 0 ,$$
if and only
if $b_k$ is positive for all non-negative integers $k$. }
{\it Proof. \/}
If a scalar product is definite, then from~(2.8) we have that $b_k$ is
positive
for all $k=0,~1,~2,\dots~.$ Next we expand an arbitrary
polynomial~$P(\lambda)$
as a linear combination of the polynomials~$\{ P_k(\lambda)\}$
$$P(\lambda)=\sum_{k=0}^n \xi_k P_k(\lambda) .$$
Then $$<P,P>=\sum_{k=0}^n |\xi_k|^2 \prod_{l=0}^k b_l ,$$
and if $b_k$ is positive for all $k \in Z_+$, we obtain $$<P,P> \geq 0 .$$
The proof is finished.
A set of polynomials~$\{ P_k(\lambda)\}$ is closely connected with the Haar
matrix $\| C_{j,k}^r\|$. This matrix is a ``multiplication table'' for the
polynomials and can be obtained from the relation
$$P_j(\lambda)P_k(\lambda)=\sum_{r=|j-k|}^{j+k} C_{j,k}^r P_r(\lambda) .
\eqno (2.11)$$
It is clear that
$$C_{j,k}^r=<P_j(\lambda)P_k(\lambda),P_r(\lambda)> . \eqno (2.12)$$
That is why $$C_{j,k}^r=C_{k,r}^j=C_{r,j}^k .$$
We note also the relation $$C_{j,k}^r=\|P_r({\cal J})\|_{j,k} ,\eqno(2.13)$$
where ${\cal J}$ is the Jacobi matrix of the normal system of polynomials.
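Relation~(2.13) suggests a direct, if naive, way to compute entries of the Haar matrix: evaluate $P_r$ at a sufficiently large truncation of the Jacobi matrix. A sketch (Python; the function names, the orthonormal normalization with off-diagonal entries $\sqrt{b_{k+1}}$, and the Chebyshev test data are our assumptions, not taken from the original text):

```python
def jacobi_matrix(a, b, size):
    """Truncated Jacobi matrix: diagonal a_k, off-diagonals sqrt(b_{k+1})."""
    J = [[0.0] * size for _ in range(size)]
    for k in range(size):
        J[k][k] = a[k]
    for k in range(size - 1):
        J[k][k + 1] = J[k + 1][k] = b[k + 1] ** 0.5
    return J

def haar_coefficient(a, b, j, k, r, size=16):
    """C_{j,k}^r = <p_j p_k, p_r> = [p_r(J)]_{j,k}  (relation (2.13)).

    Evaluates the orthonormal polynomial p_r at the truncated Jacobi
    matrix by the three-term recurrence; 'size' should exceed
    r + max(j, k) so that truncation does not disturb the (j,k) entry."""
    J = jacobi_matrix(a, b, size)
    matmul = lambda X, Y: [[sum(X[i][m] * Y[m][n] for m in range(size))
                            for n in range(size)] for i in range(size)]
    prev = None                                                   # p_{-1}(J) = 0
    cur = [[float(i == n) for n in range(size)] for i in range(size)]  # p_0(J) = I
    for m in range(r):
        nxt = matmul(J, cur)
        for i in range(size):
            for n in range(size):
                nxt[i][n] -= a[m] * cur[i][n]
                if prev is not None:
                    nxt[i][n] -= (b[m] ** 0.5) * prev[i][n]
        s = b[m + 1] ** 0.5
        nxt = [[x / s for x in row] for row in nxt]
        prev, cur = cur, nxt
    return cur[j][k]

# Chebyshev polynomials of the second kind: a_k = 0, b_k = 1/4; here
# p_1 p_1 = p_2 + p_0, so C_{1,1}^2 = 1 while C_{0,0}^2 = 0.
a = [0.0] * 16; b = [0.25] * 17
print(round(haar_coefficient(a, b, 1, 1, 2), 6), round(haar_coefficient(a, b, 0, 0, 2), 6))  # 1.0 0.0
```

The matrix evaluation is cubic in the truncation size, so this is useful only as a check on the linearization coefficients, not as a production algorithm.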
\vskip 1cm
\proclaim {2.3}. {The Chebyshev matrix, modified Chebyshev algorithm,
relations between two sets of orthogonal polynomials for different
scalar products}
Let $< , >$ denote a scalar product on the space~$\{ {\cal P\/}_{R^1} \}$,
$\{ \widehat P_k(\lambda)\}$ be a set of the $< , >$-orthogonal monic
polynomials.
They satisfy a three-term recurrence relation
$$\widehat P_{n+1}(\lambda)=(\lambda-\widehat a_n)\widehat P_n(\lambda)-
\widehat b_n \widehat P_{n-1}(\lambda),\eqno(2.14)$$
$$\widehat P_{-1}(\lambda)\equiv0 ,\qquad \widehat P_{0}(\lambda)\equiv1.$$
The coefficients $\{ \widehat a_k ,\widehat b_k \}$ are not assumed to be
known.
At the same time we consider a set of monic polynomials $\{ P_n(\lambda)
\} ,$
which is given by the system of polynomials satisfying the three-term
recurrence relation
$$P_{n+1}(\lambda)=(\lambda-a_n) P_n(\lambda)-b_n P_{n-1}(\lambda),$$
$$P_{-1}(\lambda)\equiv0 ,\qquad P_{0}(\lambda)\equiv1,$$
where the constants $a_k, b_k$ are known.
The sets of coefficients $\{ a_k , b_k \}$
and~$\{ \widehat a_k,\widehat b_k \}$
are connected with each other by a remarkable relation that goes back to
P.L.~Chebyshev [2]. The algorithm presented below is a direct
generalization of the original Chebyshev algorithm. In stating the modified
Chebyshev algorithm we will in principle follow [6].
\proclaim{Definition}. { }{The matrix $T_{\widehat P,P}$ of the modified
moments, where
$$\| T_{\widehat P,P} \|_{k,l}=\sigma_{k,l} \buildrel \rm def \over
=<\widehat P_k,P_l>,\eqno(2.15)$$
will be called {\it a Chebyshev matrix\/}.}
If $k>l$, then $\sigma_{k,l}=0$, i.e., the Chebyshev matrix is triangular.
In particular, $$\sigma_{k+1,k-1}=0,\qquad\sigma_{k+1,k}=0.$$
We get from~(2.15)
$$\eqalignno
{0=\sigma_{k+1,k-1}&=<\widehat P_{k+1},P_{k-1}>
=<(\lambda-\widehat a_k)\widehat P_k-
\widehat b_k \widehat P_{k-1},P_{k-1}>\cr
&=<\lambda \widehat P_k,P_{k-1}>-
\widehat b_k \sigma_{k-1,k-1}
=<\widehat P_k,\lambda P_{k-1}>-
\widehat b_k \sigma_{k-1,k-1} \cr
&=<\widehat P_k,P_k>+a_{k-1}<\widehat
P_k,P_{k-1}>+
b_{k-1}<\widehat P_k,P_{k-2}>-\widehat b_k \sigma_{k-1,k-1}
\cr
&=\sigma_{k,k}-\widehat b_k \sigma_{k-1,k-1},\cr}$$
since $\sigma_{k,k-1}$ and~$\sigma_{k,k-2}$ vanish by the triangularity of
the Chebyshev matrix.
Thus $$\widehat b_k={\sigma_{k,k} \over \sigma_{k-1,k-1}}.\eqno(2.16)$$
Next
$$\eqalignno
{0=\sigma_{k+1,k}&=<\widehat P_{k+1},P_k>
=<(\lambda-\widehat a_k)\widehat P_k-
\widehat b_k \widehat P_{k-1},P_k>\cr
&=<\lambda \widehat P_k,P_k>-
\widehat a_k \sigma_{k,k}-\widehat b_k \sigma_{k-1,k}
=<\widehat P_k,\lambda P_k>-
\widehat a_k \sigma_{k,k}-\widehat b_k \sigma_{k-1,k} \cr
&=<\widehat P_k,P_{k+1}+a_k P_k+b_k P_{k-1}>-
\widehat a_k \sigma_{k,k}-\widehat b_k \sigma_{k-1,k} \cr
&=\sigma_{k,k+1}+a_k \sigma_{k,k}-
\widehat a_k \sigma_{k,k}-\widehat b_k \sigma_{k-1,k}\cr}$$
Therefore we have
$$\widehat a_k=a_k+{\sigma_{k,k+1} \over \sigma_{k,k}}-
{\sigma_{k-1,k} \over \sigma_{k-1,k-1}}.\eqno(2.17)$$
As one can see from relations~(2.16),~(2.17), to calculate
the values $\widehat a_k$ and~$\widehat b_k$
for any fixed index~$k$ it is enough to know two entries
of row $(k-1)$ and row $k$ of the matrix~$T_{\widehat P,P}$
in addition to the known parameters $a_k$ and~$b_k$ .
If the scalar product $< , >$ on the
space~$\{ {\cal P\/}_{R^1} \}$ and initial data are known,
we can sequentially
generate all rows of the matrix~$T_{\widehat P,P}$. Relations~(2.16) and~(2.17)
determine the coefficients~$\widehat a_k$ and~$\widehat b_k$,
which provides the most stable algorithm known to us to construct the
system~$\{ \widehat P_k(\lambda)\}$
of the $< , >$-orthogonal polynomials. The quantities~$\sigma_{k,l}=<\widehat
P_k,P_l>$
can also be obtained by the recursion. Namely, on the basis of the
property~(A)
of the scalar product $< , >$, we get
$$\eqalignno
{\sigma_{k,l}&=<\widehat P_k,P_l>
=<(\lambda-\widehat a_{k-1})\widehat P_{k-1}-
\widehat b_{k-1} \widehat P_{k-2},P_l>\cr
&=<\lambda \widehat P_{k-1},P_l>-
\widehat a_{k-1}<\widehat P_{k-1},P_l>-\widehat b_{k-1}<\widehat
P_{k-2},P_l> \cr
&=<\widehat P_{k-1},P_{l+1}+a_l P_l+b_l P_{l-1}>-
\widehat a_{k-1}<\widehat P_{k-1},P_l>-\widehat b_{k-1}<\widehat
P_{k-2},P_l> \cr
&=\sigma_{k-1,l+1}+a_l \sigma_{k-1,l}+b_l \sigma_{k-1,l-1}-
\widehat a_{k-1} \sigma_{k-1,l}-\widehat b_{k-1}
\sigma_{k-2,l}.\cr} $$
Thus
$$\sigma_{k,l}=\sigma_{k-1,l+1}+a_l \sigma_{k-1,l}+b_l \sigma_{k-1,l-1}-
\widehat a_{k-1} \sigma_{k-1,l}-\widehat b_{k-1}
\sigma_{k-2,l}.\eqno(2.18)$$
The relations
$$\eqalignno{
\sigma_{k,l}&=\sigma_{k-1,l+1}-(\widehat a_{k-1}-a_l) \sigma_{k-1,l}-
\widehat b_{k-1} \sigma_{k-2,l}+b_l \sigma_{k-1,l-1},\cr
\widehat a_k&=a_k+{\sigma_{k,k+1} \over \sigma_{k,k}}-
{\sigma_{k-1,k} \over \sigma_{k-1,k-1}},\cr
\widehat b_k&={\sigma_{k,k} \over \sigma_{k-1,k-1}},&(m)-T\cr}$$
with obvious initial conditions
$$\sigma_{-1,l}=0,\qquad\sigma_{0,l}=<\widehat P_0,P_l>,$$
$$\widehat a_0=a_0+{\sigma_{0,1} \over \sigma_{0,0}},\qquad \widehat
b_0=\sigma_{0,0}$$
form the recurrence process, the so-called modified Chebyshev algorithm.
These relations give us an opportunity to organize a computational cycle for
the sequential calculation of the $\sigma_{k,l}$ by the
recurrence relations~(2.18)
and then of the $\widehat a_k,\widehat b_k$ by relations~(2.16) and~(2.17).
Thus, having the coefficients~$\{ a_k,b_k \}$ and a
well-conditioned algorithm for constructing the set~$\{P_k(\lambda)\}$,
we can calculate the first $n$
coefficients~$\widehat a_k,\widehat b_k,k=0,~1,\ldots,n-1$
from the first~$2n$ quantities~$<1,P_l>,l=0,~1,\ldots,2n-1$,
the modified moments, using the modified Chebyshev algorithm.
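As an illustration, the $(m)$-$T$ recursion takes only a few lines of code. The sketch below (Python; the function name and the Legendre test data are ours, not part of the original text) computes $\widehat a_k,\widehat b_k$, $k=0,\ldots,n-1$, from the first $2n$ modified moments via the relations~(2.18), (2.16) and~(2.17):

```python
def modified_chebyshev(nu, a, b):
    """Modified Chebyshev algorithm (the (m)-T recursion).

    nu   : the 2n modified moments nu_l = <1, P_l>,
    a, b : recurrence coefficients of the known monic system {P_l},
    returns (a_hat, b_hat): the n recurrence coefficients of the monic
    system orthogonal with respect to < , >."""
    n = len(nu) // 2
    sigma_prev = {}                              # row k-2 of the Chebyshev matrix
    sigma = {l: nu[l] for l in range(2 * n)}     # row k-1 (starts as row 0)
    a_hat = [a[0] + nu[1] / nu[0]]
    b_hat = [nu[0]]
    for k in range(1, n):
        row = {}
        for l in range(k, 2 * n - k):            # relation (2.18)
            row[l] = (sigma.get(l + 1, 0.0)
                      + (a[l] - a_hat[k - 1]) * sigma.get(l, 0.0)
                      + b[l] * sigma.get(l - 1, 0.0)
                      - b_hat[k - 1] * sigma_prev.get(l, 0.0))
        a_hat.append(a[k] + row[k + 1] / row[k] - sigma[k] / sigma[k - 1])  # (2.17)
        b_hat.append(row[k] / sigma[k - 1])                                 # (2.16)
        sigma_prev, sigma = sigma, row
    return a_hat, b_hat

# Power moments of Lebesgue measure on [0,1], taken as modified moments
# with respect to P_l = lambda^l (a_l = b_l = 0)
nu = [1.0 / (l + 1) for l in range(6)]
a_hat, b_hat = modified_chebyshev(nu, [0.0] * 6, [0.0] * 6)
print([round(x, 6) for x in a_hat])  # monic Legendre on [0,1]: [0.5, 0.5, 0.5]
print([round(x, 6) for x in b_hat])  # [1.0, 0.083333, 0.066667]
```

With $P_l=\lambda^l$ this is the original, generally ill-conditioned, Chebyshev algorithm; choosing $\{P_l\}$ close to the target system is what makes the modified version numerically stable.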
From (2.16),~(2.9) and~(2.10) we get
$$<\widehat P_k(\lambda),P_k(\lambda)>=\sigma_{k,k},\eqno(2.19)$$
$$ H_{<,>}(k)=\prod_{l=0}^k \sigma_{l,l}.\eqno(2.20)$$
Note that formula~(2.20) expresses the Hankel--Hadamard determinant
as the product of the diagonal entries of the Chebyshev matrix.
The formula connecting the set of monic polynomials satisfying the
recurrence relation~(2.4) and the monic system of polynomials corresponding
to the scalar product~$< , >$ is given by the following proposition:
\proclaim{Theorem 2.2}. {Let
$P=[P_0(\lambda),P_1(\lambda),\ldots,P_n(\lambda),\ldots]^T$
and~$\widehat P=[\widehat P_0(\lambda),\widehat P_1(\lambda),
\ldots,\widehat P_n(\lambda),\ldots]^T$ be the vectors consisting of the
polynomials satisfying the recurrence relations~(2.4) and~(2.14),
respectively, and let~$T_{\widehat P,P}$ be the Chebyshev matrix. Then
$$P=T_{\widehat P,P}^T \|\hbox{diag}(\sigma_{k,k}^{-1})\|\widehat P.$$ }
\vskip 1cm
\proclaim{2.4}. {Algebraic definition of the scalar product}
The transition from one scalar product on the polynomial space to another
scalar product is necessary in various problems. This transition
can be formally described by an algebraic definition of the scalar product
on the polynomial space~$\{ {\cal P\/}_{R^1} \}$.
Let $< , >$ be a scalar product on the polynomial space~$\{{\cal
P\/}_{R^1}\}$;
$\{ P_k(\lambda)\}$ be the orthonormal set of polynomials corresponding
to the scalar product $< , >$:
$$P_j(\lambda)=\sum_{l=0}^j \gamma_{l,j} \lambda^l .\eqno(2.21)$$
Then any sequence $\{\alpha\}=\alpha_0,\alpha_1,\ldots,\alpha_n,\ldots$
of real numbers defines a new scalar product~$< , >_\alpha$ (with the property
$<\lambda P,Q>_\alpha=
<P,\lambda Q>_\alpha$) on the polynomial
space~$\{ {\cal P\/}_{R^1} \}$. Namely, let us define the scalar product as
$$<P,Q>
_\alpha=\sum_{j=0}^{m+n} \gamma_j \alpha_j,\eqno(2.22)$$
where $m$ is the degree of~$P(\lambda)$, $n$ is the degree of~$Q(\lambda)$,
and $\gamma_j$ are the coefficients of the polynomial
$$P \overline{Q}=\sum_{j=0}^{m+n} \gamma_j P_j.\eqno(2.23)$$
Note that $$<P_r,1>_\alpha=\alpha_r.\eqno(2.24)$$
Thus, any scalar product on the polynomial
space~$\{ {\cal P\/}_{R^1} \}$ is defined in a unique way as
$< , >_\alpha$-scalar product,
if the scalar product $< , >$ on polynomial space~$\{ {\cal P\/}_{R^1} \}$
and the $< , >$-orthogonal system of polynomials are fixed.
At the same time the initial scalar product~$< , >$ can also be represented
through
the $< , >$-orthogonal system in an algebraic manner by means of the sequence
$$\buildrel (-1) \over {\alpha_r} =<\widehat P_r,1>.\eqno(2.25)$$
We consider the following {\it local property \/} to be very important
and fundamental:
{\it to compute the first $n$ elements of the
sequence~$\{ \buildrel (-1) \over {\alpha_k} \}$
it is sufficient to know the first $n$ elements of the sequence~$\{\alpha_k\}$
\/}.
This follows from relation~(2.25) and Theorem~2.2.
The kernel
$${\cal K}_\alpha(j;k)=<P_j,P_k>_\alpha \eqno(2.26)$$
is connected with the transition from the scalar product~$< , >$
to~$< , >_\alpha$.
This kernel can be represented through the Haar
matrix~$\|C_{j,k}^r\|$ of the system $\{P_j\}$ and the
sequence~$\{\alpha_k\}$
as $${\cal K}_\alpha(j;k)=\sum_{r=|j-k|}^{j+k} C_{j,k}^r \alpha_r.\eqno(2.27)$$
If $\cal J$ is the Jacobi matrix of the system~$\{P_j(\lambda)\}$,
then thanks to~(2.13) we have
$${\cal K}_\alpha(j;k)=\|\sum_{r=|j-k|}^{j+k} \alpha_r P_r({\cal J})\|_{j,k}.
\eqno(2.28)$$
The next theorem is self-evident:
\proclaim{Theorem 2.3}. {
The following properties are equivalent:
a) the scalar product~$< , >_\alpha$ is defined, i.e.
$$\forall P\in\{ {\cal P}_{R^1}\}:\qquad <P,P>_\alpha\geq0;$$
b) the kernel~${\cal K}_\alpha(j;k)$ is positive definite;
c) the sequence $$s_\alpha(n)=<\lambda^n,1>_\alpha \eqno(2.29)$$
forms a positive definite kernel~$s_\alpha(j+k)$;
d) the sequence~$\{\alpha_k\}$ is a positive modified moment sequence with
regard to the system~$\{P_j(\lambda)\}$, i.e. there exists a non-negative
measure~$\mu_\alpha(d\lambda)$ having moments of all orders, such that
$$\alpha_j=\int \limits_{-\infty}^\infty P_j(\lambda)\,\mu_\alpha(d\lambda);
\eqno(2.30)$$
e) the form defined on the set of polynomials, dense in~${\cal H}_{<,>}$,
by means of
the relation $$\alpha({\cal J})\,(P,Q)=
<P,Q>_\alpha$$
is positive. The Hilbert space~${\cal H}_{<,>}$ is the closure of the set of
polynomials
with respect to the norm~$\sqrt{<P,P>}$.}
Note that
$$\alpha({\cal J})\,(P_j,P_k)={\cal K}_\alpha(j;k).\eqno(2.31)$$
If $\alpha(\lambda)$ is a polynomial, then $\alpha({\cal J})$ is an operator
in the space~${\cal H}_{<,>}$ acting from the dense set~$\{ {\cal P\/}_{R^1} \}$
onto~$\{ {\cal P\/}_{R^1} \}$:
$$\alpha({\cal J})\colon P(\lambda)\rightarrow\alpha(\lambda)P(\lambda).$$
\proclaim{Theorem 2.4}. {
Let $< , >_\alpha$ be the $``\alpha''$-scalar
product on the space~$\{ {\cal P\/}_{R^1} \}$
defined by~(2.22)--(2.23) and $<P,P>_\alpha \geq0.$ Then
the set~$\Sigma_\alpha$ of measures~$\mu_\alpha(d\lambda) \geq0$, such that
$$\alpha_j=\int\limits_{-\infty}^\infty
P_j(\lambda)\,\mu_\alpha(d\lambda),$$
and the set~$\Sigma_{s_\alpha}$ of measures, such that
sequence~$s_\alpha(n)=<\lambda^n,1>_\alpha$ is represented in the form
$$s_\alpha(n)=\int\limits_{-\infty}^\infty \lambda^n\,\mu_\alpha(d\lambda),
\qquad \mu_\alpha(d\lambda)\geq0, \eqno(2.32)$$
are identical. The sequence~$\{s_\alpha(n)\}$ can be obtained from the
triangular system of linear equations
$$\sum_{l=0}^j \gamma_{l,j}s_\alpha(l)=\alpha_j, \qquad j\in Z_+.\eqno(2.33)$$
}
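Since the matrix of the system~(2.33) is triangular with the leading coefficients $\gamma_{j,j}\neq0$ on the diagonal, the power moments $s_\alpha(l)$ are found by plain forward substitution. A sketch (Python; the function name and the test polynomials are ours, used only for illustration):

```python
def power_moments_from_modified(alpha, P):
    """Forward substitution for the triangular system (2.33):
    sum_{l<=j} gamma_{l,j} s(l) = alpha_j.

    alpha : modified moments alpha_j = <P_j, 1>_alpha,
    P     : P[j] = [gamma_{0,j}, ..., gamma_{j,j}], coefficients of P_j,
    returns the power moments s_alpha(0), ..., s_alpha(n)."""
    s = []
    for j, aj in enumerate(alpha):
        acc = sum(P[j][l] * s[l] for l in range(j))
        s.append((aj - acc) / P[j][j])
    return s

# Monic example: P_0 = 1, P_1 = x, P_2 = x^2 - 1/2; modified moments [1, 0, 0]
P = [[1.0], [0.0, 1.0], [-0.5, 0.0, 1.0]]
print(power_moments_from_modified([1.0, 0.0, 0.0], P))  # [1.0, 0.0, 0.5]
```

Note that this conversion is cheap but can be badly conditioned, which is precisely why the modified Chebyshev algorithm works with the $\alpha_j$ directly instead of passing through the $s_\alpha(l)$.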
\vskip 1cm
\proclaim{2.5}. {$``\alpha''$-definition as multiplication by function}
The function
$$\alpha(\gamma)=\sum_{j=0}^\infty \alpha_j P_j(\gamma)\eqno(2.34)$$
can formally be associated with a sequence~$\{\alpha_j\}$ for a fixed sequence
of polynomials~$\{P_j(\gamma)\}$. If~$\varrho(d\lambda)$ is
a measure having moments of all orders, the integral
$$\int_{-\infty}^\infty P(\lambda)\,\alpha(\lambda)
\,\varrho(d\lambda),$$
where $P(\lambda)$ is a polynomial, can be rewritten
in the following way:
$$\int\limits_{-\infty}^\infty
P(\lambda)\,\alpha(\lambda)\,\varrho(d\lambda)
=\sum_{j=0}^{{\rm deg}\,P}\gamma_j\alpha_j.\eqno(2.35)$$
Here, $\gamma_j$ are the coefficients of the expansion of $P(\lambda)$ as a
linear combination of the polynomials~$\{ P_j(\gamma) \}$.
\proclaim{Theorem 2.5}. {
Let $< , >_1$ and~$< , >_2$ be the scalar products on the polynomial
space~$\{ {\cal P\/}_{R^1} \}$; $\{P_j(\gamma)\}$ be the $< , >_2$-orthogonal
system of polynomials;
$$\alpha_j=<P_j,1>_1 \eqno(2.36)$$
be the modified moments. Then
$$<P,Q>_1=
<P,Q>_{2,\alpha}\ ,\eqno(2.37)$$
and if $< , >_2=< , >_{\varrho(d\lambda)}$, then
$$<P,Q>
_1=<P,Q>
_{\alpha(\lambda)\varrho(d\lambda)}\ .\eqno(2.38)$$ }
\beginsection{3. Scalar product on the space of trigonometric polynomials}
In this section we shall consider polynomials of one independent
complex variable $z$ on the unit circle~$\C=\{z\colon |z|=1\}$ which
are defined by their recurrence relations
$$\eqalign {u_{n+1}&=a_{n+1}zu_n+b_{n+1}v_n,\cr
v_{n+1}&={\overline b}_{n+1}zu_n+a_{n+1}v_n,
\qquad n=0,~1,~2,\ldots\cr}$$
and the initial conditions
$$u_0(z)\equiv1,\qquad v_0(z)\equiv1.$$
Assuming that $${a_n={\overline a}_n,}\qquad {a_n^2-|b_n|^2\neq 0,}
\qquad n=1,~2,\ldots$$
we will develop a technique analogous to the one expounded for the polynomial
space~$\{ {\cal P}_{R^1}\}$ on $\R^1$.
Let $\{ {\cal P}_C\}$ be a space of polynomials with complex coefficients
on the unit circle~$\C=\{z\colon |z|=1\}$,
i.e. a space of trigonometric polynomials, and let~$< , >$ denote a
scalar product on the space~$\{ {\cal P}_C\}$ with the property
$$<zP(z),zQ(z)>
=<P(z),Q(z)>
.$$
We shall assume that the Toeplitz determinants are nonzero:
$$\Delta_n=\det\left| \, \matrix{
{<1,1>}&{<z,1>}&\ldots&{<z^n,1>}\cr
\overline {<z,1>}&{<1,1>}&\ldots&{<z^{n-1},1>}\cr
\vdots&\vdots&\ddots&\vdots\cr
\overline {<z^n,1>}&\overline {<z^{n-1},1>}&\ldots&{<1,1>}\cr
}\right|\neq0 ,\qquad n=0,~1,\ldots .$$
It is well known (see for example [1]) that there exists a
$< , >$-orthogonal system of polynomials~$\{P_n(z)\}$ such that
{\vskip 0.5cm
\vbox{\hbox{\qquad { 1)} $\deg P_n(z)=n$}
\hbox{\qquad { 2)} the highest-degree coefficient is positive}
\hbox{\qquad { 3)} $<P_n(z),P_m(z)>=0,$ if~$ n\neq m.$ } }
As in Section~2, we will also consider two sets of $< , >$-orthogonal
polynomials: the monic system and the normalized system.
The monic polynomial is characterized by the highest degree coefficient
being equal to $1$
and in terms of determinants is represented as
$$P_n(z)=
{1\over \Delta_{n-1}} \det \left| \, \matrix{
{<1,1>}&{<z,1>}&\ldots&{<z^n,1>}\cr
\overline{<z,1>}&{<1,1>}&\ldots&{<z^{n-1},1>}\cr
\vdots&\vdots&\ddots&\vdots\cr
\overline{<z^{n-1},1>}&\overline{<z^{n-2},1>}&\ldots&{<z,1>}\cr
{1}&{ z}&\ldots&{z^n}\cr
}\right|. $$
The $n$-th degree normalized polynomial~$\widetilde P_n(z)$ is
$$\widetilde P_n(z)=
{1\over \sqrt{\Delta_{n-1}\Delta_n}} \det \left| \, \matrix{
{<1,1>}&{<z,1>}&\ldots&{<z^n,1>}\cr
\overline{<z,1>}&{<1,1>}&\ldots&{<z^{n-1},1>}\cr
\vdots&\vdots&\ddots&\vdots\cr
\overline{<z^{n-1},1>}&\overline{<z^{n-2},1>}&\ldots&{<z,1>}\cr
{1}&{ z}&\ldots&{z^n}\cr
}\right|. $$
The polynomials~$\{P_n(z)\}$ and~$\{\widetilde P_n(z)\}$ satisfy the
recurrence relations for the orthogonal sets of polynomials:
$$P_{n+1}(z)=zP_n(z)+b_{n+1}P_n^*(z),\eqno(3.1)$$
$$\widetilde P_{n+1}(z)=\widetilde a_{n+1}z\widetilde P_n(z)+
\widetilde b_{n+1}\widetilde P_n^*(z),\eqno(3.2)$$
where~$R^{*}(z) \buildrel \rm def \over = z^k \overline R(1/z)$ for any
polynomial~$R(z)$ of degree~$k$; $\widetilde a_k$ and~$\widetilde b_k$ are
some constants for which formal analytic relations in determinant
form can be obtained. Note that $b_0$ does not take part in the
recursion~(3.1), however, below it will be convenient to define
$$|b_0|^2=1-<1,1>.$$
From recurrence relations~(3.1)--(3.2) the following three-term recurrence
relations immediately follow:
$$b_{n+1}P_{n+2}(z)=(zb_{n+1}+b_{n+2})P_{n+1}(z)-
b_{n+2}(1-|b_{n+1}|^2)zP_n(z),\eqno(3.3)$$
$$\widetilde b_{n+1}\widetilde P_{n+2}(z)=
(z\widetilde a_{n+2}b_{n+1}+\widetilde a_{n+1}\widetilde b_{n+2})\widetilde
P_{n+1}(z)-
\widetilde b_{n+2}(\widetilde a_{n+1}^2-|\widetilde
b_{n+1}|^2)z\widetilde P_n(z).
\eqno(3.4)$$
We will use $h_k$ defined for $k\in Z_+$ by
$$h_k=<P_k(z),P_k(z)> ,\qquad k=0,~1,\ldots~.$$
It is easy to check [1] that
$$h_k={\Delta_k \over \Delta_{k-1} }$$
and $$h_k=h_{k-1}(1-|b_k|^2).\eqno(3.5)$$
Then, in complete analogy to the real line case,
$$<P_k(z),P_k(z)>=(1-|b_0|^2)(1-|b_1|^2)\ldots(1-|b_k|^2) \eqno(3.6)$$
and
$$\Delta_k={(1-|b_0|^2)}^{k+1}{(1-|b_1|^2)}^{k}\ldots(1-|b_k|^2). \eqno(3.7)$$
The relation~(3.6) allows us to normalize the monic orthogonal polynomials.
The trigonometric version of the Favard theorem is as follows:
\proclaim {Theorem 3.1}. {The scalar product $< , >$ on the polynomial
space~$\{ {\cal P}_C\}$ is definite:
$$ \forall P \in \{ {\cal P}_C\} :\qquad <P,P>\geq0$$
if and only if $$|b_k|^2<1,\qquad k=0,~1,~2,\ldots~.\eqno(3.8)$$}
Omitting details, we now give the trigonometric version of the modified
Chebyshev algorithm.
Let $\{P_n(z)\}$ be a set of monic orthogonal polynomials satisfying
recurrence relations~(3.3), and let $\{\pi_n(z)\}$ be a set of
monic $< , >$-orthogonal polynomials with the following recurrence relations
$$\beta_{n+1}\pi_{n+2}(z)=(z\beta_{n+1}+\beta_{n+2})\pi_{n+1}(z)-
\beta_{n+2}(1-|\beta_{n+1}|^2)z\pi_n(z) \ .\eqno(3.9)$$
The coefficients~$\{\beta_n\}$ are not assumed to be known.
\proclaim{Definition}. { The matrix~$T_{\pi,P}$ of modified moments,
where $$\|T_{\pi,P}\|_{k,l}=\sigma_{k,l}{\buildrel \rm def \over =}
<\pi_k,P_l>$$
will be called a {\it Chebyshev matrix}.}
Note that $\sigma_{k,l}=0$ for all~$k>l$, i.e. the Chebyshev matrix is
triangular.
The entries of the Chebyshev matrix~$T_{\pi,P}$ and the coefficients of
the three-term recurrence formulas (3.3) and~(3.9) are connected by
relations that can also be considered as a recursion process to determine
entries of the Chebyshev matrix and unknown elements~$\beta_k$. Writing
it down in a recursive manner, we have:
{{\it the initial conditions:}~$\sigma_{-1,l}=0,\quad\sigma_{0,l}=<1,P_l>,
\quad l=0,~1,\ldots,n ,$
$\qquad\qquad\qquad\qquad\qquad\beta_0=1$
{\it and for all} $k=1,~2,\ldots,~n$
$$\beta_k={1\over\beta_{k-1}}\bigl[-{\sigma_{k-1,k}\over\sigma_{k-1,k-1}}
+{\overline b_k\over\overline b_{k-1}}
+(1-|\beta_{k-1}|^2){\sigma_{k-2,k-1}\over\sigma_{k-1,k-1}}
-(1-|\beta_{k-1}|^2){\overline b_k\over\overline b_{k-1}}
(1-|b_{k-1}|^2){\sigma_{k-2,k-2}\over\sigma_{k-1,k-1}}
\bigr]$$
\rightline {{~}~$(m)-T_C$}
$$\sigma_{k,l}=\left\{\eqalign{
& 0, \qquad l=0,~1,\ldots,~k-1; \cr
& {\overline b_l\over\overline b_{l-1}}\sigma_{k,l-1}+
{\beta_k\over\beta_{k-1}}\sigma_{k-1,l}-
{\beta_k\over\beta_{k-1}}{\overline b_l\over\overline b_{l-1}}
\sigma_{k-1,l-1}+\sigma_{k-1,l-1}\cr
&-{\overline b_l\over\overline b_{l-1}}(1-|b_{l-1}|^2)\sigma_{k-1,l-2}-
{\beta_k\over\beta_{k-1}}(1-|\beta_{k-1}|^2)\sigma_{k-2,l-1}\cr
&+{\beta_k\over\beta_{k-1}}(1-|\beta_{k-1}|^2)
{\overline b_l\over\overline b_{l-1}}(1-|b_{l-1}|^2)\sigma_{k-2,l-2},
\qquad l=k,~k+1,\ldots,~n \cr} \right. $$
}
In the case~$P_k(z)=z^k$, i.e. $b_k=0$ for~$k=0,~1,\ldots,$ the
$(m)-T_C$-recurrence relations are transformed into the well-known Levinson
algorithm [14]:
{\qquad {\it step~1:} $\pi_0(z)=1,\qquad\sigma_{0,0}=<1,1>,$
\qquad {\it step~2:} for~$k=1,~2,\ldots,~n$
$$\eqalignno {
\beta_k&=-{<z\pi_{k-1}(z),1>\over\sigma_{k-1,k-1}} ,\cr
\sigma_{k,k}&=(1-|\beta_k|^2)\sigma_{k-1,k-1},&(3.10)\cr
\pi_k(z)&=z\pi_{k-1}(z)+\beta_k\pi_{k-1}^*(z).\cr}$$
}
It is obvious by analogy with Section~2 that the relations~(3.10) can be
considered as an effective way to transform~$\{z^k\}$ into~$\{\pi_k(z)\}$.
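The relations~(3.10) can be sketched directly in code. In the version below (Python; the function name and the autocorrelation-style test moments are our choices, and real moments are assumed for brevity) the scalar product is applied through $<z^l,1>=c_l$, and $\pi_{k-1}^*$ is obtained by reversing the coefficient list:

```python
def levinson(c):
    """Levinson recursion (3.10) from the Toeplitz moments c_l = <z^l, 1>.

    Returns the reflection coefficients beta_k, the quantities
    sigma_{k,k}, and the monic orthogonal polynomial pi_n as a
    coefficient list (pi[j] is the coefficient of z^j).  Real moments
    are assumed, so conjugation is omitted."""
    n = len(c) - 1
    pi = [1.0]                          # pi_0(z) = 1
    sigma = c[0]                        # sigma_{0,0} = <1,1>
    betas, sigmas = [], [sigma]
    for k in range(1, n + 1):
        zpi = [0.0] + pi                # coefficients of z * pi_{k-1}(z)
        beta = -sum(zpi[j] * c[j] for j in range(k + 1)) / sigma
        star = pi[::-1] + [0.0]         # pi_{k-1}^*(z), padded to degree k
        pi = [zpi[j] + beta * star[j] for j in range(k + 1)]
        sigma *= 1.0 - beta * beta      # sigma_{k,k} = (1-|beta_k|^2) sigma_{k-1,k-1}
        betas.append(beta)
        sigmas.append(sigma)
    return betas, sigmas, pi

# AR(1)-type moments c_l = (1/2)^l: the first reflection coefficient is
# -1/2 and all subsequent ones vanish, so pi_2(z) = z^2 - z/2
betas, sigmas, pi = levinson([1.0, 0.5, 0.25])
print(betas, sigmas, pi)
```

By Theorem~3.1, the computed $\beta_k$ immediately test the definiteness of the underlying Toeplitz form: it is definite precisely when every $|\beta_k|<1$.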
\proclaim{Theorem 3.2}. {Let~$P=[P_0(z),P_1(z),\ldots,P_n(z),\ldots]^T$
and~$\pi=[\pi_0(z),\pi_1(z),\ldots,\pi_n(z),\ldots]^T$ be the vectors
consisting of the polynomials satisfying the recurrence relations~(3.3)
and~(3.9) respectively
and let~$T_{\pi,P}$ be the Chebyshev matrix. Then
$$P=T_{\pi,P}^T\|\hbox{diag}(\sigma_{k,k}^{-1})\|\pi.$$ }
We will not consider the questions concerning the algebraic definition
of the scalar product on the space $\{ {\cal P}_C\}$ of trigonometric
polynomials, which are entirely analogous to the case of the scalar product
on the space~$\{ {\cal P}_{R^1}\}$ of polynomials on the real line~$\R^1$.
\beginsection{4. Test of the positive definiteness of the Hankel and Toeplitz
quadratic forms}
The question of checking the positive definiteness of the Hankel
and Toeplitz quadratic forms is not only of theoretical interest, but
it is also very important in signal processing problems [15,19] and
studying quantum-mechanical systems [7--9,20].
Let~$\|s_{j+k}\|_{j,k=0}^\infty$ be a Hankel
(respectively~$\|c_{j-k}\|_{j,k=0}^\infty$ be a Toeplitz) matrix of a
quadratic form. Theoretically the question about its positive definiteness
is solved by finding the signature of a form or, what is equivalent,
by finding the signs of the determinants in the sequence
of~$H_k=\det\|s_{j+m}\|_{j,m=0}^k$ ($\Delta_k=\det\|c_{j-m}\|_{j,m=0}^k$),
$k=0,~1,\ldots .$ However, for the reasons mentioned above, the
calculation, and even the determination of the sign, of~$H_k$
( $\Delta_k$ ) is a
very difficult problem, and it becomes still harder in cases
where the entries of~$H_k$ ( $\Delta_k$ ) depend on some parameter
subject to decision.
On the other hand, the Favard theorem allows one to find the definition
index of the scalar product~$< , >_s$ (~$< , >_c$~) on the polynomial
space~$\{ {\cal P}_{R^1} \}$ (~$\{ {\cal P}_C \}$~) and in this way the
question about positive definiteness of the corresponding quadratic form
can be solved.
Namely, by relations~(2.9) and~(3.6)
$$<P_m,P_m>=\prod_{k=0}^m \widetilde b_k\qquad
(~~~<\pi_m,\pi_m>=\prod_{k=0}^m (1-|\widetilde \beta_k|^2)~~~)~,$$
where by definition $\widetilde b_0$ is equal to~$s_0 \quad (~c_0~)$,
the remaining~$\widetilde b_k$ are determined by the modified Chebyshev
algorithm with $a_k=b_k=0$ (accordingly, the Levinson algorithm in
the trigonometrical case) and the initial data~$\sigma_{0,l}=s_l \quad
( \sigma_{0,l}=c_l )$.
Testing the positive definiteness of the quadratic forms is
connected with problems of special integral representation of
sequences with a lacuna, or a set of lacunae, in the spectrum of the
representing
measure.
Below the case of one lacuna in the spectrum of the representing
measure is considered; in essence, this case already covers the general
situation.
\vskip 1cm
a) {\it Stieltjes case \/}
It is well known [1] that the question of special integral representation
of the sequence~$\{s_k\}$ with a measure defined on the right half-axis
is equivalent to the positive definiteness of two quadratic forms with
matrices~$S=\|s_{j+k}\|_{j,k=0}^\infty$
and~$S^{(1)}=\|s_{j+k+1}\|_{j,k=0}^\infty$. In terms of the modified
Chebyshev
algorithm this can be reformulated as follows:
\proclaim{Theorem 4.1}. {Let~$\{\widehat b_k\}$ and~$\{\widehat b_k^{(1)}\}$
be the parameters calculated by two modified Chebyshev
algorithms with the initial conditions~$\sigma_{0,l}=s_l$
and~$\sigma_{0,l}=s_{l+1}$
respectively. Then the sequence~$\{s_k\}$ is a Stieltjes one if and
only if $\widehat b_k >0$ and~$\widehat b_k^{(1)} >0$ for all $k\in Z_+$. }
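As a sketch of how Theorem 4.1 can be applied numerically (our illustration,
not the authors' code; the Chebyshev algorithm is taken with $a_k=b_k=0$ and
initialized by the raw moments): for the Stieltjes sequence $s_l=l!$, the
moments of $e^{-x}\,dx$ on $[0,\infty)$, the two runs must produce the positive
Laguerre coefficients $\widehat b_k=k^2$ and $\widehat b_k^{(1)}=k(k+1)$.

```python
from math import factorial

def chebyshev_bhat(s, n):
    # Chebyshev algorithm with a_k = b_k = 0 on raw moments s[0..2n-1];
    # returns bhat_0 = s_0 and bhat_k = sigma_{k,k}/sigma_{k-1,k-1}.
    sig_prev, sig = [0.0] * (2 * n), list(s[:2 * n])
    ahat, bhat = [s[1] / s[0]], [s[0]]
    for k in range(1, n):
        new = [0.0] * (2 * n)
        for l in range(k, 2 * n - k):
            new[l] = sig[l + 1] - ahat[k - 1] * sig[l] - bhat[k - 1] * sig_prev[l]
        ahat.append(new[k + 1] / new[k] - sig[k] / sig[k - 1])
        bhat.append(new[k] / sig[k - 1])
        sig_prev, sig = sig, new
    return bhat

s = [float(factorial(l)) for l in range(11)]
bhat = chebyshev_bhat(s, 5)        # from s_l      -> k^2   (Laguerre)
bhat1 = chebyshev_bhat(s[1:], 5)   # from s_{l+1}  -> k(k+1)
# All entries positive: {s_l} is a Stieltjes sequence by Theorem 4.1.
```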
\vskip 0.5cm
b) {\it Case of a fixed bounded lacuna~$(a,b)$ \/}
\proclaim {Theorem 4.2}. { The sequence~$\{s_k\}$ can be represented by
an integral with a measure defined on the real line~$\R^1$ with the
lacuna~$(a,b)$, $-\infty<a<b<\infty$. }
\beginsection{5. Separation of roots of real polynomials}
Let~$<\lambda^k,\lambda^l> \buildrel \rm def \over = s_{k+l}$ determine the
scalar
product~$< , >$ on the polynomial space~$\{ {\cal P\/}_{R^1} \}$, which is
real for real polynomials and has the property~(A).
As can be seen from the Borchardt criterion, the problem under investigation
reduces to the study of the definition index of the scalar product introduced
on the polynomial space~$\{ {\cal P\/}_{R^1} \}$.
In terms of the Chebyshev matrix, the Borchardt criterion can be reformulated
in the following manner:
\proclaim {Theorem 5.2}. {The number of the complex conjugate roots of the real
polynomial~$f(z)$ is equal to the number of negative elements in the sequence
of the diagonal entries of the Chebyshev
matrix~$\sigma_{0,0},\sigma_{1,1},\ldots,\sigma_{r,r}$. Accordingly,
the number of distinct real roots of the real polynomial is equal to
the excess of the number of positive elements over the number
of negative elements in the sequence of the diagonal entries of the
Chebyshev matrix corresponding to the sequence of Newton sums~$s_p$,
augmented by unity:~$1,\sigma_{0,0},\sigma_{1,1},\ldots,\sigma_{r,r}.$ }
\vskip 1cm
\proclaim {5.2}. {Separation of real roots of the real polynomial}
\proclaim {Theorem 5.3}. {(Joachimsthal) The number of distinct real roots
of the real polynomial~$f(z)$ in the interval~$(a,b)$ is equal to the decrease
in the number of sign alternations in passing from $a$ to~$b$ in
the sequence of polynomials
$$1,\delta_1(t),\delta_2(t),\ldots,\delta_r(t),$$
where
$$\eqalignno{
\delta_k(t)&=\det\left| \, \matrix{
{s_0t-s_1}&{s_1t-s_2}&\ldots&{s_{k-1}t-s_k}\cr
{s_1t-s_2}&{s_2t-s_3}&\ldots&{s_kt-s_{k+1}}\cr
\vdots&\vdots&\ddots&\vdots\cr
{s_{k-1}t-s_k}&{s_kt-s_{k+1}}&\ldots&{s_{2k-2}t-s_{2k-1}}\cr
}\right| \cr &=
\det\left| \, \matrix{
{s_0}\hfill&{s_1}\hfill&\ldots&{s_k}\hfill\cr
{s_1}\hfill&{s_2}\hfill&\ldots&{s_{k+1}}\hfill\cr
\vdots&\vdots&\ddots&\vdots\cr
{s_{k-1}}\hfill&{s_k}\hfill&\ldots&{s_{2k-1}}\hfill\cr
{1}&{t}&\ldots&{t^k}\cr
}\right| ,\qquad k=1,2,\ldots,r \cr}$$
and in this sequence there are no two successive zeros for $t=a$ and $t=b$;
$s_p$ are the Newton sums of the polynomial~$f(z)$.}
Considering the value~$s_p$ as~$<\lambda^j,\lambda^k>$, $j+k=p$, we can see
that the
functions~$\delta_k(t)$, $k=1,~2,\ldots,~r$, form the sequence of orthogonal
polynomials
with regard to the scalar product~$< , >$. Therefore if the determinants~$H_k$
(for example, they have been calculated in the above way) and the
system of monic $< , >$-orthogonal polynomials are known, then the
system~$\{\delta_k(t)\}$ is known too (together with the collection of the
coefficients of the recurrence relation of the system). The availability of
the elements of the sets~$\{\widehat a_k;\widehat b_k\}$ permits one to
calculate
$\{\delta_k(t)\}$ for any value~$t$ in a stable way, and then the
Joachimsthal criterion can be applied.
\beginsection{6. Test of function range }
\proclaim{6.1}. {Univariate functions;
discretization of the estimation problem.}
In this section we will consider the problems connected with the
test of the function range by its moments.
First of all, let us give some heuristic considerations. Let
$\varrho(d\lambda)\geq0$ be some measure on $\R^1$
having moments of all orders;
$f(\lambda) \in L_\varrho^2$ and~$f(\lambda)\geq0$ for all $\lambda\in\R^1$.
Obviously the sequence~$\{s_f(n)\}$, where
$$s_f(n)=\int\limits_{-\infty}^\infty \lambda^n f(\lambda)\varrho(d\lambda),
\eqno(6.1)$$
is positive definite. Generally speaking, the inverse proposition, i.e.
$$(~\{s_f(n)\} \quad\hbox{is positive definite}~)\quad \Longrightarrow
\quad (~f(\lambda)\geq0 \qquad\varrho-a.e.~), \eqno(6.2)$$
is not true. This can easily be seen in the case where the
measure~$\varrho(d\lambda)$
generates an indeterminate moment sequence, and it can also fail for a
measure~$\varrho(d\lambda)$ corresponding to a determinate moment sequence.
Suppose a measure~$\varrho(d\lambda)$ satisfying~(6.2) has been chosen
and the
sequence~$\{s_f(n)\}$ is positive definite, i.e. the
sequence~$\{s_f(n)\}$ generates a definite scalar product on
the polynomial
space~$\{ {\cal P}_{R^1} \}$. Then $f(\lambda)\geq0\quad \varrho-a.e.$~.
Obviously, this approach
can be put into practice to obtain various two-sided bounds for function
ranges.
\proclaim{Definition}. {Let~$\varrho(d\lambda)\geq0$ be a measure
having moments of all orders. It will be called an {\it $U.p.$-measure \/} if
the moment sequence corresponding to $\varrho(d\lambda)$
is positive definite and
implication~(6.2) is true.}
As is known, $U.p.$-measures were first considered in [12], but no
special name was given to them there.
We will confine ourselves to two classes of $U.p.$-measures:
{\vskip 0.5cm
\vbox{\hbox{\quad { 1)} measures with bounded support}
\hbox{\quad { 2)} measures for which there exists $a>0$ such
that
$\int_{-\infty}^{\infty}\exp{(a|\lambda|)}\varrho(d\lambda)<\infty$.}}}
Let $\varrho(d\lambda)$ be some $U.p.$-measure; $\{ P_n(\lambda)\}$ be the
system of monic $< , >_{\varrho(d\lambda)}$-orthogonal polynomials. If
$\varrho(d\lambda)$ is one of the standard measures, then, as was
shown above, the polynomial set~$\{ P_n(\lambda)\}$ or, equivalently,
the coefficients of its recurrence relation, can be found by means of
the modified Chebyshev algorithm.
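The modified Chebyshev algorithm referred to throughout can be sketched in the
standard Wheeler--Gautschi form [6,21] (a minimal illustration of ours, not the
authors' code). As a check it is fed the raw moments $\nu_l=1/(l+1)$ of the
Lebesgue measure on $[0,1]$ (so $a_l=b_l=0$), and must recover the
shifted-Legendre coefficients $\widehat a_k=1/2$, $\widehat b_k=k^2/(4(4k^2-1))$:

```python
def modified_chebyshev(nu, a, b, n):
    """Wheeler's recursion: modified moments nu[l] = sigma_{0,l},
    l = 0..2n-1, w.r.t. an auxiliary monic system with known recurrence
    coefficients a[l], b[l]  ->  recurrence coefficients (ahat, bhat)
    of the monic polynomials orthogonal for the target scalar product."""
    sig_prev, sig = [0.0] * (2 * n), list(nu[:2 * n])
    ahat, bhat = [a[0] + nu[1] / nu[0]], [nu[0]]
    for k in range(1, n):
        new = [0.0] * (2 * n)
        for l in range(k, 2 * n - k):
            new[l] = (sig[l + 1] - (ahat[k - 1] - a[l]) * sig[l]
                      - bhat[k - 1] * sig_prev[l] + b[l] * sig[l - 1])
        ahat.append(a[k] + new[k + 1] / new[k] - sig[k] / sig[k - 1])
        bhat.append(new[k] / sig[k - 1])
        sig_prev, sig = sig, new
    return ahat, bhat

# Check: raw moments of d(lambda) on [0,1] -> shifted-Legendre coefficients.
nu = [1.0 / (l + 1) for l in range(10)]
ahat, bhat = modified_chebyshev(nu, [0.0] * 10, [0.0] * 10, 5)
```

With genuinely modified moments (nonzero $a_l$, $b_l$) the same routine is far
better conditioned than the raw-moment version, which is the point of the
algorithm.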
The monic set of $< , >_{f(\lambda)\varrho(d\lambda)}$-orthogonal polynomials
is found by the modified Chebyshev algorithm, where $a_k$, $b_k$ are the
coefficients of the recurrence relation of the
$< , >_{\varrho(d\lambda)}$-orthogonal system; the initial data of the
recursion are
$$\sigma_{-1,l}=0,\qquad
\sigma_{0,l}=
<f(\lambda),P_l(\lambda)>_{\varrho(d\lambda)}.$$
In our case the condition~$f(\lambda)\geq0 \quad\varrho-a.e.$ is equivalent to
the definiteness of the scalar product
\hbox{$< , >_{f(\lambda)\varrho(d\lambda)}$}
on the polynomial space~$\{ {\cal P}_{R^1} \}$.
\proclaim{Theorem 6.1}. { Let~$\varrho(d\lambda)$ be some~$U.p.$-measure;
$f(\lambda)\in L_\varrho^2$ and let the entries~$\sigma_{k,k}$ of the Chebyshev
matrix obtained by the modified Chebyshev algorithm with initial conditions
$$\sigma_{-1,l}=0,\qquad
\sigma_{0,l}=
<f(\lambda),P_l(\lambda)>_{\varrho(d\lambda)}$$
be positive. Then~$f(\lambda)\geq0\quad\varrho-a.e.~.$ }
Note that the system~$\{ \widehat P_k(\lambda)\}$ of
$< , >_{f(\lambda)\varrho(d\lambda)}$-orthogonal polynomials is found
by the modified Chebyshev algorithm, which is useful for many problems
(see Section 7 below).
The theorem formulated above gives a criterion of positivity for the
function~$f(\lambda)$. At the same time it is often necessary
to know whether $f(\lambda)$ takes negative values. Obviously, the indicators
of
negative values are the diagonal entries of the Chebyshev matrix and, first
of all, the first negative element in the sequence of the
coefficients~$\widehat b_k$.
Let us illustrate this with examples \footnote{$^1$}{ All computations
reported
here were carried out on a PC AT in single precision}.
\vskip 1cm
{\it Example 1. \/} Let
$f(\lambda)=(3-\lambda^2)^5(\lambda-0.5)^2-\varepsilon$.
Obviously, this function takes negative values on some
interval~$(a,b)\subset [-1,1]$, where $b-a$ is a function of $\varepsilon$.
Therefore the numerical computations by the modified Chebyshev algorithm
must give negative values among the diagonal entries of the Chebyshev
matrix and among the coefficients~$\widehat b_k$ of the recurrence relation.
The results of such calculations are given in Table~1, where the index~$n$
of the first negative coefficient~$\widehat b_k$ is shown for several
values of~$\varepsilon$. The set of Chebyshev
polynomials of the second kind with the
weight~$\varrho(d\lambda)=\sqrt{1-\lambda^2}\,d\lambda$
was chosen as the initial $< , >_{\varrho(d\lambda)}$-orthogonal system.
{\rm
\vskip 1cm
{\bf Table~1.}
{\vskip 6pt
\hrule
\settabs 9 \columns
\vskip 6pt
\+&$\varepsilon$& 1 & 0.5 & 0.2 & 0.1 & 0.05 & 0.02 & 0.01 \cr
\vskip 6pt
\hrule
\vskip 6pt
\+&$n$& 14 & 22 & 35 & 52 & 74 & 118 & 169 \cr
\vskip 6pt
\hrule
\vskip 1cm }
}
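Example 1 can be reproduced along the following lines (our reconstruction, in
double rather than single precision, so the detected indices need not coincide
with Table~1; the modified moments are obtained by Gauss--Chebyshev quadrature,
and the monic second-kind Chebyshev recurrence $a_l=0$, $b_l=1/4$ is assumed):

```python
import math

def wheeler(nu, a, b, n):
    # Modified Chebyshev (Wheeler) algorithm: modified moments
    # nu[l] = sigma_{0,l} plus auxiliary coefficients a[l], b[l]
    # -> coefficients bhat_k, k = 0..n-1.
    sig_prev, sig = [0.0] * (2 * n), list(nu[:2 * n])
    ahat, bhat = [a[0] + nu[1] / nu[0]], [nu[0]]
    for k in range(1, n):
        new = [0.0] * (2 * n)
        for l in range(k, 2 * n - k):
            new[l] = (sig[l + 1] - (ahat[k - 1] - a[l]) * sig[l]
                      - bhat[k - 1] * sig_prev[l] + b[l] * sig[l - 1])
        ahat.append(a[k] + new[k + 1] / new[k] - sig[k] / sig[k - 1])
        bhat.append(new[k] / sig[k - 1])
        sig_prev, sig = sig, new
    return bhat

def first_negative_index(eps, n=30, N=64):
    # f from Example 1; negative on a small interval near 0.5.
    f = lambda x: (3.0 - x * x) ** 5 * (x - 0.5) ** 2 - eps
    # Modified moments nu_l = <f, p_l> w.r.t. sqrt(1-x^2) dx, where p_l
    # are the monic Chebyshev polynomials of the second kind, via the
    # N-point Gauss--Chebyshev rule (exact for degrees <= 2N-1).
    nu = [0.0] * (2 * n)
    for i in range(1, N + 1):
        t = i * math.pi / (N + 1)
        x = math.cos(t)
        w = math.pi / (N + 1) * math.sin(t) ** 2
        fx, p_prev, p = w * f(x), 0.0, 1.0
        for l in range(2 * n):
            nu[l] += fx * p
            p_prev, p = p, x * p - 0.25 * p_prev   # monic U-recurrence
    bhat = wheeler(nu, [0.0] * (2 * n), [0.25] * (2 * n), n)
    return next((k for k, v in enumerate(bhat) if v < 0.0), None)
```

For $\varepsilon=1$ and $\varepsilon=0.5$ a negative $\widehat b_k$ appears well
before $k=30$, and later for the smaller $\varepsilon$, in line with the trend
of Table~1.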
\vskip 1cm
{\it Example 2 (lacuna case).\/}
Let $f(\lambda)=(\lambda-a)(\lambda-b)$, $a<b$.
\vskip 1cm
Let~$P(\lambda,\mu)$ be a function given on a rectangle~$D$,
let~$\{ P_l(\lambda) \}$ be a system of
$< , >_{\varrho(d\lambda)}$-orthogonal polynomials,
$$Q_r(\mu)=<P(\lambda,\mu),P_r(\lambda)>_{\varrho(d\lambda)}, \eqno(6.5)$$
and~$\|C_{j,k}^r\|$ be the Haar matrix of the system~$\{ P_l(\lambda) \}$.
\proclaim{Theorem 6.2}. {The inequality~$P(\lambda,\mu)\bigm|_D\geq0$ is
true if and only if the kernel
$${\cal K}_{P,\varrho}(j,k;\mu)=\|\sum_{r=|j-k|}^{j+k} C_{j,k}^rQ_r(\mu)\|
\eqno(6.6)$$
is positive definite for~$0\leq\mu\leq1$.}
According to the Sylvester criterion, the test of positive definiteness
of the kernel~${\cal K}_{P,\varrho}(j,k;\mu)$ is reduced to the test
of positivity of a known sequence of
determinants~$d_0(\mu),d_1(\mu),\ldots,d_n(\mu),\ldots~$. The application
of the kernel~(6.6)
for testing the condition~$P(\lambda,\mu)\bigm|_D\geq0$ is
simplified if~$P(\lambda,\mu)$ is a polynomial. In this case the
kernel~${\cal K}_{P,\varrho}(j,k;\mu)$ is a band matrix. A further
simplification can sometimes be achieved by a special choice of the Haar
matrix~$\|C_{j,k}^r\|$. However, the difficulties connected with the
calculation
of high-order determinants do not disappear.
\vskip 1cm
\proclaim {6.3}. { The Hausdorff differences and non-negativity test }
In Theorem~6.2 we did not use the boundedness of the rectangle~$D$
\footnote{$^2$}{The reasoning given can be easily modified for cases of
non-trivial unbounded regions}.
Let us now exploit the specific features of the Hausdorff moment problem.
From~(6.5)
and~(6.6) we have
$${\cal K}_{P,\varrho}(j,k;\mu)
=<P_j(\lambda),P_k(\lambda)>_{P(\lambda,\mu)\varrho(d\lambda)}
=\int\limits_0^1 P_j(\lambda)P_k(\lambda)P(\lambda,\mu)\varrho(d\lambda)
.$$
The one-dimensional moment sequence corresponding to the
kernel~${\cal K}_{P,\varrho}(j,k;\mu)$ is
$$s_\mu(n)=\int\limits_0^1\lambda^nP(\lambda,\mu)\varrho(d\lambda).\eqno(6.7)$$
Having formed the sequence of the Hausdorff differences
$$\eqalignno{
\Delta_{k,r}s_\mu&=s_\mu(r)-C_k^1s_\mu(r+1)+C_k^2s_\mu(r+2)-\cdots+
(-1)^kC_k^ks_\mu(r+k) \cr
&=\int\limits_0^1 \lambda^r(1-\lambda)^k
P(\lambda,\mu)\varrho(d\lambda)&(6.8) \cr}$$
for the sequence~$\{s_\mu(n)\}$
we obtain the following proposition by the well-known Hausdorff criterion.
\proclaim{Theorem 6.3}. { The following conditions are equivalent
$$\eqalign{
{1.}\quad &P(\lambda,\mu)\bigm|_D\geq0,\cr
{2.}\quad &\Delta_{k,r}s_\mu\geq0,\qquad k,r=0,~1,\ldots~.\cr}
\eqno(6.9)$$}
Note that the test of the ``two-dimensional'' sequence~$\Delta_{k,r}s_\mu$
of functions of one variable can be carried out in the
above way through an application of the modified Chebyshev algorithm.
The application of the modified Chebyshev algorithm also facilitates the
computation of the integrals~(6.8) even in complicated cases.
The above constructions are particularly transparent if $P(\lambda,\mu)$
is a polynomial. In this case all the functions~$\Delta_{k,r}s_\mu$ are
polynomials.
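A minimal sketch of the Hausdorff-difference test (our illustration, taking the
trivial case $P(\lambda,\mu)\equiv1$ with Lebesgue measure on $[0,1]$, where
$\Delta_{k,r}s_\mu$ equals the Beta integral $B(r+1,k+1)\geq0$):

```python
from math import comb

def hausdorff_diff(s, k, r):
    """Hausdorff difference (6.8): Delta_{k,r} s = sum_i (-1)^i C(k,i) s(r+i),
    equal to the integral of lambda^r (1-lambda)^k against the
    representing measure."""
    return sum((-1) ** i * comb(k, i) * s[r + i] for i in range(k + 1))

# Moments of Lebesgue measure on [0,1] (P(lambda, mu) = 1): s(n) = 1/(n+1).
s = [1.0 / (n + 1) for n in range(20)]
# Every Delta_{k,r} equals the Beta integral B(r+1, k+1), hence >= 0.
vals = [hausdorff_diff(s, k, r) for k in range(8) for r in range(8)]
```

For a signed density the differences would eventually turn negative, which is
the content of condition 2 in Theorem 6.3.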
\vskip 1cm
{\it Example~4.\/} To illustrate this, let us give the numerical results
for the polynomial~$Q(\lambda,\mu)$, $-1\leq\lambda\leq1$,
$0\leq\mu\leq1$, where
$$Q(\lambda,\mu)=(\lambda-0.6)^2+4(\lambda-0.6)(\mu-0.1)+4.1(\mu-0.1)^2
+\varepsilon,\quad \varepsilon=-0.07 .$$
The smallest $n$ such that $\widehat b_n<0$ and~$\widehat b_{n-1}<0$
is given in Table~4 for $k,r=0,~1,\ldots,~10$.
{\rm
\vskip 1cm
{\bf Table~4.}
{\vskip 6pt
\hrule
\settabs 13 \columns
\vskip 6pt
\+&$k\ \backslash\ r$&0&1&2&3&4&5&6&7&8&9&10\cr
\vskip 6pt
\hrule
\vskip 6pt
\+&0&35&19&15&13&13 &11&11&11&11&11&11\cr
\vskip 6pt
\+&1&19&16&14&14&12&12&12&11&11&11&11\cr
\vskip 6pt
\+&2&15&15&14&14&12&12&12&12&12&11&11\cr
\vskip 6pt
\+&3&14&13&13&13&12&12&12&12&12&12&10\cr
\vskip 6pt
\+&4&12&13&13&13&12&12&12&12&12&12&12\cr
\vskip 6pt
\+&5&12&12&13&13&11&11&12&12&12&12&12\cr
\vskip 6pt
\+&6&12&12&12&12&11&11&11&11&11&11&11\cr
\vskip 6pt
\+&7&12&12&12&11&11&11&11&11&11&11&11\cr
\vskip 6pt
\+&8&11&12&12&11&11&11&11&11&11&11&11\cr
\vskip 6pt
\+&9&11&12&12&11&11&11&11&11&11&11&11\cr
\vskip 6pt
\+&10&11&12&12&11&11&11&11&11&11&11&11\cr
\vskip 6pt
\hrule
\vskip 1cm }
}
{\it Note:\/} the test of a function of $k$ variables given on a
hyperrectangle is reduced to the check of non-negativity of a
$2^{k-1}$-dimensional countable sequence of functions of one variable
by the procedure given above.
\beginsection{7. Moment method for a Schr\"odinger equation
and the modified Chebyshev algorithm}
In recent works [7--9,20] the moment method has been applied with great
success to the investigation of the Schr\"odinger equation with
a semi-bounded potential. The main reason for the successful application
of the moment
method to these problems is the phenomenon of exponential decrease
of the solution of the Schr\"odinger equation with a given potential.
Roughly, the exponential decrease means the following.
Let $$-\Delta\psi +(V-E)\psi=0\eqno(7.1)$$ be the Schr\"odinger equation with
a semi-bounded potential~$V$. Then the values~$E$ for which there exist
non-trivial solutions~$\psi$,
are subdivided into two classes: the continuous spectrum and
the discrete spectrum arranged on the left of it. The solutions
corresponding to the
discrete spectrum decrease exponentially. The rate of decrease
depends on the distance between the considered eigenvalues and the continuous
spectrum.
If we consider the ground state solution~$\psi_{g.s.}(x)$
of the Schr\"odinger equation corresponding to the smallest energy
value~$E_{g.}$,
then from physical considerations it will follow that
$\psi_{g.s.}(x)\geq0$.
The non-negativity of $\psi_{g.s.}(x)$ and the property of exponential
decrease allow us to transform the Schr\"odinger equation~(7.1)
for $\psi_{g.s.}(x)$
with a linear fractional semi-bounded potential, by passing from
$\psi_{g.s.}(x)$
to the sequence of its power moments~$\mu_n$, into the equivalent
recurrence relation
$$\mu_n=M(\widetilde\mu_0,\widetilde\mu_1,\ldots,\widetilde\mu_s,\mu_{n-k},E),
\eqno(7.2)$$
where $E$ is a parameter to be determined.
Assuming that $E_{g.}$ and the set of missing
moments~$\{\widetilde\mu_{n_1},\widetilde\mu_{n_2},\ldots,
\widetilde\mu_{n_k}\}$
are known, we can calculate all moments of the function~$\psi_{g.s.}(x)$
by the recurrence relation~(7.2) and, hence, taking into account the nature of
the decrease of $\psi_{g.s.}(x)$ we can reconstruct $\psi_{g.s.}(x)$
in principle.
However, this cannot be done immediately because the power system is very far
from the orthogonal system of polynomials corresponding to
the weight~$\psi_{g.s.}(x)$.
In this section we limit ourselves to considering the one-dimensional
equation~(7.1).
First of all, let us note that, as was shown in [7--9,20], the sequential
upper and lower energy bounds for the ground state energy~$E_{g.}$ are defined
by the conditions
$$\det\|\mu_{i+j}(E)\|_{i,j=0}^n>0.$$
Moreover, in the considered model examples it is sufficient to restrict
ourselves to the half-axis case.
Next, let
$$\psi(x)\geq0\quad (0\leq x<\infty),
\qquad\exists k>0:\quad\psi(x)\leq Ce^{-kx}\quad (x>R),$$
and all moments
$$m_n=\int\limits_0^\infty x^n\psi(x)dx$$
are known.
Now we introduce the auxiliary weight~$w(x)=e^{-\gamma x},~\gamma<2k,$
with the known $<,>_w$-orthogonal system of monic polynomials~$\{ P_j(x) \}$
and with the coefficients~$\{ a_n;b_n \}$ of the recurrence relations.
The set of values
$$\sigma_{0,l}=\sum_{s=0}^l\gamma_{s,l}m_s,$$
where $$P_j(x)=\sum_{l=0}^j\gamma_{l,j}x^l,$$
is also known for the function~$\psi(x)/w(x)$.
Let us take $\{\sigma_{0,l}\}$ for the initial data
of the modified Chebyshev algorithm.
The result of calculation by this modified Chebyshev algorithm
is the set~$\{ \widehat P_j(x) \}$ of monic polynomials orthogonal with
respect to the weight~${\psi(x) \over w(x)} w(x)dx$, i.e. the
$< , >_{\psi(x)dx}$-orthogonal set of polynomials.
The last system of polynomials allows one to compute the following integral
$$I_j(\Omega)=\int\limits_\Omega
(\psi(x)-C)\widehat P_j(x)w(x)dx,\eqno(7.3)$$
where $\Omega$ is some set on the half-axis.
Note that $\psi(x)$ generates some $U.p.$-measure and hence the knowledge
of the integral~(7.3) allows one to determine the range of the
function~$\psi(x)-C$.
The calculation of $\int_\Omega \widehat P_j(x)w(x)dx$ can
be accomplished by a quadrature formula corresponding to the weight $w(x)dx$,
because the coefficients of recurrence relation for the
system~$\{ \widehat P_j(x) \}$ are obtained by the modified
Chebyshev algorithm.
For the calculation of the integral
$\int_\Omega\psi(x)\widehat P_j(x)w(x)dx$
(the function~$\psi(x)$ is unknown) the quadrature formula for the weight
$w(x)dx$
can be used since the system~$\{ \widehat P_j(x) \}$ is known.
It is sufficient to construct a $< , >_\psi$-quadrature formula.
It should be taken into account that the set~$\Omega$ is realized
by summing over the nodes belonging to this set.
To give an illustrative example, we consider the usual continuum
harmonic-oscillator
problem $$-\psi^{\prime\prime}+x^2\psi=E\psi.\eqno(7.4)$$
Obviously, in this case there exists a unique point $E_{g.}$ of
the discrete spectrum. If we take
$$\mu_p=\int_{-\infty}^\infty x^p\psi_{g.s.}(x)dx,$$
then the equation~(7.4) is transformed into a relation for the moments:
$$-p(p-1)\mu_{p-2}+\mu_{p+2}=E\mu_p.$$
Next, $\mu_k=0$ for all odd~$k$. Therefore, after the substitutions $p=2k$
and $\mu_{2k}=m_k$ we have
$$m_{k+1}=Em_k+2k(2k-1)m_{k-1}.\eqno(7.5)$$
Thus, to recover all the moments~$\{m_k\}$ corresponding to~$E$
in our example, it is sufficient to know only the first two
moments~$m_0,~m_1$,
i.e. these values are the missing moments. Taking $k=0$ in relation~(7.5),
we obtain $m_1=Em_0$. Hence in the considered example we can restrict
ourselves to the single moment~$m_0$ and, without loss of generality,
we can take $m_0=1$.
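The recursion (7.5) together with the Chebyshev algorithm can be sketched as
follows (our illustration with $a_k=b_k=0$; at $E=E_{g.}=1$ the moments are
$m_k=(2k-1)!!$, those of the measure $x^{-1/2}e^{-x/2}\,dx$ on $(0,\infty)$, so
the computed coefficients must equal the scaled Laguerre values
$\widehat a_k=4k+1$, $\widehat b_k=2k(2k-1)$):

```python
def oscillator_moments(E, count):
    # Relation (7.5): m_{k+1} = E m_k + 2k(2k-1) m_{k-1}, m_0 = 1, m_1 = E m_0.
    m = [1.0, E]
    for k in range(1, count - 1):
        m.append(E * m[-1] + 2 * k * (2 * k - 1) * m[-2])
    return m

def chebyshev_coeffs(s, n):
    # Chebyshev algorithm with a_k = b_k = 0 on the moments s[0..2n-1].
    sig_prev, sig = [0.0] * (2 * n), list(s[:2 * n])
    ahat, bhat = [s[1] / s[0]], [s[0]]
    for k in range(1, n):
        new = [0.0] * (2 * n)
        for l in range(k, 2 * n - k):
            new[l] = sig[l + 1] - ahat[k - 1] * sig[l] - bhat[k - 1] * sig_prev[l]
        ahat.append(new[k + 1] / new[k] - sig[k] / sig[k - 1])
        bhat.append(new[k] / sig[k - 1])
        sig_prev, sig = sig, new
    return ahat, bhat

# At E = 1 the moments are m_k = (2k-1)!!; the recurrence coefficients are
# then those of (scaled) Laguerre polynomials: ahat_k = 4k+1,
# bhat_k = 2k(2k-1).  For E away from 1 some bhat_k turns negative,
# which is the effect recorded in Table 5.
ahat, bhat = chebyshev_coeffs(oscillator_moments(1.0, 10), 5)
```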
If the moment sequence~$\{m_k(E)\}_{k=0}^\infty$ is known, we can
apply the necessary modified Chebyshev algorithms to check the criterion
of solvability of the
Stieltjes moment problem (Theorem~4.1) for all fixed values~$E$.
Obviously, the nearer the chosen $E$ is to $E_{g.}=1$, the more initial
members
of the sequences $\{\widehat b_k\}$ and~$\{\widehat b_k^{(1)}\}$ obtained by
the corresponding modified Chebyshev algorithms are positive. This is directly
confirmed by the numerical results of calculations given in Table~5,
where $n_0$ is the index of the first negative element
in the sequence~$\{\widehat b_k\}$ (left-hand side of Table~5)
and~$\{\widehat b_k^{(1)}\}$ (right-hand side of Table~5) corresponding
to the chosen~$E$.
{\rm
\vskip 1cm
{\bf Table~5.}
{\vskip 6pt
\hrule
\settabs 6 \columns
\vskip 6pt
\+&$E$&$n_0$&&$E$&$n_0$\cr
\vskip 6pt
\hrule
\vskip 6pt
\+&$1-10^{-1}$&3&&$1+10^{-1}$&4\cr
\vskip 6pt
\+&$1-10^{-2}$&5&&$1+10^{-2}$&5\cr
\vskip 6pt
\+&$1-10^{-3}$&6&&$1+10^{-3}$&6\cr
\vskip 6pt
\+&$1-10^{-4}$&7&&$1+10^{-4}$&8\cr
\vskip 6pt
\+&$1-10^{-5}$&8&&$1+10^{-5}$&9\cr
\vskip 6pt
\+&$1-10^{-6}$&9&&$1+10^{-6}$&10\cr
\vskip 6pt
\+&$1-10^{-7}$&10&&$1+10^{-7}$&11\cr
\vskip 6pt
\+&$1-10^{-8}$&11&&$1+10^{-8}$&12\cr
\vskip 6pt
\+&$1-10^{-9}$&13&&$1+10^{-9}$&13\cr
\vskip 6pt
\+&$1-10^{-10}$&14&&$1+10^{-10}$&14\cr
\vskip 6pt
\hrule
\vskip 1cm }
}
In our opinion it is instructive that a comparison of the numerical
results given here with those of paper [10] emphasizes the principal
possibilities, efficiency and accessibility of the modified Chebyshev
algorithm as a numerical method of calculation
(a PC AT for the modified Chebyshev algorithm
versus a Cray-2 for the method given in [10]). The contrast is even
sharper when we compare the use of the modified Chebyshev algorithm
for estimating the range of the function~$\psi_{g.s.}(x)$.
\beginsection{8. Hilbert spaces of entire functions and the
modified Chebyshev algorithm}
To describe the solutions of the indeterminate moment problem, the
problem of extending Hermitian-positive functions, the solutions of various
interpolation problems, and the isometric inclusions
of the De~Branges spaces of entire functions, it is necessary to construct
an entire second-order matrix-function~$W(z)$ accomplishing the linear
fractional transform of the Nevanlinna function class. To construct such a
matrix-function it is desirable to have an effective computational technique.
In our opinion, the modified Chebyshev algorithm is an effective technique
for the indeterminate power moment problem. Indeed, accepting the moment
sequence
as initial data for the modified Chebyshev algorithm with $a_k=b_k=0$,
we obtain
the set of $< , >$-orthogonal polynomials~$\{P_k(\lambda)\}$ of the first
type, and at the same time the
entries~$\{\widehat a_k,\widehat b_k\}$ of the Jacobi matrix. This matrix can
be used to construct the set of $< , >$-orthogonal
polynomials~$\{Q_k(\lambda)\}$ of the second type by recursion.
After that the approximations of the entries
$$\eqalignno{ A(z)&=z\sum_{k=0}^\infty Q_k(0)Q_k(z),\cr
B(z)&=-1+z\sum_{k=0}^\infty Q_k(0)P_k(z),\cr
C(z)&=1+z\sum_{k=0}^\infty P_k(0)Q_k(z),\cr
D(z)&=z\sum_{k=0}^\infty P_k(0)P_k(z) \cr
}$$
of the matrix-function~$W(z)$ can be calculated.
Note that for the indeterminate power moment problem the series written above
converge uniformly on all compact sets in the complex plane.
The approximation of the phase function corresponding to the
Hilbert spaces of entire functions [3] can be obtained in the same way.
This gives a sequence of canonical approximations for solutions of the
infinite moment problem.
If we have a truncated trigonometric moment problem, the
matrix-func\-tion~$W(z)$ corresponding to it can be obtained by
the Levinson algorithm (see Section~3).
The procedure of construction of the
matrix-func\-tion $W(z)=\|W_{jk}(z)\|_{j,k=1}^2$ described above is
considerably more stable than a procedure directly based on determinant
equations~(2.2) and~(2.3). However, errors are accumulated in this case too,
which necessitates more accurate calculation and impedes advance
to high-degree orthogonal polynomials. If a system of orthogonal polynomials
$\widetilde P_k(\lambda)=\sum\eta_{l,k}\lambda^l$
is used as a starting-point, knowledge of the values $\sum \eta_{l,k} s(l)$
is necessary. Their direct calculation is a fairly complicated task. It does,
however, sometimes become possible to indicate a system ``close'' to
$\{ P_k\}$; then
$\sum \eta_{l,k} s(l)$ will have moderate growth (see Section~2.5). Sometimes,
if such an approach is taken, some a priori information
on the measures representing the sequence~$s(l)$ can be taken into
consideration,
e.g. the existence of a representing measure with support contained
in~$[0,\infty)$ (the Stieltjes case). Another example of the indication
of a ``starting'' system of orthogonal polynomials ``close'' to the one
sought
is given by the moment methods of finding the energy of the ground state
(see Section~7)
and excited states for the Schr\"odinger equation with semi-bounded potential.
Some ``starting'' systems of orthogonal polynomials are to be found
in the theory of spectral functions of string~[4].
The construction of the corresponding modified Chebyshev technique for
the arbitrary De~Branges spaces of entire functions seems to us a very
interesting and important problem.
\beginsection{ Acknowledgement.}
The authors are grateful to the referees for their constructive
and stimulating criticism.
\beginsection{ 9. References}
\item{1.} Achieser~N.I. (1965): The Classical Moment Problem and Some Related
Questions in Analysis. Oliver \& Boyd, Edinburgh
\item{2.} Chebyshev,~P.L. (1859): Sur l'interpolation par la m\'ethode
des moindres carr\'es,
M\'em.Acad.Imper.Sci. St.Petersbourg. 7, {\bf 1}, 15, 1--24
\item{3.} De~Branges~L. (1968): Hilbert spaces of entire functions.
Prentice-Hall, Englewood Cliffs, N.J.
\item{4.} Dym~H., McKean~H.P. (1976):
Gaussian Processes, Function Theory,
and the Inverse Spectral Problem. Academic Press.
\item{5.} Gantmaher~F.R. (1988): Matrix Theory. Nauka, Moscow
\item{6.} Gautschi~W. (1982): On generating orthogonal polynomials,
SIAM J. Sci. Statist. Comput. {\bf 3}, 289--317
%(1985): Orthogonal polynomials --- Constructive theory
% and applications, J.Comput. and Appl.Math. {\bf 12/13}, 61--76
\item{7.} Handy~C.R. (1987): Moment method quantisation of a linear
differential equation for~$|\psi |^2$, Phys.Rev.~A, {\bf 36}, 12,
4411--4416
\item{8.} Handy~C.R.,~Bessis~D. (1985): Rapidly convergent lower bounds for
the Schr{\"o}dinger-equation ground-state energy,
Phys.Rev.Letters, {\bf 55}, 9, 931--934
\item{9.} Handy~C.R.,~Bessis~D.,~Morley~T.D. (1988): Generating quantum energy
bounds by the moment method: A linear programming approach.
Phys.Rev.~A, {\bf 37}, 12, 4557--4569
\item{10.} Handy~C.R.,~Pei~J.Q. (1988): Moment-method analysis of ground
state of discretized bosonic systems, Phys.Rev.~A., {\bf 38}, 7,
3175--3181
\item{11.} Korzh~S.A.,~Ov\^carenko~I.E.,~Ugrinovsky~R.A. (1991):
Chebyshev recursion and its applications.
In Proc.16th USSR Conf. on Operator Theory in Functional Spaces,
pp.114 (N.Novgorod) (in Russian)
\item{12.} Kowalski~M.A. (1985): Scalar product on space of polynomials,
Acta Math.Hung., {\bf 46}, 1--2, 101--109
\item{13.} Krein~M.G.,~Naimark~M.A. (1936): The method of symmetric and
Hermitian forms in the theory of the separation of roots of
algebraic equations. English translation in
Linear and Multilinear Algebra, 1981, {\bf 10}, 265--308
\item{14.} Levinson~N. (1947): The Wiener rms (root-mean-square) Error Criterion
in Filter Design and Prediction, J.Math.Physics. {\bf 25}, 261--278.
\item{15.} Notik~A.I.,~Knafel'~A.I.,~Turchin~V.I.,~Ugrinovsky~V.A.,~Ugrinovsky~R.A.
(1990): Spectral analysis via continual maximum entropy technique,
Radiotechnika \& Electronika, {\bf 35}, 9, 1904--1912 (in Russian)
\item{16.} Ov\^carenko~I.E. (1990): Scalar product on polynomial space and positivity,
Dokl. AN of the Ukr.SSR, ser.A., 7, 17--21 (in Russian)
\item{17.} Ov\^carenko~I.E. (1991): Some applications of the Chebyshev recursion,
Dokl. AN of the USSR, {\bf 319}, 2, 287--291 (in Russian)
\item{18.} Sushkevich~A.K. (1941): The principles of higher algebra.
Gostexizdat (in Russian)
\item{19.} Ugrinovsky~R.A. (1992): About trigonometric moment problem with lacuna,
Levinson algorithm and some applied problem, Ukraine Math.J., {\bf 44},
5, 713--716 (in Russian)
\item{20.} Vrscay~E.R.,~Handy~C.R. (1989): The perturbed two dimensional
oscillator;
eigenvalues and infinite-fields limits via continued fractions
renormalised perturbation theory and moment
methods, J.Phys.A: Math.Gen., {\bf 22}, 7, 823--834
\item{21.} Wheeler~J.C. (1984): Modified moments and continued fraction
coefficients for the diatomic linear chain, J.Chem.Phys. {\bf 80}, 1,
472--476
}
\bye