$<\lambda P,Q>=<P,\lambda Q>$, and it is not necessarily definite. We also
study the scalar product on the space~$\{ {\cal P\/}_C \}$ of trigonometric
polynomials and the corresponding modifications of the Chebyshev algorithm.
To illustrate up-to-date variants of the Chebyshev technique (the
(m)-T-recursion and its modifications), we consider some applied problems
such as testing the positive definiteness of Hankel and Toeplitz quadratic
forms and calculating Hankel -- Hadamard and Toeplitz determinants of
arbitrary order. The (m)-T-recursion, i.e. the modification of the Chebyshev
recursion, and the classical moment theory [1] enable us to construct
effective algorithms for checking the positivity of functions of one and
several variables on given sets by their moments. We also consider the
application of the modified Chebyshev technique to the method of symmetric
and Hermitian forms in the theory of the separation of roots of algebraic
equations expounded in [13]. To apply the Chebyshev idea to another circle
of problems, we consider the moment method in the study of the
Schr\"odinger equation and construct an algorithm based on this technique
to estimate the eigenvalue boundaries and the solutions of the
Schr\"odinger equation with a special kind of potential. Some results of
this work have been given in [16,17,19] and have been announced in [11].
\beginsection{2. Scalar products on polynomial space}
\proclaim{2.1}. {Orthogonal systems of polynomials}
Let~$\{ {\cal P\/}_{R^1} \}$ be the linear space of polynomials on the real
line $\R ^1$, and let~$< , >$ denote a scalar product on the
space~$\{ {\cal P\/}_{R^1} \}$ which is real for real polynomials and has
the property
$$<\lambda P,Q>=<P,\lambda Q>. \eqno (A)$$
The scalar product $< , >$ is not assumed to be definite. It is
assumed
below that all
scalar products considered on the space~$\{ {\cal P\/}_{R^1} \}$
have the property (A). We also assume that the scalar product~$< , >$ is
nontrivial, i.e., all the Hankel -- Hadamard determinants
$$H_{<,>}(n)=\det\left| \, \matrix{
{<1,1>}\hfill&{<\lambda,1>}\hfill&\ldots&{<\lambda^n,1>}\hfill\cr
{<1,\lambda>}\hfill&{<\lambda,\lambda>}\hfill&\ldots&{<\lambda
^n,\lambda>}\hfill\cr
\vdots&\vdots&\ddots&\vdots\cr
{<1,\lambda ^n>}\hfill&{<\lambda,\lambda ^n>}\hfill&\ldots&{<\lambda
^n,\lambda ^n>}\hfill\cr
}\right|\neq0 .\eqno(2.1)$$
Letting
$$s_k=<\lambda^k,1>$$
we obtain the
traditional notation for the Hankel -- Hadamard determinants~$H_{<,>}(n)$:
$$H_{<,>}(n)=\det\|s_{i+j}\|_{i,j=0}^n .$$
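For readers who wish to experiment, the determinant~$H_{<,>}(n)$ is easy to evaluate directly from a moment sequence. The sketch below is an illustration (not part of the original text): it uses exact rational arithmetic, with the moments $s_k=1/(k+1)$ of Lebesgue measure on $[0,1]$ as assumed sample data, for which all Hankel determinants are positive (the Hilbert matrix).

```python
from fractions import Fraction

def det(M):
    # cofactor expansion; exact for Fraction entries and fine for small n
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def hankel_det(s, n):
    # H(n) = det || s_{i+j} ||_{i,j=0}^{n}, cf. (2.1)
    return det([[s[i + j] for j in range(n + 1)] for i in range(n + 1)])

# moments of Lebesgue measure on [0,1]: s_k = 1/(k+1)  (assumed example)
s = [Fraction(1, k + 1) for k in range(7)]
assert hankel_det(s, 0) == 1
assert hankel_det(s, 1) == Fraction(1, 12)
assert all(hankel_det(s, n) > 0 for n in range(4))
```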
For the scalar product considered there exists an $< , >$-orthogonal system
of
polynomials, i.e., a system of polynomials
${\{P_n(\lambda)\}}_{n=0}^\infty$, such that
{\vskip 0.5cm
\vbox{\hbox{\qquad { 1)} $\deg P_n(\lambda)=n$;}
\hbox{\qquad { 2)} the highest-degree coefficient is positive;}
\hbox{\qquad { 3)} $<P_n,P_m>=0$ for~$n\neq m$.}}}
\proclaim{Theorem}. {The scalar product $< , >$ is definite, i.e.
$$<P,P> \geq 0$$
for every polynomial~$P(\lambda)$, if and only
if $b_k$ is positive for all non-negative integers $k$. }
{\it Proof. \/}
If a scalar product is definite, then from~(2.8) we have that $b_k$ is
positive
for all $k=0,~1,~2,\dots~.$ Next we expand an arbitrary
polynomial~$P(\lambda)$
as a linear combination of the polynomials~$\{ P_k(\lambda)\}$
$$P(\lambda)=\sum_{k=0}^n \xi_k P_k(\lambda)\ .$$
Then $$<P,P>=\sum_{k=0}^n |\xi_k|^2 \prod_{l=0}^k b_l\ ,$$
and if $b_k$ is positive for all $k \in Z_+$, we obtain $$<P,P> \geq 0 .$$
The proof is finished.
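The identity used in the proof can be checked on a concrete system. The sketch below (an illustration with assumed sample data, not part of the original text) verifies $<P,P>=\sum|\xi_k|^2\prod_{l\leq k}b_l$ for the monic Legendre polynomials on $[-1,1]$, for which $b_0=<1,1>=2$, $b_1=1/3$, $b_2=4/15$.

```python
from fractions import Fraction

def poly_mul(p, q):
    # multiply two polynomials given as coefficient lists
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            r[i + j] += a * c
    return r

def inner(p, q):
    # <p, q> = int_{-1}^{1} p(x) q(x) dx, exact for polynomial coefficients
    return sum(2 * c / Fraction(k + 1)
               for k, c in enumerate(poly_mul(p, q)) if k % 2 == 0)

# monic Legendre polynomials P_0, P_1, P_2 and the quantities b_l with
# <P_k, P_k> = prod_{l=0}^{k} b_l  (b_0 = 2, b_1 = 1/3, b_2 = 4/15)
P = [[Fraction(1)],
     [Fraction(0), Fraction(1)],
     [Fraction(-1, 3), Fraction(0), Fraction(1)]]
b = [Fraction(2), Fraction(1, 3), Fraction(4, 15)]

prod = Fraction(1)
for k in range(3):
    prod *= b[k]
    assert inner(P[k], P[k]) == prod     # <P_k,P_k> = prod b_l

# P = P_0 + P_1 + P_2 (xi_k = 1): <P,P> = sum |xi_k|^2 prod_{l<=k} b_l
p = [Fraction(2, 3), Fraction(1), Fraction(1)]
lhs = inner(p, p)
rhs = b[0] + b[0] * b[1] + b[0] * b[1] * b[2]
assert lhs == rhs == Fraction(128, 45)
```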
A set of polynomials~$\{ P_k(\lambda)\}$ is closely connected with the Haar
matrix $\| C_{j,k}^r\|$. This matrix is a ``multiplication table'' for
polynomials and can be obtained from the relation
$$P_j(\lambda)P_k(\lambda)=\sum_{r=|j-k|}^{j+k} C_{j,k}^r P_r(\lambda) .
\eqno (2.11)$$
It is clear that
$$C_{j,k}^r={<P_jP_k,P_r>\over<P_r,P_r>}\ .$$
Given a number sequence~$\{\alpha_j\}$, let us now introduce the
$``\alpha''$-scalar product~($< , >_\alpha$) on the polynomial
space~$\{ {\cal P\/}_{R^1} \}$. Namely, let us define the scalar product as
$$<P,Q>_\alpha=\sum_{j=0}^{m+n} \gamma_j \alpha_j,\eqno(2.22)$$
where $m$ is the degree of~$P(\lambda)$, $n$ is the degree of~$Q(\lambda)$,
and $\gamma_j$ are the coefficients of the polynomial
$$P \overline{Q}=\sum_{j=0}^{m+n} \gamma_j P_j.\eqno(2.23)$$
\proclaim{Theorem}. {The following conditions are equivalent:
a) for every polynomial~$P(\lambda)$ $$<P,P>_\alpha\geq0;$$
b) the kernel~${\cal K}_\alpha(j;k)$ is positive definite;
c) the sequence $$s_\alpha(n)=<\lambda^n,1>_\alpha \eqno(2.29)$$
forms a positive definite kernel~$s_\alpha(j+k)$;
d) the sequence~$\{\alpha_k\}$ is a positive modified moment sequence with
regard to the system~$\{P_j(\lambda)\}$, i.e. there exists a non-negative
measure~$\mu_\alpha(d\lambda)$ having moments of all orders, such that
$$\alpha_j=\int \limits_{-\infty}^\infty P_j(\lambda)\,\mu_\alpha(d\lambda);
\eqno(2.30)$$
e) the form defined on the set of polynomials, dense in~${\cal H}_{<,>}$,
by means of
the relation $$\alpha({\cal J})\,(P,Q)=<P,Q>_\alpha$$
is positive. Here the Hilbert space~${\cal H}_{<,>}$ is the closure of the
set of polynomials with respect to the norm~$\sqrt{<P,P>}$.}
Note that
$$\alpha({\cal J})\,(P_j,P_k)={\cal K}_\alpha(j;k).\eqno(2.31)$$
If $\alpha(\lambda)$ is a polynomial, then $\alpha({\cal J})$ is an operator
in the space~${\cal H}_{<,>}$ mapping the dense
set~$\{ {\cal P\/}_{R^1} \}$ onto~$\{ {\cal P\/}_{R^1} \}$:
$$\alpha({\cal J})\colon P(\lambda)\rightarrow\alpha(\lambda)P(\lambda).$$
\proclaim{Theorem 2.4}. {
Let $< , >_\alpha$ be the $``\alpha''$-scalar
product on the space~$\{ {\cal P\/}_{R^1} \}$
defined by~(2.22)--(2.23), and let $<P,P>_\alpha \geq0$ for every
polynomial~$P$. Then
the set~$\Sigma_\alpha$ of measures~$\mu_\alpha(d\lambda) \geq0$, such that
$$\alpha_j=\int\limits_{-\infty}^\infty
P_j(\lambda)\,\mu_\alpha(d\lambda),$$
and the set~$\Sigma_{s_\alpha}$ of measures, such that
sequence~$s_\alpha(n)=<\lambda^n,1>_\alpha$ is represented in the form
$$s_\alpha(n)=\int\limits_{-\infty}^\infty \lambda^n\,\mu_\alpha(d\lambda),
\qquad \mu_\alpha(d\lambda)\geq0, \eqno(2.32)$$
are identical. The sequence~$\{s_\alpha(n)\}$ can be obtained from the
triangular system of linear equations
$$\sum_{l=0}^j \gamma_{l,j}s_\alpha(l)=\alpha_j, \qquad j\in Z_+.\eqno(2.33)$$
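Forward substitution on the triangular system (2.33) can be sketched as follows; the polynomial system and the modified moments in the example are assumed data for illustration only (here $P_0=1$, $P_1=\lambda$, $P_2=\lambda^2-1/2$ and the measure is $d\lambda/2$ on $[-1,1]$).

```python
from fractions import Fraction

def power_moments(alpha, gamma):
    """Solve the triangular system (2.33) by forward substitution:
    sum_{l=0}^{j} gamma[j][l] * s(l) = alpha_j, where gamma[j][l] is the
    coefficient of lambda^l in P_j(lambda).  Exact over Fractions."""
    s = []
    for j, a in enumerate(alpha):
        rhs = Fraction(a) - sum(Fraction(gamma[j][l]) * s[l] for l in range(j))
        s.append(rhs / Fraction(gamma[j][j]))  # gamma[j][j] != 0, deg P_j = j
    return s

# assumed example: P_0 = 1, P_1 = x, P_2 = x^2 - 1/2, measure dx/2 on [-1,1]
alpha = [Fraction(1), Fraction(0), Fraction(-1, 6)]
gamma = [[1], [0, 1], [Fraction(-1, 2), 0, 1]]
assert power_moments(alpha, gamma) == [1, 0, Fraction(1, 3)]
```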
}
\vskip 1cm
\proclaim{2.5}. {$``\alpha''$-definition as multiplication by a function}
The function
$$\alpha(\gamma)=\sum_{j=0}^\infty \alpha_j P_j(\gamma)\eqno(2.34)$$
can be formally associated with a sequence~$\{\alpha_j\}$ for a fixed
sequence of polynomials~$\{P_j(\gamma)\}$. If~$\varrho(d\lambda)$ is
a measure having moments of all orders, the integral
$$\int_{-\infty}^\infty P(\lambda)\,\alpha(\lambda)
\,\varrho(d\lambda),$$
where $P(\lambda)$ is a polynomial, can be rewritten
in the following way:
$$\int\limits_{-\infty}^\infty
P(\lambda)\,\alpha(\lambda)\,\varrho(d\lambda)
=\sum_{j=0}^{{\rm deg}\,P}\gamma_j\alpha_j.\eqno(2.35)$$
Here, $\gamma_j$ are the coefficients of the expansion of $P(\lambda)$ as a
linear combination of the polynomials~$\{ P_j(\gamma) \}$.
\proclaim{Theorem 2.5}. {
Let $< , >_1$ and~$< , >_2$ be the scalar products on the polynomial
space~$\{ {\cal P\/}_{R^1} \}$; $\{P_j(\gamma)\}$ be the $< , >_2$-orthogonal
system of polynomials;
$$\alpha_j=<P_j,1>_1\ .$$
Then
$$<P,Q>_1=<P,Q>_{2,\alpha}\ ,\eqno(2.37)$$
and if $< , >_2=< , >_{\varrho(d\lambda)}$, then
$$<P,Q>_1=<P,Q>_{\alpha(\lambda)\varrho(d\lambda)}\ .\eqno(2.38)$$ }
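Relation (2.38) can be verified on a small example. In the sketch below (an illustration with assumed data) $< , >_2$ is integration over $[-1,1]$, $\{P_j\}$ are the monic Legendre polynomials, and $\alpha(\lambda)=\lambda^2$; the $``\alpha''$-product of (2.22) then agrees with the direct integral against $\alpha(\lambda)\,d\lambda$.

```python
from fractions import Fraction

def poly_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            r[i + j] += a * c
    return r

def integral(p):
    # int_{-1}^{1} p(x) dx, exact for polynomial coefficient lists
    return sum(2 * c / Fraction(k + 1) for k, c in enumerate(p) if k % 2 == 0)

# <,>_2: Lebesgue measure on [-1,1]; its monic orthogonal system P_0..P_2
P = [[Fraction(1)],
     [Fraction(0), Fraction(1)],
     [Fraction(-1, 3), Fraction(0), Fraction(1)]]
alpha_poly = [Fraction(0), Fraction(0), Fraction(1)]   # alpha(x) = x^2

# modified moments alpha_j = int P_j(x) alpha(x) dx
alpha = [integral(poly_mul(p, alpha_poly)) for p in P]

# take P = Q = P_1: P_1 * P_1 = x^2 = P_2 + (1/3) P_0, so gamma = (1/3, 0, 1)
gamma = [Fraction(1, 3), Fraction(0), Fraction(1)]

lhs = sum(g * a for g, a in zip(gamma, alpha))             # (2.22)
rhs = integral(poly_mul(poly_mul(P[1], P[1]), alpha_poly))  # direct integral
assert lhs == rhs == Fraction(2, 5)
```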
\beginsection{3. Scalar product on the space of trigonometric polynomials}
In this section we shall consider polynomials of one independent
complex variable $z$ on the unit circle~$\C=\{z\colon |z|=1\}$ which
are defined by their recurrence relations
$$\eqalign {u_{n+1}&=a_{n+1}zu_n+b_{n+1}v_n,\cr
v_{n+1}&={\overline b}_{n+1}zu_n+a_{n+1}v_n,
\qquad n=0,~1,~2,\ldots\cr}$$
and the initial conditions
$$u_0(z)\equiv1,\qquad v_0(z)\equiv1.$$
Assuming that $${a_n={\overline a}_n,}\qquad {a_n^2-|b_n|^2\neq 0,}
\qquad n=1,~2,\ldots$$
we will develop a technique analogous to the one expounded for the polynomial
space~$\{ {\cal P}_{R^1}\}$ on $\R^1$.
Let $\{ {\cal P}_C\}$ be a space of polynomials with complex coefficients
on the unit circle~$\C=\{z\colon |z|=1\}$,
i.e. a space of trigonometric polynomials, and let~$< , >$ denote a
scalar product on the space~$\{ {\cal P}_C\}$ with the property
$$<zP,zQ>=<P,Q> .$$
We shall assume that the Toeplitz determinants are nonzero:
$$\Delta_n=\det\|<z^j,z^k>\|_{j,k=0}^n\neq0 .$$
\proclaim{Theorem}. {The scalar product $< , >$ is definite, i.e.
$$<P,P> \geq0$$
if and only if $$|b_k|^2<1,\qquad k=0,~1,~2,\ldots~.\eqno(3.8)$$}
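Condition (3.8) can be tested numerically from Toeplitz data. The sketch below uses the standard Levinson--Durbin recursion (a common signal-processing variant, given here purely as an illustration; its reflection coefficients play a role analogous to the $b_k$). The sample data are assumed: $r_k$ are the Fourier coefficients of the positive density $1+\cos\theta$.

```python
from fractions import Fraction

def reflection_coefficients(r):
    """Levinson-Durbin recursion: from Toeplitz data r_0, ..., r_n compute
    the reflection coefficients kappa_1, ..., kappa_n.  The Toeplitz form
    is positive definite iff r_0 > 0 and |kappa_k| < 1 for all k."""
    r = [Fraction(x) for x in r]
    a = []                    # prediction coefficients a_1..a_k
    E = r[0]                  # prediction error, E_k = E_{k-1}(1 - kappa^2)
    kappas = []
    for k in range(1, len(r)):
        kappa = (r[k] - sum(a[j] * r[k - 1 - j] for j in range(k - 1))) / E
        a = [a[j] - kappa * a[k - 2 - j] for j in range(k - 1)] + [kappa]
        E = E * (1 - kappa ** 2)
        kappas.append(kappa)
    return kappas

# r_k for the positive density 1 + cos(theta): r = [1, 1/2, 0, 0]
ks = reflection_coefficients([1, Fraction(1, 2), 0, 0])
assert ks == [Fraction(1, 2), Fraction(-1, 3), Fraction(1, 4)]
assert all(abs(k) < 1 for k in ks)
# a non-positive-definite Toeplitz form violates |kappa| < 1
assert abs(reflection_coefficients([1, 2])[0]) > 1
```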
Omitting details, we now give the trigonometric version of the modified
Chebyshev algorithm.
Let $\{P_n(z)\}$ be a set of monic orthogonal polynomials satisfying
recurrence relations~(3.3), and let $\{\pi_n(z)\}$ be a set of
monic $< , >$-orthogonal polynomials with the following recurrence relations
$$\beta_{n+1}\pi_{n+2}(z)=(z\beta_{n+1}+\beta_{n+2})\pi_{n+1}(z)-
\beta_{n+2}(1-|\beta_{n+1}|^2)z\pi_n(z) \ .\eqno(3.9)$$
The coefficients~$\{\beta_n\}$ are not assumed to be known.
\proclaim{Definition}. { The matrix~$T_{\pi,P}$ of modified moments,
where $$\|T_{\pi,P}\|_{k,l}=\sigma_{k,l}{\buildrel \rm def \over =}
<\pi_k,P_l>$$
will be called a {\it Chebyshev matrix}.}
Note that $\sigma_{k,l}=0$ for all~$k>l$, i.e. the Chebyshev matrix is
triangular.
The entries of the Chebyshev matrix~$T_{\pi,P}$ and the coefficients of
the three-term recurrence formulas (3.3) and~(3.9) are connected by
relations that can also be considered as a recursion process to determine
entries of the Chebyshev matrix and unknown elements~$\beta_k$. Writing
it down in a recursive manner, we have:
{{\it the initial conditions:}~$\sigma_{-1,l}=0,\quad\sigma_{0,l}=<1,P_l>,
\quad l=0,~1,\ldots,n ,$
$\qquad\qquad\qquad\qquad\qquad\beta_0=1$
{\it and for all} $k=1,~2,\ldots,~n$
$$\beta_k={1\over\beta_{k-1}}\bigl[-{\sigma_{k-1,k}\over\sigma_{k-1,k-1}}
+{\overline b_k\over\overline b_{k-1}}
+(1-|\beta_{k-1}|^2){\sigma_{k-2,k-1}\over\sigma_{k-1,k-1}}
-(1-|\beta_{k-1}|^2){\overline b_k\over\overline b_{k-1}}
(1-|b_{k-1}|^2){\sigma_{k-2,k-2}\over\sigma_{k-1,k-1}}
\big]$$
\rightline {{~}~$(m)-T_C$}
$$\sigma_{k,l}=\left\{\eqalign{
& 0, \qquad l=0,~1,\ldots,~k-1; \cr
& {\overline b_l\over\overline b_{l-1}}\sigma_{k,l-1}+
{\beta_k\over\beta_{k-1}}\sigma_{k-1,l}-
{\beta_k\over\beta_{k-1}}{\overline b_l\over\overline b_{l-1}}
\sigma_{k-1,l-1}+\sigma_{k-1,l-1}\cr
&-{\overline b_l\over\overline b_{l-1}}(1-|b_{l-1}|^2)\sigma_{k-1,l-2}-
{\beta_k\over\beta_{k-1}}(1-|\beta_{k-1}|^2)\sigma_{k-2,l-1}\cr
&+{\beta_k\over\beta_{k-1}}(1-|\beta_{k-1}|^2)
{\overline b_l\over\overline b_{l-1}}(1-|b_{l-1}|^2)\sigma_{k-2,l-2},
\qquad l=k,~k+1,\ldots,~n \cr} \right. $$
}
In the case~$P_k(z)=z^k$, i.e. $b_k=0$ for~$k=0,~1,\ldots,$ the
$(m)-T_C$-recurrence relations are transformed into the well-known Levinson
algorithm [14]:
{\qquad {\it step~1:} $\pi_0(z)=1,\qquad\sigma_{0,0}=<1,1>,$
\qquad {\it step~2:} for~$k=1,~2,\ldots,~n$ the coefficient~$\beta_k$ and
the entries~$\sigma_{k,l}$ are computed by the $(m)-T_C$ relations
with~$b_k=0$.}
Let
$$Q_r(\mu)=<P(\lambda,\mu),P_r(\lambda)>_{\varrho(d\lambda)}, \eqno(6.5)$$
and~$\|C_{j,k}^r\|$ be the Haar matrix of the system~$\{ P_l(\lambda) \}$.
\proclaim{Theorem 6.2}. {The inequality~$P(\lambda,\mu)\bigm|_D\geq0$ is
true if and only if the kernel
$${\cal K}_{P,\varrho}(j,k;\mu)=\|\sum_{r=|j-k|}^{j+k} C_{j,k}^rQ_r(\mu)\|
\eqno(6.6)$$
is positive definite for~$0\leq\mu\leq1$.}
According to the Sylvester criterion, the test of positive definiteness
of the kernel~${\cal K}_{P,\varrho}(j,k;\mu)$ is reduced to the test
of positivity of a known sequence of
determinants~$d_0(\mu),d_1(\mu),\ldots,d_n(\mu),\ldots~$. The application
of the kernel~(6.6) for testing the condition~$P(\lambda,\mu)\bigm|_D\geq0$
is simplified if~$P(\lambda,\mu)$ is a polynomial. In this case the
kernel~${\cal K}_{P,\varrho}(j,k;\mu)$ is a band matrix. In addition, a
simplification can sometimes be achieved by a special choice of the Haar
matrix~$\|C_{j,k}^r\|$. However, the difficulties connected with the
calculation of higher-order determinants do not disappear.
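The Sylvester test just mentioned amounts to checking the leading principal minors; a minimal sketch (an illustration, exact for integer or rational entries):

```python
def det(M):
    # cofactor expansion; exact and adequate for small orders
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def positive_definite(K):
    """Sylvester criterion: a Hermitian kernel given as an n x n nested
    list is positive definite iff all leading principal minors
    d_0, d_1, ... are positive."""
    return all(det([row[:m] for row in K[:m]]) > 0
               for m in range(1, len(K) + 1))

assert positive_definite([[2, 1], [1, 2]])       # d_0 = 2, d_1 = 3
assert not positive_definite([[1, 2], [2, 1]])   # d_1 = -3 < 0
```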
\vskip 1cm
\proclaim {6.3}. { The Hausdorff differences and non-negativity test }
In Theorem~6.2 we did not use the boundedness of the rectangle~$D$
\footnote{$^2$}{The reasoning given can easily be modified for the case of
non-trivial unbounded regions.}.
Let us now apply the specificity of the Hausdorff moment problem. From~(6.5)
and~(6.6) we have
$${\cal K}_{P,\varrho}(j,k;\mu)
=<P_j,P_k>_{P(\lambda,\mu)\varrho(d\lambda)}
=\int\limits_0^1 P_j(\lambda)P_k(\lambda)P(\lambda,\mu)\varrho(d\lambda)
.$$
The one-dimensional moment sequence corresponding to the
kernel~${\cal K}_{P,\varrho}(j,k;\mu)$ is
$$s_\mu(n)=\int\limits_0^1\lambda^nP(\lambda,\mu)\varrho(d\lambda).\eqno(6.7)$$
Having formed the sequence of the Hausdorff differences
$$\eqalignno{
\Delta_{k,r}s_\mu&=s_\mu(r)-C_k^1s_\mu(r+1)+C_k^2s_\mu(r+2)-\cdots+
(-1)^ks_\mu(r+k) \cr
&=\int\limits_0^1 \lambda^r(1-\lambda)^k
P(\lambda,\mu)\varrho(d\lambda)&(6.8) \cr}$$
for the sequence~$\{s_\mu(n)\}$
we obtain the following proposition by the well-known Hausdorff criterion.
\proclaim{Theorem 6.3}. { The following conditions are equivalent
$$\eqalign{
{1.}\quad &P(\lambda,\mu)\bigm|_D\geq0,\cr
{2.}\quad &\Delta_{k,r}s_\mu\geq0,\qquad k,r=0,~1,\ldots~.\cr}
\eqno(6.9)$$}
Note that the test of the ``two-dimensional'' sequence~$\Delta_{k,r}s_\mu$
of functions of one variable can be carried out in the
above way through an application of the modified Chebyshev algorithm.
Applications of the modified Chebyshev algorithm also facilitate the
computation of the integrals~(6.8) in complicated cases.
The above constructions are particularly clear if $P(\lambda,\mu)$
is a polynomial. In this case all the functions~$\Delta_{k,r}s_\mu$ are
polynomials.
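The Hausdorff differences (6.8) are straightforward to compute from a moment sequence. The sketch below is an illustration with assumed sample data: $P(\lambda,\mu)\equiv1$ and $\varrho$ the Lebesgue measure on $[0,1]$, so that $s_\mu(n)=1/(n+1)$ and every difference equals a (positive) Beta integral, as Theorem 6.3 predicts.

```python
from fractions import Fraction
from math import comb

def hausdorff_diff(s, k, r):
    # Delta_{k,r} s = sum_{i=0}^{k} (-1)^i C(k,i) s(r+i), cf. (6.8)
    return sum((-1) ** i * comb(k, i) * s(r + i) for i in range(k + 1))

# assumed data: P(lambda, mu) = 1, rho = Lebesgue measure on [0,1],
# hence s_mu(n) = 1/(n+1) and Delta_{k,r} = int lambda^r (1-lambda)^k dlambda
s = lambda n: Fraction(1, n + 1)
assert hausdorff_diff(s, 0, 0) == 1
assert hausdorff_diff(s, 2, 1) == Fraction(1, 12)   # = B(2,3)
assert all(hausdorff_diff(s, k, r) > 0 for k in range(5) for r in range(5))
```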
\vskip 1cm
{\it Example~4.\/} To illustrate this, let us give the numerical results
for the polynomial~$Q(\lambda,\mu)$, $-1\leq\lambda\leq1$, $0\leq\mu\leq1$,
where
$$Q(\lambda,\mu)=(\lambda-0.6)^2+4(\lambda-0.6)(\mu-0.1)+4.1(\mu-0.1)^2
+\varepsilon,\quad \varepsilon=-0.07 .$$
The smallest $n$ such that $\widehat b_n<0$ and~$\widehat b_{n-1}<0$
is given in Table~4 for $k,r=0,~1,\ldots,~10$.
{\rm
\vskip 1cm
{\bf Table~4.}
{\vskip 6pt
\hrule
\settabs 13 \columns
\vskip 6pt
\+&$k\ \backslash\ r$&0&1&2&3&4&5&6&7&8&9&10\cr
\vskip 6pt
\hrule
\vskip 6pt
\+&0&35&19&15&13&13 &11&11&11&11&11&11\cr
\vskip 6pt
\+&1&19&16&14&14&12&12&12&11&11&11&11\cr
\vskip 6pt
\+&2&15&15&14&14&12&12&12&12&12&11&11\cr
\vskip 6pt
\+&3&14&13&13&13&12&12&12&12&12&12&10\cr
\vskip 6pt
\+&4&12&13&13&13&12&12&12&12&12&12&12\cr
\vskip 6pt
\+&5&12&12&13&13&11&11&12&12&12&12&12\cr
\vskip 6pt
\+&6&12&12&12&12&11&11&11&11&11&11&11\cr
\vskip 6pt
\+&7&12&12&12&11&11&11&11&11&11&11&11\cr
\vskip 6pt
\+&8&11&12&12&11&11&11&11&11&11&11&11\cr
\vskip 6pt
\+&9&11&12&12&11&11&11&11&11&11&11&11\cr
\vskip 6pt
\+&10&11&12&12&11&11&11&11&11&11&11&11\cr
\vskip 6pt
\hrule
\vskip 1cm }
}
{\it Note:\/} the test of a function of $k$ variables given on a
hyperrectangle is reduced to the check of non-negativity of a
$2^{k-1}$-dimensional countable sequence of one-dimensional functions
by the procedure given above.
\beginsection{7. Moment method for a Schr\"odinger equation
and the modified Chebyshev algorithm}
In recent works [7-9,20] the moment method has been used with great
success to investigate the Schr\"odinger equation with a semi-bounded
potential. The main reason for the successful application of the moment
method to these problems is the phenomenon of exponential decrease
of the solution of the Schr\"odinger equation with a given potential.
Roughly, the exponential decrease means the following.
Let $$-\Delta\psi +(V-E)\psi=0\eqno(7.1)$$ be the Schr\"odinger equation with
a semi-bounded potential~$V$. Then the values~$E$ for which there exist
non-trivial solutions~$\psi$,
are subdivided into two classes: the continuous spectrum and
the discrete spectrum arranged on the left of it. The solutions
corresponding to the
discrete spectrum decrease exponentially. The rate of decrease
depends on the distance between the considered eigenvalues and the continuous
spectrum.
If we consider the ground state solution~$\psi_{g.s.}(x)$
of the Schr\"odinger equation corresponding to the smallest energy
value~$E_{g.}$,
then it follows from physical considerations that
$\psi_{g.s.}(x)\geq0$.
The non-negativity of $\psi_{g.s.}(x)$ and the property of exponential
decrease allow us to transform the Schr\"odinger equation~(7.1)
for $\psi_{g.s.}(x)$
with a linear fractional semi-bounded potential, by associating
with $\psi_{g.s.}(x)$ its sequence of power moments~$\mu_n$, into the
equivalent recurrence relation
$$\mu_n=M(\widetilde\mu_0,\widetilde\mu_1,\ldots,\widetilde\mu_s,\mu_{n-k},E),
\eqno(7.2)$$
where $E$ is a parameter to be determined.
Assuming that $E_{g.}$ and the set of missing
moments~$\{\widetilde\mu_{n_1},\widetilde\mu_{n_2},\ldots,
\widetilde\mu_{n_k}\}$
are known, we can calculate all moments of the function~$\psi_{g.s.}(x)$
by the recurrence relation~(7.2) and, hence, taking into account the nature of
the decrease of $\psi_{g.s.}(x)$ we can reconstruct $\psi_{g.s.}(x)$
in principle.
However, this cannot be done immediately because the power system is very far
from the orthogonal system of polynomials corresponding to
the weight~$\psi_{g.s.}(x)$.
In this section we limit ourselves to considering the one-dimensional
equation~(7.1).
First of all, let us note that, as was shown in [7-9,20], the sequential
upper and lower energy bounds for the ground state energy~$E_{g.}$ are defined
by the conditions
$$\det\|\mu_{i+j}(E)\|_{i,j=0}^n>0.$$
Moreover, in the considered model examples it is sufficient to restrict
ourselves to the half-axis case.
Next, let
$$\psi(x)\geq0\quad (0\leq x<\infty),
\qquad\exists k>0:\quad\psi(x)\leq Ce^{-kx}\quad (x>R),$$
and all moments
$$m_n=\int\limits_0^\infty x^n\psi(x)dx$$
are known.
Now we introduce the auxiliary weight~$w(x)=e^{\gamma x}, \gamma<2R$
with the known $<,>_w$-orthogonal system of monic polynomials~$\{ P_j(x) \}$
and with the coefficients~$\{ a_n;b_n \}$ of the recurrence relations.
The set of values
$$\sigma_{0,l}=\sum_{s=0}^l\gamma_{s,l}m_s,$$
where $$P_j(x)=\sum_{l=0}^j\gamma_{l,j}x^l,$$
is also known for the function~$\psi(x)/w(x)$.
Let us take $\{\sigma_{0,l}\}$ for the initial data
of the modified Chebyshev algorithm.
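The modified Chebyshev step just described can be sketched in code. The sketch below implements the classical modified-moment (Wheeler) variant in the monic convention $P_{l+1}=(x-a_l)P_l-b_lP_{l-1}$; this differs in detail from the $(m)$-T recursions of the present paper, so it is an illustration under stated assumptions rather than the authors' exact algorithm. The test data are the power moments of the Legendre weight on $[-1,1]$.

```python
from fractions import Fraction

def wheeler(nu, a=None, b=None):
    """Modified Chebyshev (Wheeler) algorithm, monic convention: from
    modified moments nu[l] = <P_l, 1> of a known system P_{l+1} =
    (x - a[l]) P_l - b[l] P_{l-1}, recover the recurrence coefficients
    (alpha_k, beta_k) of the orthogonal polynomials of the unknown
    weight.  a = b = None means power moments (P_l = x^l)."""
    m = len(nu)                           # nu[0 .. 2n-1] gives n pairs
    n = m // 2
    nu = [Fraction(x) for x in nu]
    a = [Fraction(x) for x in (a if a is not None else [0] * m)]
    b = [Fraction(x) for x in (b if b is not None else [0] * m)]
    alpha = [a[0] + nu[1] / nu[0]]
    beta = [nu[0]]
    sig_prev = {}                         # sigma_{k-2, l}
    sig = {l: nu[l] for l in range(m)}    # sigma_{k-1, l}
    for k in range(1, n):
        sig_new = {}
        for l in range(k, 2 * n - k):
            sig_new[l] = (sig[l + 1] - (alpha[k - 1] - a[l]) * sig[l]
                          - beta[k - 1] * sig_prev.get(l, Fraction(0))
                          + b[l] * sig[l - 1])
        alpha.append(a[k] + sig_new[k + 1] / sig_new[k] - sig[k] / sig[k - 1])
        beta.append(sig_new[k] / sig[k - 1])
        sig_prev, sig = sig, sig_new
    return alpha, beta

# power moments of the Legendre weight dx on [-1,1]  (assumed test data)
nu = [2, 0, Fraction(2, 3), 0, Fraction(2, 5), 0]
alpha, beta = wheeler(nu)
assert alpha == [0, 0, 0]
assert beta == [2, Fraction(1, 3), Fraction(4, 15)]
```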
The result of calculation by this modified Chebyshev algorithm
is the set~$\{ \widehat P_j(x) \}$ of monic polynomials orthogonal with
respect to the weight~${\psi(x) \over w(x)} w(x)dx$, i.e. the
$< , >_{\psi(x)dx}$-orthogonal set of polynomials.
The last system of polynomials allows one to compute the following integral
$$I_j(\Omega)=\int\limits_\Omega
(\psi(x)-C)\widehat P_j(x)w(x)dx,\eqno(7.3)$$
where $\Omega$ is some set on the half-axis.
Note that $\psi(x)$ generates some $U.p.$-measure and hence the knowledge
of the integral~(7.3) allows one to determine the range of the
function~$\psi(x)-C$.
The calculation of $\int_\Omega \widehat P_j(x)w(x)dx$ can
be accomplished by a quadrature formula corresponding to the weight $w(x)dx$,
because the coefficients of recurrence relation for the
system~$\{ \widehat P_j(x) \}$ are obtained by the modified
Chebyshev algorithm.
For the calculation of the integral
$\int_\Omega\psi(x)\widehat P_j(x)w(x)dx$
(the function~$\psi(x)$ is unknown) the quadrature formula for the weight
$w(x)dx$
can be used since the system~$\{ \widehat P_j(x) \}$ is known.
It is sufficient to construct a $< , >_\psi$-quadrature formula.
It should be taken into account that the set~$\Omega$ is realized
by summing over the quadrature nodes lying in this set.
To give an illustrative example, we consider the usual continuum
harmonic-oscillator
problem $$-\psi^{\prime\prime}+x^2\psi=E\psi.\eqno(7.4)$$
Obviously, in this case there exists a unique point $E_{g.}$ of
the discrete spectrum. If we take
$$\mu_p=\int_{-\infty}^\infty x^p\psi_{g.s.}(x)dx,$$
then the equation~(7.4) is transformed into a relation for the moments:
$$-p(p-1)\mu_{p-2}+\mu_{p+2}=E\mu_p.$$
Next, $\mu_k=0$ for all odd $k$. Therefore, after the replacements $p=2k$
and $\mu_{2k}=m_k$ we have
$$m_{k+1}=Em_k+2k(2k-1)m_{k-1}.\eqno(7.5)$$
Thus, to recover all the moments~$\{m_k\}$ corresponding to~$E$
in our example, it is sufficient to know only the two first moments~$m_0,~m_1$,
i.e. these values are the missing moments. Taking $k=0$ in relation~(7.5),
we obtain $m_1=Em_0$. Hence in the considered example we can restrict
ourselves to only one moment~$m_0$ and, without loss of generality,
we can take $m_0=1$.
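The recursion (7.5) together with the determinant criterion quoted earlier is easy to reproduce. The sketch below (exact rational arithmetic, naive determinants; an illustration, not the modified Chebyshev route used in the text) shows the behaviour underlying Table~5: away from $E_{g.}=1$ a Hankel determinant soon becomes non-positive.

```python
from fractions import Fraction

def moments(E, n):
    # m_{k+1} = E m_k + 2k(2k-1) m_{k-1}, m_0 = 1, m_1 = E  (relation (7.5))
    m = [Fraction(1), Fraction(E)]
    for k in range(1, n):
        m.append(Fraction(E) * m[k] + 2 * k * (2 * k - 1) * m[k - 1])
    return m

def det(M):
    # cofactor expansion; exact and adequate for the small orders used here
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def first_failure(E, nmax=4):
    """Smallest order n at which det||m_{i+j}|| or det||m_{i+j+1}||
    is non-positive (Stieltjes solvability test), or None if all pass."""
    m = moments(E, 2 * nmax + 2)
    for n in range(nmax + 1):
        H0 = [[m[i + j] for j in range(n + 1)] for i in range(n + 1)]
        H1 = [[m[i + j + 1] for j in range(n + 1)] for i in range(n + 1)]
        if det(H0) <= 0 or det(H1) <= 0:
            return n
    return None

assert moments(1, 4) == [1, 1, 3, 15, 105]   # (2k-1)!! at E = 1
assert first_failure(1) is None              # E = E_{g.} = 1 passes
assert first_failure(Fraction(1, 2)) == 1    # off the eigenvalue it fails
```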
If the moment sequence~$\{m_k(E)\}_{k=0}^\infty$ is known, we can
apply the necessary modified Chebyshev algorithms to check the criterion
of solvability of the
Stieltjes moment problem (Theorem~4.1) for all fixed values~$E$.
Obviously, the nearer the chosen $E$ is to $E_{g.}=1$, the more of the
first members of the sequences $\{\widehat b_k\}$
and~$\{\widehat b_k^{(1)}\}$ obtained by
the corresponding modified Chebyshev algorithm are positive. This is
directly confirmed by the numerical results given in Table~5,
where $n_0$ is the index of the first negative element
in the sequence~$\{\widehat b_k\}$ (left-hand side of Table~5)
and~$\{\widehat b_k^{(1)}\}$ (right-hand side of Table~5) corresponding
to the chosen~$E$.
{\rm
\vskip 1cm
{\bf Table~5.}
{\vskip 6pt
\hrule
\settabs 6 \columns
\vskip 6pt
\+&$E$&$n_0$&&$E$&$n_0$\cr
\vskip 6pt
\hrule
\vskip 6pt
\+&$1-10^{-1}$&3&&$1+10^{-1}$&4\cr
\vskip 6pt
\+&$1-10^{-2}$&5&&$1+10^{-2}$&5\cr
\vskip 6pt
\+&$1-10^{-3}$&6&&$1+10^{-3}$&6\cr
\vskip 6pt
\+&$1-10^{-4}$&7&&$1+10^{-4}$&8\cr
\vskip 6pt
\+&$1-10^{-5}$&8&&$1+10^{-5}$&9\cr
\vskip 6pt
\+&$1-10^{-6}$&9&&$1+10^{-6}$&10\cr
\vskip 6pt
\+&$1-10^{-7}$&10&&$1+10^{-7}$&11\cr
\vskip 6pt
\+&$1-10^{-8}$&11&&$1+10^{-8}$&12\cr
\vskip 6pt
\+&$1-10^{-9}$&13&&$1+10^{-9}$&13\cr
\vskip 6pt
\+&$1-10^{-10}$&14&&$1+10^{-10}$&14\cr
\vskip 6pt
\hrule
\vskip 1cm }
}
In our opinion it is instructive that the comparison of the given numerical
results with those from paper [10] emphasizes the principal
possibilities, efficiency and accessibility of the modified Chebyshev
algorithm as a numerical method (a PC AT sufficed for the modified
Chebyshev algorithm, versus a Cray-2 for the method given in [10]).
The contrast is even greater when we compare the effectiveness of the
modified Chebyshev algorithm in estimating the range of the
function~$\psi_{g.s.}(x)$.
\beginsection{8. Hilbert spaces of entire functions and the
modified Chebyshev algorithm}
To describe the solutions of the indeterminate moment problem, the
extension problem for Hermitian-positive functions, the solutions of
various interpolation problems, and the isometric inclusions
of the De~Branges spaces of entire functions, it is necessary to construct
the entire second-order matrix-function~$W(z)$ realizing the linear
fractional transformation of the Nevanlinna function class. To construct
such a matrix-function it is desirable to have an effective computational
technique.
In our opinion, the modified Chebyshev algorithm is an effective technique
for the indeterminate power moment problem. Indeed, accepting the moment
sequence
as initial data for the modified Chebyshev algorithm with $a_k=b_k=0$,
we obtain
the set of $< , >$-orthogonal polynomials~$\{P_k(\lambda)\}$ of the first
kind, and at the same time the
entries~$\{\widehat a_k,\widehat b_k\}$ of the Jacobi matrix. This matrix can
be used to construct the set of $< , >$-orthogonal
polynomials~$\{Q_k(\lambda)\}$ of the second kind by recursion.
After that the approximations of the entries
$$\eqalignno{ A(z)&=z\sum_{k=0}^\infty Q_k(0)Q_k(z),\cr
B(z)&=-1+\sum_{k=0}^\infty Q_k(0)P_k(z),\cr
C(z)&=1+\sum_{k=0}^\infty P_k(0)Q_k(z),\cr
D(z)&=z\sum_{k=0}^\infty P_k(0)P_k(z). \cr
}$$
of the matrix-function~$W(z)$ can be calculated.
Note that for the indeterminate power moment problem these series
converge uniformly on all compact sets in the complex plane.
The approximation of the phase function corresponding to the
Hilbert spaces of entire functions [3] can be obtained in the same way.
This gives a sequence of canonical approximations for solutions of the
infinite moment problem.
If we have a truncated trigonometric moment problem, the
matrix-func\-tion~$W(z)$ corresponding to it can be obtained by
the Levinson algorithm (see Section~3).
The procedure of construction of the
matrix-func\-tion $W(z)=\|W_{jk}(z)\|_{j,k=1}^2$ described above is
considerably more stable than a procedure directly based on determinant
equations~(2.2) and~(2.3). However, errors are accumulated in this case too,
which necessitates more accurate calculation and impedes the advance
to high-degree orthogonal polynomials. If a system of orthogonal
polynomials~$\widetilde P_k(\lambda)=\sum\eta_{l,k}\lambda^l$
is used as a starting point, knowledge of the values of $\sum \eta_{l,k} s(l)$
is necessary. Their direct calculation is a fairly complicated task. It does,
however, sometimes become possible to indicate a system ``close'' to
$\{ P_k\}$; then
$\sum \eta_{l,k} s(l)$ will have temperate growth (see Section~2.5).
Sometimes, if such an approach is taken, some a priori information
can be taken into consideration on measures representing the sequence~$s(l)$,
e.g. the existence of representing measures supported on~$[0,\infty)$
as its principal property (the Stieltjes case). Another example of the
indication of a ``starting'' system of orthogonal polynomials ``close''
to the one which is sought
is given by the moment methods of finding the energy of the ground state
(see Section~7)
and excited states for the Schr\"odinger equation with a semi-bounded
potential.
Some ``starting'' systems of orthogonal polynomials are to be found
in the theory of spectral functions of a string~[4].
The construction of the corresponding modified Chebyshev technique for
the arbitrary De~Branges spaces of entire functions seems to us a very
interesting and important problem.
\beginsection{ Acknowledgement.}
The authors are grateful to the referees for their constructive
and stimulating criticism.
\beginsection{ 9. References}
\item{1.} Achieser~N.I. (1965): The Classical Moment Problem and Some Related
Questions in Analysis. Oliver \& Boyd, Edinburgh
\item{2.} Chebyshev,~P.L. (1859): Sur l'interpolation par la m\'ethode
des moindres carr\'es,
M\'em.Acad.Imper.Sci. St.Petersbourg. 7, {\bf 1}, 15, 1--24
\item{3.} De~Branges~L. (1968): Hilbert spaces of entire functions.
Prentice-Hall, Englewood Cliffs, N.J.
\item{4.} Dym~H., McKean~H.P. (1976):
Gaussian Processes, Function Theory,
and the Inverse Spectral Problem. Academic Press.
\item{5.} Gantmaher~F.R. (1988): Matrix Theory. Nauka, Moscow
\item{6.} Gautschi~W. (1982): On generating orthogonal polynomials,
SIAM J. Sci. Statist. Comput. {\bf 3}, 289--317
%(1985): Orthogonal polynomials --- Constructive theory
% and applications, J.Comput. and Appl.Math. {\bf 12/13}, 61--76
\item{7.} Handy~C.R. (1987): Moment method quantisation of a linear
differential equation for~$|\psi |^2$, Phys.Rev.~A, {\bf 36}, 12,
4411--4416
\item{8.} Handy~C.R.,~Bessis~D. (1985): Rapidly Convergent Lower Bounds for
the Schr\"odinger-Equation Ground-State Energy, Phys.Rev.Letters, {\bf 55},
9, 931--934
\item{9.} Handy~C.R.,~Bessis~D.,~Morley~T.D. (1988): Generating quantum energy
bounds by the moment method: A linear programming approach.
Phys.Rev.~A, {\bf 37}, 12, 4557--4569
\item{10.} Handy~C.R.,~Pei~J.Q. (1988): Moment-method analysis of ground
state of discretized bosonic systems, Phys.Rev.~A., {\bf 38}, 7,
3175--3181
\item{11.} Korzh~S.A.,~Ov\^carenko~I.E.,~Ugrinovsky~R.A. (1991):
Chebyshev recursion and its applications.
In Proc.16th USSR Conf. on Operator Theory in Functional Spaces,
pp.114 (N.Novgorod) (in Russian)
\item{12.} Kowalski~M.A. (1985): Scalar product on space of polynomials,
Acta Math.Hung., {\bf 46}, 1--2, 101--109
\item{13.} Krein~M.G.,~Naimark~M.A. (1936): The method of symmetric and
Hermitian forms in the theory of the separation of roots of
algebraic equations. English translation in
Linear and Multilinear Algebra, 1981, {\bf 10}, 265--308
\item{14.} Levinson~N. (1947): The Wiener rms (root-mean-square) Error Criterion
in Filter Design and Prediction, J.Math.Physics. {\bf 25}, 261--278.
\item{15.} Notik~A.I.,~Knafel'~A.I.,~Turchin~V.I.,~Ugrinovsky~V.A.,~Ugrinovsky~R.A.
(1990): Spectral analysis via continual maximum entropy technique,
Radiotechnika \& Electronika, {\bf 35}, 9, 1904--1912 (in Russian)
\item{16.} Ov\^carenko~I.E. (1990): Scalar product on polynomial space and positivity,
Dokl. AN of the Ukr.SSR, ser.A., 7, 17--21 (in Russian)
\item{17.} Ov\^carenko~I.E. (1991): Some applications of the Chebyshev recursion,
Dokl. AN of the USSR, {\bf 319}, 2, 287--291 (in Russian)
\item{18.} Sushkevich~A.K. (1941): The principles of higher algebra.
Gostexizdat (in Russian)
\item{19.} Ugrinovsky~R.A. (1992): On the trigonometric moment problem with
a lacuna, the Levinson algorithm and some applied problems, Ukraine
Math.J., {\bf 44}, 5, 713--716 (in Russian)
\item{20.} Vrscay~E.R.,~Handy~C.R. (1989): The perturbed two-dimensional
oscillator: eigenvalues and infinite-field limits via continued fractions,
renormalised perturbation theory and moment
methods, J.Phys.A: Math.Gen., {\bf 22}, 7, 823--834
\item{21.} Wheeler~J.C. (1984): Modified moments and continued fraction
coefficients for the diatomic linear chain, J.Chem.Phys. {\bf 80}, 1,
472--476
}
\bye