\documentstyle[12pt]{amsart}
\textwidth=155mm
\textheight=210mm
\newcounter{const}[section]
\newtheorem{theorem}{Theorem}
\newcommand\R{{\Bbb R}}
\newcommand\C{{\Bbb C}}
\newcommand\ie{{\em i.e.}}
\newcommand\eg{{\em e.g.}}
\begin{document}
\title[Self-Adjoint Non-Linear Eigenvalue Problems]
{Self-Adjoint Non-Linear \\
Eigenvalue Problems \\
for Linear Hamiltonian Systems}
\author{A. A. Abramov and A. Aslanyan}
\address{Prof.~A.~A.~Abramov: Department of Numerical Methods,
Computing Centre, Russian Academy of Sciences, Vavilova St. 40,
Moscow 117967, Russia. Tel. (7) (095) 1353398. Fax (7) (095) 1356159.
Dr.~A.~Aslanyan: Department of Mathematics, King's College London,
Strand, London WC2R 2LS, UK. Tel. (44) (171) 8365454 ext. 1026.
Fax (44) (171) 8732017.}
\email{alalabr@@ccas.ru , aslanyan@@mth.kcl.ac.uk}
\thanks{The research was supported by the Russian Foundation for Basic
Research (Project No. 96-01-00951) and the Royal Society of London, UK}
\begin{abstract}
A method for finding eigenvalues (EVs) and eigenfunctions (EFs)
of a self-adjoint differential problem is proposed and investigated.
A two-point boundary value problem (BVP) for a linear Hamiltonian
ODE system is considered
on a finite interval and on a half-line;
the spectral parameter enters the system non-linearly.
Following the technique described, one can calculate,
under certain assumptions, all the EVs lying in a given interval,
counted with their multiplicities.
The method in question is based on oscillation properties of the
system considered and essentially uses the monotone dependence of its
matrix on the spectral parameter.
The main idea of the method for a problem in a finite interval
is due to Abramov (1991) who proposed to determine
the number of EVs in terms of
the number of the so-called conjugate points.
The corresponding relation is established by
oscillation theorems which
provide a theoretical basis for the approach presented below.
Its numerical part includes a new
version of the transfer method. In other words, the problem is reduced to
numerical integration of a finite number of auxiliary Cauchy problems.
The method proposed (including its modifications)
has been applied to several problems occurring in shell
theory; numerical results are also presented.
\end{abstract}
\maketitle
\vspace{.3in}
{\em Key words: ODE, EVP, Singular BVP, Hamiltonian System}
{\em AMS subject classification: 65L15}
\vspace{.3in}
\section{Spectral problem in a finite interval}
\subsection{Statement of a problem. Introduction}
Consider a Hamiltonian system
$$
J y^\prime = A(t,\lambda) y, \ \ \ a \leq t \leq b,
\eqno(1)
$$
with boundary conditions
$$
\psi_a(\lambda) y(a) = 0,
\eqno(2a)
$$
$$
\psi_b(\lambda) y(b) = 0.
\eqno(2b)
$$
Here
$J = \left \| \begin{array}{cc}
0& -I\\
I& 0
\end{array} \right \| $,
$I$ is the identity $(n \times n)$-matrix;
$y = \left \| \begin{array}{c}
y_1\\
y_2
\end{array} \right \|$,
$A =
\left \| \begin{array}{cc}
A_{11} & A_{12}\\
A_{21} & A_{22}
\end{array} \right \|$,
$\psi_a = \left \| \psi_{1a}, \psi_{2a}\right \|$,
$\psi_b = \left \| \psi_{1b}, \psi_{2b}\right \|$;
$y_i: [a,b] \rightarrow \C^n$ for a fixed $\lambda$,
$A_{ij}: [a,b]\times[\Lambda_1,\Lambda_2]
\rightarrow \C^{n \times n}$,
$\psi_{ia}, \psi_{ib}: [\Lambda_1,\Lambda_2] \rightarrow
\C^{n \times n}$, $i,j=1,2$.
The matrix $A$ of system~(1) depends (generally speaking, non-linearly)
on a real spectral parameter $\lambda \in [\Lambda_1,\Lambda_2]$.
The matrix functions $A_{ij}(t, \lambda)$ are continuous in
$[a,b] \times [\Lambda_1,\Lambda_2]$.
Boundary conditions (2) depend on $\lambda$ continuously;
${\rm rank} \psi_a = {\rm rank} \psi_b = n$ for all $\lambda$.
BVP (1),~(2) is assumed to be self-adjoint,
\ie, $A = A^*$, $\psi_a J \psi_a^* = \psi_b J \psi_b^* = 0$.
We call $\lambda$ an EV of (1),~(2) if
for this $\lambda$ there exists a non-trivial solution to (1) (an EF)
satisfying conditions (2). The number of linearly independent
EFs corresponding to a certain EV is called its multiplicity.
To justify a basic numerical method we assume that
\begin{enumerate}
\item[]
$A(t, \lambda)$ is non-decreasing in $\lambda$:
$\lambda_2 > \lambda_1 \Rightarrow A(t,\lambda_2) \geq
A(t,\lambda_1)$ for all $t \in [a,b]$;
\item[]
$A_{22} > 0$ in $[a,b] \times [\Lambda_1, \Lambda_2]$;
\item[]
all EVs of (1),~(2) are isolated.
\end{enumerate}
We shall also assume special (monotone) dependence of the matrices
$\psi_a, \psi_b$ upon $\lambda$; the exact condition is to be
formulated below.
If the above conditions hold, it proves to be possible to
determine numerically the exact number of EVs belonging to
$[\Lambda_1, \Lambda_2]$ in terms of solutions to auxiliary
Cauchy problems. This technique is to be studied in detail
in Section~1. A basic numerical procedure described
there is close to those proposed in [2--5]
%\cite{Abr1}, \cite{Abr2}, \cite{Lid-N}, \cite{K-K-P}
for some self-adjoint BVPs. In fact, this
is an advanced version of the transfer method (also called the
pivotal condensation method in a number of papers on
the subject). In \cite{1} the special form of Hamiltonian
system (1) was taken into account.
While studying the basic method considered below,
we obtained results which enabled its generalization.
Namely, it proved to be possible to replace the
original condition $A_{22} > 0$ by $A_{22} \geq 0$. In Section
2 we shall show that the method is still applicable to
problems of this kind.
Due to this result the method has been extended to some
special cases which include important applied problems.
In Sections~1 and~2 system (1) is considered in an interval $[a,b]$;
in Section~3 we pass to a BVP on a half-line and study
how to set a proper boundary condition at infinity and
transfer it to a finite point.
To single out the solutions to (1) bounded as $t \rightarrow
\infty$, we follow an approach presented in \cite{Abr-B-Kon} and
give a method for its practical implementation, which
includes applications to EVPs.
Numerical results are discussed in Section~4
where certain problems of the shell theory are considered.
\subsection{A version of the transfer method for Hamiltonian systems}
Bearing in mind the numerical implementation of the procedures described
below, we assume that no large parameters are involved in the problem.
Let us choose a real $\tilde{\theta}$ and change the variables in (1):
$$
y = \tilde{M} \tilde{y}, \ \ \tilde{M} =
\left \| \begin{array}{cc}
I \cos\tilde{\theta} & -I \sin\tilde{\theta} \\
I \sin\tilde{\theta} & I \cos\tilde{\theta}
\end{array} \right \| ;
\eqno(1.1)
$$
we obtain
$$
J \tilde{y}^\prime = \tilde{A}(t,\lambda) \tilde{y},
\eqno(1.2)
$$
where $\tilde{A} = \tilde{M}^*A\tilde{M}$; boundary conditions (2)
become
$$
\tilde{\psi}_a \tilde{y}(a) = 0, \ \ \ \tilde{\psi}_b \tilde{y}(b) = 0,
\eqno(1.3)
$$
where $\tilde{\psi}_a = \psi_a \tilde{M}$,
$\tilde{\psi}_b = \psi_b \tilde{M}$. The change of variables (1.1), \ie, the rotation by the angle $\tilde{\theta}$, leaves the BVP self-adjoint.
In what follows we shall deal with the left boundary condition
for the sake of brevity, meaning
that similar facts are valid for the right one.
As is easily seen, the above assumptions imply that the polynomial
$f(z) = {\rm det} \left[\psi_{1a}(\lambda) + z
\psi_{2a}(\lambda)\right]$ is not identically zero (here $z$ is a
complex scalar). Therefore,
for each fixed $\lambda$ we can choose $\tilde{\theta}$ so that
$$
{\rm det} \left[\psi_{1a}(\lambda)\cos\tilde{\theta} +
\psi_{2a}(\lambda)\sin\tilde{\theta}\right] \not= 0. \eqno(1.4)
$$
Then the left boundary condition given by (1.3) can be written as
$$
\begin{array}{c}
\tilde{y}_1(a) = \tilde{\mu}_a \tilde{y}_2(a), \\
\tilde{\mu}_a(\lambda,\tilde{\theta}) =
(\psi_{1a}(\lambda)\cos\tilde{\theta} +
\psi_{2a}(\lambda)\sin\tilde{\theta})^{-1}
(\psi_{1a}(\lambda)\sin\tilde{\theta} -
\psi_{2a}(\lambda)\cos\tilde{\theta});
\end{array}
\eqno(1.5)
$$
it is easily verified that $\tilde{\mu}_a^* = \tilde{\mu}_a$.
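As a purely illustrative sketch (the function name \verb|mu_a| and the use of NumPy are our own choices, not part of the paper), formula (1.5) can be evaluated numerically as follows; the self-adjointness condition $\psi_a J \psi_a^* = 0$ is what makes the result Hermitian.

```python
import numpy as np

def mu_a(psi1, psi2, theta):
    """Transferred left boundary matrix (1.5):
       mu_a = (psi1*cos(th) + psi2*sin(th))^{-1} (psi1*sin(th) - psi2*cos(th)).
    For a self-adjoint boundary condition (psi_a J psi_a^* = 0) the
    resulting matrix is Hermitian."""
    c, s = np.cos(theta), np.sin(theta)
    # solve (c*psi1 + s*psi2) X = (s*psi1 - c*psi2) instead of forming inverses
    return np.linalg.solve(c * psi1 + s * psi2, s * psi1 - c * psi2)
```

For instance, with $\psi_{1a} = I$ and a real symmetric $\psi_{2a}$ the self-adjointness condition holds and the computed matrix is symmetric; with $\psi_{1a} = I$, $\psi_{2a} = 0$, $\tilde{\theta} = 0$ one recovers the Dirichlet case $\tilde{\mu}_a = 0$.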
Now we are able to apply the classical transfer method to
the left boundary condition: in a neighbourhood of $a$ we define a function $\tilde{\mu}(t)$ satisfying a Cauchy problem
$$
\tilde{\mu}^\prime =\tilde{\mu} \tilde{A}_{11} \tilde{\mu} +
\tilde{\mu} \tilde{A}_{12} +
\tilde{A}_{21} \tilde{\mu} +
\tilde{A}_{22}, \ \ \ \tilde{\mu}(a) = \tilde{\mu}_a, \eqno(1.6)
$$
where
$$
\tilde{A} =
\left \| \begin{array}{cc}
\tilde{A}_{11}& \tilde{A}_{12}\\
\tilde{A}_{21}& \tilde{A}_{22}
\end{array} \right \|.
$$
Here $\tilde{\mu}^*(t) = \tilde{\mu}(t)$, as is immediately seen.
Integrating (1.6) numerically, we transfer condition (1.5)
over $[a,t_1]$, $t_1 \leq b$:
$$
\tilde{y}_1(t) = \tilde{\mu}(t) \tilde{y}_2(t), \ \ \ a \leq t
\leq t_1; \eqno(1.7)
$$
in other words, the linear manifold of the solutions to (1.2) satisfying
(1.5) is moved from $a$ to $t_1$ and is determined at a point $t$ by
(1.7).
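A minimal sketch of this transfer step, assuming a user-supplied routine \verb|A_blocks(t)| that returns the blocks of $\tilde{A}$ (the helper name and the classical RK4 integrator are illustrative choices, not prescriptions of the paper):

```python
import numpy as np

def transfer_riccati(A_blocks, mu_start, a, t1, n_steps=5000):
    """Integrate the matrix Riccati equation (1.6),
        mu' = mu A11 mu + mu A12 + A21 mu + A22,   mu(a) = mu_start,
    by classical RK4, transferring the condition y1 = mu y2 from a to t1.
    A_blocks(t) must return the blocks (A11, A12, A21, A22) of A-tilde."""
    h = (t1 - a) / n_steps
    mu = np.array(mu_start, dtype=float)

    def f(t, m):
        A11, A12, A21, A22 = A_blocks(t)
        return m @ A11 @ m + m @ A12 + A21 @ m + A22

    t = a
    for _ in range(n_steps):
        k1 = f(t, mu)
        k2 = f(t + h / 2, mu + h / 2 * k1)
        k3 = f(t + h / 2, mu + h / 2 * k2)
        k4 = f(t + h, mu + h * k3)
        mu = mu + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return mu
```

In the scalar example $\tilde{A}_{11} = 1$, $\tilde{A}_{12} = \tilde{A}_{21} = \tilde{A}_{22} = 0$ the equation becomes $\tilde{\mu}' = \tilde{\mu}^2$, whose solution $\tilde{\mu}(t) = 1/(1-t)$ for $\tilde{\mu}(0) = 1$ blows up at $t = 1$; this illustrates why the rotations discussed next are needed.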
The solution $\tilde{\mu}(t)$ of (1.6) does not necessarily
exist on the whole of $[a,b]$, as is well known for Riccati equations.
Therefore, having calculated $\tilde{\mu}(t)$ in an interval $[a,t_1]$
where it is continuous, we choose $\tilde{\tilde{\theta}}$ instead of
$\tilde{\theta}$ and make another change of variables similar to (1.1),
\ie, introduce a new function
$$
\tilde{\tilde{\mu}} (t) =
\frac
{\tilde{\mu}(t) \cos(\tilde{\theta}-\tilde{\tilde{\theta}}) -
I\sin(\tilde{\theta}-\tilde{\tilde{\theta}})}
{\tilde{\mu}(t) \sin(\tilde{\theta}-\tilde{\tilde{\theta}}) +
I\cos(\tilde{\theta}-\tilde{\tilde{\theta}})} \eqno(1.8)
$$
(here $\tilde{\tilde{\theta}}$ is chosen so that the corresponding matrix
is invertible). This rotation gives us the Cauchy problem in $[t_1,t_2]$
similar to (1.6); relation (1.7) is still valid under appropriate
notations. Further we transfer the boundary condition over
$[t_2,t_3]$ in the same way, etc.
It proves to be possible to choose the values of $\tilde{\theta}$,
$\tilde{\tilde{\theta}}, \ldots$ so that the lengths of $[a,t_1]$,
$[t_1, t_2], \ldots$ are bounded from below by a positive constant.
Therefore, a finite number of the changes
of type (1.1) permits us to reach the point $b$, that is,
to transfer condition (2{\em a}) over the whole interval $[a,b]$ for system (1).
How often we have to stop and apply formula (1.8) when integrating
equations of kind (1.6) numerically depends upon the particular matrix $A$.
Of course, if our problem includes some large data, the
number of points $t_i, \ i=1,2,\ldots$, can become relatively great.
As was remarked in the beginning of this subsection,
we do not consider any special situations and assume the number of
such points to be reasonable for the given matrix $A$.
An approach allowing one to reduce this number
for a particular case of problem (1),~(2) can be found in \cite{Asl-95}.
A Hamiltonian system containing large values of a spectral parameter was
considered there.
Let us show how to choose the next value of $\theta$; we shall consider
passing from $\tilde{\theta}$ to $\tilde{\tilde{\theta}}$
at $t_1$ as an example. Denote $\varphi = \tilde{\theta} -
\tilde{\tilde{\theta}}$. Our purpose is to make the norm of
$\tilde{\tilde{\mu}}(t_1)$ as small as possible (in the following estimates
the norm of a Hermitian matrix means
its spectral radius). Let
$\tilde{\Lambda}_1, \ldots, \tilde{\Lambda}_n$ be the EVs of
$\tilde{\mu}(t_1)$; we represent them as $\tilde{\Lambda}_k =
-\cot T_k$, $k=1,2,\ldots,n$. Then, according to (1.8), the EVs of
$\tilde{\tilde{\mu}}(t_1)$ are given by $\tilde{\tilde{\Lambda}}_k =
\cot (\varphi - T_k)$, $k=1,2,\ldots,n$. This implies that the
optimal value of $\varphi$ should provide the maximal distance to the
nearest $T_k$ (with regard to $\pi$-periodicity). Consider two values
$T_k'$ and $T_k''$ which are adjacent on the real line (modulo the
periodicity) and whose distance is maximal
among all the neighbouring pairs. This distance is not less than
$\pi/n$. If $\varphi = (T_k' + T_k'')/2$, then
$\displaystyle{\min_{k} |\varphi - T_k| \geq \pi/(2n)}$. For this value of
$\varphi$ we obtain $\displaystyle{\max_{k}
|\tilde{\tilde{\Lambda}}_k|
\leq \cot[\pi/(2n)] < 2n/\pi}$. Thus, it is always possible to choose
$\tilde{\tilde{\theta}}$ so that
$\|\tilde{\tilde{\mu}}(t_1)\|
< 2n/\pi$ regardless of the norm of $\tilde{\mu}(t_1)$. Considering the Cauchy
problem for $\tilde{\tilde{\mu}}(t)$ and recalling a common technique
for proving the existence of the solution, we can immediately see that since
$\|\tilde{\tilde{\mu}}(t_1)\|$ is bounded by a fixed constant, the
solution can be extended to an interval of the length bounded
from below.
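The rule above for choosing the new angle is constructive and can be sketched as follows (an illustrative NumPy fragment under our own naming; the paper does not prescribe an implementation): the eigenvalues of the Hermitian matrix $\tilde{\mu}(t_1)$ are written as $-\cot T_k$, and $\varphi$ is placed at the midpoint of the largest gap between the $T_k$ modulo $\pi$.

```python
import numpy as np

def best_rotation(mu_t1):
    """Choose phi = theta~ - theta~~ as in the text: write the eigenvalues of
    the Hermitian matrix mu(t1) as Lam_k = -cot(T_k), T_k in (0, pi), and put
    phi at the midpoint of the largest gap between the T_k (pi-periodically).
    Then max_k |cot(phi - T_k)| <= cot(pi/(2n)) < 2n/pi."""
    lam = np.linalg.eigvalsh(mu_t1)
    T = np.sort(np.pi / 2 + np.arctan(lam))      # solves -cot(T) = lam, T in (0, pi)
    T_ext = np.concatenate([T, [T[0] + np.pi]])  # append wrap-around gap
    i = int(np.argmax(np.diff(T_ext)))
    phi = (T_ext[i] + T_ext[i + 1]) / 2          # midpoint of the largest gap
    new_eigs = 1.0 / np.tan(phi - T)             # eigenvalues after rotation (1.8)
    return phi, new_eigs
```

By the estimate proved in the text, the rotated matrix then has spectral radius below $2n/\pi$ whatever the size of $\tilde{\mu}(t_1)$.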
This version of the transfer method can be applied to solving
BVP (1),~(2) in a standard way once an appropriate EV is found.
Both boundary conditions (2) should be transferred to a chosen point
$t_0 \in [a,b]$. As a result we obtain a homogeneous system of linear algebraic
equations (SLAE) for $y_1(t_0)$ and $y_2(t_0)$:
$$
\begin{array}{l}
\left[\mu_l(t_0)\sin\theta_l + I\cos\theta_l\right] y_1(t_0) -
\left[\mu_l(t_0)\cos\theta_l - I\sin\theta_l\right] y_2(t_0) = 0, \\
\left[\mu_r(t_0)\sin\theta_r + I\cos\theta_r\right] y_1(t_0) -
\left[\mu_r(t_0)\cos\theta_r - I\sin\theta_r\right] y_2(t_0) = 0.
\end{array}
\eqno(1.9)
$$
Here $\theta_l$ and $\mu_l$ refer to the left boundary condition,
and $\theta_r$ and $\mu_r$ to the right one, respectively.
It is clear that an EV $\lambda$ has to be selected for system
(1.9) to be non-trivially consistent.
If we take $\theta_l$ and $\theta_r$ to be equal (which can always be
done), this condition becomes equivalent to
$$
{\rm det}\left[\mu_l(t_0) - \mu_r(t_0)\right] = 0.
\eqno(1.10)
$$
The multiplicity of the zero EV of the Hermitian matrix
$\mu_l(t_0) - \mu_r(t_0)$ is equal to the multiplicity of the
chosen $\lambda$ as an EV of the original problem.
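Numerically, condition (1.10) and the multiplicity statement can be checked through the spectrum of the Hermitian difference; a minimal sketch (the name \verb|ev_multiplicity| and the zero tolerance are our own illustrative choices):

```python
import numpy as np

def ev_multiplicity(mu_l_t0, mu_r_t0, tol=1e-8):
    """A trial lambda is an EV iff det[mu_l(t0) - mu_r(t0)] = 0, i.e. iff the
    Hermitian matrix mu_l(t0) - mu_r(t0) has a zero eigenvalue; the number of
    (numerically) zero eigenvalues equals the multiplicity of lambda."""
    w = np.linalg.eigvalsh(mu_l_t0 - mu_r_t0)
    return int(np.sum(np.abs(w) < tol))
```

For example, with the diagonal data ${\rm diag}(1,2,3)$ and ${\rm diag}(1,0,3)$ the difference is ${\rm diag}(0,2,0)$, a double zero, so the trial value would count as an EV of multiplicity two.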
Below in this section (see Subsection~1.3)
we shall discuss how to calculate
all the values of $\lambda \in [\Lambda_1,
\Lambda_2]$ satisfying (1.10). Before passing to the main problem of
finding the EVs we outline briefly a possible method for calculating
EFs. After a required EV has been determined to a specified
accuracy, the corresponding EF can be computed, \eg, with the use of
the so-called reverse transfer. As was
proposed above, we choose a point $t_0 \in [a,b]$ and transfer
both boundary conditions to it. Having solved (1.9), we obtain
the values
of $y_1(t_0)$ and $y_2(t_0)$. Then the reverse transfer equation for each
of the intervals where $\theta$ is constant (and equal, for instance,
$\tilde{\theta}$) is the following:
$$
\tilde{y}_2^\prime = - (\tilde{A}_{11} \tilde{\mu} + \tilde{A}_{12})
\tilde{y}_2; \eqno(1.11)
$$
besides,
$$
\tilde{y}_1 = \tilde{\mu} \tilde{y}_2.
\eqno(1.12)
$$
Starting from the point $t_0$ where the value of $\tilde{y}_2$
is already known, we integrate the Cauchy problem for (1.11) (obviously, the values of $\tilde{\mu}$ should be stored on a sufficiently fine mesh for this purpose); the values of $\tilde{y}_1$
are computed from relation (1.12). It is clear how to pass to the new variables when $\theta$ changes.
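A sketch of the reverse transfer on one interval of constant $\theta$; this is illustrative only (\verb|A_blocks| and \verb|mu_of_t| are assumed user-supplied helpers, and a first-order scheme is used purely for brevity):

```python
import numpy as np

def reverse_transfer(A_blocks, mu_of_t, y2_t0, t0, a, n_steps=20000):
    """Integrate (1.11), y2' = -(A11 mu + A12) y2, backwards from t0 to a
    (theta constant on [a, t0]); y1 is then recovered from (1.12), y1 = mu y2.
    mu_of_t(t) returns the stored Riccati solution; explicit Euler for brevity."""
    h = (a - t0) / n_steps          # negative step: we move from t0 down to a
    t, y2 = t0, np.array(y2_t0, dtype=float)
    for _ in range(n_steps):
        A11, A12, _, _ = A_blocks(t)
        y2 = y2 + h * (-(A11 @ mu_of_t(t) + A12) @ y2)
        t += h
    return mu_of_t(a) @ y2, y2      # (y1(a), y2(a))
```

In practice a higher-order scheme, matching the one used for the forward Riccati integration, would be used in place of Euler steps.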
How stable numerically is the stated method expected to be? On each interval
where $\theta$ does not change the classical transfer is used. Formula
(1.8) does not imply any essential calculating errors if we choose a new
value of $\theta$ as was proposed. The change of variables (1.1), \ie,
the passage from (1),~(2) to (1.2),~(1.3) is also numerically stable since the matrix $\tilde{M}$
is well-conditioned. Thus, we can conclude that in the
general situation the method is as stable as the
classical transfer, provided
that the solutions to the auxiliary Riccati equations
do not grow too abruptly.
\subsection{Oscillation theorems}
In our argument the matrix $A_{22}$ is assumed to be positive definite
for all $t$ and $\lambda$. This assumption
has been essentially used when studying oscillation properties
of problem (1),~(2) (see, \eg, \cite{1}, \cite{Lid}, \cite{Atk}). We shall start
our consideration from systems of type (1)
for which $A_{22} > 0$ to obtain the basic results.
In the next section we shall
extend our technique to the case when $A_{22} \geq 0$, which
covers important classes of EVPs (the examples will be given below).
Some of the following considerations are close to those made in
\cite{Lid}, \cite{Atk}.
For the time being we shall suppose that $\psi_a$ and $\psi_b$
do not depend on $\lambda$; the changes arising when this assumption
does not hold are considered in Subsection~1.4.
In our discussion we shall deal with the concept of a
conjugate point (which differs from that defined in \cite{Atk}). Consider system
(1) and conditions (2) for a fixed value of $\lambda$.
{\bf Definition.} A point $t_* \in [a,b]$ is called a left
conjugate point of problem (1),~(2) if system (1) considered on $[a,t_*]$
has a solution $y \not\equiv 0$ subject to boundary conditions (2{\em a}) and
$y_1(t_*) = 0$. The number of such linearly independent solutions is
called the multiplicity of the left conjugate point.
Right conjugate points are defined in the same way with obvious changes
(the left boundary condition is replaced by the right one in the
above definition).
For the sake of brevity we shall formulate our main results for left
conjugate points (LCPs) bearing in mind that right conjugate points
(RCPs) have similar properties.
The properties of conjugate points which follow from the assumed monotonicity
of the matrix $A$ in the spectral parameter
allow us to determine the number of conjugate
points in $[a,b]$ without locating the points themselves. This, in turn, enables
us to calculate the total number of EVs lying in $[\Lambda_1, \Lambda_2]$
using the oscillation theorems to be proved below.
It follows from (1.1) and (1.7) that an LCP $t_*$ satisfies
$$
{\rm det} (\tilde{\mu}(t_*)\cos\tilde{\theta} -
I\sin\tilde{\theta}) = 0, \eqno(1.13)
$$
where $\tilde{\mu}$ and $\tilde{\theta}$ correspond to transferring
condition (2{\em a}) to $t_*$; here the dimension of
the kernel of the Hermitian matrix $\tilde{\mu}(t_*)\cos\tilde{\theta} -
I\sin\tilde{\theta}$ equals the multiplicity of this LCP. Of course,
the property of any point to be an LCP does not depend on the choice
of $\tilde{\theta}$, $\tilde{\tilde{\theta}}, \ldots$. As follows from
(1.13), $\cos\tilde{\theta} \not= 0$ for LCPs. Denote
$\sigma(t) = (\tilde{\mu}(t)\cos\tilde{\theta} - I\sin\tilde{\theta}) \cos\tilde{\theta}$.
\begin{theorem}
When crossing an LCP from left to right the signature of the matrix
$\sigma(t)$, i.e., the
difference between the positive and negative inertial indices of the
Hermitian form corresponding to $\sigma$,
increases by a number equal to twice the multiplicity
of this LCP.
\end{theorem}
{\bf Proof.}
Let $t_*$ be an LCP of multiplicity $l$, and let $H_0$ denote the kernel of $\sigma(t_*)$. Denote by $H(t)$
the subspace invariant under $\sigma(t)$ which depends continuously on $t$
($t$ ranging over a small neighbourhood of $t_*$) and satisfies
$H(t_*) = H_0$.
Because $\sigma(t)$ is continuously differentiable, it is known from the
perturbation theory (see, \eg, \cite{Lanc})
that $H(t)$ is also continuously differentiable,
and if one chooses in $H(t)$ an orthogonal and normalised (with respect
to the scalar product $(u,v) = v^*u$) basis $\varphi_1(t), \ldots,
\varphi_l(t)$ continuously differentiable with respect to $t$, then the
restriction of $\sigma(t)$ to $H(t)$ is specified in this basis by the
matrix $\Gamma(t)$, given for small $(t - t_*)$ by a formula
$$
\Gamma(t) = \Gamma(t_*) + (t - t_*)\Phi^*(t_*)\sigma'(t_*)\Phi(t_*) + o(t - t_*),
$$
where $\Phi = \| \varphi_1, \ldots, \varphi_l\|$. For these $t$ using
the relation $\sigma(t_*)\Phi(t_*) = 0$ and system (1.2), we easily obtain
$$
\Gamma(t) = (t - t_*)\Phi^*(t_*)A_{22}(t_*)\Phi(t_*) + o(t - t_*).
$$
However $A_{22} > 0$ by assumption, so that for $t$ close to $t_*$
we have $\Gamma(t) > 0$ for
$t > t_*$ and $\Gamma(t) < 0$ for $t < t_*$.
Hence when passing through the point $t_*$ the EVs of
$\sigma(t)$ which vanish at $t_*$ change their sign from minus
to plus. This is equivalent to the assertion of the theorem.
Theorem~1 enables one to compute the sum $k$ of the multiplicities of
LCPs lying in the interval where $\theta$ is constant
(say, $[t_1,t_2]$, where $t_1$ and $t_2$ are not LCPs) without
locating the points themselves. This
sum is given by $2k = s(\sigma(t_2)) - s(\sigma(t_1))$,
where $s(\sigma)$ denotes the signature of $\sigma$.
Thus, we only need to
calculate these inertial indices at the selected points to find the
total number of LCPs lying in any subinterval of $[a,b]$
(note that this number is finite).
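The counting rule above translates directly into code; a minimal sketch with illustrative names (the tolerance separating "zero" eigenvalues is our own choice):

```python
import numpy as np

def signature(sigma, tol=1e-10):
    """Signature of a Hermitian matrix: the number of positive minus the
    number of negative eigenvalues (the inertial indices of Theorem 1)."""
    w = np.linalg.eigvalsh(sigma)
    return int(np.sum(w > tol)) - int(np.sum(w < -tol))

def lcp_total(sigma_t1, sigma_t2):
    """Total multiplicity k of LCPs inside [t1, t2] (theta constant there,
    the endpoints not LCPs), from 2k = s(sigma(t2)) - s(sigma(t1))."""
    return (signature(sigma_t2) - signature(sigma_t1)) // 2
```

Thus only eigenvalue computations at the endpoints of each interval of constant $\theta$ are required; no root-finding for the conjugate points themselves is involved.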
Let us fix a value of $\lambda$ not equal to any EV of (1),~(2). We shall
take a point $t$ which is neither an LCP nor an RCP for
the chosen $\lambda$ and transfer both conditions (2) to this
point. Denote by $k_l$ and $k_r$ the total numbers
(counted with their multiplicities) of the LCPs
and RCPs lying in $(a,t)$ and $(t,b)$, respectively. Since $t$ is
not an LCP, condition (2{\em a}) transferred to the point $t$ can be
written as
$$
y_2(t) = R_l(t) y_1(t),
$$
where the Hermitian matrix
$$
R_l(t) =
\frac{\mu_l(t)\sin\theta_l + I\cos\theta_l}
{\mu_l(t)\cos\theta_l - I\sin\theta_l}
$$
does not depend upon $\theta_l$.
Similarly, condition (2{\em b}) transferred to $t$ becomes
$$
y_2(t) = R_r(t) y_1(t),
$$
where
$$
R_r(t) =
\frac{\mu_r(t)\sin\theta_r + I\cos\theta_r}
{\mu_r(t)\cos\theta_r - I\sin\theta_r}
$$
does not depend upon $\theta_r$ either. Denote by $k_0$ the positive
inertial index of the Hermitian form corresponding to
$$
R(t) = R_r(t) - R_l(t).
$$
Let us calculate the integer $k_l + k_r + k_0$ which will play a
significant role in what follows.
\begin{theorem}
The value of $k_l + k_r + k_0$ does not depend on the choice of $t$.
\end{theorem}
{\bf Proof.} When $t$ ranges over an interval containing neither LCPs
nor RCPs, $k_0$ does not change, because otherwise ${\rm det} R(t) = 0$
would hold for some $t$ and hence the SLAE
$$
\begin{array}{l}
R_r(t) y_1(t) - y_2(t) = 0, \\
R_l(t) y_1(t) - y_2(t) = 0
\end{array}
$$
would be non-trivially consistent and so the chosen $\lambda$ would be
an EV of the original problem, which has been excluded by assumption.
Obviously, $k_l$ and $k_r$ also remain the same in this case.
We shall consider how $k_l$, $k_r$ and $k_0$ change when $t$ jumps
across a conjugate point. Let $t_*$ be an LCP of multiplicity $p_l$
and an RCP of multiplicity $p_r$, where $p_l \geq 0$, $p_r \geq 0$, $p_l
+ p_r > 0$. We choose $\theta_l = \theta_r = \theta$. As has already been
stated, $\cos\theta \not= 0$. For $t \not= t_*$ we reduce $R(t)$
to a form
$$
R(t) = I \tan\theta +
\frac{I}{(\mu_r(t)\cos\theta - I\sin\theta) \cos\theta}
$$
$$
- \left(
I \tan\theta + \frac{I}{(\mu_l(t)\cos\theta - I\sin\theta)\cos\theta}
\right) = \sigma_r^{-1}(t) - \sigma_l^{-1}(t).
$$
We have $R = \sigma_r^{-1}(\sigma_l - \sigma_r) \sigma_l^{-1}$; since
${\rm det}(\sigma_l - \sigma_r) \not= 0$ we obtain
$$
R^{-1} = \sigma_l(\sigma_l - \sigma_r)^{-1}\sigma_r.
$$
Hence the Hermitian matrix function $R^{-1}(t)$ is continuously
differentiable in a neighbourhood of $t_*$ (including this point).
Let us clarify the behaviour of the EVs of $R^{-1}(t)$ that vanish at
$t=t_*$. The restriction of $R^{-1}(t)$ to the subspace generated by all the
eigenvectors corresponding to these EVs of $R^{-1}(t)$ (for details see the
proof of Theorem~1) has a form $(t-t_*)S + o(t-t_*)$, where $S$ can
be obtained as follows. Consider $[R^{-1}(t)/(t-t_*)]^{-1}$; it is clear
that as $t \rightarrow t_*$ we obtain an operator $\tilde{S}$ which is
identical to $S^{-1}$ on the kernel $H_0$ of the operator $R^{-1}(t_*)$ and
annihilates the orthogonal complement to $H_0$. Thus,
$$
\tilde{S} = \lim_{t \rightarrow t_*} \frac{t-t_*}{\sigma_r(t)} -
\lim_{t \rightarrow t_*} \frac{t-t_*}{\sigma_l(t)} = \tilde{S}_r -
\tilde{S}_l,
$$
where $\tilde{S}_r$ and $\tilde{S}_l$ are the operators corresponding to
$\sigma_r(t_*)$ and $\sigma_l(t_*)$,
$\tilde{S}_r$ annihilates the orthogonal complement to the kernel $H_r$ of
$\sigma_r(t_*)$ and
$\tilde{S}_l$ annihilates the orthogonal complement to the kernel $H_l$ of
$\sigma_l(t_*)$.
As was shown when proving Theorem~1, $\tilde{S}_r^{-1}$ and
$\tilde{S}_l^{-1}$ restricted to $H_r$ and $H_l$, respectively, are positive
definite; hence $\tilde{S}_r$ and $\tilde{S}_l$ are also positive definite
in the same subspaces. As is easily seen, $H_r$ and $H_l$ have a trivial
intersection. Therefore, $H_0 = H_r + H_l$ and ${\rm dim} H_0 = p_r + p_l$.
The Hermitian matrix $\tilde{S}$ can be represented as
$$
\tilde{S} = \sum_{i=1}^{p_r} \rho_i X_i X_i^* - \sum_{j=1}^{p_l} \pi_j Y_j
Y_j^*,
$$
where $X_1, \ldots, X_{p_r}$ are the coordinate columns of the orthogonal and
normalised eigenvectors of $\tilde{S}_r$ in $H_r$, and
$Y_1, \ldots, Y_{p_l}$ are those of $\tilde{S}_l$ in $H_l$; $\rho_i > 0$,
$\pi_j > 0$. Hence $\tilde{S}$ can be represented as
$$
\tilde{S} = Q \left \| \begin{array}{cccccc}
\rho_1 & & & & & \\
& \ddots & & & & \\
& & \rho_{p_r} & & & \\
& & & -\pi_1 & & \\
& & & & \ddots & \\
& & & & & -\pi_{p_l}
\end{array} \right \| Q^*,
$$
where $Q = \|X_1, \ldots, X_{p_r}, Y_1, \ldots, Y_{p_l}\|$ and ${\rm rank} Q
= p_r + p_l$. From this, using the inertia law for Hermitian forms, we
conclude that among the EVs of $\tilde{S}$ there are precisely $p_r$
positive and $p_l$ negative.
Thus, when $t$ jumps from left to right across $t_*$ exactly $p_r - p_l$ EVs
of the matrix $R(t)$ change from positive to negative. But when it happens
we should stop regarding $t_*$ as an RCP and start regarding it as an LCP.
Therefore, the sum $k_l + k_r + k_0$ remains the same.
Denote $N(\lambda) = k_l + k_r + k_0$; the function $N(\lambda)$ is defined
for all $\lambda$ that are not EVs of the original problem. We shall study
the dependence of this function on $\lambda$.
\begin{theorem}
Let an interval $[\lambda', \lambda^{\prime\prime}]$ contain no EVs of
(1),~(2). Then $N(\lambda)$ is constant in this interval.
\end{theorem}
{\bf Proof.} Since $A$ is continuous in $t, \lambda$, the solutions to
all the
auxiliary Cauchy problems are continuous in $\lambda$; for a sufficiently
small change in $\lambda$ we do not need to change the chosen
$\tilde{\theta}$, $\tilde{\tilde{\theta}}, \ldots$, and the inertia
indices of the matrices involved in the
calculation of $k_l, k_r$ and $k_0$ do not change either in this case,
which proves the theorem.
\begin{theorem}
Let $\lambda_*$ be an EV of (1),~(2) having multiplicity $k$.
Then
$$
N(\lambda_*+0) - N(\lambda_* - 0) = k.
$$
\end{theorem}
{\bf Proof.} Take any point $\hat{t}$ which is neither an LCP nor an RCP for
a given $\lambda_*$. If $\lambda'$ is sufficiently close to $\lambda_*$ then
for $\lambda' < \lambda_*$ one can use the same $\tilde{\theta},
\tilde{\tilde{\theta}}, \ldots$. From the comparison theorem (see \cite{Roy}),
when moving from $a$ to $\hat{t}$ along the interval where $\theta$ is
constant, we obtain
$\tilde{\mu}(t,\lambda_*) \geq \tilde{\mu}(t,\lambda')$
because of the assumed monotonicity of $A$. At the point
$t_1$ where $\theta$ changes, from (1.8) we have
$$
\tilde{\tilde{\mu}}(t_1) = I\cot(\tilde{\theta} - \tilde{\tilde{\theta}})
- [\tilde{\mu}(t_1) \sin^2(\tilde{\theta} - \tilde{\tilde{\theta}}) +
I \sin(\tilde{\theta} - \tilde{\tilde{\theta}})
\cos(\tilde{\theta} - \tilde{\tilde{\theta}})]^{-1}
\eqno(1.14)
$$
and, since $\tilde{\mu}(t_1,\lambda)$ is continuous in
$\lambda$, for small $\lambda_* - \lambda'$ we obtain
$$
\tilde{\tilde{\mu}}(t_1,\lambda_*) \geq \tilde{\tilde{\mu}}(t_1,\lambda').
$$
A similar argument works for $[t_1,t_2]$, $[t_2,t_3]$, etc.
Finally we have
$$
\mu_l(\hat{t},\lambda_*) \geq \mu_l(\hat{t},\lambda').
$$
Similarly,
$$
\mu_r(\hat{t},\lambda_*) \leq \mu_r(\hat{t},\lambda')
$$
(here $t$ changes from right to left). Thus, we obtain
$$
\mu_l(\hat{t},\lambda_*) - \mu_r(\hat{t},\lambda_*) \geq
\mu_l(\hat{t},\lambda') - \mu_r(\hat{t},\lambda').
\eqno(1.15)
$$
According to our assumptions, the kernel of the
matrix $\mu_l(\hat{t},\lambda_*)
- \mu_r(\hat{t},\lambda_*)$ has dimension $k$. As
$\lambda_*$ is an isolated EV, $\lambda'$ is not an EV, \ie, zero is not an
EV of the matrix $\mu_l(\hat{t},\lambda') - \mu_r(\hat{t},\lambda')$.
The continuity of $\mu_l(\hat{t},\lambda)$ in $\lambda$ and relation
(1.15) imply that the matrices $\mu_l(\hat{t},\lambda_*) - \mu_r(\hat{t},\lambda_*)$
and $\mu_l(\hat{t},\lambda') - \mu_r(\hat{t},\lambda')$ have the same number
of positive EVs, while the latter has $k$ more negative EVs than the former.
For $\lambda^{\prime\prime} > \lambda_*$ with $\lambda^{\prime\prime}$ close to $\lambda_*$ we similarly find that
the matrices $\mu_l(\hat{t},\lambda_*) - \mu_r(\hat{t},\lambda_*)$
and $\mu_l(\hat{t},\lambda^{\prime\prime}) - \mu_r(\hat{t},\lambda^{\prime\prime})$ have the same number
of negative EVs, and the latter has $k$ more positive EVs than the former.
From this we get $k_0(\lambda^{\prime\prime}) - k_0(\lambda') = k$. Because $k_l$ and $k_r$ obviously do not change when passing from $\lambda'$ to $\lambda^{\prime\prime}$, we finally obtain
$$
[k_l(\lambda^{\prime\prime}) + k_r(\lambda^{\prime\prime}) + k_0(\lambda^{\prime\prime})] -
[k_l(\lambda') + k_r(\lambda') + k_0(\lambda')] = k,
$$
as was required.
The following claim follows immediately from Theorems 3 and 4.
\begin{theorem}
Suppose that $\lambda_1$ and $\lambda_2$, $\lambda_1 < \lambda_2$, are not EVs of (1),~(2). Then
$N(\lambda_2) - N(\lambda_1)$ equals the total number
of all the EVs of (1),~(2) lying in
$(\lambda_1, \lambda_2)$ counting for their multiplicities.
\end{theorem}
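Theorem 5 turns EV localization into evaluations of the integer-valued function $N(\lambda)$, so the EVs in $[\Lambda_1, \Lambda_2]$ can be bracketed by bisection. The sketch below assumes a user-supplied counting function \verb|N| and is purely illustrative:

```python
def localize_evs(N, lam1, lam2, tol=1e-6):
    """Bisection on the counting function N (Theorem 5): N(lam2) - N(lam1)
    EVs lie between lam1 and lam2; halve every bracket that contains EVs
    until its width is at most tol.  Returns a list of (lo, hi, count)."""
    out = []

    def split(lo, hi):
        c = N(hi) - N(lo)
        if c == 0:
            return                  # no EVs in this bracket; discard it
        if hi - lo <= tol:
            out.append((lo, hi, c)) # c EVs isolated in a bracket of width tol
            return
        mid = (lo + hi) / 2
        split(lo, mid)
        split(mid, hi)

    split(lam1, lam2)
    return out
```

Since the counts telescope, the returned brackets together account for exactly $N(\lambda_2) - N(\lambda_1)$ EVs, in accordance with Theorem 5.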
{\bf Remark.} As was assumed before (see Subsection~1.1),
we only dealt with isolated EVs of the original problem. It is clear
that given the weak assumption that was made ($A$ does not decrease in
$\lambda$), it is possible, for example, that the whole range of $\lambda$
consists of EVs. We do not examine how to adapt our considerations to such
cases. For simplicity, we assume that all the EVs of (1),~(2) are isolated. The following rather rough condition is sufficient for this:
$A$ increases in $\lambda$; see \cite{Atk} in this regard.
\subsection{Boundary conditions depending on the spectral parameter}
We shall briefly mention the changes which appear in the statements
of Theorems~1--5 if we allow boundary
conditions (2) to depend on $\lambda$. Let us specify
this dependence.
The Hermitian matrix $\tilde{\mu}_a(\lambda)$ defined by (1.5) is continuous and does not decrease in $\lambda$ as $\lambda$ ranges in an interval where ${\rm det} [\psi_{1a}(\lambda)\cos\tilde{\theta} + \psi_{2a}(\lambda)\sin\tilde{\theta}]$ does not vanish for the chosen $\tilde{\theta}$. This property does not depend on the choice of $\tilde{\theta}$, which is easily proved using the formula
obtained from (1.14) by replacing $t_1$ with $a$.
The Hermitian matrix $\tilde{\mu}_b(\lambda)$ defined for $t=b$ by a formula
similar to (1.5) is continuous
and does not increase in $\lambda$ as $\lambda$ ranges in such an interval that
${\rm det} [\psi_{1b}(\lambda)\cos\tilde{\theta} + \psi_{2b}(\lambda)\sin\tilde{\theta}]$ does not vanish for the chosen $\tilde{\theta}$. We keep the remaining assumptions of Subsection~1.3.
We shall only describe the final results; the arguments are very similar to those of Subsection~1.3 (see also \cite{Lid}, \cite{Atk}).
As $\lambda$ increases, LCPs can only move to the left. Thus, LCPs can leave the interval $(a,b)$ through the point $a$ but cannot enter it.
The total number of LCPs leaving the interval $(a,b)$ to the left
as $\lambda$ changes from $\lambda'$ to $\lambda^{\prime\prime}$
($\lambda' < \lambda^{\prime\prime}$) can be computed as follows. For
a constant $\tilde{\theta}$ satisfying the condition ${\rm det}
[\psi_{1a}(\lambda)\cos\tilde{\theta} +
\psi_{2a}(\lambda)\sin\tilde{\theta}] \not= 0$, the total number
of LCPs leaving $(a,b)$ to the left is equal to the difference of
the negative inertia indices of the Hermitian forms corresponding to
$\sigma(a, \lambda')$ and $\sigma(a,\lambda^{\prime\prime})$. If the
range of $\lambda$ requires $\tilde{\theta}$ to be changed several times,
one should sum the results obtained for the different values of $\tilde{\theta}$. The
number of RCPs leaving the interval to the right is computed in a
similar way.
Compared to Subsection~1.3, the general meaning of this subsection is
as follows. When determining the number of LCPs for $\lambda'$ and
$\lambda^{\prime\prime}$ (counted with their multiplicities)
contained in the interval $(a,\hat{t})$, one should also include
the LCPs that have left this interval through its left end. RCPs are counted in a
similar way.
\subsection{Implementing the method and comparing it to others}
We shall show how to apply the results of Subsections~1.2--1.4 to problem
(1),~(2). It is required to calculate all the EVs lying in the given interval
$[\Lambda_1, \Lambda_2]$ and the corresponding EFs. Below, when
describing any numerical procedure for a specified $\lambda$, we
shall assume that this $\lambda$ is not an EV; if we hit an EV by
chance the situation is only simplified. For $\lambda = \Lambda_1$,
transferring both boundary conditions to a chosen point
that is neither an LCP nor an RCP, we compute $N(\Lambda_1)$.
Implementing the transfer method, we pass to a new value of $\theta$
(see (1.8)) as was proposed in Subsection~1.2 as soon as any
elements of the matrix $\mu$ become large in absolute value (significantly
larger than $2n/\pi$). Then we complete the same procedure for $\lambda = \Lambda_2$.
If the boundary conditions depend on $\lambda$, then, changing
$\lambda$ in several stages, we determine $k'$, \ie, the number of
LCPs and RCPs that have left the interval $(a,b)$ through its ends.
The integer $N(\Lambda_2) - N(\Lambda_1) + k'$ equals the total
number of the EVs lying in $(\Lambda_1, \Lambda_2)$. If this
value is positive we divide $(\Lambda_1, \Lambda_2)$ into parts and
find the total number of the EVs in each part, etc. After
locating a simple EV we refine it using, for instance, the method of
chords to solve equation~(1.10). If an EV is multiple, then by
successive divisions of a sequence of intervals we finally obtain a
small interval containing this EV. We also know its multiplicity,
which is of practical importance for obtaining the rank of SLAE (1.9).
Having determined an EV, we compute the corresponding EF as was described in Subsection~1.2.
It should be noted that having determined $N(\Lambda_1)$, $N(\Lambda_2)$
and $k'$ we can choose any integer $q$ such that $N(\Lambda_2) + k' \geq q \geq N(\Lambda_1) + 1$ and try to find the EV $\lambda_*$ for which
$N(\lambda_*+0) + k^{\prime\prime}(\lambda_*+0) \geq q \geq N(\lambda_*-0) + 1 + k^{\prime\prime}(\lambda_*-0)$, where
$k^{\prime\prime}(\lambda)$ is the number of LCPs and RCPs that have left the interval $(a,b)$ through its ends when passing from
$\Lambda_1$ to $\lambda$. This is an analogue of calculating the EV
with a given number for a scalar Sturm-Liouville problem (see
\cite{A-D-K}).
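The interval-division procedure described above can be sketched in a few lines (a minimal illustration, not the authors' implementation: the hypothetical function `count_below` stands for the quantity $N(\lambda)$, corrected by $k'$ if the boundary conditions depend on $\lambda$, as obtained from the transfer computations).

```python
def locate_evs(count_below, lam1, lam2, tol=1e-10):
    """Successive division of [lam1, lam2]: returns a list of
    ((left, right), multiplicity) pairs bracketing all EVs inside.
    count_below(lam) plays the role of N(lambda) (+ k' when the
    boundary conditions depend on lambda); it is integer-valued
    and non-decreasing."""
    found = []
    stack = [(lam1, lam2, count_below(lam1), count_below(lam2))]
    while stack:
        a, b, na, nb = stack.pop()
        k = nb - na                     # number of EVs in (a, b)
        if k == 0:
            continue
        if b - a < tol:                 # interval small enough:
            found.append(((a, b), k))   # EV located, multiplicity k
            continue
        m = 0.5 * (a + b)
        nm = count_below(m)
        stack.append((a, m, na, nm))
        stack.append((m, b, nm, nb))
    return sorted(found)

# Toy check: a "problem" with a simple EV at 1.1 and a double EV at 2.5.
evs = [1.1, 2.5, 2.5]
count = lambda lam: sum(e < lam for e in evs)
brackets = locate_evs(count, 0.0, 3.0)
```

A simple EV found this way would then be refined by the method of chords applied to equation~(1.10), as described above.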
In implementing most of the procedures described above, it is important to
solve the following algebraic problem efficiently. Given a Hermitian matrix, it
is required to find numerically the inertial indices of the corresponding
Hermitian form. For a possible procedure for solving this problem and
equation~(1.10) see, in particular, \cite{Yuk}.
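As a simple illustration (not the procedure of \cite{Yuk}; here the work is done by an off-the-shelf Hermitian eigenvalue routine), the inertia indices can be obtained from the signs of the eigenvalues:

```python
import numpy as np

def inertia(H, tol=1e-12):
    """Inertia indices (n_neg, n_zero, n_pos) of a Hermitian matrix H:
    the numbers of negative, zero and positive eigenvalues of the
    corresponding Hermitian form."""
    H = np.asarray(H)
    assert np.allclose(H, H.conj().T), "H must be Hermitian"
    w = np.linalg.eigvalsh(H)          # real eigenvalues of H
    return (int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)),
            int(np.sum(w > tol)))

# Example: diag(-2, 0, 3, 5) has inertia (1, 1, 2).
H = np.diag([-2.0, 0.0, 3.0, 5.0])
```

By Sylvester's law of inertia the result is invariant under congruence transformations, which is why cheaper factorisation-based procedures can be used instead of a full eigenvalue computation.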
Comparing the method proposed to already known ones, let us mention the
following. When using the transfer methods of [3--5]
%\cite{Abr2}, \cite{Lid-N}, \cite{K-K-P},
one has to process more values, and finding the number of LCPs and RCPs is
also more complicated. The method of \cite{Lid-N}
is the closest to the one considered here. That method also locates the
EVs by slicing, but it is then necessary to slice
whenever an EV of a unitary matrix involved in the transfer method passes
through 1, which is of course often burdensome when $n>2$.
We emphasise once again that, following the method proposed and
discussed here, we determine the required values from
computations at the endpoints of some range of the independent
variable only; inside this interval there is no need to track the
transition across any critical points.
\section{Properties of conjugate points}
\subsection{Extending the method to a more general case}
If $A_{22} > 0$ then, as was discussed, the number of conjugate
points in $[a,b]$ is finite. When $A_{22}$ is non-negative definite,
the number of conjugate points can be infinite.
Consider, for instance, a problem of kind (1),~(2):
$n=2$, $A_{11}=\lambda I$,
$A_{12}=A_{21}=A_{22}=0$, $\psi_a =
\left \| \begin{array}{cccc}
1& 0& 0& 0\\
0& 1& 0& 1
\end{array} \right \|$,
$\psi_b =
\|0,I\|$. As is easily seen, for all $\lambda$ each point of $[a,b]$
is an LCP, while the spectrum of the problem consists of the
single EV $\lambda=1/(a-b)$. Nevertheless, it was proved in
\cite{Abr-Asl} that the method described in Section~1 can be applied to problem (1),~(2) if the conjugate points do not fill any interval of $[a,b]$, even if there are infinitely many such points.
Let us show, assuming that $A_{22} \geq 0$ on $[a,b]$, how to
reduce (1) to a system of the same form
for which $A_{22}>0$. To system (1) we apply the transformation
$$
y = \exp(\omega J)\hat{y}, \eqno(2.1)
$$
where $\omega(t) = \varepsilon\left \{\exp \left [Q(t-a)\right ]-1\right \}$
is a scalar function, $\varepsilon$ and $Q$ are certain positive numbers, and
$\hat{y}$ is a new unknown function. We obtain a Hamiltonian system
$$
J\hat{y}^\prime = \hat{A}(t)\hat{y}, \eqno(\hat{1})
$$
where
$$
\hat{A} =
\left \| \begin{array}{cc}
\hat{A}_{11}& \hat{A}_{12}\\
\hat{A}_{21}& \hat{A}_{22}
\end{array} \right \|
= \exp(-\omega J)A\exp(\omega J) + \omega^\prime
\left \| \begin{array}{cc}
I& 0\\
0& I
\end{array} \right \| .
$$
For small values of $\varepsilon$ we have
$$
\hat{A}_{22} = A_{22} + D + o(\varepsilon), \eqno(2.2)
$$
where $D = \omega^\prime I - \omega(A_{12}+A_{21})$. By virtue of the choice of the function
$\omega(t)$ we get
$$
D = \varepsilon \exp(Q(t-a)) ( QI- ( 1-
\exp(-Q(t-a)) ) (A_{12}+A_{21}) ).
$$
If we choose a positive $Q$ such that $QI \geq A_{12}+A_{21}$ on $[a,b]$,
then obviously $D>0$ on $[a,b]$ and, according to (2.2), we shall have
$\hat{A}_{22}>0$ on $[a,b]$ for the chosen $Q$ and a sufficiently
small positive $\varepsilon$.
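The choice of $Q$ and the positivity of $D$ are easy to verify numerically. The following sketch (with an arbitrarily chosen constant Hermitian matrix standing for $A_{12}+A_{21}$; all the data are hypothetical) checks that $D(t)>0$ on a grid in $[a,b]$:

```python
import numpy as np

a, b, eps = 0.0, 1.0, 1e-3
S = np.array([[1.0, 2.0],     # stands for A_12 + A_21 (Hermitian),
              [2.0, 3.0]])    # taken constant here for simplicity
n = S.shape[0]
Q = max(np.linalg.eigvalsh(S)) + 0.1   # ensures Q*I >= A_12 + A_21

def D(t):
    """D = omega'*I - omega*(A_12 + A_21), where
    omega(t) = eps*(exp(Q*(t - a)) - 1)."""
    w = eps * (np.exp(Q * (t - a)) - 1.0)
    wp = eps * Q * np.exp(Q * (t - a))
    return wp * np.eye(n) - w * S

# D(t) should be positive definite for every t in [a, b].
all_pd = all(np.linalg.eigvalsh(D(t)).min() > 0
             for t in np.linspace(a, b, 101))
```

Indeed, $D = \varepsilon e^{Q(t-a)}\left[(1-c)QI + c\,(QI - (A_{12}+A_{21}))\right]$ with $c = 1-e^{-Q(t-a)} \in [0,1)$, so the stated choice of $Q$ makes $D$ positive definite throughout $[a,b]$.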
System ($\hat{1}$) is obtained from (1) by a small perturbation if
$\varepsilon$ is small. All the assumptions made for (1),~(2)
in the previous section, including $A_{22} > 0$, are satisfied for ($\hat{1}$)
and the corresponding boundary conditions. The two spectral problems are
obviously equivalent and have the same EVs. Thus, if we follow
the technique described in Section~1, carrying out exactly the same computations
for ($\hat{1}$), we shall obtain for this transformed BVP the number $N$ of
EVs lying in the given interval, which is the same as for BVP (1),~(2). Still
we are not going to apply (2.1) to (1) in practice --- we only use system
($\hat{1}$) to justify application of the method to system (1)
for which possibly $A_{22} \geq 0$. It does not matter whether the problem
to be solved has a finite number of conjugate points or there are infinitely
many of them, since the BVP for ($\hat{1}$) always has a finite
number of such points. The technique mentioned in Section 1 is
implemented by computing the inertia indices of certain
non-degenerate Hermitian matrices. Suppose we have computed them for
(1), (2). Then, since the corresponding matrices for (1), (2) and
($\hat{1}$), (2) are close for a small $\varepsilon$, the result of
computation for (1), (2) gives us the desired answer for ($\hat{1}$), (2).
That is, if we apply the above technique to BVP (1),~(2),
which possibly has infinitely many conjugate points, we shall compute the
(finite) number of conjugate points of system ($\hat{1}$) (not those of (1))
and finally get the required number of EVs. The
only possible difficulty is that we have to change variables according
to (1.8) at several points of $[a,b]$ (not known in advance) when
implementing the method. In order that our argument based on the
closeness of the two BVPs work, we need to make the mentioned changes at
`non-degenerate' points, \ie, at points which are neither
LCPs nor RCPs of (1),~(2). This approach can fail if there are
subintervals of $[a,b]$ completely filled by conjugate points
of the original BVP.
So, the method proposed is applicable to problems of type (1),~(2) for which
$A_{22} \geq 0$, with only one additional restriction: the conjugate points
of the problem must not fill any interval of $[a,b]$ entirely.
The question of when the conjugate points possess this property is considered below.
We start with an example. Consider a self-adjoint ODE system
of order $2p$
$$
\sum_{k=0}^{p} (-1)^k \left( (\varphi_k z^{(k)})^{(k)} +
(\omega_k z^{(k)})^{(k-1)} - (\omega_k^* z^{(k-1)})^{(k)} \right) = 0,
\eqno(2.3)
$$
where $z: [a,b] \rightarrow \C^m$, $\varphi_j = \varphi_j^*:
[a,b] \rightarrow \C^{m \times m}$,
$\omega_j: [a,b] \rightarrow \C^{m \times m}$,
$j=0,1,\ldots,p$, $\varphi_p > 0$; for any function $v(t)$ we
set $v^{(l)}=0$ if $l<0$.
System (2.3) is reduced by the change of variables
$$
\left\{ \begin{array}{l}
y_k = z^{(k-1)}, \\
y_{k+p} = \sum_{j=k}^{p} (-1)^{j-k} [(\varphi_j
z^{(j)})^{(j-k)}
+ (\omega_j z^{(j)})^{(j-k-1)}
- (\omega_j^* z^{(j-1)})^{(j-k)}], \\
k=1,2,\ldots,p,
\end{array} \right.
\eqno(2.4)
$$
to a Hamiltonian system of type (1) (in this case $n = pm$) whose
matrix is given by
$$
A_{11} = \left \| \begin{array}{ccccc}
-\varphi_0& \omega_1& & & 0 \\
\omega_1^*& -\varphi_1& \omega_2& & \\
& \ddots& \ddots& \ddots& \\
0& & & \omega_{p-1}^*& \ \ -\varphi_{p-1}+
\omega_p\varphi_p^{-1}\omega_p^*
\end{array} \right \| ,
$$
$$
A_{12} = \left \| \begin{array}{ccccc}
0& & & & 0\\
I& 0& & & \\
& \ddots& \ddots& & \\
0& & & I& \omega_p\varphi_p^{-1}
\end{array} \right \| , \ \ \
A_{22} = \left \| \begin{array}{cccc}
0& & & 0\\
& \ddots& & \\
& & 0& \\
0& & & \varphi_p^{-1}
\end{array} \right \| .
$$
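To see the reduction in the simplest case, take $p=m=1$ and $\omega_1 \equiv 0$ (this special case is written out here only for illustration): system (2.3) then becomes the scalar Sturm-Liouville equation
$$
-(\varphi_1 z^\prime)^\prime + \varphi_0 z = 0,
$$
and (2.4) reduces to $y_1 = z$, $y_2 = \varphi_1 z^\prime$, so that the matrices above become
$$
A_{11} = -\varphi_0, \ \ \ A_{12} = A_{21} = 0, \ \ \ A_{22} = \varphi_1^{-1} > 0.
$$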
It was proved in \cite{Asl} that
if $\varphi_j$ and $\omega_j$ are continuous on $[a,b]$, then
the conjugate points of this Hamiltonian
system cannot fill any interval
of $[a,b]$ entirely, whatever self-adjoint boundary conditions are given.
The proof given in \cite{Asl} makes essential use of the special form of the
matrix $A$ quoted above. In the next two subsections we
shall prove general theorems covering various problems, including
(2.3), and specify the class of BVPs possessing the required property.
\subsection{A sufficient condition for conjugate points not to fill any interval}
Consider system (1) with an arbitrary matrix $A(t) = A^*(t)$.
Below all the notations of Section 1 are kept.
We shall need an extra assumption: let the functions $A_{12}(t)$, $A_{22}(t)$ be sufficiently smooth (differentiable as many times as our further considerations require).
Suppose that the conjugate points of problem (1),~(2) fill an interval $[\alpha,\beta] \subset [a,b]$, \ie, at each point of this
interval relation (1.13) holds.
Let us take a point $t_0 \in [\alpha,\beta]$ at which the
dimension of the kernel ${\cal M}(t)$ of the matrix $\sigma(t)$ is
minimal. Denote this dimension by $m_0$; then ${\rm
dim}\, {\cal M}(t) = m_0$ in a neighbourhood $(\tau_1,\tau_2)$ of
$t_0$. In what follows, by $t$ we shall mean a point of this
neighbourhood. Consider, as in the proof of Theorem~1, a
smooth $(n \times m_0)$-matrix $G(t) = \| \eta_1, \ldots, \eta_{m_0}
\|$ whose columns form an orthonormal basis in
${\cal M}(t)$. Then (see the proof of Theorem~1) we get
$$
0 = (t - \tilde{t})G^*(t)A_{22}(t)G(t) + o(t - \tilde{t}) \eqno(2.5)
$$
for all $t, \ \tilde{t} \in (\tau_1,\tau_2)$, $\tilde{t}
\rightarrow t$.
As is seen from (2.5), $G^*(t)A_{22}(t)G(t) = 0$.
Then, since $A_{22} \geq 0$, it follows that
$$
A_{22}G = 0. \eqno(2.6)
$$
We also have
$$
\sigma (G^\prime + A_{12}G) = 0.
$$
Indeed, from $\sigma G = 0$ we obtain $\sigma G' = - \sigma' G$. We
find $\sigma'$ using equation (1.6). Then, by means of the relation
$\tilde\mu G \cos\tilde\theta = G \sin\tilde\theta$, equality
(2.6) and the formula relating $A$ and $\tilde{A}$, the required
equation is derived. Since the columns
of $G$ form a basis in ${\cal M}(t)$, this equation implies that
$$
G^\prime + A_{12}G = G\gamma \eqno(2.7)
$$
for a certain matrix $\gamma(t) \in \C^{m_0 \times m_0}$.
Formula (2.6) implies $A_{22}^\prime G + A_{22} G^\prime = 0$.
Then, premultiplying (2.7) by $A_{22}$, we get
$$
(-A_{22}^\prime + A_{22}A_{12}) G = A_{22} G \gamma = 0.
\eqno(2.7^\prime)
$$
If the rank of
$\left \| \begin{array}{c}
A_{22}\\
-A_{22}^\prime + A_{22}A_{12}
\end{array} \right \|$
is equal to $n$ on $(\tau_1,\tau_2)$, then
homogeneous SLAE (2.6),~($2.7'$) has only the trivial solution $G=0$.
Thus, assuming that ${\rm det} \mu_0(t) \equiv 0$ in
$(\tau_1,\tau_2)$, we have $G(t) \equiv 0$, which leads to a contradiction.
Otherwise, we continue the process, premultiplying at the $j$th step
relation (2.7) by
$B_j = -B_{j-1}^\prime + B_{j-1}A_{12}$ (here $B_0 = A_{22}$) and denoting the result by ($2.7^{(j)}$). If
at the previous step we had $B_{j-1}G = 0$ and, therefore,
$-B_{j-1}^\prime G = B_{j-1}G'$, then $B_jG = (-B_{j-1}^\prime + B_{j-1}A_{12})G = B_{j-1}(G' + A_{12}G) = 0$.
Thus, for all $t$ under consideration $G(t)$ satisfies
SLAE (2.6), ($2.7'$), $\ldots$, ($2.7^{(j)}$). The matrix of this SLAE is represented as
$$
C_j(t) = \left \| \begin{array}{c}
B_0(t)\\
B_1(t)\\
\vdots\\
B_j(t)
\end{array} \right \|.
$$
If at some stage we obtain $C_j(t)$ having rank $n$ for the given range of $t$, then again $G(t)=0$, which contradicts our assumption.
The following condition (set at a point $t$)
will be used below: \\
($\star$) \ \ {\em There exists an integer} \ $j_0$ \ {\em such that}
\ ${\rm rank} \ C_{j_{0}}(t) = n$.
The above argument proves
\begin{theorem}
Let ($\star$) be satisfied for all $t \in [a,b]$. Then
conjugate points of problem (1),~(2) fill no interval of $[a,b]$ completely.
\end{theorem}
Note that we did not impose any restrictions on boundary conditions.
Condition ($\star$) is sufficient for the
conjugate points of system (1) subject to arbitrary boundary
conditions of type (2) not to fill any subinterval of $[a, b]$.
Though the quoted condition is peculiar and one does not know in advance
how to choose a suitable $j_0$, there are examples where ($\star$)
is easily verified.
In particular, if $A_{12}$ and $A_{22}$ do not depend on $t$, a condition
$$
{\rm rank} \left \| \begin{array}{c}
A_{22}\\
A_{22}A_{12}\\
\vdots\\
A_{22}A_{12}^{s-1}
\end{array} \right \| = n
$$
is equivalent to ($\star$). Here $s$ is the minimal degree of a
polynomial annihilating $A_{12}$ (it follows from the Cayley-Hamilton
theorem that $s \leq n$).
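For constant $A_{12}$, $A_{22}$ this rank condition is straightforward to check numerically (it is formally analogous to the Kalman rank criterion of control theory). A sketch with hypothetical $2\times 2$ data:

```python
import numpy as np

def star_rank(A22, A12, n):
    """Rank of the stacked matrix || A22; A22*A12; ...; A22*A12^(n-1) ||.
    Using n blocks always suffices, since the minimal polynomial
    annihilating A12 has degree s <= n (Cayley-Hamilton)."""
    blocks, B = [], A22.copy()
    for _ in range(n):
        blocks.append(B)
        B = B @ A12
    return np.linalg.matrix_rank(np.vstack(blocks))

n = 2
A22 = np.diag([0.0, 1.0])                  # A22 >= 0 but degenerate
A12 = np.array([[0.0, 0.0], [1.0, 0.0]])
ok = star_rank(A22, A12, n)                # = 2: condition (*) holds
bad = star_rank(A22, np.zeros((2, 2)), n)  # = 1: condition (*) fails
```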
It is easily seen that for the Hamiltonian system obtained from (2.3)
with the use of (2.4) we have ${\rm rank} \ C_{n-1} = n$, so the conjugate points do not fill any subinterval of $[a,b]$, as was proved in \cite{Asl} in a different way.
Another example of practical importance where condition ($\star$) is satisfied can be found among the applications in Section~4.
The result proved in this subsection also admits another formulation.
If for each interval of $[a,b]$ there is a subinterval $(\tau_1,\tau_2)$ where the matrices $A_{12}$ and $A_{22}$
satisfy condition ($\star$), then the conjugate points of (1),~(2) cannot
fill any interval of $[a,b]$ entirely. Certainly, the
former condition (Theorem~6) is easier to check in practice.
\subsection{A necessary and sufficient condition}
In Subsections~2.1 and 2.2 the form of the boundary conditions was not taken into
account. We shall show in this subsection that if $\displaystyle{\max_{j}
{\rm rank}\, C_j < n}$ for all $t$ in some interval, then there are self-adjoint boundary conditions (left ones, for instance) for which the conjugate points fill a certain subinterval.
Consider a boundary condition at the point $a$. Obviously, if we fix
an arbitrary point $t_1 \in [a,b]$ then the condition at $a$
can be selected so that at $t_1$ we have
$$
\mu(t_1) = 0, \ \ \mbox{where} \ \ y_1(t) = \mu(t)y_2(t). \eqno(2.8)
$$
Suppose that for all $j = 0,1,\ldots$ the rank of $C_j$ is less than $n$
for all $t$ in an interval $(\alpha,\beta)$.
Then, starting from some $j_0$, once the rank of $C_j$ reaches its maximal value at each point of this interval, it does not change as $j$ increases.
We take a point of $(\alpha,\beta)$ where the rank of $C_{j_0}$
is maximal and consider a neighbourhood $(t_1,t_2)$
of that point in which the rank does not vary
as $t$ changes (such a neighbourhood exists owing to the continuity of $C_j(t)$). Thus, we have
${\rm rank}\, C_j = {\rm const} < n$, $j \geq j_0$, $t \in (t_1,t_2)$. Then for all $t$ under consideration there is a non-trivial solution $G_0(t)$ to the system
$$
C_{j_0} G_0 = 0. \eqno(2.9)
$$
Taking $G_0$ to be the solution of the greatest (constant in $t$) rank
among all the solutions to (2.9), we have
$$
C_j G_0 =
\left \| \begin{array}{c}
B_0\\
B_1\\
\vdots\\
B_j
\end{array} \right \| G_0 = 0, \ \ j \geq j_0,
$$
and, consequently, $(C_j G_0)^\prime = C_j^\prime G_0 + C_j G_0^\prime = 0$.
On the other hand,
$$
C_j^\prime =
\left \| \begin{array}{c}
-B_1 + B_0A_{12}\\
-B_2 + B_1A_{12}\\
\vdots\\
-B_{j+1} + B_jA_{12}
\end{array} \right \|,
$$
therefore,
$C_j^\prime G_0 = C_j A_{12} G_0$. This means that $C_j(G_0^\prime
+ A_{12} G_0) = 0$ and hence
$$
G_0^\prime + A_{12} G_0 = G_0\gamma_0. \eqno(2.10)
$$
Then, putting $\sigma = \mu G_0$ and using (1.6) taken at $\theta=0$,
(2.9) and (2.10),
we have $\sigma' = \mu^\prime G_0 + \mu G_0^\prime = \mu A_{11} \mu G_0
+ \mu A_{12} G_0 + A_{21} \mu G_0 + A_{22} G_0 + \mu G_0^\prime = \mu
A_{11} \sigma + A_{21} \sigma + \mu (A_{12} G_0 + G_0^\prime) = \mu
A_{11} \sigma + A_{21} \sigma + \sigma \gamma_0$.
If (2.8) is satisfied, then $\sigma(t)$ is a solution to a Cauchy problem
$$
\left\{ \begin{array}{l}
\sigma' = \mu A_{11} \sigma + A_{21} \sigma + \sigma \gamma_0,
\ \ t \in [t_1, t_2] \\
\sigma(t_1) = 0
\end{array} \right.,
$$
and thus $\sigma(t) \equiv 0$ on $[t_1, t_2]$. Since
${\rm rank} G_0(t) > 0$ on $(t_1,t_2)$ and $\mu G_0 \equiv 0$ on $[t_1,t_2]$, we have ${\rm det}\mu \equiv 0$, that is, the entire interval $[t_1,t_2]$ consists of conjugate points. This proves
\begin{theorem}
Let ($\star$) be violated at all points of $(\alpha,\beta) \subset [a,b]$.
Then there exist such boundary conditions (2)
that the conjugate points of (1),~(2) fill some subinterval of $(\alpha,\beta)$.
\end{theorem}
The existence of a non-trivial solution $G(t)$ to system (2.6),~(2.7) for some
function $\gamma$ is equivalent to the conjugate points of (1) filling a
certain interval for at least some boundary conditions. Note that the
condition of Theorem~7 is not simply the negation
of the condition of Theorem~6: the latter negation also admits violation of
condition ($\star$) at isolated points of $[a,b]$. Thus, the condition
that for each interval of $[a,b]$ there exists a subinterval $(t_1,t_2)$ on which ($\star$) is satisfied is necessary and sufficient for the conjugate points of system (1) not to fill any interval, whatever boundary conditions
of type (2) are imposed.
\section{Spectral problem on a half-line}
\subsection{Singular boundary value problem}
Consider a Hamiltonian system on an infinite interval:
$$
J y'= A(t)y, \ \ \ 0 \leq t < \infty, \eqno(3.1)
$$
where $y: [0,\infty) \rightarrow \C^{2n}$,
$A: [0,\infty) \rightarrow \C^{2n \times 2n}$, $A = A^*$.
In this subsection the matrix
$A(t)$ is assumed to be continuous on $[0,\infty)$ and to have a limit
at infinity:
$$
A_0 = \lim_{t\rightarrow\infty} A(t).
$$
We denote
$
A_0 = \left \| \begin{array}{cc}
A_{11}^0& A_{12}^0\\
A_{21}^0& A_{22}^0
\end{array} \right \|$ and also keep the notation of the previous sections.
Let us also assume that
\begin{enumerate}
\item[]
$A_0 J$ has no EVs on the imaginary axis;
\item[]
$A_{22}^0 > 0$.
\end{enumerate}
We supplement system (3.1) with a boundary condition
$$
y(t) \ \mbox{is \ bounded \ as} \ t \ \rightarrow \ \infty.
\eqno(3.2)
$$
The question of how to transfer condition (3.2)
from infinity for system (3.1), \ie, how to replace it by an
equivalent condition at a finite point, was studied in \cite{Abr-Asl-B}.
The approach used there is presented below in this subsection.
Consider a subspace ${\cal C}_0$ generated by the left root vectors of an EVP
$$
z A_0 = \nu z J, \eqno(3.3)
$$
corresponding to the EVs $\nu$ lying in the open right-hand half-plane.
Due to the assumed property of $A_{0}$ the dimension of this subspace is $n$
(see, for instance, \cite{Cod-Lev}).
Let $\psi_0$ be a matrix whose rows form a basis in
${\cal C}_0$; we partition it into $(n \times n)$-blocks:
$\psi_0 = \|\psi_{01}, \psi_{02}\|$. As is easily seen,
$\psi_0 J \psi_0^* = 0$ (and, therefore, $\psi_0 A_0 \psi_0^* = 0$).
Let us show that ${\rm det}\, \psi_{01} \not= 0$. Suppose otherwise; then there is a non-zero
$n$-row $\xi$ such that $\xi\psi_{01} = 0$. But then we have
$0 = \xi \psi_0 A_0 \psi_0^* \xi^*
= \xi \psi_{01} A_{11}^0 \psi_{01}^* \xi^* +
\xi \psi_{01} A_{12}^0 \psi_{02}^* \xi^* +
\xi \psi_{02} A_{21}^0 \psi_{01}^* \xi^* +
\xi \psi_{02} A_{22}^0 \psi_{02}^* \xi^*$, which
implies that $\xi \psi_{02} A_{22}^0 \psi_{02}^* \xi^* = 0$,
and therefore, since $A_{22}^0 > 0$, we have $\xi\psi_{02} = 0$.
Hence $\xi(\psi_{01}\psi_{01}^* + \psi_{02}\psi_{02}^*)\xi^*
= 0$, which is false, since $\psi_{01}\psi_{01}^* +
\psi_{02}\psi_{02}^* > 0$. Thus, we can change from the matrix
$\psi_0$ to $\psi_{01}^{-1}\psi_0$ represented as $\|I,\mu_0\|$.
Assuming that this transformation has been carried out, let us
keep the original notation for the transformed matrix: $\psi_0 =
\|I,\mu_0\|$. Then the equality $\psi_0 J \psi_0^* = 0$ gives $\mu_0
= \mu_0^*$.
From the results obtained in \cite{Abr-B-Kon} we find, for sufficiently large $t$, that condition (3.2)
is equivalent to $\|I,\mu(t)\| J
\left \| \begin{array}{c}
y_1(t)\\
y_2(t)
\end{array} \right \| = 0$, that is,
$$
y_2(t) = \mu(t) y_1(t), \eqno(3.4)
$$
where $\mu(t)$ is a solution to a singular Cauchy problem
$$
\left\{ \begin{array}{l}
\mu^{\prime} + \mu A_{22} \mu + \mu A_{21} + A_{12}\mu +
A_{11} = 0 \\
\lim\limits_{t\rightarrow\infty}\mu(t) = \mu_0
\end{array} \right. . \eqno(3.5)
$$
The problem has a unique solution (see \cite{Abr-B-Kon}, \cite{Kon-Pak}).
Since $\mu_0 = \mu_0^*$, the function $\mu^*(t)$
is also a solution to (3.5), and hence
$$
\mu(t) = \mu^*(t).
$$
Before passing to a practical method for obtaining $\mu_0$, we note the following. Taking into account the relation between the matrix $\|I,\mu_0\|$ and problem (3.3),
we have
$$
\|I,\mu_0\| A_0 = \Lambda_0 \|I,\mu_0\| J,
$$
where all the EVs of $\Lambda_0$ lie in the open right-hand half-plane. Hence
$$
\left\{ \begin{array}{l}
A_{11}^0 + \mu_0 A_{21}^0 = \Lambda_0 \mu_0 \\
A_{12}^0 + \mu_0 A_{22}^0 = -\Lambda_0
\end{array} \right. , \eqno(3.6)
$$
and therefore
$$
\mu_0 A_{22}^0\mu_0 + A_{12}^0\mu_0 + \mu_0 A_{21}^0 + A_{11}^0 = 0.
\eqno(3.7)
$$
Note that
$\mu_0$ is a particular solution to (3.7), for which the matrix
$\Lambda_0$ from (3.6) possesses the mentioned property (existence and
uniqueness of this solution follows from the definition of $\mu_0$).
If we study the behaviour of the solution to (3.5) for large $t$
in the neighbourhood of its limit value (that is, linearise
(3.5) with respect to $\mu$ and take as the coefficients of the
resulting linear equation their limit values), from (3.6) and (3.7)
we obtain a linear equation with constant coefficients for $\eta =
\mu-\mu_0$:
$$
\eta' = \Lambda_0 \eta + \eta \Lambda_0^*. \eqno(3.8)
$$
As is known, the
EVs of the linear transformation (in $\C^{n \times n}$) $\Lambda_0
\eta + \eta \Lambda_0^*$ are all the possible sums of the EVs of
$\Lambda_0$ and $\Lambda_0^*$. Hence, all the EVs of
$\Lambda_0$ lie in the open right-hand half-plane if and only if
all the non-trivial
solutions to (3.8) increase as $t \rightarrow \infty$.
In this case the zero solution to (3.8)
is stable as $t$ decreases. This is consistent with the fact that the solution to (3.5) is unique, and is of practical importance: for large $t$
the influence of any small errors decreases as one moves from right to left.
\subsection{Comparison theorem}
In what follows we shall need the comparison theorem for the solutions to
singular Cauchy problems for the Riccati equations of kind (3.5).
Let us cite the comparison theorem proved in \cite{Abr-Asl-B}.
%(see also \cite{Roy} in this regard).
\begin{theorem}
We consider two systems of type (3.1), with $A$ taken as $\tilde{A}$
and $\tilde{\tilde{A}}$, respectively; the corresponding solutions to
(3.5) are denoted by $\tilde{\mu}$ and $\tilde{\tilde{\mu}}$. Let
$\tilde{\tilde{A}} \geq \tilde{A}$. Then at the intersection of their
ranges of definition $\tilde{\tilde{\mu}} \geq \tilde{\mu}$.
\end{theorem}
In practice, an approximate solution $\mu(t)$ is found for problem
(3.5), and condition (3.2) is replaced by (3.4) at a far point. Some methods
for obtaining approximate values of $\mu(t)$ for large $t$ when
the matrix $A(t)$ has a special form are described in \cite{Abr-B-Kon}.
Note, in particular, the following possibility: choose some large $t$ and
consider instead of condition (3.4) its approximation
$$
y_2(t) = \hat{\mu}(t) y_1(t),
$$
where $\hat{\mu}(t)$ is the solution to
$$
\hat{\mu} A_{22}(t) \hat{\mu} + A_{12}(t)\hat{\mu} +
\hat{\mu} A_{21}(t) + A_{11}(t) = 0, \eqno(3.7')
$$
for which all the EVs of the matrix $-A_{12}(t) - \hat{\mu}A_{22}(t)$
lie in the open right-hand half-plane. Since $A(t) \rightarrow A_0$
as $t \rightarrow \infty$, for sufficiently large $t$ such a solution
exists and is unique (the recommended method for solving an equation of this kind will be given in the next subsection). For example, if
$A(t)=A_0 + A_1/t + o(1/t)$, then (cf. \cite{Abr-B-Kon})
$\mu(t) = \mu_0 + \mu_1/t + o(1/t)$, where $\mu_1$ is calculated by
formally substituting $\mu(t)$ into (3.5) and equating the coefficients
of $\frac{1}{t}$; the resulting linear equation for $\mu_1$ has a unique solution. Since the expansion of the term $\mu^\prime(t)$
on the left-hand side of (3.5) does not contain
$\frac{1}{t}$, the equation for $\mu_1$ thus obtained is the same as that
obtained by substituting $\hat{\mu}(t) = \mu_0 + \hat{\mu}_1/t + o(1/t)$
into $(3.7')$ and equating the corresponding coefficients of $\frac{1}{t}$.
Thus $\mu(t) - \hat{\mu}(t) = o(1/t)$. At the same time, for $\hat{\mu}(t)$
(unlike $\mu_0 + \mu_1/t$), we know that
$\tilde{\tilde{A}} \geq \tilde{A}$ implies that
$\tilde{\tilde{\hat{\mu}}} \geq \tilde{\hat{\mu}}$ due to
Theorem~8 (or, more exactly, its algebraic part). This fact is useful for solving a corresponding EVP (see Subsection~3.5 below) since the approximate boundary condition posed at a far point $t$
possesses the same monotonicity as required in Section~1.
It is possible therefore to apply the method proposed in
that section to the EVP thus obtained.
\subsection{Solving the matrix quadratic equation}
In this subsection we shall look for the specific solution to
the algebraic Riccati equation of type (3.7)
satisfying the mentioned condition. Along with ${\cal C}_0$
we consider the subspace ${\cal C}_1$ generated by the
left root vectors of $A_0 J^{-1}$ corresponding to its EVs lying in
the left-hand half-plane. Denote by $R$ an operator associated with
EVP (3.3) as follows: $xR=x$ if $x \in {\cal C}_0$, $xR=-x$ if $x
\in {\cal C}_1$. Obviously,
$$
\|I,\mu_0\| R = \|I,\mu_0\|. \eqno(3.9)
$$
A method for constructing the projector onto the
root subspace of a certain matrix corresponding to the EVs lying in the
right-hand half-plane can be found, \eg, in \cite{Abr-84}. We shall use that method to find the operator $R$. According to \cite{Abr-84}, the following iterative process converges to the required operator:
$$
R^{(0)} = \rho A_0 J^{-1}, \ \
R^{(k+1)} = (R^{(k)} + (R^{(k)})^{-1})/2.
$$
Here $\rho$ is any positive number; note that an appropriate choice of $\rho$ can improve the convergence rate.
In order to deal with Hermitian matrices only, we introduce matrices $Q^{(k)} = R^{(k)} J$, $k=0,1,2,\ldots$. Then the iterative process becomes
$$
Q^{(0)} = \rho A_0, \ \
Q^{(k+1)} = (Q^{(k)} - J^*(Q^{(k)})^{-1}J)/2,
$$
all the matrices $Q^{(k)}$ being Hermitian,
${\displaystyle \lim_{k\rightarrow\infty} Q^{(k)} = Q}$,
$Q=Q^*$. Equation (3.9) can be rewritten as
$$
\|I,\mu_0\| Q = \|I,\mu_0\|J,
$$
and if $Q$ is represented as
$Q = \left \| \begin{array}{cc}
Q_{11}& Q_{12}\\
Q_{21}& Q_{22}
\end{array} \right \|$,
then
$$
\left\{ \begin{array}{l}
Q_{11} + \mu_0 Q_{21} = \mu_0 \\
Q_{12} + \mu_0 Q_{22} = -I
\end{array} \right. .
\eqno(3.10)
$$
Let us show that the Hermitian matrix $Q_{22}$ is invertible. In addition to
$\mu_0$, consider the matrix $\mu_1$ corresponding to the subspace
${\cal C}_1$. Obviously, it is possible to find a basis in
${\cal C}_1$ represented as a matrix $\|I,\mu_1\|$ similar to the
basis chosen in ${\cal C}_0$. Then $\|I,\mu_1\|Q = -\|I,\mu_1\|J$, or
$$
\left\{ \begin{array}{l}
Q_{11} + \mu_1 Q_{21} = -\mu_1 \\
Q_{12} + \mu_1 Q_{22} = I
\end{array} \right. .
\eqno(3.11)
$$
Subtracting the second equation of (3.11) from that of (3.10), we obtain
$$
(\mu_0 - \mu_1) Q_{22} = -2I,
$$
whence it follows that ${\rm det} Q_{22} \not= 0$.
Thus, $\mu_0$ can be found from the second equation of (3.10):
$$
\mu_0 = -(I + Q_{12}) Q_{22}^{-1}.
$$
In \cite{Abr-Asl-B} it was also proved that $Q_{22} > 0$,
which can be also used when computing $\mu_0$.
Note that system (3.10) is overdetermined. We therefore control the calculations by checking whether
the matrix $\mu_0$ found from the second equation of the system is Hermitian and satisfies the first equation of (3.10) as well,
to the required accuracy. A very rigorous check on the accuracy of $Q$ consists in verifying that, to the needed accuracy,
$QJ^{-1}$ and $A_0J^{-1}$ commute, that is, that the matrix $QJA_0$ is anti-Hermitian.
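The whole computation of $\mu_0$ can be sketched as follows. This is a minimal numerical illustration with hypothetical data, with $J$ taken in the block form $\left\|\begin{array}{cc} 0 & -I \\ I & 0 \end{array}\right\|$ consistent with (3.10); for the chosen $A_0$ the solution $\mu_0$ of (3.7) is known in closed form.

```python
import numpy as np

n = 2
I = np.eye(n)
Z = np.zeros((n, n))
J = np.block([[Z, -I], [I, Z]])

# Hypothetical A0 = A0^*: A11 = -I < 0, A12 = A21 = 0, A22 > 0,
# so A0*J^{-1} has no eigenvalues on the imaginary axis.
A22 = np.diag([1.0, 2.0])
A0 = np.block([[-I, Z], [Z, A22]])

# Iteration Q^(k+1) = (Q^(k) - J^* Q^(k)^{-1} J)/2 with Q^(0) = rho*A0.
Q = 1.0 * A0                                  # rho = 1
for _ in range(60):
    Q_next = 0.5 * (Q - J.T @ np.linalg.solve(Q, J))
    if np.linalg.norm(Q_next - Q) < 1e-14:
        Q = Q_next
        break
    Q = Q_next

Q12, Q22 = Q[:n, n:], Q[n:, n:]
mu0 = -(I + Q12) @ np.linalg.inv(Q22)

# Checks described in the text: mu0 Hermitian, Riccati residual (3.7)
# small, eigenvalues of Lambda0 = -(A12 + mu0*A22) in the right
# half-plane.
A11, A12b, A21 = A0[:n, :n], A0[:n, n:], A0[n:, :n]
residual = mu0 @ A22 @ mu0 + A12b @ mu0 + mu0 @ A21 + A11
Lam0 = -(A12b + mu0 @ A22)
```

For this $A_0$ the decoupled scalar problems give $\mu_0 = {\rm diag}(-1, -1/\sqrt{2})$, which the iteration reproduces.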
\subsection{Properties of a linear operator associated with the non-linear EVP}
Consider an EVP for (3.1), that is, let the matrix $A$ depend
also on $\lambda$ and be
uniformly continuous as a function of the two variables in
$[0, \infty) \times (\Lambda_1, \Lambda_2)$. We also assume that $A$ is monotone with respect to $\lambda$ and that $A_{22} \geq 0$, as in
Section~1. Thus, in this subsection we consider a system
$$
Jy^{\prime} - A(t,\lambda)y = 0, \ \ \ 0 \leq t < \infty
\eqno(3.14)
$$
with a self-adjoint boundary condition at $t= 0$:
$$
\omega y(0) = 0, \eqno(3.15)
$$
where
$\omega \in \C^{n \times 2n}$,
${\rm rank} \omega = n$, $\omega J \omega^* = 0$.
Along with EVP (3.14),~(3.15) non-linear with respect to the spectral
parameter $\lambda$, let us study an associated linear operator
$$
Ly = Jy^{\prime} - A(t,\lambda)y
$$
acting in ${\cal L}_2(0,\infty)$ and defined on functions which have a derivative from
${\cal L}_2(0,\infty)$ and satisfy condition (3.15). We keep the same assumptions for the matrix $A$ as
above in this section, including the existence of ${\displaystyle \lim_{t \rightarrow \infty} A(t,\lambda) =
A_0(\lambda)}$ for all $\lambda \in (\Lambda_1, \Lambda_2)$. Given all these assumptions the operator $L$ is
self-adjoint for all $\lambda$.
Indeed, consider the operator
$H = J \frac{d}{dt}$, which acts in ${\cal L}_2(0,\infty)$ and is defined on compactly supported functions which are differentiable and satisfy condition (3.15) at zero. The operator $H$ is symmetric, and the domain of definition of the adjoint operator
$H^*$ consists of functions $y$ satisfying (3.15) and such that $H^*y \in {\cal L}_2(0,\infty)$. Let us find the
deficiency indices of $H$. The general solution to the system
$$
J y' = i y
$$
is given by
$$
y = \left \| \begin{array}{c}
y_1 \\
y_2
\end{array} \right \| =
\exp(t)
\left \| \begin{array}{c}
c\\
-i c
\end{array} \right \| +
\exp(-t)
\left \| \begin{array}{c}
b \\
i b
\end{array} \right \|,
$$
where $c$ and $b$ are arbitrary constant $n$-columns. Since $y(t) \in {\cal L}_2(0,\infty)$, we have $c=0$, that is,
$$
\left \| \begin{array}{c}
y_1 \\
y_2
\end{array} \right \|
= \exp(-t)
\left \| \begin{array}{c}
b \\
i b
\end{array} \right \|.
$$
We assume (see Section 1) that the necessary transformation near $t=0$ has
been made right at the start, so that in that neighbourhood relation
(1.4) is satisfied. Then from the last relation and (1.4) we obtain
$\mu b = i b$, and since $\mu = \mu^*$, it follows that $b = 0$.
Similarly we show that the system
$$
J y' = -i y
$$
has no non-trivial solutions in the domain of definition of $H^*$
either. Thus the deficiency indices of $H$ are equal to zero and,
therefore (see, \eg, \cite{Kato}), the closure of $H$ is a
self-adjoint operator. It only remains to take into account that
$L$ is obtained from this closure by adding a bounded symmetric
operator, which implies that $L$ is self-adjoint.
We shall call $\lambda_*$ a point of the spectrum of EVP (3.14),~(3.15) if zero is a point of the spectrum of $L(\lambda_*)$. If zero is an EV of
$L(\lambda_*)$, we call $\lambda_*$ an EV of (3.14),~(3.15). Note that
no EV of (3.14),~(3.15) can have a multiplicity greater than $n$. Indeed, condition (3.15) singles out an $n$-dimensional manifold of solutions to (3.14) and, therefore, the number of linearly independent solutions to (3.14),~(3.15) is not greater than $n$.
\begin{theorem}
Let $\lambda_*$ be an isolated point of the spectrum of EVP (3.14), (3.15).
Then zero is an isolated point of the spectrum of the operator $L(\lambda_*)$ and, therefore, an EV of $L(\lambda_*)$.
\end{theorem}
{\bf Proof} Denote the spectrum of $L$ by $\Sigma(L)$. By virtue of
the above assumptions about $A$, the self-adjoint operator $L$ is a
continuous and monotone function of $\lambda$. Then each boundary
point of $\Sigma(L)$ in a neighbourhood of $\lambda_*$ is a
continuous non-decreasing function of $\lambda$.
Indeed, let $\tilde{\xi}$ be a boundary point of
$\Sigma(L(\tilde{\lambda}))$, \ie, suppose there exists $\alpha > 0$ such that
$(\tilde{\xi}-\alpha,\tilde{\xi})$ contains no points of $\Sigma(L(\tilde{\lambda}))$
(the other kind of boundary point is treated similarly, with
$(\tilde{\xi}-\alpha,\tilde{\xi})$ replaced by $(\tilde{\xi},\tilde{\xi}+\alpha)$). Then in
some neighbourhood of
$\tilde{\lambda}$ there exists a unique continuous function
$\xi(\lambda)$ such that $\tilde{\xi} = \xi(\tilde{\lambda})$,
$\xi(\lambda)$ being the same kind of boundary point as $\tilde{\xi}$.
This follows immediately from the theorem on the continuous change
of the spectrum of a self-adjoint operator
%\cite{Kato}
[20, Th.~V.4.10]. To prove that $\xi(\lambda)$ is monotone, we take
$a \not\in \Sigma(L(\tilde{\lambda}))$, $a < \tilde{\xi}$ such that
$(a,\tilde{\xi})$ contains no points of $\Sigma(L(\tilde{\lambda}))$.
Consider an operator $B(\lambda) = (L(\lambda) - aI)^{-1}$. In some
neighbourhood of $\tilde{\lambda}$ the maximum point of
$\Sigma(B(\lambda))$ is given by the function $\beta(\lambda) =
1/(\xi(\lambda)-a)$. Since $L(\lambda)$ is non-decreasing with
respect to $\lambda$, $B(\lambda)$ is non-increasing in a
neighbourhood of $\tilde{\lambda}$. If $B(\lambda_2) \geq
B(\lambda_1)$ then ${\displaystyle \beta(\lambda_2) = \sup_{\|x\|=1}
(B(\lambda_2)x,x) \geq \sup_{\|x\|=1} (B(\lambda_1)x,x) =
\beta(\lambda_1)}$. Then the required assertion for $\xi(\lambda)$
follows from the fact that $\beta(\lambda)$ is non-increasing.
We shall now prove that zero is an isolated point of the spectrum of $L(\lambda_*)$. Suppose otherwise: let an arbitrarily small neighbourhood of zero contain other points of $\Sigma(L(\lambda_*))$, positive, say. Then since the spectrum of $L(\lambda)$ varies continuously, or, to be more precise, since
$$
{\rm dist} (\Sigma(L(\lambda_*)), \Sigma(L(\lambda))) \leq
\| L(\lambda_*) - L(\lambda) \| \eqno(3.16)
$$
(see the theorem mentioned above, \cite{Kato}), there exists $\lambda_0 <
\lambda_*$ such that for $\lambda \in [\lambda_0,\lambda_*)$ the spectrum of
$L(\lambda)$ also has positive points. For all $\lambda \in [\lambda_0,\lambda_*)$ there exists a minimum non-negative point of $\Sigma(L(\lambda))$; denote it by $\xi_2(\lambda)$. Choosing
$\lambda_0$ close enough to $\lambda_*$ so that
$[\lambda_0,\lambda_*)$ contains no points of the spectrum of (3.14),~(3.15) (this is possible since by hypothesis
$\lambda_*$ is isolated), we obtain
$\xi_2(\lambda) > 0$, $\lambda \in [\lambda_0,\lambda_*)$. The boundary point
$\xi_2(\lambda)$ is a non-decreasing function on $[\lambda_0,\lambda_*)$, as we have shown, and so
$\xi_2(\lambda) \geq \xi_2(\lambda_0) > 0$. But this relation, holding at points as close to
$\lambda_*$ as desired while $\Sigma(L(\lambda_*))$ is assumed to have arbitrarily small
positive points, obviously contradicts (3.16). This contradiction proves the theorem.
Together with the operator $L$, under the same boundary condition (3.15) we shall consider another operator
$$
L_0y = Jy^{\prime} - \tilde{A}(t,\lambda)y,
$$
where
$$
\tilde{A}(t,\lambda) =
\left \{ \begin{array}{ll}
A(t,\lambda), & t \leq t_0 \\
A(t_0,\lambda), & t > t_0
\end{array} \right., \ \ 0 < t_0 < \infty.
$$
The matrix $\tilde{A}$ thus defined corresponds to the Hamiltonian system
$$
J y' = \tilde{A}(t,\lambda)y. \eqno(3.17)
$$
Since $A(t,\lambda)$ is uniformly continuous and the limit $A_0$ exists, for sufficiently large $t_0$ we have
$$
-\varepsilon I < A(t,\lambda) - A(t_0,\lambda)
< \varepsilon I, \ t \geq t_0,
$$
where $\varepsilon > 0$ can be made arbitrarily small by taking $t_0$ sufficiently large. Then
$$
\| L - L_0 \|^2 = \sup_{\|y\|=1} \int_{t_0}^{\infty} y^*
[A(t) - A(t_0)]^2 y dt < \sup_{\|y\|=1} \int_{t_0}^{\infty}
\varepsilon^2 y^*y dt = \varepsilon^2.
$$
Thus, putting
$$
\varepsilon_0(t_0) = \| L-L_0 \|, \eqno(3.18)
$$
we have
$$
\lim_{t_0\rightarrow\infty} \varepsilon_0(t_0) = 0.
$$
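The decay of $\varepsilon_0(t_0)$ is easy to observe numerically. The sketch below uses an assumed model coefficient matrix $A(t) = A_0 + e^{-t}B$ (our own toy example, not taken from the paper) and bounds $\varepsilon_0(t_0)$ by the supremum of the spectral norm of $A(t) - A(t_0)$ over a grid of $t \geq t_0$.

```python
import numpy as np

# Toy coefficient matrix A(t) = A0 + exp(-t) * B with Hermitian A0, B
# (an assumed model, chosen only to make the limit A0 and the decay of
# eps0(t0) visible).
A0 = np.diag([1.0, -1.0, 2.0, 0.5])
B = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.0]])

def A(t):
    return A0 + np.exp(-t) * B

def eps0_bound(t0, t_grid):
    """sup_{t >= t0} ||A(t) - A(t0)||_2 evaluated on a grid: a computable
    quantity that dominates eps0(t0) = ||L - L0|| from (3.18)."""
    At0 = A(t0)
    return max(np.linalg.norm(A(t) - At0, 2) for t in t_grid if t >= t0)
```

For this model the bound equals $|e^{-t}-e^{-t_0}|\,\|B\|_2$ maximised over the grid, so it decays like $e^{-t_0}$, consistent with $\varepsilon_0(t_0) \rightarrow 0$.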
Suppose that zero, as an EV of $L(\lambda_*)$, has multiplicity $k_0$.
As we have already mentioned, in a small neighbourhood of $\lambda_*$ there exist unique continuous monotone functions
$\hat{\xi}(\lambda)$, an upper boundary point with
$\hat{\xi}(\lambda_*) = 0$, and
$\check{\xi}(\lambda)$, a lower boundary point with
$\check{\xi}(\lambda_*) = 0$, such that $\hat{\xi}(\lambda_*+\delta_2) \geq
\check{\xi}(\lambda_*+\delta_2) > 0$ and
$\check{\xi}(\lambda_*-\delta_1) \leq \hat{\xi}(\lambda_*-\delta_1) <
0$, where $\delta_1$, $\delta_2$ are small positive numbers.
Fix these $\delta_1$ and $\delta_2$. Take $t_0$ such that
$$
\hat{\xi}(\lambda_*+\delta_2) \geq \check{\xi}(\lambda_*+\delta_2) >
\varepsilon_0, \ \
\check{\xi}(\lambda_*-\delta_1) \leq \hat{\xi}(\lambda_*-\delta_1) <
-\varepsilon_0. \eqno(3.19)
$$
We shall consider the values of $t_0$ which are large enough so that
$\varepsilon_0(t_0)$ is small compared to the distance by which
$\hat{\xi}(\lambda)$ and $\check{\xi}(\lambda)$ are isolated as the boundary points of $\Sigma(L(\lambda))$,
\ie, such that the interval $(\hat{\xi}(\lambda), \hat{\xi}(\lambda)+2\varepsilon_0)$
($(\check{\xi}(\lambda)-2\varepsilon_0,\check{\xi}(\lambda))$, respectively) contains no points of $\Sigma(L(\lambda))$,
$\lambda \in (\lambda_*,\lambda_*+\delta_2)$ ($\lambda \in (\lambda_*-\delta_1,\lambda_*)$, respectively).
In $(\varepsilon_0,\hat{\xi}(\lambda_*+\delta_2)]$
there are a total of $k_0$ isolated EVs of $L(\lambda_*+\delta_2)$, and there are no other points of
$\Sigma(L(\lambda_*+\delta_2))$ in that interval
%\cite{Kato}
[20, Ch.~V, Sec.~4.3, Cor. Th.~IV.3.18]. A similar
assertion holds for $[\check{\xi}(\lambda_*-\delta_1),-\varepsilon_0)$ and the operator
$L(\lambda_*-\delta_1)$. Since (3.18) holds (see the previous reference), there are also a total of $k_0$
isolated EVs of $L_0(\lambda_*+\delta_2)$ ($L_0(\lambda_*-\delta_1)$, respectively) in
$(0,\hat{\xi}(\lambda_*+\delta_2)+\varepsilon_0)$ ($(\check{\xi}(\lambda_*-\delta_1)-\varepsilon_0,0)$), and no other points of
$\Sigma(L_0)$ in the corresponding intervals. But since the EVs
$\xi_k^{(0)}$ of the operator $L_0$ are continuous and monotone functions of $\lambda$ and
$$
\xi^{(0)}_k(\lambda_*+\delta_2) > 0, \ \
\xi^{(0)}_k(\lambda_*-\delta_1) < 0, \ \ k=1,2,\ldots,k_0,
$$
each of these functions vanishes at some point $\lambda^{(0)}_k \in
(\lambda_*-\delta_1,\lambda_*+\delta_2)$:
$$
\xi^{(0)}_k(\lambda^{(0)}_k) = 0.
$$
That is, the EVs of problems (3.14),~(3.15) and (3.17),~(3.15) satisfy the inequality
$$
|\lambda_* - \lambda^{(0)}_k| < \delta_1 + \delta_2,
\ \ \ k=1,2,\ldots,k_0.
$$
Since as $t_0 \rightarrow \infty$ the values of $\delta_1$ and $\delta_2$ can be chosen arbitrarily small, finally we have
$$
\lambda^{(0)}_k \rightarrow \lambda_*, \ \ \ t_0 \rightarrow \infty,
\ \ \ k=1,2,\ldots,k_0.
$$
As in Section~1, we shall formulate the final result under the assumption
that the given range of $\lambda$ contains only isolated EVs of the
approximate problems (3.17),~(3.15) for all given $t_0$. This is not an
immediate consequence of the assumption that
$\lambda_*$ is an isolated point of the spectrum of the limit problem
(3.14),~(3.15). The point $\lambda^{(0)}_k$ is then unique for each $k$.
The above argument leads to the following assertion.
\begin{theorem}
Let $\lambda_*$ be an isolated EV of EVP (3.14),~(3.15) of
multiplicity $k_0$, $\lambda_* \in
(\lambda',\lambda^{\prime\prime})$, and suppose that the interval
$[\lambda',\lambda^{\prime\prime}]$ contains no other points of the
spectrum of the problem. Then for sufficiently large $t_0$, the
interval $(\lambda',\lambda^{\prime\prime})$ contains exactly $k_0$
EVs of (3.17),~(3.15) (counting
multiplicities), each of them tending to
$\lambda_*$ as $t_0\rightarrow\infty$.
\end{theorem}
This theorem guarantees that the EVs corresponding to the original and the
approximate problems are close for sufficiently large values of $t_0$.
Suppose we are given the values of $\delta_1$ and $\delta_2$, \ie, the accuracy
required for calculating an EV $\lambda_*$. How should the value of
$\varepsilon_0$, which defines the closeness of the operators $L$ and
$L_0$, be chosen? We shall make one more assumption: there exists a positive $\gamma$ such that for all
$\lambda_2 > \lambda_1$
$$
A(t, \lambda_2) - A(t, \lambda_1) \geq \gamma(\lambda_2 - \lambda_1) I
$$
holds for $t \in [0,\infty)$. Then, as is known from perturbation theory (cf. \cite{Kato}),
$$
\xi_2 - \xi_1 \geq \gamma (\lambda_2 - \lambda_1),
$$
where $\xi_k$ are the EVs of $L(\lambda_k)$, $k=1,2$,
or, using the above notations,
$$
\hat{\xi}(\lambda_* + \delta_2) \geq \gamma \delta_2, \ \
\check{\xi}(\lambda_* - \delta_1) \leq -\gamma \delta_1.
$$
In order that $\varepsilon_0$ satisfy (3.19) it suffices to take
$$
\varepsilon_0 \ < \ \gamma \min(\delta_1,\delta_2).
$$
Everywhere in this subsection we regard the value of
$\varepsilon_0$ as small compared to the isolating distance of the spectrum
of $L(\lambda)$.
\subsection{Applications to the EVP}
In Subsections~3.1--3.3
we made essential use of the property that the limit matrix $A_0J$ has no EVs on the imaginary axis. This property determined the specific boundary condition of type (3.4) posed at a far point in Subsections~3.1--3.2. Below we shall also consider another approach. Namely, we omit the assumption about the limit matrix $A_0J$ and in
what follows assume that for a sufficiently large $T_0$
$$
A(t,\lambda) J \ \ \mbox{has \ no \ EVs \ on \ the \ imaginary \ axis} \eqno(3.20)
$$
for $t \geq T_0$ and all the values of $\lambda$ under consideration.
Condition (3.20) is important for practical implementation of the technique described. To verify (3.20) for a wide range of problems under
consideration we shall need the following proposition (see \cite{Thes}).
\vspace{0.2in}
{\bf Proposition.} {\em Let} $A(t_0,\lambda)J$
{\em have pure imaginary EVs. Then} $0$
{\em belongs to the spectrum of the corresponding operator} $L_0(\lambda)$.
\vspace{0.2in}
{\bf Proof} Let us transfer condition (3.15) to the point $t_0$ and
denote the operator thus obtained in ${\cal L}_2(t_0,\infty)$
by $\tilde{L}_0$. The corresponding system
$$
Jy^{\prime} - A(t_0,\lambda)y = 0, \ \ t \in [t_0,\infty)
\eqno(3.21)
$$
has constant coefficients. The behaviour of its
solutions is prescribed by the spectrum of $A(t_0,\lambda)J$. Note
that if $A=A^*$ then the spectrum of $AJ$ is symmetric about the imaginary axis. Hence there are equally many EVs in the two open half-planes, and the total multiplicity of the pure imaginary EVs is even.
Consider the intersection of the $n$-dimensional manifold
of the solutions to (3.21) satisfying the corresponding condition at
$t_0$ and the subspace of those growing not faster than a polynomial of degree $2n-1$ as $t \rightarrow \infty$. According to our assumption and due to the quoted property of the spectrum of
$A(t_0,\lambda)J$,
the dimension of the latter is greater than $n$ and, therefore, the
intersection is non-trivial. Denote one of its elements (that is not
necessarily an EF of $\tilde{L}_0$) by $\tilde{y}(t)$; let
$\tilde{y}^*(t)\tilde{y}(t) = P(t)$. Consider a sequence of functions
$$
f_n(t) = \tilde{y}(t) \varphi_n(t),
\ \ \ \varphi_n(t) = \exp(-\frac{t}{n}).
$$
We have
$$
\|f_n\|^2 = \int_{t_0}^{\infty} P(t) \exp(-\frac{2t}{n}) dt
= \alpha_n
$$
(the integral obviously converges). Let us normalise the elements of the sequence:
$$
\tilde{f}_n = \frac{1}{\sqrt{\alpha_n}} f_n;
$$
we shall obtain
$$
\tilde{L}_0\tilde{f}_n = \frac{1}{\sqrt{\alpha_n}}
\tilde{L}_0(\tilde{y} \varphi_n) =
\frac{1}{\sqrt{\alpha_n}} J \tilde{y} \varphi_n^\prime =
-\frac{1}{n\sqrt{\alpha_n}} J \tilde{y} \varphi_n;
$$
$$
\|\tilde{L}_0\tilde{f}_n\|^2 =
\int_{t_0}^{\infty} \frac{(\varphi_n^\prime(t))^2}{\alpha_n}
\tilde{y}^*(t)\tilde{y}(t)\, dt
= \frac{1}{n^2\alpha_n} \int_{t_0}^{\infty} P(t) \varphi_n^2(t)\, dt =
\frac{1}{n^2} \|\tilde{f}_n\|^2 = \frac{1}{n^2} \rightarrow 0,
\ \ \ n \rightarrow \infty.
$$
Thus, for the operator $\tilde{L}_0$ we have constructed a Weyl sequence
$\tilde{f}_n$ satisfying
$$
\|\tilde{f}_n\| = 1 \ \mbox{for \ all} \ n, \ \ \
\|\tilde{L}_0\tilde{f}_n\| \rightarrow 0, \ \ n \rightarrow
\infty.
$$
This shows that $0$ is a point of the spectrum of
$\tilde{L}_0$ and, consequently, of $L_0$.
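The identity $\|\tilde{L}_0\tilde{f}_n\| = \frac{1}{n}$ behind this Weyl sequence can be checked numerically on a concrete model. The sketch below uses an assumed $2\times 2$ example of our own: $J$ the standard $2\times 2$ symplectic matrix, $A(t_0) = I$, so that $A(t_0)J = J$ has the pure imaginary EVs $\pm i$, and $\tilde{y}(t) = (\cos t, \sin t)^T$ is a bounded solution of $Jy' = y$ with $P(t) \equiv 1$.

```python
import numpy as np

def trapezoid(f, x):
    """Simple trapezoidal rule (kept explicit for self-containedness)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

# Model: y(t) = (cos t, sin t) solves J y' = y with J = [[0,1],[-1,0]]
# and A(t0) = I, so P(t) = y^*(t) y(t) = 1.  We form f_n = y * exp(-t/n)
# and compute the Weyl quotient ||L0 f_n|| / ||f_n||, predicted to be 1/n.
t0 = 1.0
t = np.linspace(t0, 60.0, 60001)   # quadrature grid truncating [t0, inf)
P = np.ones_like(t)                # cos^2 t + sin^2 t

def weyl_quotient(n):
    phi = np.exp(-t / n)
    alpha = trapezoid(P * phi**2, t)          # ||f_n||^2
    # L0 f_n = J y phi_n', and |J y phi_n'|^2 = (phi/n)^2 P since |J y| = |y|
    num = trapezoid(P * (phi / n)**2, t)      # ||L0 f_n||^2
    return np.sqrt(num / alpha)
```

Since the integrands differ by the exact factor $1/n^2$, the quotient equals $1/n$ up to rounding, independently of the quadrature grid.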
This proposition can be used as follows. Let the spectrum of the original
EVP (3.14),~(3.15) consist of a finite number of points in the given range
of the spectral parameter. Suppose $\lambda_1$ and $\lambda_2$ do not
belong to the spectrum. Then $0$ belongs to the resolvent set of the
operator $L(\lambda_1)$ and, therefore, to that of $L_0(\lambda_1)$,
since the operators are close to one another provided that $t_0$ is
sufficiently large. The proposition just proved then implies that for
these $\lambda_1$ and $t_0$ condition (3.20) is satisfied; the same
is true for $\lambda_2$. Thus, having assumed that the spectrum of
(3.14),~(3.15) in $(\lambda_1,\lambda_2)$ consists of isolated EVs,
we have shown that (3.20) holds for $A(t_0,\lambda)J$, the
limit matrix of the approximate problem, for the values of
$\lambda$ which are not EVs of (3.14),~(3.15). This is essential for
applying the method under consideration to the EVP studied.
Now we can pose a boundary condition as $t \rightarrow \infty$ as follows.
At $t=t_0$ we put
$$
y_2(t_0) = \mu_0 y_1(t_0) \eqno(3.22)
$$
as was suggested in Subsection~3.1, replacing the limit matrix $A_0$ by $A(t_0)$. The corresponding $\mu_0$ is calculated for the matrix $A(t_0)J$ instead of $A_0J$ exactly as was proposed in Subsection~3.3. When transferring condition (3.22) over the truncated interval $[0,t_0]$,
in some neighbourhood of $t_0$ we integrate the Cauchy problem
$$
\left\{ \begin{array}{l}
\mu^{\prime} + \mu A_{22} \mu + \mu A_{21} + A_{12}\mu +
A_{11} = 0 \\
\mu(t_0) = \mu_0
\end{array} \right. . \eqno(3.23)
$$
Let $\mu(t,t_0)$ denote the solution to this problem.
Suppose that the result of transferring the boundary condition from $t_0$ to $t$ for any fixed $t$ has a limit as $t_0 \rightarrow \infty$. Conditions sufficient for the existence of this limit were obtained in
\cite{Abr-Asl1}, see Theorems~1--4 there. For example, if $A'(t)$ is non-negative or non-positive definite, then the limit exists for all $t$. We are looking for
the values of $\lambda$ for which system (3.14) has a non-trivial solution $y(t)$ satisfying (3.15) and (3.22) as $t_0 \rightarrow \infty$.
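The right-to-left transfer of condition (3.22) via the Cauchy problem (3.23) can be illustrated on a scalar toy model. The coefficients below are our own assumed example, not the paper's shell system: with $A_{11}=-1$, $A_{12}=A_{21}=0$, $A_{22}=1$, equation (3.23) reduces to $\mu' = 1-\mu^2$, whose stationary point $\mu_0=-1$ is the one stable under integration from right to left.

```python
# Scalar toy model of the Riccati transfer (3.23), with assumed
# coefficients A11 = -1, A12 = A21 = 0, A22 = 1 (our own illustration):
# mu' + mu*A22*mu + mu*A21 + A12*mu + A11 = 0 becomes mu' = 1 - mu**2.
def transfer_mu(mu_at_t0, t0, t_end, steps=20000):
    """Integrate mu' = 1 - mu**2 from t0 down to t_end (right to left)
    with classical RK4 and return mu(t_end)."""
    f = lambda mu: 1.0 - mu * mu
    h = (t_end - t0) / steps        # h < 0: sweep from right to left
    mu = mu_at_t0
    for _ in range(steps):
        k1 = f(mu)
        k2 = f(mu + 0.5 * h * k1)
        k3 = f(mu + 0.5 * h * k2)
        k4 = f(mu + 0.5 * h * k3)
        mu += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return mu
```

Even a perturbed starting value at $t_0$ is attracted back to the stationary condition $\mu = -1$ during the right-to-left sweep, which is the kind of numerical stability of the transfer referred to in the text.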
We shall cite another comparison theorem
%\cite{Abr-Asl1})
[22, Th.~5]. It will play the same role as Theorem~8
in Subsection~3.2, that is, provide the required monotonicity of the
limit boundary condition. This will justify application of the
technique given in Section~1.
\begin{theorem}
Let the pair of functions $\tilde{A}$ and $\tilde{\tilde{A}}$
satisfy the general assumptions made in Subsection~3.4 in $[T_0, \infty)$ and for each of them let ${\displaystyle \lim_{t_0\rightarrow\infty} \mu(t,t_0)}$ exist. Denote the corresponding limits by
$\tilde{\mu}$ and $\tilde{\tilde{\mu}}$. Let $\tilde{\tilde{A}} \geq
\tilde{A}$. Then $\tilde{\tilde{\mu}} \geq \tilde{\mu}$.
\end{theorem}
Note that the transfer of condition (3.2) to a distant point $t_0$
and computation of an EF for large $t$ are numerically stable.
Consider the variational equation for (3.23):
$$
\eta^{\prime} - \eta\Lambda^* - \Lambda\eta = 0,
$$
where $\Lambda(t) = -A_{12}(t) - \mu(t)A_{22}(t)$, $\eta(t) = \mu(t) - \mu_0$.
Condition (3.22) is equivalent to the following:
$$
\left\{ \begin{array}{l}
\mu^{\prime}(t_0)=0; \
\mbox{the \ solution \ to \ (3.23) \ is \ locally} \\
\mbox{(in \ a \ neighbourhood \ of} \ t_0) \
\mbox{stable \ from \ right \ to \ left.}
\end{array} \right.
$$
As $t_0 \rightarrow \infty$ the solution tends to the stationary boundary condition which is stable when transferred from right to left at far points. The behaviour of the solutions to the transfer equations corresponds to `well posed' boundary conditions at singular points (see also \cite{Abr-B-Kon} in this regard).
Finally, we have to find the EVs and EFs for EVP (3.14),~(3.15),~(3.22) on $[0,t_0]$. The
dependence of the boundary
conditions on $\lambda$ (obtained for (3.22)
on the basis of Theorems~8,~11) is as specified in Section~1.
Thus, all the conditions of Section~1 are satisfied and
we are able to apply the method given in that section for calculating
the EVs and the corresponding EFs in $[0,t_0]$ for the problem with a spectral parameter involved non-linearly.
The described approach to the EVP on the truncated interval $[0,t_0]$
can also be regarded as replacing the original system by approximate system (3.17),
as we did in the previous subsection. The approximate problem is solved in
the same way in place of the original one. Theorem~10 then provides
the convergence of the approximate EVs thus obtained to
the EVs of the exact problem as $t_0 \rightarrow \infty$.
%Remark that
%if the solution to (3.17) satisfies (3.20) it is equivalent to the
%fact that this solution $y \in {\cal L}_2$.
We have considered the half-line here. Obviously, exactly the same approach
is suitable for the interval $(-\infty, \infty)$. Note that many other
cases are reduced to the situation studied here by a proper change of the
independent variable.
\section{Numerical results}
\subsection{Hamiltonian system describing free vibrations
of a shell of revolution}
Consider a thin elastic shell given in the cylindrical coordinates
$r, \varphi, x$ by
$$
r/d = F(x/l), \ \ \ 0 \leq x \leq l,
$$
where $l$ and $d$ are the length and the maximal diameter of the shell,
respectively. A PDE system corresponding to free
vibrations of a shell of revolution permits
separation of the variables (see \cite{GLT}).
The ODE system quoted in \cite{GLT} refers to the $m$-th vibration
harmonic for an arbitrary number of waves $m$ along the parallel.
This eighth-order ODE system can be reduced to the canonical form
(cf. \cite{Pri}):
$$
J y'= A(z, \Omega)y, \ \ \ 0 \leq z \leq 1. \eqno(4.1)
$$
Here $z=x/l$, $\ ^\prime$ denotes differentiation with respect to $z$;
$y: [0,1] \rightarrow \R^8$, $A = A^T: [0,1] \rightarrow \R^{8
\times 8}$ for a fixed $\Omega$.
Let us quote the explicit form of $A$ obtained in \cite{Pri}.
The following notations will be used for
the elements of the blocks of $A$:
$$
A_{11} = \| t_{ij} \|, \ \ A_{12} = \| s_{ij} \|, \ \
i,j=1,\ldots,4, \ \
A_{22} = {\rm diag} (r_{11}, r_{22}, 0, r_{44}).
$$
Below all the non-zero elements of $A$ are given.
$$
s_{11} = s_{44} = -\frac{\nu F'}{F}, \ s_{12} =
-\frac{\nu m A}{\varepsilon F}, \ s_{13} = \frac{A}{\varepsilon}
\left[\frac{1}{R_1} + \frac{\nu}{R_2}\right],
$$
$$
s_{21} = \frac{m A}{\varepsilon F}, \ s_{22} = \frac{F'}{F},
\ s_{23} = -\frac{m \varepsilon_1^2 F'}{3 R_2 F^2}, \
s_{24} = \frac{m \varepsilon_1^2 A}{3 R_2 \varepsilon F},
$$
$$
s_{31} = -\frac{A}{\varepsilon R_1}, \
s_{34} = \frac{A}{\varepsilon}, \
s_{42} = -\frac{\nu m A}{\varepsilon F R_2}, \
s_{43} = \frac{\nu m^2 A}{\varepsilon F^2},
$$
$$
r_{11} = \frac{(1 - \nu^2) A}{\varepsilon F},
\ r_{22} = \frac{2(1 + \nu) A}{\varepsilon F},
\ r_{44} = \frac{12(1 - \nu^2) A}{\varepsilon \varepsilon_1^2 F},
$$
$$
t_{11} = -\frac{R_2}{\varepsilon} \left[(\frac{\varepsilon F'}{R_2})^2
+ \frac{\varepsilon_3}{(FR_2)^2} - \Omega^2\right], \
t_{12} = -\frac{m F'}{F}, \
$$
$$
t_{22} = -\frac{R_2}{\varepsilon} \left[(\frac{m}{F})^2
+ \frac{(m \varepsilon_1)^2}{12(FR_2)^2} - \Omega^2\right], \
t_{13} = \frac{F'}{R_2} \left[1 -
\frac{\varepsilon_3}{F^2}\right], \
$$
$$
t_{23} = \frac{m A}{\varepsilon R_2} \left[1 +
\frac{(m\varepsilon_1)^2}{12F^2}\right], \
t_{33} = -\frac{R_2}{\varepsilon} \left[\frac{1}{R_2^2} +
\frac{m^4 \varepsilon_1^2}{12F^4} +
\frac{\varepsilon^2 \varepsilon_3 (F')^2}{(FR_2)^2} -
\Omega^2\right], \
$$
$$
t_{14} = \frac{\varepsilon_3 A}{\varepsilon F R_2}, \
t_{24} = -\frac{m \varepsilon_1^2 F'}{12 FR_2}, \
t_{34} = \frac{(m \varepsilon_1)^2 F'}{12 F^2}
\left(1+\frac{2}{1+\nu}\right),
$$
$$
t_{44} = -\frac{\varepsilon_1^2 R_2}{12 \varepsilon}
\left[(\frac{\varepsilon F'}{R_2})^2 +
\frac{12 \varepsilon_3}{\varepsilon_1^2 F^2}\right].
$$
Here the shell thickness $h$ is assumed to be a small parameter;
$m$ is the number of waves along the parallel, $m > 0$
(non-axisymmetric vibrations are considered);
$\varepsilon = d/l$, $\varepsilon_1 = h/d$;
$\nu$ is the Poisson ratio. We denote
$A = \sqrt{1+(\varepsilon
F')^2}$, $R_1 = -\frac{A^3}{\varepsilon^2 F^{\prime\prime}}$, $R_2 =
AF$, $\varepsilon_3 = \frac{(m\varepsilon_1)^2}{6(1+\nu)}$.
The spectral parameter $\Omega$ equals the frequency of free
vibrations up to a constant factor: $\Omega = \frac{\omega}{c_p}
d$, where $c_p$ is the velocity of sound in the shell's material.
The parameter $\lambda$ used throughout the paper is given by
$$
\lambda = \Omega^2/d^2.
$$
The numerical results to be cited below are given in terms of
$\Omega$.
Different self-adjoint boundary conditions at $x=0, \ x=l$
correspond to the clamped, free
and hinge-supported edges of the shell.
Along with (4.1), which
contains the shell thickness $h$ (the so-called moment system), the
so-called membrane system is considered, obtained from (4.1) by
passing to the limit as $h \rightarrow 0$. The latter is a
fourth-order system often used in shell theory for studying
certain properties of the corresponding EVP; it is of the same form as
(4.1); the corresponding boundary conditions are also naturally obtained
from those for (4.1) as $h \rightarrow 0$.
Both systems mentioned, moment and membrane, are proved to satisfy the conditions accepted in
Sections~1,~2. Namely, their matrices depend monotonically on $\Omega$.
The lower right block of the matrix in (4.1) is degenerate, and it is
easily shown that condition ($\star$) of Section~2 is satisfied for all values of $x$ (in this case $j_0 =
1$), and, therefore, Theorem~6 is applicable. Thus, the method proposed in Section~1 can be applied to computing the fundamental frequencies and the corresponding vibration modes of shells of revolution.
The method has been used for calculating the
frequencies of a long cylinder and various cones; some of those results will be discussed below. Singular EVPs corresponding to closed shells are also studied numerically with the use of the results of Section~3. In particular, the results of calculating the fundamental frequencies of a prolate spheroid will be given below and compared to those obtained by another approach.
We note certain peculiarities of moment system (4.1) due to its
dependence on the small parameter $h$. The matrix $A$ contains elements
of order $h^{-2}$, which causes certain difficulties when implementing
the method for very thin shells. To improve the
properties of system (4.1), new variables were introduced by means of
a proper scaling change (see \cite{Thes} for the corresponding
formulae). As a result of this scaling one gets a system of the same
form as (4.1) with a new matrix containing elements of
order $h^{-1}$, which, obviously, is better conditioned than the
original one. Thus, we apply the method to the transformed system
and compare the results for different values of $h$ down to
$h=10^{-3}$. As we diminish the thickness, calculations slow down,
mostly because we have to take small integration steps when
integrating the Cauchy problems and make the transformation of type
(1.8) when transferring the boundary conditions. The corresponding
membrane problem, which is much simpler for numerical investigation, is
also solved. The numerical results are discussed in the remaining
subsections (see \cite{Thes} in this regard).
\subsection{Computing the frequencies of a cylindrical shell}
Consider free vibrations of a cylinder of length $l=10$ and
diameter $d=1$ with free edges. System (4.1) is studied for different
values of the thickness $h$
under the proper boundary conditions
given by
$$
y_k = 0, \ \ \ \ k=5,\ldots,8.
$$
The limit case $h=0$ is also considered.
The moment and membrane systems corresponding to the cylinder
have constant coefficients. This sample problem enables
us to study various characteristic features of the shell spectrum's
behaviour typical of an arbitrary shell. Let us specify some of them.
The membrane problem is known to have a continuous spectrum along with
isolated EVs (see, for
example, \cite{GLT}). For the cylindrical shell under consideration the continuous spectrum consists
of one point, $\Omega_0=2$, which is at the same time an accumulation point of the spectrum. The interval $[0,\Omega_0)$ is called the regular degeneracy
zone (RDZ). Let us fix a small $\varepsilon>0$. If the membrane problem has
$k$ EVs in the range $[0, \Omega_0 - \varepsilon)$, then for sufficiently
small values of the thickness $h$ the corresponding moment system also has
exactly $k$ EVs in the same range of the spectral parameter.
The EVs lying in the RDZ can be considered as small perturbations of the
corresponding EVs of the membrane problem, asymptotic formulae in terms of
$h$ being valid. We do not compare our numerical results to those obtained
with the use of the asymptotic formulae. Various results of this kind can be found in \cite{A-A-L} where vibrations of conic shells were studied.
Table~1 contains the results of computing the
frequencies of the two problems for the number of
waves $m=1$. The frequencies of the thin cylinder with $h=0.01$
lying in the RDZ prove to be close to those of the infinitely thin
cylinder, and the numerical results are in good agreement with the theory
developed in \cite{GLT}. There are 46 EVs of the membrane problem
lying in the RDZ for the given shell thickness.
In Table~2 the frequencies of the membrane problem
are given for different numbers of waves along a
parallel. The fact that an EV with a given index decreases as
$m$ grows is explained in \cite{GLT}, where it is proved that the union
of the spectra for different
values of $m$ tends to fill the interval $[0,\Omega_0]$ as
$m \rightarrow \infty$.
The results of calculations in the zone $\Omega > \Omega_0$ for both
membrane and moment problems are given in Tables~3 and 4.
(See also \cite{Thes}, where the vibration modes
of the cylinder with different values of $h$ are also computed.)
Note that the smaller the value of $h$, the more EVs occur in the RDZ of
the problem (see the table below, where $N([0,2])$ denotes the number
of EVs in the RDZ).
\vspace{0.1in}
\begin{tabular}{|c|c|c|c|}
\hline
$h$ & 0.01 & 0.005 & 0.001 \\ \hline
$N([0,2])$ & 46 & 56 & 92 \\ \hline
\end{tabular}
\vspace{0.1in}
The EVs of the membrane problem ($h=0$) accumulate at the point
$\Omega_0 = 2$ from the left (see the table below for the number of
EVs in $[0,\Omega]$ as $\Omega$ tends to $\Omega_0$).
\vspace{0.1in}
\begin{tabular}{|c|c|c|c|c|}
\hline
$\Omega$ & 1.9500 & 1.9900 & 1.9990 & 1.9995 \\ \hline
$N([0,\Omega])$ & 48 & 99 & 297 & 418 \\ \hline
\end{tabular}
\vspace{0.1in}
\subsection
{Singular EVP for a prolate spheroid}
The method proposed in Section~3 for replacing a singular EVP by a
corresponding problem on a truncated interval was used for finding
the frequencies of axisymmetric vibrations of a thin elastic
shell of revolution. For a closed shell one obtains an EVP
corresponding to a Hamiltonian system with a singularity
at $t=0$ (which can be reduced to the system considered
in Section~3 on $[0,\infty)$).
This problem has been studied in \cite{A-K-P} for
certain closed shells, namely, for a prolate spheroid and a cylinder
with hemispherical ends. In particular, the authors investigated
how to specify boundary conditions at a singular point and gave the results
of calculations for the mentioned shells. We shall
illustrate our method for specifying the conditions at a singular
point and compare the results of calculating the frequencies of a
prolate spheroid with those of \cite{A-K-P}. First we take the same
system as in \cite{A-K-P} with the singularity at $t=0$.
Bearing in mind that this system is equivalent to (3.14)
considered on the half-line, we pass to approximate system (3.17).
(Note that in this case condition (3.20) is not satisfied for the
limit matrix, though it holds for the approximate problem as was
discussed in the previous section.) For this problem we find the
matrix $\mu_0$, which assigns the boundary condition at a point $t_0$
close to the singularity, by solving system (3.7) as was suggested in
Subsection 3.3. Once we have reduced the original problem to the
appropriate non-singular one in this way, we calculate the
frequencies using the method proposed in Section~1. The results given
in Table~5 illustrate that the EVs of the approximate problems
converge to some limits as $t_0 \rightarrow 0$.
The second column of Table~5 contains the results obtained in \cite{A-K-P}
(see Table~5 there); the problem data, \ie, the shell geometry and
thickness, are also taken from that paper.
We are primarily interested in the convergence of
our results as $t_0 \rightarrow 0$ and, secondly, in their closeness
to the already known ones. The EVs obtained by the two approaches
are clearly different, but qualitatively of the same order. The
number of EVs in the RDZ of the problem (10
frequencies were found in the given region for the shell thickness $h = 0.01$)
is the same in both cases. We conclude that the two problems give
similar, consistent results.
%\pagebreak
\vspace{.2in}
Table~1. $m=1$, $\Omega < 2$
\vspace{0.2in}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
N & $h=0$ & $h=0.01$ & \ \ \ &
N & $h=0$ & $h=0.01$ \\ \hline
1 & 0.0 & 0.0 & \ \ \ &
12 & 1.2261 & 1.2261 \\ \hline
2 & 0.0 & 0.0 & \ \ \ &
13 & 1.2930 & 1.2932 \\ \hline
3 & 0.0740 & 0.0744 & \ \ \ &
14 & 1.2974 & 1.2976 \\ \hline
4 & 0.1856 & 0.1857 & \ \ \ &
15 & 1.3929 & 1.3932 \\ \hline
5 & 0.3233 & 0.3233 & \ \ \ &
16 & 1.3937 & 1.3941 \\ \hline
6 & 0.4713 & 0.4714 & \ \ \ &
17 & 1.4855 & 1.4858 \\ \hline
7 & 0.6206 & 0.6206 & \ \ \ &
18 & 1.5010 & 1.5013 \\ \hline
8 & 0.7663 & 0.7663 & \ \ \ &
19 & 1.5700 & 1.5705 \\ \hline
9 & 0.9038 & 0.9039 & \ \ \ &
20 & 1.5896 & 1.5907 \\ \hline
10 & 1.0298 & 1.0298 & \ \ \ &
& $\ldots$ & $\ldots$ \\ \hline
11 & 1.1416 & 1.1416 & \ \ \ &
46 & 1.9429 & 1.9995 \\ \hline
\end{tabular}
\vspace{0.2in}
Table~2. $h=0$, $m > 1$
\vspace{0.2in}
\begin{tabular}{|c|c|c|c|c|}
\hline
N & $m=2$ & $m=3$ & $m=4$ & $m=5$ \\ \hline
1 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline
2 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline
3 & 0.0248 & 0.0118 & 0.0068 & 0.0044 \\ \hline
4 & 0.0667 & 0.0321 & 0.0185 & 0.0120 \\ \hline
5 & 0.1260 & 0.0620 & 0.0361 & 0.0235 \\ \hline
6 & 0.1990 & 0.1001 & 0.0589 & 0.0385 \\ \hline
7 & 0.2813 & 0.1457 & 0.0867 & 0.0570 \\ \hline
8 & 0.3704 & 0.1975 & 0.1190 & 0.0787 \\ \hline
9 & 0.4628 & 0.2543 & 0.1554 & 0.1034 \\ \hline
10 & 0.5564 & 0.3148 & 0.1952 & 0.1309 \\ \hline
\end{tabular}
\vspace{0.2in}
Table~3. $h=0$, $m=1$, $\Omega > 2$
\vspace{0.2in}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
N & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline
$\Omega$ & 2.0404 & 2.1997 & 2.3521 & 2.4954
& 2.6297 & 2.7547 \\ \hline
\end{tabular}
%\pagebreak
\vspace{0.2in}
Table~4. $m=1$, $\Omega > 2$, $h=0.01$
\vspace{0.2in}
\begin{tabular}{|c|c|c|c|c|}
\hline
N & $\Omega$ & \ \ \ &
N & $\Omega$ \\ \hline
47 & 2.0028 & \ \ \ &
57 & 2.0888 \\ \hline
48 & 2.0110 & \ \ \ &
58 & 2.1002 \\ \hline
49 & 2.0198 & \ \ \ &
59 & 2.1121 \\ \hline
50 & 2.0281 & \ \ \ &
60 & 2.1246 \\ \hline
51 & 2.0348 & \ \ \ &
61 & 2.1376 \\ \hline
52 & 2.0377 & \ \ \ &
62 & 2.1511 \\ \hline
53 & 2.0474 & \ \ \ &
63 & 2.1652 \\ \hline
54 & 2.0570 & \ \ \ &
64 & 2.1799 \\ \hline
55 & 2.0672 & \ \ \ &
65 & 2.1945 \\ \hline
56 & 2.0777 & \ \ \ &
66 & 2.1952 \\ \hline
\end{tabular}
\vspace{0.2in}
Table~5. Frequencies of a prolate spheroid for decreasing $t_0$
\vspace{0.2in}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
N & \cite{A-K-P} & $t_0=10^{-7}$ & $t_0=10^{-8}$& $t_0=10^{-9}$&
$t_0=10^{-10}$& $t_0=10^{-11}$ \\ \hline
1 & 0.0 & 0.0& & & & \\ \hline
2 & 1.338 & 1.732& 1.735& 1.735& 1.736& 1.736 \\ \hline
3 & 2.457 & 2.808 & 2.811& 2.812& 2.812& 2.812 \\ \hline
4 & 3.547 & 3.870 & 3.872& 3.873& 3.874& 3.874 \\ \hline
5 & 4.599 & 4.896 & 4.898& 4.899& 4.900& 4.900 \\ \hline
6 & 5.582 & 5.839 & 5.841& 5.842& 5.842& 5.842 \\ \hline
7 & 6.403 & 6.581 & 6.582& 6.582& 6.583& 6.583 \\ \hline
8 & 6.897 & 6.971 & 6.972 & 6.972 & 6.972 & 6.972 \\ \hline
9 & 7.080 & 7.000 & 7.000 & 7.000 & 7.001 & 7.001 \\ \hline
10 & 7.144 & 7.151 & 7.151 & 7.151 & 7.151 & 7.151 \\ \hline
11 & 7.182 & 7.186 & 7.188 & 7.188 & 7.188 & 7.188 \\ \hline
\end{tabular}
\pagebreak
\begin{thebibliography}{99}
%
\bibitem{1}
A. A. Abramov,
{\em A method of finding the eigenvalues and
eigenfunctions of a self-conjugate differential problem},
Comp. Maths Math. Phys., 31 (1991), pp.~27--36.
%
\bibitem{Abr1}
A. A. Abramov, {\em A version of the pivotal condensation method},
Zh. Vychisl. Mat. Mat. Fiz., 1 (1961), pp.~349--351 (Russian).
%
\bibitem{Abr2}
A. A. Abramov, {\em On the transfer of boundary conditions for a
system of linear ordinary differential equations (a version of
pivotal condensation)}, Zh. Vychisl. Mat. Mat. Fiz.,
1 (1961), pp.~542--545 (Russian).
%
\bibitem{Lid-N}
V. B. Lidskii and M. G. Neigauz, {\em
On the pivotal condensation method in the case of a self-adjoint
second-order system}, Zh. Vychisl. Mat. Mat. Fiz.,
2 (1962), pp.~161--165 (Russian).
%
\bibitem{K-K-P}
D. I. Kitoroage, N.~B.~Konyukhova and B.~S.~Pariiskii,
{\em A Method of Trigonometrical Matrices for Solving a System
of Hyperradial Schr\"{o}dinger Equations}, VTs Akad. Nauk SSSR,
Communications on Applied Math., Moscow, 1989 (Russian).
%
\bibitem{Abr-B-Kon}
A.~A. Abramov, K.~Balla and N.~B.~Konyukhova,
{\em Stable initial manifolds and
singular boundary value problems for systems of ordinary differential
equations}, Computational Mathematics, Banach Center Publ., 13 (1984),
pp.~319--351.
%
\bibitem{Asl-95}
A. A. Aslanyan,
{\em A method of solving the self-conjugate eigenvalue problem for
large values of the spectral parameter},
Comp. Maths Math. Phys., 35 (1995), pp.~1331--1339.
%
\bibitem{Lid}
V. B. Lidskii,
{\em Oscillation theorems for canonical systems of differential
equations}, Dokl. Akad. Nauk SSSR, 102 (1955), pp.~877--880
(Russian).
%
\bibitem{Atk}
F. V. Atkinson, {\em Discrete and Continuous Boundary Problems},
Academic Press, London, 1964.
%
\bibitem{Lanc}
P. Lancaster, {\em Theory of Matrices}, Academic Press,
New York, 1978.
%
\bibitem{Roy}
H. L. Royden, {\em Comparison theorems for the matrix Riccati
equation}, Commun. Pure Appl. Math., 41 (1988), pp.~739--746.
%
\bibitem{A-D-K}
A.~A. Abramov, V. V. Ditkin, N.~B.~Konyukhova {\em et al.},
{\em Evaluation of the eigenvalues and eigenfunctions of ordinary
differential equations with singularities},
Comp. Maths Math. Phys., 20 (1980), pp.~63--81.
%
\bibitem{Yuk}
L.~F. Yukhno, {\em Numerical solution of the non-linear spectral
problem for symmetric matrices}, Comp. Maths Math. Phys.,
27 (1987), pp.~26--30.
%
\bibitem{Abr-Asl}
A. A. Abramov and A. A. Aslanyan,
{\em A generalization of the method for solving the eigenvalue problem for
Hamiltonian systems}, Comp. Maths Math. Phys., 34 (1994), pp.~1629--1633.
%
\bibitem{Asl}
A. A. Aslanyan,
{\em The investigation of certain properties of the self-conjugate
eigenvalue problem}, Comp. Maths Math. Phys., 36 (1996),
pp.~1567--1571.
%
\bibitem{Cod-Lev}
E. A. Coddington and N. Levinson, {\em Theory of Ordinary Differential
Equations}, McGraw-Hill, New York, 1955.
%
\bibitem{Kon-Pak}
N.~B.~Konyukhova and T.~V.~Pak,
{\em Transfer of admissible boundary conditions
from infinity for systems of linear ordinary differential equations with a
large parameter}, Comp. Maths Math. Phys., 27 (1987),
pp.~847--866.
%
\bibitem{Abr-Asl-B}
A. A. Abramov, A. A. Aslanyan and K. Balla,
{\em A comparison of the solutions of the sweep equations
for Hamiltonian linear systems, in the case where the
boundary conditions are transferred from infinity},
Comp. Maths Math. Phys., 35 (1995), pp.~1453--1460.
%
\bibitem{Abr-84}
A. A. Abramov, {\em Numerical solution of certain algebraic
problems arising in the theory of stability},
Comp. Maths Math. Phys., 24 (1984), pp.~1--6.
%
\bibitem{Kato}
T. Kato, {\em Perturbation Theory for Linear Operators},
Springer, Berlin, 1966.
%
\bibitem{Thes}
A. A. Aslanyan, {\em Investigating and Solving Selfadjoint
Boundary Value Problems for Linear ODE Systems},
PhD thesis, Moscow, 1996 (Russian).
%
\bibitem{Abr-Asl1}
A. A. Abramov and A. A. Aslanyan,
{\em On a singular boundary value problem for linear
Hamiltonian systems of ordinary differential equations},
Comp. Maths Math. Phys., 36 (1996), pp.~1017--1026.
%
\bibitem{GLT}
A.~L.~Gol'denveizer, V.~B.~Lidskii and P.~E.~Tovstik,
{\em Free Oscillations of Thin Elastic Shells},
Nauka, Moscow, 1979 (Russian).
%
\bibitem{Pri}
V. Yu. Prikhod'ko,
{\em Sound Radiation and Scattering by Closed Prolate Shells
of Revolution}, Rumb, Leningrad, 1990 (Russian).
%
\bibitem{A-A-L}
A. A. Aslanyan, A. G. Aslanyan and V. B. Lidskii,
{\em Asymptotic formulae for the frequencies of
axisymmetric oscillations of a shell of revolution},
Comp. Maths Math. Phys., 38 (1998), pp.~288--299.
%
\bibitem{A-K-P}
A.~A. Abramov, N.~B.~Konyukhova, B.~S.~Pariiskii {\em et al.},
{\em Free Axisymmetric Oscillations of Closed Elastic Thin Shells
of Revolution}, VTs Akad. Nauk SSSR,
Communications on Applied Math., Moscow, 1991 (Russian).
%
\end{thebibliography}
\end{document}