x$ and the transformation (7) is non-singular.
By the assumption of the theorem, the off-diagonal terms
are absolutely integrable. The diagonal terms are purely imaginary and
hence
Levinson's theorem is applicable. The asymptotic behavior of the solutions (10)
follows directly from the explicit solution of equation (8) with the
diagonal terms omitted, followed by an application of the
transformations we applied to the original system of equations. $\Box$
To complete the proof of Theorem 1.3, we need to construct the function
$q(x,\lambda)$ verifying (9) and conditions given in Theorem 2.2 for a.e.
$\lambda \in S.$ The main problem is that if we try to solve the equation
\begin{equation}
q'(x,\lambda) + \frac{i}{2\Im(\theta \overline{\theta})}V(x)
\overline{\theta}^{2}(x,\lambda)\exp (2ip(x,\lambda)) +
\frac{i}{2\Im(\theta \overline{\theta})}V(x) \theta^{2}(x,\lambda)
\exp(-2ip(x,\lambda))q^{2}(x,\lambda) =0
\end{equation}
by iteration, we obtain expressions involving multilinear integral
operators of a certain type. We need to show that these expressions
converge for a.e. $\lambda \in S$ in order to ensure $q(x,\lambda)
\stackrel{x \rightarrow \infty}{\longrightarrow} 0$ for a.e.
$\lambda,$ and to make sure that (9) is satisfied after some
number of iterations.
The first approximation to the solution would be
\[ q^{(0)}(x,\lambda) = \frac{i}{2\Im(\theta \overline{\theta})}
\int\limits_{x}^{\infty}V(t)\overline{\theta}^{2}(t,\lambda)
\exp (2ip(t,\lambda))\,dt. \]
Again, we have to justify this formula by proving that the
conditionally convergent integral is well-defined for a.e. $\lambda.$ This is
relatively simple and has already been done in \cite{Kis1}.
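Concretely, the iteration of (11) that we have in mind starts from the zero function, whose first step reproduces $q^{(0)}$ above, and repeats the same quadrature (a formal sketch only; that the iterates converge for a.e. $\lambda$ is exactly what has to be proved):

```latex
\[ q^{(j+1)}(x,\lambda) = \frac{i}{2\Im(\theta \overline{\theta})}
\int\limits_{x}^{\infty} V(t)\left[ \overline{\theta}^{2}(t,\lambda)
\exp(2ip(t,\lambda)) + \theta^{2}(t,\lambda)\exp(-2ip(t,\lambda))
\bigl(q^{(j)}(t,\lambda)\bigr)^{2} \right] dt. \]
```

Expanding the iterates multilinearly in $V$ produces precisely the multilinear integral expressions whose a.e. convergence is studied in the following sections.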
In the next section, we formulate the main result from \cite{Kis1} that we
will use and make a few comments.
\newpage
\begin{center}
\large \bf 3. A.e. convergence for integral operators.
\end{center}
Let the operator $K$ be defined on the measurable bounded functions $f$
of compact support by
\begin{equation}
(Kf)(\lambda) = \int\limits_{0}^{\infty} k(\lambda,x) f(x)\, dx,
\end{equation}
where $k(\lambda,x)$ is a measurable and bounded function on
$I \times R^{+}.$
To study the a.e. convergence of the integral defining $Kf(\lambda)$
on functions from $L^{p},$ we study the corresponding maximal function.
Denote by $M_{K}f(\lambda)$ the maximal function
\begin{equation}
M_{K}f(\lambda) = {\rm sup}_{N} \left|
\int\limits_{0}^{N}k(\lambda,x)f(x)\,dx
\right|.
\end{equation}
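For orientation, the maximal function (13) is easy to approximate numerically: discretize $[0,\infty),$ form cumulative Riemann sums of $k(\lambda,x)f(x),$ and take the largest modulus. The oscillatory kernel $e^{i\lambda x}$ and the indicator test function below are hypothetical choices made only for this illustration.

```python
import numpy as np

def maximal_function(lam, f, xs, kernel):
    """Approximate M_K f(lam) = sup_N |int_0^N k(lam, x) f(x) dx|
    by taking the maximum modulus of cumulative Riemann sums."""
    dx = xs[1] - xs[0]
    integrand = kernel(lam, xs) * f(xs)
    partials = np.cumsum(integrand) * dx   # int_0^{x_N} for every grid point
    return np.max(np.abs(partials))

# hypothetical oscillatory kernel and a compactly supported test function
kernel = lambda lam, x: np.exp(1j * lam * x)
f = lambda x: np.where(x < 1.0, 1.0, 0.0)  # indicator of [0, 1)

xs = np.linspace(0.0, 5.0, 50001)
m_val = maximal_function(2.0, f, xs, kernel)
# here M_K f(2) = sup_{N <= 1} |sin N| = sin(1), up to discretization error
```

For $f=\chi_{[0,1)}$ one has $\int_0^N e^{2ix}\,dx = e^{iN}\sin N$ for $N\leq 1,$ so the supremum equals $\sin 1 \approx 0.8415,$ which the grid computation reproduces.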
In \cite{Kis1}, the following was proved: \\
\noindent \bf Theorem 3.1. \it Suppose that an operator $K,$ defined by \rm
(12) \it with a bounded kernel $k(\lambda,x),$ satisfies the norm estimate
$\| Kf\|_{2} \leq C_{2} \|f\|_{2}$ for all bounded functions of compact
support. Then for every $q>2$ and $p$ such that $q^{-1}+p^{-1}=1,$
we have the following estimate for the maximal function:
\begin{equation}
\|M_{K}f\|_{q} \leq C_{q} \|f\|_{p} \;\: for \;\: every \;\: f \in L_{p}.
\end{equation}
As a consequence, the integral
\[ \int^{N}_{0} k(\lambda,x)f(x)\,dx \]
converges
as $N \rightarrow \infty$ for almost every value of
$\lambda$ if $f \in L_{p}.$ \\
\rm For the proof of Theorem 3.1 we refer to \cite{Kis1}; we also
sketch in the appendix a proof of a very similar result
(Lemma A.3). The proof of Theorem 3.1 may be obtained from that
proof by a simple modification.
Finally, we note that it is a general principle for singular
integral operators of this type that a pair of $L^{p}$-type
norm estimates, $L^{p_{1}}-L^{q_{1}}$ and $L^{p_{2}}-L^{q_{2}},$ valid
on functions of compact support, implies
estimates of the type (14) (though in general possibly involving
Lorentz spaces)
for the corresponding maximal function (and hence
a.e. convergence) for all
intermediate $L^{p},$ $p_{1}< p < p_{2}.$ See \cite{Kis2} for more
details.
\begin{center}
\large \bf 4. Norm estimates for multilinear transforms.
\end{center}
In this section, we study questions related to norm estimates
for certain multilinear transforms. The results of this and the next section
will enable us to carry out the plan sketched at the end of Section 2 and
find the function $q(x,\lambda)$ with the needed properties
for a.e. $\lambda$ by iteration of (11).
Suppose that the functions $k_{i}(\lambda, x),$ $i=1,...n$ are defined on
$I \times R^{+},$ where $I$ is some measurable set in $R.$ We assume that
the
operators
\[ (K_{i}f)(\lambda) = \int\limits_{R^{+}} k_{i}(\lambda, x)f(x)\,dx \]
satisfy the bounds
\begin{equation}
\|K_{i}f\|_{L^{q}(I, d\lambda)} \leq C_{i}\|f\|_{L^{p}(R^{+}, dx)}
\end{equation}
on functions of compact support
for some $2>p \geq 1$ and $q>p.$
Let $n \geq 2.$ Let $A$ be
any set of ordered pairs $\alpha = (i_{\alpha}, i_{\alpha'}),$
with $1 \leq i_{\alpha}, i_{\alpha'} \leq n.$ Let $|A|$ denote
the cardinality of $A.$ By $\chi_{E}(x)$ we denote
a characteristic function which is equal to one when $x \in E$ and
is zero otherwise.
Consider the multilinear operator $T_{n}$ given by
\begin{equation}
T_{n}(f_{1},...f_{n})(\lambda) = \int\limits_{R^{n}} \prod\limits_{j=1}^{n}
f_{j}(x_{j})k_{j}(\lambda,x_{j}) \prod\limits_{\alpha \in A}
\chi_{R^{+}}(x_{i_{\alpha}}-x_{i_{\alpha'}})\,dx,
\end{equation}
$x=(x_{1},...x_{n}).$ Notice that if there were no ``diagonal''
characteristic functions,
the expression (16) would decompose into a product of one-dimensional
integrals, and the analysis would become trivial. \\
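To see the coupling concretely, here is a grid-level sketch of (16) for $n=2$ and $A=\{(2,1)\},$ so that the single factor $\chi_{R^{+}}(x_{2}-x_{1})$ restricts integration to $x_{1}<x_{2}$ and the inner integral must be recomputed for every $x_{2}.$ The kernels $e^{i\lambda x},$ $e^{2i\lambda x}$ and the indicator functions are hypothetical stand-ins.

```python
import numpy as np

def T2(lam, f1, f2, xs):
    """Grid sketch of (16) with n = 2, A = {(2,1)}: the factor
    chi_{R+}(x2 - x1) couples the two integrations."""
    dx = xs[1] - xs[0]
    g1 = np.exp(1j * lam * xs) * f1(xs)   # k_1(lam, x) f_1(x)
    g2 = np.exp(2j * lam * xs) * f2(xs)   # k_2(lam, x) f_2(x)
    inner = np.cumsum(g1) * dx            # integral over x1 <= x2 (grid version)
    return np.sum(g2 * inner) * dx        # outer integral over x2

f1 = lambda x: np.where(x < 1.0, 1.0, 0.0)  # indicator of [0, 1)
f2 = lambda x: np.where(x < 1.0, 1.0, 0.0)
xs = np.linspace(0.0, 2.0, 20001)
val = T2(0.0, f1, f2, xs)
# at lam = 0 this is the area of the triangle {0 < x1 < x2 < 1}, i.e. 1/2
```

Without the characteristic function the two integrations would decouple into a product of one-dimensional integrals, exactly as remarked above.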
\noindent \it Remark. \rm We do not rule out the possibility
that some of the characteristic functions in (16) are contradictory
and the whole expression is zero. \\
Our goal in this section is to prove the following
property: \\
\bf Theorem 4.1. \it Suppose that the multilinear operator
$T_{n}$ is given by (16) with kernels $k_{j}(\lambda,x_{j})$
satisfying (15).
Then for any functions $f_{i} \in L^{p}(R^{+},dx),$ $i=1,...n,$
such that the integral (16)
converges absolutely for a.e. $\lambda,$ we have
\[ \|T_{n}(f_{1},...f_{n})\|_{s_{n}} \leq C_{n}\prod_{i=1}^{n}
\|f_{i}\|_{p}, \]
where $s_{n}^{-1} = nq^{-1}.$ The constant $C_{n}$ depends only on $n$ and
constants in the norm bounds (15) for operators $K_{i}.$ \\
\noindent \it Remark. \rm Notice that the conclusion of Theorem 4.1 holds
in particular when $s_{n}<1.$ \\
\noindent \rm By assumption, the value of $T_{n}(f_{1},...f_{n})(\lambda)=
g(\lambda)$ is well-defined for a.e. $\lambda$ by the absolutely convergent
integral.
Our strategy will be to divide the domain of integration into
disjoint pieces
and represent the function $g$ as a sum of terms coming from
integration over
these disjoint pieces, formally:
\[ g(\lambda)=\sum_{i=1}^{\infty}g_{i}(\lambda).\]
Because of the absolute convergence, the partial sums
$\sum_{i=1}^{N}g_{i}(\lambda)$ converge to $g(\lambda)$ for a.e. $\lambda$
as $N \rightarrow
\infty.$ We show, choosing the functions $g_{i}$ in a convenient way, that
the sum also converges absolutely in the
appropriate space $L_{s_{n}},$ thus proving Theorem 4.1.
In the proof of Theorem 4.1, we will need a certain representation of the
function
\[ f(x_{1})f(x_{2})\chi_{R^{+}}(x_{2} - x_{1}) \]
as a sum of products of two functions depending only on $x_{1}$
and $x_{2}$
respectively.
Let us first introduce a decomposition of $R^{+}$ associated with
the function
$f.$ Normalize the function $f$ so that $\|f\|_{p}^{p}=1.$
By $\chi_{E}$ we
will denote the characteristic function of the set $E.$
Let $E(1,1)$ and
$E(1,2)$ be
disjoint intervals such that
\[ \|f(x)\chi_{E(1,1)} \|^{p}_{p} =
\|f(x)\chi_{E(1,2)} \|^{p}_{p} =
2^{-1}, \]
$E(1,1) \cup E(1,2)=R^{+}$ and $E(1,1)$ lies
entirely to the
left of $E(1,2)$ (i.e. for any $x \in E(1,1),$
$y \in E(1,2)$
we have $x \leq y$). We note that $E(1,2)$ is
half-infinite and assume that
$E(1,1)$ contains its
right endpoint for the above decomposition to hold. We also remark that the
decomposition is not necessarily unique ($f$ might vanish on some set,
making the decomposition non-unique), and we simply fix some
decomposition. In what follows we will omit such
inessential details. We continue to decompose each of the intervals
$E(1,l)$ in a similar manner, obtaining on the $m^{{\rm th}}$ step
$2^{m}$
intervals $\{ E(m,l)\}_{l=1}^{2^{m}},$ such that
$\cup_{l=1}^{2^{m}}E(m,l)=R^{+},$ $\|f(x)\chi_{E(m,l)}
\|_{p}^{p}=2^{-m}$ for $l=1,...2^{m},$ the intervals are disjoint
and $E(m,l)$ lies entirely to the left of $E(m,i)$ if
$l < i.$ In the notation $E(m,l),$ we refer to $m$ as the ``generation''
of this interval and to $l$ as the ``index''. Of
importance, in particular, will be the following evident property of the
intervals $\{E(m,l) \}_{m \geq 1, \, 1 \leq l \leq 2^{m}}:$
any two intervals are either disjoint or one is contained in another.
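On a grid, the generation-$m$ decomposition amounts to cutting $R^{+}$ at the $2^{-m}$-quantiles of the cumulative mass $\int_{0}^{x}|f|^{p}\,dt.$ A sketch; the choice $f=\chi_{[0,2)},$ $p=2$ is hypothetical, picked so that the correct breakpoints are obvious.

```python
import numpy as np

def dyadic_mass_decomposition(f_vals, xs, p, m):
    """Return breakpoints b_0 < ... < b_{2^m} so that each interval
    E(m,l) = [b_{l-1}, b_l) carries p-mass 2^{-m} * ||f||_p^p,
    mimicking the equal-mass decomposition of Section 4."""
    dx = xs[1] - xs[0]
    mass = np.cumsum(np.abs(f_vals) ** p) * dx        # cumulative |f|^p mass
    total = mass[-1]
    targets = total * np.arange(1, 2 ** m) / 2 ** m   # interior quantile levels
    idx = np.searchsorted(mass, targets)
    return np.concatenate([[xs[0]], xs[idx], [xs[-1]]])

# hypothetical f: indicator of [0, 2), p = 2, so the mass grows linearly
xs = np.linspace(0.0, 2.0, 40001)
f_vals = np.where(xs < 2.0, 1.0, 0.0)
b = dyadic_mass_decomposition(f_vals, xs, p=2, m=2)
# expected breakpoints near 0, 0.5, 1.0, 1.5, 2.0
```

Each refinement step halves the mass of every interval, which is precisely the martingale-type structure exploited below.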
We proceed to decompose the ``diagonal'' characteristic functions
in a convenient way. \\
\bf Lemma 4.2. \it The following identity holds:
\begin{equation}
\chi_{R^{+}}(x_{2} - x_{1}) f(x_{1})f(x_{2})=
\left( \sum\limits_{m=1}^{\infty}
\sum\limits_{l=1, l {\rm odd}}^{2^{m}} \chi_{E(m,l)}(x_{1})
\chi_{E(m,l+1)}(x_{2}) \right)f(x_{1})f(x_{2}).
\end{equation}
\noindent \bf Proof. \rm Let us denote by $H_{12}$ the set
\[ H_{12} = \{ x=(x_{1},x_{2}) \in R^{2} \,|\, x_{1} < x_{2} \}, \]
and by ${\rm supp}\, f$ the closure of the set of points $x$ such
that for every interval $I$ with $x \in I,$ $|f|$ is positive
on a set of positive Lebesgue measure in $I.$
The claim will follow if we show that
\[ H_{12} \cap ({\rm supp}_{x_{1}}f \times {\rm supp}_{x_{2}}f)=
\cup_{m=1}^{\infty} \cup_{l=1, \, l \, {\rm odd}}^{2^{m}-1}(E(m,l)
\times E(m,l+1)
) \]
and the sets under the union on the right hand side are disjoint.
The latter fact is easy to see: if $E(m,l) \subset E(s,i),$
$l$ odd, $s\neq m,$ then necessarily $m>s,$ and $E(m,l+1)$ is also
contained in $E(s,i),$ not in $E(s,i+1).$ On the other hand, we
show that for every $y_{2},y_{1} \in {\rm supp}
f,$ $y_{1} < y_{2},$ there exist two sets $E(m,l),$ $E(m,l+1),$
with $l$ odd, such that $y_{1} \in E(m,l)$ and $y_{2}
\in E(m,l+1).$
Let $\|f \chi_{(y_{1}, y_{2})}\|_{p}^{p}=a>0.$ Here we assume that $f$ is
normalized and use the
condition that $y_{1},$ $y_{2}$ lie in ${\rm supp}f$ to infer
that $a>0.$ Choose
$s$ so that $2^{-s} \geq a \geq 2^{-s-1}.$ If $y_{1},$ $y_{2}$ lie in one
set of generation $s,$ $E(s,l),$ then necessarily $y_{1}
\in E(s+1,2l-1)$ and $y_{2} \in E(s+
1,2l).$ If $y_{1}$ and $y_{2}$ lie in different sets of
generation $s$, $E(s,l)$ and $E(s,l+1),$ then either
$l$ is odd or $y_{1} \in E(s-r,l/2^{r})$ and
$y_{2} \in E(s-r,l/2^{r}+1),$ where $r$ is such that $l/2^{r}$ is
odd. $\Box$ \\
\noindent \it Remark. \rm In particular, if ${\rm supp} f = R^{+},$
we get a representation of the diagonal characteristic function $\chi_
{R^{+}}(x_{2} - x_{1})$ as a sum of products of characteristic
functions of some intervals in $x_{1}$ and $x_{2}$ variables. \\
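The identity (17) is easy to test numerically in the model case $f=\chi_{[0,1)},$ $p=1$ (a hypothetical choice), where $E(m,l)$ is just the dyadic interval $[(l-1)2^{-m}, l\,2^{-m})$: every pair $x_{1}<x_{2}$ should be covered by exactly one product $E(m,l)\times E(m,l+1)$ with $l$ odd.

```python
import random

def count_covering_pairs(x1, x2, max_m=40):
    """For f = chi_[0,1) and p = 1, E(m,l) = [(l-1)/2^m, l/2^m).
    Count pairs (m, l odd) with x1 in E(m,l), x2 in E(m,l+1);
    the identity (17) predicts exactly one for 0 <= x1 < x2 < 1."""
    hits = 0
    for m in range(1, max_m + 1):
        l1 = int(x1 * 2 ** m) + 1   # index of the interval containing x1
        l2 = int(x2 * 2 ** m) + 1   # index of the interval containing x2
        if l1 % 2 == 1 and l2 == l1 + 1:
            hits += 1
    return hits

random.seed(1)
pairs = [sorted((random.random(), random.random())) for _ in range(500)]
counts = {count_covering_pairs(a, b) for a, b in pairs}
# counts == {1}: each sampled pair is covered exactly once
```

The unique covering pair appears at the first generation at which $x_{1}$ and $x_{2}$ fall into different children of a common interval; at all deeper generations the left index is even, exactly as in the disjointness argument of the proof.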
\noindent \bf Proof of Theorem 4.1. \rm Without loss of
generality, we assume throughout the proof that
$\|f_{i}\|_{p}=\frac{1}{n}$ for all $i=1,...n.$ Let
\[ f(x) = \left( \sum\limits_{j=1}^{n}|f_{j}(x)|^{p} \right)^{\frac{1}{p}}.
\]
Consider the family of the intervals $\{E(m,l)\}$ associated with the
function $f.$ An important property of this family is that
\begin{equation}
\|f_{i}(x)\chi_{E(m,l)}\|^{p}_{p} \leq 2^{-m}
\end{equation}
for all $i,$ $l,$ $m.$
Write
\[ A=\{ \alpha_{1},...\alpha_{|A|} \}. \]
We begin by substituting the result of Lemma 4.2 into formula (16):
\begin{equation}
T_{n}(\overline{f})(\lambda)=
\sum\limits_{m_{1}=1}^{\infty}...\sum\limits_{m_{|A|}=1}^{\infty}
\sum\limits_{l_{1}=1} '...\sum\limits_{l_{|A|}=1} '\int\limits_{R^{n}}
dx \prod\limits_{j=1}^{n} k_{j}(\lambda, x_{j})f_{j}(x_{j})
\prod\limits_{t=1}^{|A|} \chi_{E(m_{t},l_{t})}(x_{i_{t}})
\chi_{E(m_{t},l_{t}+1)}(x_{i'_{t}}),
\end{equation}
where $\sum\limits_{l_{t}}'$ means the sum over all odd integers $l_{t}
\in [1,2^{m_{t}}],$ and $i_{t}=i_{\alpha_{t}},$ $i_{t}'=i_{\alpha_{t}}'.$
Thus
\begin{equation}
|T_{n}(\overline{f})(\lambda)| \leq
\sum\limits_{m_{1}=1}^{\infty}...\sum\limits_{m_{|A|}=1}^{\infty}
F^{\overline{m}}
(\overline{f})(\lambda),
\end{equation}
where $\overline{m}= (m_{1},...m_{|A|})$ and
\begin{equation}
F^{\overline{m}}(\overline{f})(\lambda)=
\sum\limits_{l_{1}=1} '...\sum\limits_{l_{|A|}=1} '
\prod\limits_{j=1}^{n} |K_{j}(f_{j}\chi_{G(j,\overline{l})})(\lambda)|,
\end{equation}
where $\overline{l}=(l_{1},...l_{|A|})$ (all variables
$l_{t}$ take only odd
values), the set $G(j,\overline{l})$
depends on $\overline{m}$ (and on $|A|$), and
\[ G(j,\overline{l}) = \left[ \bigcap_{t: j=i_{t}}E(m_{t},l_{t}) \right]
\bigcap
\left[ \bigcap_{t:j=i'_{t}}E(m_{t},l_{t}+1) \right]. \]
For many values of $\overline{l},$ the set $G(j,\overline{l})$ is
empty for some $j.$
Such terms contribute zero to the sum, and
this observation underlies the estimate that we are going to
derive for $F^{\overline{m}}(\overline{f}).$ We aim to prove that
\begin{equation}
\|F^{\overline{m}}(\overline{g})\|_{s_{n}} \leq C_{n}2^{-\gamma_{n}
|\overline{m}|}\prod\limits_{j=1}^{n}\|g_{j}\|_{p}^{(1-\beta_{n})}
\end{equation}
for any $g_{1},...g_{n}$ which satisfy (18).
Here $\gamma_{n}$ is some positive constant which
depends only on $n,$ $C_{n}$ depends on $n$ and constants in norm bounds
(15) for the operators $K_{j},$ and $\beta_{n}$ satisfies
\[ 1 \geq 1-\beta_{n} > {\rm max}(\frac{p}{q}, \frac{p}{2}) . \]
We proceed to prove (22) by induction on $|A|.$ The case
$|A|=0$ is immediate from the hypothesis on the $K_{j}$ by H\"older's
inequality
(in this case $\overline{m}=0$).
It will be convenient to consider the graph
$\Upsilon$ with vertices $\{1,...n\}$ and edges $(i_{t},i'_{t})$ joining
$i_{t}$ to $i_{t}'$ for any $t$ (and no other edges).
To each edge we associate the generation $m_{t}$ which
corresponds to a generation in the decomposition (17)
of $\chi_{R^{+}}(x_{i_{t}}-x_{i_{t}'})$ that we fixed in the sum (21).
It suffices to
treat the case where $\Upsilon$ is connected; the general case then
follows by H\"older's inequality.
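The reduction to connected $\Upsilon$ can be made concrete: build the graph on the vertices $\{1,...n\}$ with an edge for each pair in $A$ and split it into connected components; across distinct components the integrand in (16) factors, and H\"older's inequality applies. A union-find sketch (the particular $n$ and $A$ below are hypothetical):

```python
def components(n, A):
    """Connected components of the graph Upsilon with vertices 1..n
    and an edge (i, i') for every pair in A, via union-find."""
    parent = list(range(n + 1))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for i, j in A:
        parent[find(i)] = find(j)

    comps = {}
    for v in range(1, n + 1):
        comps.setdefault(find(v), []).append(v)
    return sorted(comps.values())

# hypothetical example: n = 5, A = {(2,1), (3,2), (5,4)}
# -> the multilinear form splits into independent blocks {1,2,3} and {4,5}
print(components(5, [(2, 1), (3, 2), (5, 4)]))
```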
Fix $\overline{m}.$ Relabel the indices so that $m_{1} \leq ... \leq
m_{|A|}.$ For simplicity of notation, we also relabel pairs $(i_{t},i_{t}')$
so that $m_{t}$ still denotes the generation in the decomposition of
$\chi_{R^{+}}(x_{i_{t}}-x_{i_{t}'}).$
Let $N$ be the largest index for which $m_{N}=m_{1}.$
Drop from the sum (21) all terms for which there exists $j$ such that
$G(j,\overline{l})=\emptyset;$ such terms contribute $0.$ We say that an
index
$\overline{l}$ \it remains \rm if the corresponding term has not been dropped.
We have the following \\
\noindent \bf Lemma 4.3. \it For any $1 \leq j \leq n,$ either
\begin{equation}
G(j,\overline{l}) \subset E(m_{1},l_{1}) \,\,\,\forall \overline{l}
\,\,\,{\rm remaining}
\end{equation}
or
\begin{equation}
G(j,\overline{l}) \subset E(m_{1},l_{1}+1) \,\,\,\forall \overline{l}
\,\,\,{\rm remaining}.
\end{equation}
Let $B_{1}=\{j:$ (23) holds $\}$, $B_{2} = \{ j:$ (24) holds $\}$.
Moreover if $2 \leq t \leq N$ (that is, if $m_{t}=m_{1}$) then
\begin{equation}
l_{t}=l_{1}\,\,\,\forall \overline{l} \,\,\, {\rm remaining}.
\end{equation}
Finally, for each $t>N$ (so $m_{t}>m_{1}$), either $i_{t},$ $i'_{t}$ are
both in $B_{1}$ for all remaining $\overline{l}$
or they are both in $B_{2}$ for all remaining $\overline{l}.$
We say that $t \in B_{1},$ $t \in B_{2}$ respectively. \\
\noindent \bf Proof. \rm For any $m \geq 1,$ $l$ odd, set
\[ \tilde{E}(m,l) = E(m,l) \cup E(m,l+1). \]
First we prove that
if $\overline{l}$
remains, then
\begin{equation}
\tilde{E}(m_{t},l_{t}) \subset \tilde{E}(m_{1},l_{1})
\end{equation}
for all $t.$
Notice that both sets in (26) also belong to the
family $E(m,l)$ (they are $E(m_{t}-1, \frac{l_{t}+1}{2}),$
$E(m_{1}-1, \frac{l_{1}+1}{2})$ respectively; we may
assume $E(0,1)=R^{+}$).
Therefore, to prove (26) it is sufficient to show
that the two sets in (26) intersect, since in this case
by the martingale-type property one is contained in the other.
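The martingale-type property invoked here (two sets of the family either are disjoint or one contains the other) can be checked mechanically in the model where $E(m,l)$ is the dyadic interval $[(l-1)2^{-m}, l\,2^{-m})$ (corresponding to a hypothetical $f$ with uniform mass and $p=1$):

```python
from fractions import Fraction as F

def E(m, l):
    """Model interval E(m,l) = [(l-1)/2^m, l/2^m) with exact endpoints."""
    return (F(l - 1, 2 ** m), F(l, 2 ** m))

def nested_or_disjoint(a, b):
    """Martingale-type property: two intervals of the family either
    are disjoint or one is contained in the other."""
    (a0, a1), (b0, b1) = a, b
    disjoint = a1 <= b0 or b1 <= a0
    nested = (a0 <= b0 and b1 <= a1) or (b0 <= a0 and a1 <= b1)
    return disjoint or nested

intervals = [E(m, l) for m in range(1, 6) for l in range(1, 2 ** m + 1)]
ok = all(nested_or_disjoint(a, b) for a in intervals for b in intervals)
# ok is True: the first five generations satisfy the property pairwise
```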
Recall that $m_{1}$ is the generation which is
fixed in decomposition of the
characteristic function $\chi_{R^{+}}(x_{i_{1}}-x_{i_{1}'})$
in the sum (21) for $F^{\overline{m}}.$
Pick any other $m_{t}$ which is fixed
in the decomposition of the characteristic
function $\chi_{R^{+}}(x_{i_{t}}-x_{i_{t}'}).$
Since the graph $\Upsilon$ is connected, we can find
a path in $\Upsilon$ which connects either $i_{t}$ or $i_{t}'$ with either
$i_{1}$ or $i_{1}',$ and does not contain the edges $(i_{1},i_{1}'),$
$(i_{t},i_{t}').$
Suppose that this path goes from $i_{1}$ to $i_{t}$ and
passes successively through the edges with
the corresponding generations $m_{t_{1}},...m_{t_{r}}.$ For
$G(j,\overline{l})$
to be non-empty for all $j,$ we must have
\[
\tilde{E}(m_{1},l_{1}) \cap \tilde{E}(m_{t_{1}},l_{t_{1}}) \neq
\emptyset, \,\,\,\, \tilde{E}(m_{t},l_{t}) \cap \tilde{E}(m_{t_{r}},
l_{t_{r}}) \neq \emptyset, \,\,\,\,\,{\rm and} \]
\begin{equation}
\tilde{E}(m_{t_{i}},l_{t_{i}}) \cap \tilde{E}(m_{t_{i+1}},l_{t_{i+1}})
\neq \emptyset \,\,\,\,{\rm for\,\,\, all\,\,\,} i=1,...r-1.
\end{equation}
Hence by our assumption that $m_{1} \leq m_{t}$ for all $t$ we see that
$\tilde{E}(m_{t_{1}},l_{t_{1}}) \subset \tilde{E}(m_{1},l_{1}).$
But then by (27) also
$\tilde{E}(m_{1},l_{1}) \cap \tilde{E}(m_{t_{2}},l_{t_{2}}) \neq
\emptyset,$ hence $\tilde{E}(m_{t_{2}},l_{t_{2}})
\subset \tilde{E}(m_{1},l_{1}).$ We continue in the same way
concluding that
$\tilde{E}(m_{t},l_{t}) \subset \tilde{E}(m_{1},l_{1})$ and hence (26)
holds.
The statements (23), (24) and (25)
of the lemma now follow immediately from the
martingale-type
property of the sets $E(m,l)$ and the definition of the set
$G(j,\overline{l}).$
To prove the final statement,
suppose that we know in addition that $m_{t}>m_{1}.$
We can find a path in $\Upsilon$ which goes from $i_{t}$ or $i_{t}'$
to a vertex adjacent to an edge $(i_{s},i_{s}')$
with the corresponding
generation $m_{s}$ equal to $m_{1}$ (i.e. with $s \leq N$), and contains only
edges with the corresponding generations strictly greater than $m_{1}.$
An argument analogous to the above shows that in this case
$\tilde{E}(m_{t},l_{t})$ is contained either in $E(m_{1},l_{s})$ or in
$E(m_{1},l_{s}+1)$ for all
remaining $\overline{l},$
depending on whether the vertex to which the path
leads coincides with $i_{s}$ or $i_{s}'$ respectively. By (25)
the lemma is proven.
$\Box$ \\
\noindent \it Remark. \rm It may happen that there exist two (or more)
different paths from $i_{t}$ (or $i_{t}'$),
one of which leads to a vertex
$j_{1}$ where $f_{j_{1}}(x_{j_{1}})$ is multiplied by
$\chi_{E(m_{1},l_{1})}(x_{j_{1}})$
while the other leads to a vertex $j_{2}$ where
$f_{j_{2}}(x_{j_{2}})$ is multiplied by
$\chi_{E(m_{1},l_{1}+1)}(x_{j_{2}}).$ In this case, $G(j,\overline{l})$
is empty and hence $\overline{l}$ does not remain. \\
By Lemma 4.3,
\[ F^{\overline{m}}(\overline{f})(\lambda) = \sum\limits_{l_{1}} '
\sum\limits_{l_{N+1}} '...\sum\limits_{l_{|A|}} '
\prod\limits_{j \in B_{1}}|K_{j}(f_{j}\chi_{E(m_{1},l_{1})}\chi_{G(j,
\overline{l})})(\lambda)|
\prod\limits_{j \in B_{2}}|K_{j}(f_{j}\chi_{E(m_{1},l_{1}+1)}\chi_{G(j,
\overline{l})})(\lambda)| \leq \]
\[ \leq \sum\limits_{l_{1}} 'F_{A_{1},B_{1}}^{\overline{m}^{(1)},l_{1}}
(\overline{f})(\lambda)F_{A_{2},B_{2}}^{\overline{m}^{(2)},l_{1}}
(\overline{f})(\lambda), \]
where
\[ F_{A_{1},B_{1}}^{\overline{m}^{(1)},l_{1}}(\overline{f})(\lambda)=
\sum\limits_{l_{t}:t \in A_{1}} ' \prod\limits_{j \in B_{1}}
|K_{j}(f_{j}\chi_{E(m_{1},l_{1})}\chi_{G(j,l)})(\lambda)| \]
Here $A_{1}=\{t>N:\, t \in B_{1}\}$ and $A_{2}=\{t>N:\, t \in B_{2}\}$
(the partition of the remaining edges provided by Lemma 4.3),
$\sum\limits_{l_{t}:t \in A_{1}} '$ denotes the sum over all $l_{t}$
such that $1 \leq l_{t} \leq 2^{m_{t}},$ $l_{t}$ is odd, $t \in A_{1},$
and we write $m^{(1)}=
(m_{t})_{t \in A_{1}}.$ Note that
$F_{A_{1},B_{1}}^{\overline{m}^{(1)},l_{1}}(\overline{f})(\lambda)$
depends only on those $f_{j}$ for which $j \in B_{1}.$ The factor
$F_{A_{2},B_{2}}^{\overline{m}^{(2)},l_{1}}(\overline{f})(\lambda)$
is defined similarly, but $E(m_{1},l_{1})$ is replaced by
$E(m_{1},l_{1}+1).$
We may rewrite for each $j \in B_{1}$
\[ G(j,l) = E(m_{1},l_{1}) \bigcap G^{1}(j, (l_{t})_{t \in A_{1}}), \]
where
\[ G^{1}(j, (l_{t})_{t \in A_{1}}) = \left[ \bigcap\limits_{t \in A_{1},
j=i_{t}} E(m_{t},l_{t}) \right] \bigcap \left[ \bigcap\limits_{t \in A_{1},
j=i_{t}'}E(m_{t},l_{t}+1) \right]. \]
Indeed, by Lemma 4.3 all other sets which enter the definition
of $G(j,l)$ are contained in $E(m_{1},l_{1}+1)$ and hence are absent
for $\overline{l}$ which remain.
Thus $F_{A_{1},B_{1}}^{\overline{m}^{(1)},l_{1}}(\overline{f})(\lambda)$
(and $F_{A_{2},B_{2}}^{\overline{m}^{(2)},l_{1}}(\overline{f})(\lambda)$)
are expressions of the same form as the original $F^{\overline{m}}.$
Since $|A_{1}|, |A_{2}|<|A|,$ both
$F_{A_{1},B_{1}}^{\overline{m}^{(1)},l_{1}}(\overline{f})(\lambda)$ and
$F_{A_{2},B_{2}}^{\overline{m}^{(2)},l_{1}}(\overline{f})(\lambda)$
may be estimated by induction on $|A|.$ Therefore
\begin{equation}
\|F_{A_{1},B_{1}}^{\overline{m}^{(1)},l_{1}}(\overline{f})(\lambda)\|_
{s_{|B_{1}|}} \leq C2^{-\gamma_{|B_{1}|} |m^{(1)}|}
\prod\limits_{j \in B_{1}}
\|f_{j}\chi_{E(m_{1},l_{1})}\|_{p}^{1-\beta_{|B_{1}|}};
\end{equation}
a similar bound also holds for
$F_{A_{2},B_{2}}^{\overline{m}^{(2)},l_{1}}(\overline{f})(\lambda).$
Using (28), we are ready to estimate
$\|F^{\overline{m}}(\overline{f})\|_{s_{n}}.$ We distinguish between two
cases: $s_{n}<1$ and $s_{n}>1.$ Suppose first that $s_{n}<1.$
Then
\begin{equation}
\|F^{\overline{m}}(\overline{g})\|_{s_{n}}^{s_{n}} \leq
\sum\limits_{l_{1}}'
\|F_{A_{1},B_{1}}^{\overline{m}^{(1)},l_{1}}(\overline{g})(\lambda)\|
_{s_{|B_{1}|}}^{s_{n}}
\|F_{A_{2},B_{2}}^{\overline{m}^{(2)},l_{1}}(\overline{g})(\lambda)\|_
{s_{|B_{2}|}}^{s_{n}}.
\end{equation}
We used the fact that $\|
\sum h_{i}(x) \|^{s}_{s} \leq \sum \|h_{i}(x)\|_{s}^{s}$ when $s<1,$
together with H\"older's inequality. Plugging the estimate (28)
and a similar
bound for
$F_{A_{2},B_{2}}^{\overline{m}^{(2)},l_{1}}(\overline{g})(\lambda)$
into (29), we find
\begin{equation}
\|F^{\overline{m}}(\overline{g})\|_{s_{n}}^{s_{n}} \leq
C_{n} 2^{-(\gamma_{|B_{1}|}|m^{(1)}|+\gamma_{|B_{2}|}|m^{(2)}|)s_{n}}
\sum\limits_{l_{1}} '\left(
\prod\limits_{j \in B_{1}} \|g_{j}\chi_{E(m_{1},l_{1})}
\|_{p}^{(1-\beta_{|B_{1}|})s_{n}} \prod\limits_{j \in B_{2}}\|g_{j}
\chi_{E(m_{1},l_{1}+1)}\|_{p}^{(1-\beta_{|B_{2}|})s_{n}}\right).
\end{equation}
Pick $0 < a_{1}, a_{2} <1$ such that
\[ a_{1}(1-\beta_{|B_{1}|}) = a_{2} (1-\beta_{|B_{2}|}) = \frac{p}{q}. \]
We can find such $a_{1},$ $a_{2}$ by the induction assumption.
The sum in (30) may be estimated by H\"older's inequality in the following
way:
\[ \sum\limits_{l_{1}}
'\left(\prod\limits_{j \in B_{1}} \|g_{j}\chi_{E(m_{1},l_{1})}
\|_{p}^{(1-\beta_{|B_{1}|})s_{n}} \prod\limits_{j \in B_{2}}\|g_{j}
\chi_{E(m_{1},l_{1}+1)}\|_{p}^{(1-\beta_{|B_{2}|})s_{n}}\right) \leq \] \[ \leq
\prod\limits_{j \in B_{1}}
{\rm max}_{l_{1}}\|g_{j}\chi_{E(m_{1},l_{1})}\|_{p}^
{(1-a_{1})(1-\beta_{|B_{1}|})s_{n}}\prod\limits_{j \in B_{2}}{\rm max}_{l_{1}}\|g_{j}\chi_{E(m_{1},l_{1}+1)}\|_{p}^
{(1-a_{2})(1-\beta_{|B_{2}|})s_{n}}\prod\limits_{j=1}^{n}\|g_{j}\|_{p}^
{\frac{p}{q}s_{n}}. \]
Thus
\[ \|F^{\overline{m}}(\overline{g})\|_{s_{n}}
\leq C_{n}2^{-{\rm min}(\gamma_
{|B_{1}|}, \gamma_{|B_{2}|})(|m^{(1)}|+|m^{(2)}|)}\prod\limits_{j=1}^{n}
{\rm sup}_{l}\|g_{j}\chi_{E(m_{1},l)}\|_{p}^{{\rm min}(1-\beta_{|B_{1}|},
1-\beta_{|B_{2}|})-\frac{p}{q}}\prod\limits_{j=1}^{n} \|g_{j}\|_{p}^
{\frac{p}{q}}. \]
Obviously $\|g_{j}\|_{p} \geq {\rm sup}_{l}\|g_{j}\chi_{E(m_{1},l)}\|_{p},$
and $ {\rm sup}_{l}\|g_{j}\chi_{E(m_{1},l)}\|_{p} \leq 2^{-m_{1}/p}$ by (18).
Pick $\beta_{n}$ so that
\[ {\rm min}(1-\beta_{|B_{1}|}, 1-\beta_{|B_{2}|}) > 1-\beta_{n} > {\rm
max}(\frac{p}{q}, \frac{p}{2}), \]
and
\[ \gamma_{n} = {\rm min}(({\rm min}(1-\beta_{|B_{1}|},1-\beta_{|B_{2}|})-
(1-\beta_{n})), \gamma_{|B_{1}|}, \gamma_{|B_{2}|}). \]
Then
\begin{equation}
\|F^{\overline{m}}(\overline{g})\|_{s_{n}} \leq C_{n} 2^{-\gamma_{n}|\overline{m}|}
\prod\limits_{j=1}^{n} \|g_{j}\|_{p}^{1-\beta_{n}}.
\end{equation}
There is only a finite number of pairs $B_{1},$ $B_{2}$ such that $|B_{1}|+
|B_{2}|=n,$ and hence the constants $\gamma_{n},$ $\beta_{n}$ may be chosen
universally.
The case $s_{n}>1$ is similar. Using the triangle inequality and H\"older's
inequality, we get
\[ \|F^{\overline{m}}(\overline{g})\|_{s_{n}} \leq
\sum\limits_{l_{1}}'
\|F_{A_{1},B_{1}}^{\overline{m}^{(1)},l_{1}}(\overline{g})(\lambda)\|
_{s_{|B_{1}|}}
\|F_{A_{2},B_{2}}^{\overline{m}^{(2)},l_{1}}(\overline{g})(\lambda)\|_
{s_{|B_{2}|}} \leq \]
\[ \leq C_{n}2^{-{\rm min}(\gamma_{|B_{1}|}, \gamma_{|B_{2}|})(|m^{(1)}|
+|m^{(2)}|)}\sum\limits_{l_{1}}' \left(\prod\limits_{j \in B_{1}}
\|g_{j}\chi_{E(m_{1},l_{1})}
\|_{p}^{(1-\beta_{|B_{1}|})} \prod\limits_{j \in B_{2}}\|g_{j}
\chi_{E(m_{1},l_{1}+1)}\|_{p}^{(1-\beta_{|B_{2}|})}\right). \]
Provided that
\begin{equation}
\frac{1}{p} (|B_{1}|(1-\beta_{|B_{1}|})+|B_{2}|(1-\beta_{|B_{2}|}))>1,
\end{equation}
we can apply the same argument as in the case $s_{n}<1$ to prove (31).
But (32) holds for all $|B_{1}|,$ $|B_{2}|\geq 1$ since by the induction
hypothesis $1-\beta_{r}> \frac{p}{2}$ for all $r<n.$