\documentclass[12pt]{article}
\oddsidemargin -3mm % Remember this is 1 inch less than actual
%\evensidemargin 7mm
\textwidth 17cm
\topmargin -9mm % Remember this is 1 inch less than actual
%\headsep 0.9in % Between head and body of text
\headsep 20pt % Between head and body of text
\textheight 23cm
\scrollmode
%\baselineskip=30pt
%\renewcommand{\baselinestretch}{1.5}
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amssymb}
%\usepackage{showkeys}
\allowdisplaybreaks
%numeracao automatica de constantes
\newcounter{konstanta}
\newcommand{\cc}[1]{\refstepcounter{konstanta}
c_{\thekonstanta}
\newcounter{#1}
\setcounter{#1}{\value{konstanta}}
%\hbox to 0pt{\kern-8pt\raisebox{10pt}{\framebox{\tiny\sf #1}}\hss} %label
}
\newcommand{\cs}[1]{c_{\arabic{#1}}
%\hbox to 0pt{\kern-8pt\raisebox{10pt}{\framebox{\tiny\sf #1}}\hss} %label
}
\def\eps{\varepsilon}
\def\qed{\hfill\rule{.2cm}{.2cm}}
\def\P{{\mathbb P}}
\def\E{{\mathbb E}}
\def\Z{{\mathbb Z}}
\def\N{{\mathbb N}}
\def\R{{\mathbb R}}
\def\V{{\mathbb V}}
\def\1{\mbox{1\kern-0.28emI}}
\def\o{{\mathbf 1}}
\def\n{{\cal N}}
\def\vn{{\cal V}_n}
\def\l{\lambda}
\def\g{\gamma}
\def\s{\sigma}
\def\e{\epsilon}
\newtheorem{theo}{Theorem}[section]
\newtheorem{prop}{Proposition}[section]
\newtheorem{lm}{Lemma}[section]
\newtheorem{cor}{Corollary}[section]
\newtheorem{rmk}{Remark}[section]
\newtheorem{df}{Definition}[section]
\newcommand{\beq}{\begin{equation}}
\newcommand{\eeq}{\end{equation}}
\newcommand{\beqn}{\begin{eqnarray}}
\newcommand{\beqnn}{\begin{eqnarray*}}
\newcommand{\eeqn}{\end{eqnarray}}
\newcommand{\eeqnn}{\end{eqnarray*}}
\title{\bf Time fluctuations of the random average process with parabolic initial conditions}
\author{
L.R.G.~Fontes\thanks{Partially supported by CNPq grant 300576/92-7;
research part of FAPESP theme grant 99/11962-9 and
CNPq PRONEX grant 662177/96-7}\\ Universidade de S\~ao Paulo
\and D.P.~Medeiros\thanks{Partially supported by CAPES}\\
Universidade Federal da Bahia \and
M.~Vachkovskaia\thanks{Supported by FAPESP (00/11462-5)}\\
Universidade de S\~ao Paulo}
\date{}
\begin{document}
\maketitle
\begin{abstract}
The random average process is a randomly evolving $d$-dimensional
surface whose heights are updated by random convex combinations
of neighboring heights. The fluctuations of this process in
case of linear initial conditions have been studied before.
In this paper, we analyze the case of polynomial initial conditions
of degree 2 and higher. Specifically, we prove that the time
fluctuations of an initial parabolic surface are of order $n^{1-d/4}$
for $d=1,2,3$; $\sqrt{\log n}$ in $d=4$; and are bounded in $d\geq5$.
We establish a central limit theorem in $d=1$. In the bounded case of
$d\geq5$, we exhibit an invariant measure for the process as seen from
the average height at the origin and describe its asymptotic space
fluctuations. We consider briefly the case of initial polynomial
surfaces of higher degree to show that their time fluctuations are
not bounded in high dimensions, in contrast with the linear and
parabolic cases.
\end{abstract}
\vskip 3mm
\noindent{\bf Keywords:}
random average process, random surfaces, harness process,
linear process, smoothing process, voter model,
surface fluctuations, central limit theorem
\vskip 3mm
\noindent{\bf AMS Classification numbers: } Primary: 60K35, 82C41
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%% Intro %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
The random average process (RAP) was introduced in~\cite{ff} as a model
of a randomly evolving $d$-dimensional surface in $(d+1)$-dimensional space.
The evolution consists of the heights of the surface getting updated,
at either discrete or continuous time, by random convex combinations
of neighboring heights (see~(\ref{eq:ave}) below).
In this way, starting out with a given surface,
which can be deterministic or itself random, we get at all times evolved
surfaces which are random (as functions of the random convex weights
and possibly random initial condition).
Closely related processes are the {\em harness process} introduced by
Hammersley~\cite{h} (see also~\cite{t}) and the {\em
smoothing process}~\cite{a}~\cite{ls}~\cite{l}, where
height updates consist of {\em deterministic} convex combinations
of neighboring heights plus an additive (for the former process)
or multiplicative (for the latter one) random noise.
The RAP (as well as the smoothing process)
is a special case of Liggett's linear
processes~(\cite{l}, Chapter IX).
A much studied special case of the RAP (one which
we discuss only briefly in this paper)
is the {\em voter model}~\cite{l}~\cite{d}. This corresponds
to having the random convex combination almost surely assign total mass to a
neighbor chosen at random. As discussed in~\cite{ff}, the behavior
of the voter model is rather different from the more general case
treated in that paper and also here.

In this paper, we study at length the discrete time RAP with a parabolic
initial condition.
One of our main results is upper and lower bounds of the same
leading order for the time fluctuations of the evolving surface.
Under suitable assumptions on the
distribution of the convex weights,
we obtain $n^{2-d/2}$ as the leading
order of the variance (as a function of time $n$) of the height of the surface
at a given site in dimensions $d=1,2,3$; $\log n$ in $d=4$; and constant
for $d\geq5$ (see Theorem~\ref{fluc.main} below).
This compares firstly to the case of
linear initial conditions~\cite{ff}, where the analogous variance is
of order $\sqrt n$ in $d=1$; $\log n$ in $d=2$; and constant
for $d\geq3$. The approach and techniques here are also comparable with those
of~\cite{ff}. Here, as there, we have a dual process with the same single
time distribution as the RAP (see~(\ref{eq 11}-\ref{eq 12}) below) which,
when centered, is a martingale. It is enough then to study
this process (as far as single time distributions are concerned).
Variances are then also shown to be related to moments of a
space-inhomogeneous Markov chain, here in $2d$-space
(see~(\ref{eq 21},\ref{rep},\ref{eq 38},\ref{eq dist}) below), rather than
in $d$-space. In view
of the extra complication, we keep the analysis simple by
making extra assumptions on the distribution of the convex weights
vis-\`a-vis~\cite{ff} (see first paragraph of next section). We also take
a convenient concrete form for the initial condition
(see~(\ref{eq 13}) below). The problem is then further reduced to one
involving the same $d$-dimensional Markov chain that enters
the analysis of the linear case in~\cite{ff}, but via a different, if related,
quantity (see~(\ref{eq up}) below).
The analysis proceeds indirectly (as in the linear case) by taking
generating functions. The argument for the parabolic case will actually
involve also the derivative of the generating functions entering the analysis
of the linear case. As in~\cite{ff}, we compare with the
analogous quantity for a $d$-dimensional space homogeneous Markov chain
(a random walk), and then get the upper bounds
(see~(\ref{eq up1},\ref{eq up2},\ref{eq up3},\ref{eq up4},\ref{eq up5+})
below). The argument for the lower bounds is similar, but
simpler. It involves a $d$-dimensional random walk
directly (see~(\ref{eq lb},\ref{52}) below).

We also prove, in one dimension, a central limit theorem
for the time fluctuations of the surface (see Theorem~\ref{clt}
below). As in~\cite{ff}, we verify the hypotheses of a martingale
CLT in~\cite{hh}.

The boundedness of the height variance of the RAP in high dimensions
seems to be a
distinguishing feature of linear and parabolic initial conditions,
among polynomial ones. We show in Theorem~\ref{unstab}
below that, starting out with a cubic, the heights have variance
of order at least $n^2$ in all high enough dimensions. This
divergence in time of the fluctuations
can be argued for initial polynomials of higher degree as well.

One difference between the initial linear and parabolic
cases is the following. Due to the martingale property of the dual
processes, mentioned above, and the $L_2$-boundedness in high dimensions,
the RAP's as seen from the average height at the origin
with initial linear and parabolic surfaces
converge weakly to invariant measures for the dynamics
as seen from the average height at the origin,
in those dimensions. The spatial fluctuations of these measures can
be then studied and they are found to be bounded for linear
initial conditions. This is {\em not} the case for the initial condition
here. We show in Theorem~\ref{theo:spa} below that the non trivially
scaled space fluctuations of the invariant measures converge
weakly to a non trivial limit. The variances also converge to
the variance of the limit.

This paper grew out of the PhD research of the second author,
which dealt with the RAP with a parabolic initial condition
of a different form from the one treated here. \cite{tese}
contains essentially the same results we present here (except for
the CLT and the discussion and result of Section 5),
obtained with the same approach and techniques, and more:
sharp bounds were obtained in one dimension
(see Theorem~\ref{t 3.1.3} below); fluctuations of the surface as
seen from the height at the origin were shown to be bounded for
$d\geq3$, the scaled spatial fluctuations of the limiting invariant
measures for the process as seen from the height at the origin arising
in this context were proved to converge to non trivial weak limits;
and continuous
time analogs of the discrete time results were established.
The assumptions on the convex weights distribution made
in~\cite{tese} are less
restrictive than the ones here. Some of the extra results will be the
object of a future paper.

We close this introduction with a comparison to the harness process
mentioned above. The underlying space is $\Z^d$.
For the case of a deterministic initial surface (as here),
translation invariant convex weights (in here, that is the case in the
distributional sense) and i.i.d.~$L_2$ mean zero
noise, the surface (height vector)
at time $n$ can be written as
\beq
\nonumber
\label{harness}
X_n={\cal U}^nX_0+\sum_{k=1}^{n}{\cal U}^{n-k}{\cal Z}_k,
\eeq
where ${\cal Z}_{k}$, $k\geq1$, are i.i.d.~vectors of i.i.d.~$L_2$
mean zero noise components and ${\cal U}$ is the convex weights matrix
satisfying that ${\cal U}(i,i+j)\geq0$ does not depend on $i$ for all $j$,
$\sum_j{\cal U}(i,j)=1$ for all $i$ and
$\sum_j||j||^2{\cal U}(0,j)<\infty$.
It is clear then that, in this context,
the height variances do not depend on $X_0$. With the same approach
and techniques used in~\cite{ff} and here, it is possible to
show that those variances have the same order of magnitude as
the ones of the RAP with linear initial conditions in all dimensions
(the argument is actually quite straightforward in this case).
We conclude that the fluctuations
of the RAP and the harness process behave rather differently for parabolic
(and higher degree polynomial) initial conditions of the kind
considered in this paper.
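The point that the harness height variances do not depend on $X_0$ can be illustrated with a small simulation. The sketch below is an assumption-laden stand-in: it runs the recursion $X_n={\cal U}X_{n-1}+{\cal Z}_n$ on a ring of $m$ sites (a finite substitute for $\Z^d$) with the deterministic nearest neighbor averaging ${\cal U}(i,i\pm1)=1/2$, and checks that two initial surfaces driven by the same noise differ by the deterministic vector ${\cal U}^n(X_0-X_0')$.

```python
import random

# Finite-ring sketch (our assumption, not the paper's setup) of the harness
# recursion X_n = U X_{n-1} + Z_n with deterministic averaging weights.
m, n = 16, 5

def smooth(x):
    # x -> U x on the ring: U(i, i-1) = U(i, i+1) = 1/2
    return [(x[(i - 1) % m] + x[(i + 1) % m]) / 2 for i in range(m)]

rng = random.Random(0)
# i.i.d. mean-zero noise vectors Z_1, ..., Z_n, shared by both runs below
Z = [[rng.gauss(0.0, 1.0) for _ in range(m)] for _ in range(n)]

def run(x0):
    x = list(x0)
    for k in range(n):
        x = [xi + zi for xi, zi in zip(smooth(x), Z[k])]
    return x

x0a = [0.0] * m
x0b = [float(i % 3) for i in range(m)]      # a different initial surface

# With the same noise, the trajectories differ by U^n (x0a - x0b),
# which is deterministic, so the height variances coincide.
diff = [a - b for a, b in zip(run(x0a), run(x0b))]
det = [a - b for a, b in zip(x0a, x0b)]
for _ in range(n):
    det = smooth(det)
assert all(abs(d - e) < 1e-12 for d, e in zip(diff, det))
```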
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%% Section 2 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Definitions and main results}
We consider a $d$-dimensional discrete time random average
process. At time $t=0$ we have an initial configuration $X_0$ in
the state space $\R^{\Z^d}$, which is a hypersurface
of dimension $d$. So, $X_0(i)$ is the height of this surface at
site~$i\in\Z^d$. The evolution is defined as follows. At time $n$ the
height of each site will be a random convex combination of the heights
of its neighbors at time $n-1$. Let $X_n(i)$ denote the
height of site $i$ at time $n$. We have
\beq
\label{eq:ave}
X_n(i)=\sum_{j\in\Z^d} u_n(i,j)X_{n-1}(j), \quad n\geq1,
\eeq
where
$U=\{u_n(i,i+\cdot), n\ge 1, i\in \Z^d\}$ is a family of
i.i.d.~random probability vectors independent of the
initial configuration $X_0$.
In particular, this means that almost surely $u_n(i,j)\geq0$
and $\sum_{j\in\Z^d}u_n(i,j)=1$ for all $n\geq1$ and $i,j\in\Z^d$.
Let $e_j$, $j=1,\ldots,d$, be the $j$-th
positive coordinate vector and let $\n=\{\pm e_j,j=1,\ldots,d\}$.
We assume
\begin{enumerate}
\item Nearest neighbor range: $u_1(0, i)=0$ almost surely for $i\notin\n$;
\item Symmetry:
$\{u_1(0,i); i \in\n\} \stackrel {d} = \{u_1(0,-i); i \in\n\}$;
\item Coordinate exchangeability:\\
$\{u_1(0,\pi(i)); i \in\n\} \stackrel {d} = \{u_1(0,i); i \in\n\}$
for all permutations of coordinates $\pi$;
\item Non voter model case:
$\P(u_1(0,i)=1\mbox{ for some }i\in\n)<1$.
\item Non degeneracy:
$\P\left(\sum_{j=1}^d[u_1(0,e_j)-u_1(0,-e_j)]=0\right)<1$,
\end{enumerate}
where $\stackrel {d} =$ means identity in distribution.
The first and third assumptions are for simplicity. The fourth one is to
rule out a simpler (and qualitatively different) case, namely the voter model
(but see Remark~\ref{vm} below).
The last one involves the particular initial condition we will consider
(see~(\ref{eq 13}) below) and, for that case, rules out a trivial case,
where there are no fluctuations.
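For concreteness, one simple weight distribution in $d=1$ satisfying all five assumptions (an illustrative choice of ours, not one singled out in the text) is the following.

```latex
% Illustrative d=1 example (an assumption, for concreteness): let U be
% uniform on [0,1] and set
u_1(0,e_1)=U,\qquad u_1(0,-e_1)=1-U.
% Symmetry holds since U and 1-U agree in distribution; coordinate
% exchangeability is vacuous in d=1; the non voter condition holds since
% P(U\in\{0,1\})=0<1; and non degeneracy holds since P(U=1/2)=0<1.
```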
Let ${\cal F}$ be the sigma-algebra
generated by $U$ and let ${\cal F}_n$ be the sigma algebra generated by
$\{u_m(\cdot, \cdot), 1\le m \le n\}$. Let $\P$ and $\E$ denote
the underlying probability and expectation and
let $\nu$ be the (marginal) distribution of $u_k(\cdot,\cdot)$, which
does not depend on $k\geq1$. We
will write $\nu(X)$ for the expectation of a random variable $X$
with respect to $\nu$.
Note that for $x\in \Z^d$
\begin{eqnarray}
\label{eq 6}
X_n(x)&=&\sum_{j_1, \ldots j_n \in {\Z}^d} \prod_ {i=1}^n
u_{n-i+1}(j_{i-1},j_i) X_0(j_n)\nonumber\\
&=& \sum_{j_n \in {\Z}^d}\P( Y_n^{x,n} = j_n \mid {\cal F} )
X_0(j_n)
= \E[ X_0 (Y_n^{x,n}) \mid {\cal F} ],
\end{eqnarray}
where $ (Y_k^{x,n})^n_{k=0} $ is a random walk starting at
$x$ with transition probabilities
\begin{equation}\label{eq 7}
\P(Y_k^{x,n} = j \mid Y_{k-1}^{x,n} = i, {\cal F} ) =
u_{n-k+1}(i,j).
\end{equation}
For the process $(X_n)$ to be well defined, it is sufficient that
(see~\cite{ff}, Lemma 2.1)
\begin{equation}\label{eq 75}\E|X_0(Y_n^{0,n})|<\infty
\mbox{ for all $n$.}
\end{equation}
Consider also a random walk $ (Y_k^x)_{k \geq 0} $ with transition
probabilities
\begin{equation}
\label{eq 8}
\P({Y}_k^x = j \mid {Y}_{k-1}^x = i,
{\cal F} ) = u_k(i,j).
\end{equation}
Then for all $n\ge 0$
\begin{equation}\label{eq 9}
\{ \P({Y}_k^x = \cdot \mid {\cal F});\; x \in
{\Z}^d,\;
0 \leq k \leq n \}
\stackrel {d} = \{\P(Y_k^{x,n} = \cdot \mid {\cal F});
\; x \in
{\Z}^d,\; 0 \leq k\leq n \},
\end{equation}
since $u_k \stackrel {d} = u_{n - k + 1}$ for all $n,k$. The unconditional
(on ${\cal F}$) distributions of $ Y_i^x $ and $ Y_i^{x,n}$,
which are equal in their common time span, make them random walks
with transition probabilities
\[
\pi(x,y) = \nu[u_1(x,y)] = \nu[u_1(0,y - x)] = \pi(0,y-x)
\]
for all $x,y\in\Z^d$. Notice that, by the nearest neighbor assumption
on the $u$'s, these random walks are simple and symmetric.
For $n\ge 0$ and any $x\in \Z^d$ define
\begin{equation}\label{eq 11}
L_n(x) = \E[X_0(Y_n^x) \mid {\cal F}].
\end{equation}
From~(\ref{eq 9}) we have that for any fixed $n$
\begin{equation}
\label{eq 12}
X_n:=\{X_n(x);\; x\in{\Z}^d\}\stackrel{d}=\{L_n(x);\; x\in{\Z}^d\}=:L_n.
\end{equation}
Thus, to establish results about single time distributions
of $X_n$ it is sufficient to give an argument for $L_n$.

In this paper we consider a parabolic initial condition of the form
\begin{equation}\label{eq 13}
X_0(x) = ({\mathbf 1}x)^2=\left(\sum_{k=1}^d x_k\right)^2,
\end{equation}
for any $x\in \Z^d$, where ${\mathbf 1}\in\R^d,\,{\mathbf 1}=(1,\ldots,1)$.
This clearly satisfies~(\ref{eq 75}).
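Indeed, since the walk is nearest neighbor, $|\o Y_n^{0,n}|\leq n$ almost surely, so

```latex
\E\big|X_0(Y_n^{0,n})\big| = \E\big[(\o Y_n^{0,n})^2\big] \le n^2 < \infty.
```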
\begin{rmk}
Other parabolic forms, like $(\l x)^2=(\sum_{k=1}^d \l_kx_k)^2$,
where $\l\in\R^d$ is a fixed non null vector,
can be handled, with essentially the same results and techniques.
The form $||x||^2=\sum_{k=1}^d x_k^2$ was analyzed in~\cite{tese}.
\end{rmk}
With the choice~(\ref{eq 13}), we have
\begin{equation}\label{eq 14}
L_n(x) = \E[ (\o Y_n^x)^2 \mid {\cal F}].
\end{equation}
Since $Y_n^x$ is a random walk starting from $x$ and the $u$'s are
symmetric, we have
$
\E({L}_n(x)) = \E((\o Y_n^x)^2) = \E((\o(Y_n^0+x))^2) = (\o x)^2+
\E((\o Y_n^0)^2).
$
Writing $Y_n^0=\sum_{i=1}^n\Delta_i$, where $\Delta_1,\ldots,\Delta_n$ are
i.i.d.~random vectors such that $\P(\Delta_1=\pm e_i)=1/2d$, for
$i=1,\ldots,d$, we have
$\E((\o Y_n^0)^2)=\sum_{i=1}^n\E((\o\Delta_i)^2)=n$,
since $\o\Delta_1=\pm1$ almost surely. Thus
$
\E({L}_n(x)) = (\o x)^2+ n.
$
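The identity $\E({L}_n(0))=n$ lends itself to a numerical check. The sketch below (in $d=1$, with an assumed uniform weight $U$ as in no particular choice of the paper) computes $\bar L_n=\E[(Y_n)^2\mid{\cal F}]$ exactly for each sampled weight configuration, by propagating the conditional law of the walk, and then averages over configurations.

```python
import random

def sample_bar_L(n, rng):
    """One realization of bar L_n = E[(Y_n)^2 | F] in d = 1, under the
    illustrative (assumed) weights u_k(i, i+1) = U, u_k(i, i-1) = 1 - U,
    U ~ Uniform[0,1], drawn fresh for every (time, site) pair."""
    dist = {0: 1.0}                      # conditional law of Y_k given F
    for _ in range(n):
        new = {}
        for site, p in dist.items():
            u = rng.random()             # weight at this (time, site)
            new[site + 1] = new.get(site + 1, 0.0) + p * u
            new[site - 1] = new.get(site - 1, 0.0) + p * (1.0 - u)
        dist = new
    return sum(p * site * site for site, p in dist.items())

rng = random.Random(0)
n, reps = 10, 2000
mean = sum(sample_bar_L(n, rng) for _ in range(reps)) / reps
assert abs(mean - n) < 1.0               # E(bar L_n) = n
```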
Let
$
\bar{Y}_n^x = Y^x_n - \E(Y^x_n)=Y^x_n-x.
$
Then,
$
(\o Y^x_n)^2 = (\o \bar{Y}^x_n)^2 + 2(\o x)(\o \bar{Y}^x_n) +
(\o x)^2
$
and
\beq
\label{eq:decomp}
{L}_n(x) = {\bar L}_n(x) + 2(\o x)\bar{Z}_n(x) +
(\o x)^2,
\eeq
where
$
\bar{L}_n(x) = \E((\o \bar{Y}^x_n)^2 \mid {\cal F}_n)
$
and
$
\bar{Z}_n(x) = \E(\o \bar{Y}^x_n \mid {\cal F}_n).
$
Note that, by translation invariance, the distributions of $
\bar{L}_n(x) $ and $ \bar{Z}_n(x) $ do not depend on $ x $. Let
${\bar L}_n:={L}_n(0)=\bar{L}_n(0)$ and $\bar{Z}_n:=\bar{Z}_n(0)$. So,
\begin{equation}\label{eq 21}
\V({L}_n(x)) = \V({\bar L}_n) + 4(\o x)^2\V({\bar Z}_n) +
4(\o x){\mathbb C}\mbox{ov}({\bar L}_n , {\bar Z}_n).
\end{equation}

Below, $c_1,c_2,\ldots$ will always denote positive real numbers which may
depend only on $d$.
One of our main results is the following.
\begin{theo} \label{fluc.main} For all $x\in\Z^d$,
there exist $\cc{-1},\cc{0}$ such that
\[\cs{-1}{\mathfrak O}(n,d)\le\V(L_n(x))\le \cs{0}{\mathfrak O}(n,d)\]
for all $n$, where
\begin{eqnarray}
\label{eq bd13}
{\mathfrak O}(n,d)
&=&\left \{
\begin{array}{ll}
n^{2-\frac{d}{2}},
& \mbox{ if $d=1,2,3$};\\
\log n, & \mbox{ if $d=4$};\\
\mbox{constant}, & \mbox{ if $d\ge 5$}.
\end{array}
\right .
\end{eqnarray}
\end{theo}
The proof of Theorem~\ref{fluc.main} will be presented
in Section 4.
In dimension $1$, it is possible to get a stronger result,
for which we do not present a proof here, but rather refer
to~\cite{tese} (Theorem 3.1.3):
\begin{theo}
\label{t 3.1.3} If $d=1$, then there exists $\cc{00}$ such that
for all $x\in\Z^d$
\[
\frac{\V(L_n(x))}{n^{3/2}}\to \cs{00}\mbox{ as } n\to \infty.
\]
\end{theo}

We also establish a central limit theorem for $\bar L_n=L_n(0)$
in dimension $1$ (the proof, presented in Section 6, does not use
Theorem~\ref{t 3.1.3}). Let $\vn:=\V( \bar L_n)$.
\begin{theo}
\label{clt} In $d=1$, the distribution of $\vn^{-1/2}\bar L_n$ converges
to a standard Gaussian, as $n\to\infty$.
\end{theo}

Our analysis yields that in dimensions 5 or more
there exists an invariant measure for the dynamics of the surface
as seen from the average height at the origin.
This is related to the almost sure
existence of the limits of $\bar{L}_n-n$ and $\bar{Z}_n$
as $n\to\infty$.
In Subsection~\ref{inv}, we discuss this and prove a
result about the asymptotic shape and magnitude of the space fluctuations
of this measure.

In Section 3, we state and prove auxiliary results for the arguments of
the proofs of our main results. In Section 5, we discuss the case of
higher order polynomial initial conditions and prove a result that
indicates a substantial difference with the linear and parabolic cases,
namely the unboundedness of the time fluctuations in high dimensions.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%% Section 3 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Preliminaries}
To prove Theorems~\ref{fluc.main} and~\ref{clt} we will need
some lemmas.
\begin{lm}
\label{l 2.2.1} The process $ {\bar L}_n - n$ is a martingale with
respect to $\{ {\cal F}_n, n \geq 0 \}$.
\end{lm}
\noindent{\bf Proof.\/} Let $ Y_n = Y^0_n $. So,
$
\E({\bar L}_n) = \E((\o Y_n)^2) = n.
$
We have
\begin{eqnarray*}
{\bar L}_n & = & \E[(\o Y_n)^2 \mid {\cal F}_n] \\
& = & \sum_{k\in {\Z}^d}\sum_{j\in {\Z}^d}(\o j)^2
\P(Y_n=j \mid Y_{n-1}=k, {\cal F}_n) \P(Y_{n-1} = k \mid
{\cal F}_n)\\
%
& = & \sum_{k\in {\Z}^d}\sum_{j\in {\Z}^d}\{\o [k+(j-k)]\}^2
u_n(k,j) \P(Y_{n-1} = k \mid {\cal F}_{n-1}) \\
%
& = & \sum_{k\in {\Z}^d}(\o k)^2
\P(Y_{n-1}=k \mid {\cal F}_{n-1})\\
& & +2\sum_{k\in {\Z}^d}(\o k)\Big(\sum_{j\in {\Z}^d}
[\o (j-k)] u_n(k,j)\Big) \P(Y_{n-1} = k \mid {\cal F}_{n-1}) \\
%
& & + \sum_{k\in {\Z}^d}\Big(\sum_{j\in {\Z}^d}[\o(j-k)]^2
u_n(k,j)\Big) \P(Y_{n-1} = k \mid {\cal F}_{n-1}) \\
%
& = & \E[(\o {Y}_{n-1})^2 \mid
{\cal F}_{n-1}] + 2\sum_{k\in {\Z}^d}(\o k) (\o \theta_n(k))
\P(Y_{n-1} = k \mid {\cal F}_{n-1})\\
& & + \sum_{k\in {\Z}^d} \P(Y_{n-1} = k \mid {\cal F}_{n-1}),
\end{eqnarray*}
where $\theta_n(k) = \sum_{j\in{\Z}^d}(j-k)u_n(k,j)$, since
$[\o(j-k)]^2=1$ whenever $u_n(k,j)\ne0$, due to the nearest neighbor
character of the $u$'s.
Let ${W}_n=\E[(\o {Y}_{n-1})(\o \theta_n(Y_{n-1})) \mid {\cal F}_n],$
so ${\bar L}_n = {\bar L}_{n-1} + 2{W}_n + 1.$
Note that the distribution of $\theta_n(k)$ does not depend on $n$ or
$k$ and
$\E( \theta_1(0)) = \E\Big(\sum_{j\in{\Z}^d}ju_1(0,j)\Big) = 0.$
We have that $ {\bar L}_{0} = 0$ and, for $n\geq1$,
${\bar L}_n = \sum_{i=1}^n(2{W}_i + 1 )$. Thus
\beq
\label{eq:mart}
{\bar L}_n-n = 2 \sum_{i=1}^n {W}_i.
\eeq
Also
\begin{eqnarray}\label{eq 30}
\E[{W}_n \mid {\cal F}_{n-1}]&=&
\E\{\E[(\o{Y}_{n-1})(\o\theta_n(Y_{n-1})) \mid {\cal F}_{n}]
\mid {\cal F}_{n-1}\} \\
&=&\E[(\o{Y}_{n-1})(\o\theta_n(Y_{n-1})) \mid {\cal F}_{n-1}]\nonumber\\
&=& \sum_{k\in {\Z}^d}\E[(\o k)(\o \theta_n(k)) \mid Y_{n-1}= k,{\cal F}_{n-1}]
\P(Y_{n-1} = k \mid {\cal F}_{n-1})\nonumber\\
&=&\sum_{k\in {\Z}^d}(\o k)(\o \E[\theta_n(k)])
\P(Y_{n-1} = k \mid {\cal F}_{n-1})=0,
\end{eqnarray}
since $\theta_n(k)$ is independent of ${\cal F}_{n-1}$ for all $n,k$.
Thus, ${\bar L}_n -n$ is a martingale with respect to $\{
{\cal F}_n, n \geq 0 \}$ and Lemma~\ref{l 2.2.1} is proved.
\qed
\begin{lm}
\label{l 2.5.1} Let $(\hat Y_n)_{n\ge 0}$ be an independent copy
of $( Y_n)_{n\ge 0}$ given ${\cal F}$. Then $(Y_n,\hat{Y}_n)$ is
a Markov chain in $\Z^d\times \Z^d$ with the following transition
probabilities
\beq
\label{tp}
\P(Y_n=k_n,\hat{Y}_n=l_n \mid Y_{n-1}=k_{n-1},
\hat{Y}_{n-1}= l_{n-1}) =
\nu[u_1(k_{n-1},k_n)u_1(l_{n-1},l_n)].
\eeq
\end{lm}
\noindent {\bf Proof.}
\begin{eqnarray}\nonumber
\lefteqn{\P(Y_n=k_n,\hat{Y}_n=l_n \mid Y_{n-1}=k_{n-1},
\hat{Y}_{n-1}= l_{n-1},\ldots,Y_1=k_1,\hat{Y}_1=l_1)}\\\nonumber
%
&{=}& \E[\P(Y_n=k_n,\hat{Y}_n=l_n \mid Y_{n-1}=k_{n-1},
\hat{Y}_{n-1}=l_{n-1},\ldots,Y_1=k_1,\hat{Y}_1=l_1,{\cal F}_n)]\\
\label{tp1}
&{=}& \E[\P(Y_n=k_n \mid Y_{n-1}=k_{n-1},{\cal F}_n)
\P(\hat Y_n=l_n \mid \hat Y_{n-1}=l_{n-1},{\cal F}_n)].
\end{eqnarray}
The second equality is justified by the independence of the random
walks $Y_n$ and $\hat{Y}_n$ given ${\cal F}_n$ and
the Markov property of $Y_n$ and $\hat{Y}_n$.
The result follows
from the fact that the right hand side of~(\ref{tp1})
equals both
$\E[\P(Y_n=k_n,\hat{Y}_n=l_n \mid Y_{n-1}=k_{n-1},\hat{Y}_{n-1}=l_{n-1},{\cal F}_n)]
=\P(Y_n=k_n,\hat{Y}_n=l_n \mid Y_{n-1}=k_{n-1},\hat{Y}_{n-1}=l_{n-1})$
and $\E[u_n(k_{n-1},k_{n})u_n(l_{n-1},l_{n})]=
\nu[u_1(k_{n-1},k_n)u_1(l_{n-1},l_n)]$. \qed
\begin{cor}
\label{l 2.5.2} Let $D_n=Y_n-\hat Y_n$ and $S_n=Y_n+\hat Y_n$.
Then $(D_n, S_n)_{n\ge 0}$ is a Markov chain in $\Z^d\times \Z^d$
with transition probabilities
\begin{eqnarray}
\nonumber
\lefteqn{\P(D_n=d_n,S_n=s_n \mid D_{n-1}=d_{n-1},S_{n-1}=s_{n-1})}\\
\label{eq dist}
&= \nu\Big[u_1\Big(\frac{s_{n-1}+d_{n-1}}{2},\frac{s_{n}+d_{n}}{2}\Big)
u_1\Big(\frac{s_{n-1}-d_{n-1}}{2},\frac{s_{n}-d_{n}}{2}\Big)\Big].&
\end{eqnarray}
\end{cor}
\noindent {\bf Proof.} Straightforward from Lemma~\ref{l 2.5.1}.
\begin{rmk}
\label{l 2.5.3} From Corollary~\ref{l 2.5.2}, we have that, if $d_{n-1}=0$,
\begin{eqnarray*}
\lefteqn{ \P(D_n=d, S_n = s_{n-1}+s \mid D_{n-1}= 0, S_{n-1}=
s_{n-1})}\\
& = & \E\Big[u_1\Big(\frac{s_{n-1}}{2},\frac{s_{n-1}}{2}+
\frac{s+d}{2}\Big)u_1\Big(\frac {s_{n-1}}{2},\frac{s_{n-1}}{2}+
\frac{s-d}{2}\Big)\Big]\\
& = &
\E\Big[u_1\Big(0,\frac{s+d}{2}\Big)u_1\Big(0,\frac{s-d}{2}\Big)\Big]=
\nu\Big[u_1\Big(0,\frac{s+d}{2}\Big)u_1\Big(0,\frac{s-d}{2}\Big)\Big],
\end{eqnarray*}
where the second equality follows by translation invariance
of the $u$'s.
On the other hand, if $d_{n-1}\ne 0$, then,
\begin{eqnarray*}
\lefteqn{\P(D_n=d_{n-1}+d, S_n= s_{n-1}+s \mid
D_{n-1}=d_{n-1}, S_{n-1}=s_{n-1})}\\
& = & \E\Big[u_1\Big(\frac{s_{n-1}+d_{n-1}}{2},\frac{s_{n-1}+d_{n-1}}{2}+
\frac{s+d}{2}\Big)\\
&&\quad\times u_1\Big(\frac{s_{n-1}-d_{n-1}}{2},\frac{s_{n-1}-d_{n-1}}{2}+
\frac{s-d}{2}\Big)\Big]\\
& = & \E\Big[u_1\Big(\frac{s_{n-1}+d_{n-1}}{2},\frac{s_{n-1}+d_{n-1}}{2}+
\frac{s+d}{2}\Big)\Big]\\
&&\times \E\Big[u_1\Big(\frac{s_{n-1}-d_{n-1}}{2},
\frac{s_{n-1}-d_{n-1}}{2}+ \frac{s-d}{2}\Big)\Big]\\
& = &
\E\Big[u_1\Big(0,\frac{s+d}{2}\Big)\Big]\E\Big[u_1\Big(0,\frac{s-d}{2}\Big)\Big]=
\nu\Big[u_1\Big(0,\frac{s+d}{2}\Big)\Big]\nu\Big[u_1\Big(0,\frac{s-d}{2}\Big)\Big],
\end{eqnarray*}
where we have used the
independence and translation invariance of the $u$'s. So,
\begin{eqnarray}\label{eq 47}
\lefteqn{\P(D_n=d_{n-1}+d,S_n= s_{n-1}+s \mid D_{n-1}=d_{n-1},
S_{n-1}=s_{n-1})} \nonumber\\
&=&
\begin{cases}
\nu[u_1(0,\frac{s+d}{2})u_1(0,\frac{s-d}{2})],
& \mbox{ if $d_{n-1}=0$}\\
\nu[u_1(0,\frac{s+d}{2})]\nu[u_1(0,\frac{s-d}{2})],&
\mbox{ if $d_{n-1}\neq0$},
\end{cases}
\end{eqnarray}
and thus $(D_n,S_n)$ is space homogeneous
in $ \{0\} \times
{\Z}^d $ and $ ({\Z}^d \setminus \{0\}) \times {\Z}^d$
separately, but not in
${\Z}^d \times {\Z}^d$.
\end{rmk}
\begin{rmk}
\label{rmk:dn}
It follows from~(\ref{eq 47}) that $D_n$, $n\geq0$, is a Markov chain
with transition probabilities (see also \cite{ff}, Lemma~2.5)
\begin{equation}%\label{eq 63}
\gamma(l,k)=
\begin{cases}
\sum_{j\in {\Z}^d}\nu[u_1(0,j)u_1(0,j+k)],& \mbox{ if } l=0,\\
\sum_{j\in {\Z}^d}\nu[u_1(0,j)]\nu[u_1(l,j+k)],& \mbox{ if } l\ne0.
\end{cases}
\end{equation}
Our assumptions on the $u$'s make the jumps of $D_n$ have length only
either $0$ or $2$, with $\gamma(0,0)<1$.
The jumps of length $2$ can be (only) in any of the coordinate
positive and negative directions.
All of these possibilities have equal probabilities
(which do not depend on the starting point, provided it is
in $\Z^d\setminus\{0\}$), that is
\[\gamma(l,l\pm2e_j)=(1-\gamma(l,l))/(2d)
\,\,\mbox{ for all }\,\, l\in\Z^d \,\,\mbox{ and }\,\, j=1,\ldots,d\,;\]
\[\gamma(l,l)=\gamma(l',l') \,\,\mbox{ if }\,\, l,l'\ne0. \]
\end{rmk}
\begin{rmk}
Remark~\ref{l 2.5.3} allows us to construct $(D_n,S_n)$
in the following way. Let $(\tilde\delta, \tilde\xi)$ be distributed as
the increments of $(D_n,S_n)$ in $\{0\} \times {\Z}^d $ and
$ (\delta,\xi) $
as those in $({\Z}^d \setminus \{0\})
\times {\Z}^d $, that is,
\begin{equation}\label{eq 48A}
\P(\tilde\delta =d, \tilde\xi = s ) =
\nu\Big[u_1\Big(0,\frac{s+d}{2}\Big)u_1\Big(0,\frac{s-d}{2}\Big)\Big]
\end{equation}
and
\begin{equation}
\label{eq 48B}
\P(\delta = d, \xi = s ) =
\nu\Big[u_1\Big(0,\frac{s+d}{2}\Big)\Big]\nu\Big[u_1\Big(0,
\frac{s-d}{2}\Big)\Big].
\end{equation}
Let $ \{(\delta_n(i), \xi_n(i)); i \in {\Z}^d \setminus
\{0\}, n \geq 1 \} $ and $ \{(\delta_n(0), \xi_n(0)); n
\geq 1 \} $ be two independent families of i.i.d.~random
vectors such that
$(\delta_1(0),\xi_1(0))\stackrel{d}=(\tilde\delta,\tilde\xi)$
and
$(\delta_1(i),\xi_1(i))\stackrel{d}=(\delta,\xi)$
for all $i\in({\Z}^d\setminus\{0\})$. Then
\begin{equation}\label{eq 49}
(D_n,S_n)=\sum_{i=1}^n (\delta_i(D_{i-1}),\xi_i(D_{i-1})).
\end{equation}
\end{rmk}
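As an illustration, the construction above can be simulated. The sketch below takes $d=1$ with an assumed uniform weight $U$ (so at $D_{n-1}=0$ the increment law gives probabilities $\nu[U^2]=\nu[(1-U)^2]=1/3$ and $\nu[U(1-U)]=1/6$, while away from $0$ the two coordinates come from independent $\pm1$ steps), and checks that the jumps of $D_n$ have length $0$ or $2$ and that $D_n+S_n=2Y_n$ stays even.

```python
import random

def increment(d_prev, rng):
    """One increment of (D_n, S_n) in d = 1 under the illustrative
    (assumed) weights u_1(0,1) = U, u_1(0,-1) = 1 - U, U ~ Uniform[0,1]."""
    if d_prev == 0:
        # nu[U^2] = nu[(1-U)^2] = 1/3, nu[U(1-U)] = 1/6
        return rng.choices([(0, 2), (0, -2), (2, 0), (-2, 0)],
                           weights=[2, 2, 1, 1])[0]
    # away from 0: independent +-1 steps a, b of the two walks
    a, b = rng.choice([1, -1]), rng.choice([1, -1])
    return a - b, a + b

rng = random.Random(1)
D, S = 0, 0
for _ in range(1000):
    dd, ds = increment(D, rng)
    assert abs(dd) in (0, 2) and abs(ds) in (0, 2)  # jumps of length 0 or 2
    assert abs(dd) + abs(ds) == 2                   # exactly one coordinate moves
    D, S = D + dd, S + ds
assert (D + S) % 4 == 0   # D + S = 2Y_n, and Y_n is even after 1000 steps
```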
\begin{lm}
\label{l 2.5.4} For all $n$, given $D_{n-1}$ and $\delta_n(D_{n-1})$,
$
\xi_n(D_{n-1})\stackrel{d}{=}-\xi_n(D_{n-1}).
$
\end{lm}
\noindent {\bf Proof.} Given that $D_{n-1}=d_{n-1}$, if
$d_{n-1}=0$, then, by Remark~\ref{l 2.5.3}, for all $d$ and $s$,
\begin{eqnarray*}
&&\P\left(\delta_n(D_{n-1})=d,\xi_n(D_{n-1})=s \mid D_{n-1}= 0\right)
=\P(\tilde\delta =d, \tilde\xi = s ) \\
&=&\nu\Big[u_1\Big(0,\frac{s+d}{2}\Big)u_1\Big(0,\frac{s-d}{2}\Big)\Big]
\stackrel{I}{=}
\nu\Big[u_1\Big(0,\frac{-s-d}{2}\Big)u_1\Big(0,\frac{-s+d}{2}\Big)\Big]\\
&=&\P(\tilde\delta=d,\tilde\xi=-s)=
\P\left(\delta_n(D_{n-1})=d,\xi_n(D_{n-1})=-s \mid D_{n-1}= 0\right),
\end{eqnarray*}
where equality $I$ follows from the symmetry of the $u$'s.
The case $d_{n-1}\ne0$ is similar. \qed
\medskip
Let $H_n$ be a space homogeneous Markov chain (random walk)
with transition probabilities
\begin{equation}\label{eq 61}
\gamma_H(l,k) = \sum_{j\in {\Z}^d}\nu(u_1(0,j))\nu(u_1(l,j+k))
\quad\mbox{ for all } l, k\in \Z^d.
\end{equation}
\begin{rmk}
\label{rm:id}
The transition probabilities of $H_n$ and $D_n$
differ only at the origin. We have also that
$0<\gamma_H(0,0)<\gamma(0,0)<1$.
\end{rmk}
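For the illustrative one-dimensional weight choice $u_1(0,1)=U$, $u_1(0,-1)=1-U$ with $U$ uniform on $[0,1]$ (an assumption of ours, not the general setup), the holding probabilities at the origin can be computed exactly, confirming the ordering in the remark:

```python
from fractions import Fraction

# Assumed illustrative weights: u_1(0,1) = U, u_1(0,-1) = 1 - U with
# U ~ Uniform[0,1], so nu[U] = 1/2 and nu[U^2] = 1/3.
EU, EU2 = Fraction(1, 2), Fraction(1, 3)
gamma_00 = EU2 + (1 - 2 * EU + EU2)       # nu[U^2 + (1-U)^2] = 2/3
gamma_H00 = EU ** 2 + (1 - EU) ** 2       # nu[U]^2 + nu[1-U]^2 = 1/2
assert gamma_00 == Fraction(2, 3) and gamma_H00 == Fraction(1, 2)
assert 0 < gamma_H00 < gamma_00 < 1       # as in the remark
```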
For $0\leq s<1$, let $\psi(s)=\E(s^T)$, where $T$ is the first return
time to the origin of the walk, let $\phi(s,\g)=\g s+(1-\g)\psi(s)$ and set
$f(s)=[1-\phi(s,\g)]^{-1}$, $g(s)=[1-\phi(s,\bar\g)]^{-1}$, with
$f^{(k)}$, $g^{(k)}$ and $\psi^{(k)}$ denoting the respective $k$-th
derivatives. Then
\[\frac{f^{(0)}(s)}{g^{(0)}(s)}=\frac{1-\phi(s,\bar\g)}{1-\phi(s,\g)}
=\frac{\bar\g+(1-\bar\g)\frac{1-\psi(s)}{1-s}}{\g+(1-\g)\frac{1-\psi(s)}{1-s}},
\quad\mbox{where}\quad
\frac{1-\psi(s)}{1-s}=\sum_{i\geq0}\P(T>i)s^i.\]
This in turn tends to $\sum_{i\geq0}\P(T>i)=\E(T)$ as $s\uparrow1$.
It is a well known result for the (nearest neighbor) symmetric random
walk in $\Z^d$ that $\E(T)=\infty$ (\cite{s}, Proposition 18.1). Thus,
\begin{equation}
\label{eq 397}
\lim_{s\uparrow 1}\frac{f^{(0)}(s)}{g^{(0)}(s)}=
\frac{1-\bar\g}{1-\g},
\end{equation}
and the result for $k=0$ follows, since $\g<1$ (see Remark~\ref{rmk:dn}).
For the next case, notice first that
\[
f^{(1)}(s)=[f(s)]^2d\phi(s,\g)/ds=[f(s)]^2[\g+(1-\g)\psi^{(1)}(s)]
\]
and, analogously,
\[
g^{(1)}(s)=[g(s)]^2d\phi(s,\bar\g)/ds=[g(s)]^2[\bar\g+(1-\bar\g)\psi^{(1)}(s)].
\]
From~(\ref{eq 397}), there exists
a constant $M>1$ such that $f(s)\leq Mg(s)$ for all $0\leq s<1$.
By Remark~\ref{rm:id}, we can have $M$ also satisfy
$1-\g\leq M(1-\bar\g)$, $\g\leq M\bar\g$.
It follows that
$\lim_{s\uparrow 1}f^{(1)}(s)/g^{(1)}(s)
\leq M^3$.
For the next case, notice that
\begin{eqnarray*}
f^{(2)}(s)&=& [f(s)]^2\frac{d^2\phi(s,\g)}{ds^2}+2[f(s)]^3
\left(\frac{d\phi(s,\g)}{ds}\right)^2\\
& = & [f(s)]^2(1-\g)\psi^{(2)}(s)
+2[f(s)]^3[\g+(1-\g)\psi^{(1)}(s)]^2
\end{eqnarray*}
and a similar expression holds for $g^{(2)}(s)$, with $\bar\g$
replacing $\g$. From the above considerations, it follows that
$\lim_{s\uparrow 1}f^{(2)}(s)/g^{(2)}(s)\leq 3M^5$.
Similarly, we find that
$$\lim_{s\uparrow 1}f^{(3)}(s)/g^{(3)}(s)\leq 13M^7\quad\mbox{and}\quad
\lim_{s\uparrow 1}f^{(4)}(s)/g^{(4)}(s)\leq 75M^9.\quad
\qed$$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%% Section 4 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Fluctuations of $L_n$}
\subsection{Proof of Theorem~\ref{fluc.main}}
By Lemma~\ref{l 2.2.1} and~(\ref{eq:mart}),
\begin{equation}\label{rep}
\V({\bar L}_n)=4\sum_{i=1}^n\V({W}_i).
\end{equation}
Now,
\begin{eqnarray}
\lefteqn{\V({W}_i)=\E({W}_i^2)=
\E\Big[\Big(
\sum_{k \in {\Z}^d}(\o k)(\o \theta_i(k))\P(Y_{i-1}=k \mid {\cal F}_{i-1})
\Big)^2\Big]\nonumber}\\
&=&\E\sum_{k,r\in{\Z}^d}(\o k)(\o r)(\o \theta_i(k))(\o \theta_i(r))
\P(Y_{i-1}=k \mid {\cal F}_{i-1})\P(Y_{i-1}=r \mid {\cal F}_{i-1})
\nonumber\\
\label{var}
&=&\sum_{k \in {\Z}^d}(\o k)^2\E\{(\o \theta_i(k))^2
\P^2(Y_{i-1} = k \mid {\cal F}_{i-1})\}\nonumber\\
& = & \sum_{k\in
{\Z}^d}(\o k)^2\nu[(\o \theta_i(k))^2]
\E[\P^2(Y_{i-1} = k \mid {\cal F}_{i-1})]\nonumber\\
&=&\sigma^2\sum_{k\in{\Z}^d}(\o k)^2\E[\P^2(Y_{i-1} = k \mid {\cal F}_{i-1})],
\end{eqnarray}
by the independence between $\theta_i(\cdot)$ and ${\cal F}_{i-1}$,
the independence between $\theta_i(k)$ and
$\theta_i(r)$ when $k \neq r$, and the zero mean of the $\o\theta_i(k)$'s,
where $\sigma^2=\nu[(\o \theta_i(k))^2]=\nu[(\o \theta_1(0))^2]$
is positive by the non degeneracy assumption on the $u$'s.
We have that
\begin{eqnarray*}\label{eq 35}
&&\E[\P^2(Y_{i-1}=k \mid {\cal F}_{i-1})]=
\E[\P(Y_{i-1}=k \mid {\cal F}_{i-1})
\P(\hat{Y}_{i-1}=k \mid {\cal F}_{i-1})]\\
&=&\E[\P(Y_{i-1}=k,\hat{Y}_{i-1}=k \mid {\cal F}_{i-1})]\\
&=&\P(Y_{i-1}=\hat{Y}_{i-1}=k)=\P\Big(\frac{S_{i-1}}{2}=k,D_{i-1}=0\Big),
\end{eqnarray*}
by the definition of $S_n$ and $D_n$
(see Lemma~\ref{l 2.5.1} and Corollary~\ref{l 2.5.2} above). Thus,
\begin{equation}\label{eq 38}
\V({W}_i) = \sigma^2\sum_{k\in {\Z}^d}
(\o k)^2 \P\Big(\frac{S_{i-1}}{2}=k,D_{i-1}=0\Big)=
\frac{\sigma^2}{4}\E\left[(\o {S}_{i-1})^2;D_{i-1}=0\right].
\end{equation}
Hence, from~(\ref{eq 38}) and~(\ref{rep}), we get
\begin{equation}\label{varl}
\V({\bar L}_n)=\s^2\sum_{i=0}^{n-1}\E\left[(\o {S}_{i})^2;D_{i}=0\right].
\end{equation}
%%%%%%%%%%%%%%%%%%%%%%%% Upperbound %%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Upper bound for $\V({\bar L}_n)$}
Writing $S_i$ as a sum of its increments, we get
\begin{eqnarray}%\label{eq 54}
\lefteqn{\E\left[(\o {S}_{i})^2;D_{i}=0\right]=\E\Big[
\Big(\sum_{j=1}^{i}\o\xi_j(D_{j-1})
\Big)^2;\,D_i=0\Big]}\nonumber\\
&=&
\E\Big[\sum_{j=1}^{i}[\o\xi_j(D_{j-1})]^2
+2\sum_{1\leq k<l\leq i}[\o\xi_k(D_{k-1})][\o\xi_l(D_{l-1})];\,D_i=0\Big].\nonumber
\end{eqnarray}
%%%%%%%%%%%%%%%%%%%%%%%% Lowerbound %%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Lower bound for $\V({\bar L}_n)$}
For any $\varepsilon>0$,
\begin{eqnarray}\nonumber
\V({\bar L}_n)&\geq& 4\s^2\varepsilon\sum_{i=1}^{n-1}i
\sum_{\substack{k\in\Z^d:\\ (\o k)^2>\varepsilon i}}
\P^2(Y_i=k)\\
\label{eq 76}
&=&4\s^2\varepsilon\sum_{i=1}^{n-1}i
\left[\sum_{k\in {\Z}^d} \P^2(Y_i = k)-
\sum_{\substack{k\in\Z^d:\\ (\o k)^2\leq\varepsilon i}}
\P^2(Y_i = k)\right].
\end{eqnarray}
Notice that $Y_n-Y_n'$ is a random walk distributed as $H_n$ from
the previous subsection. Thus,
\begin{eqnarray}\label{eq 79}
\sum_{k\in {\Z}^d}\P^2(Y_i = k) = \sum_{k\in {\Z}^d}\P(Y_i = Y_i'= k)
= \P(Y_i = Y_i') = \P({H}_i = 0).
\end{eqnarray}
Using~(\ref{eq 79}) in~(\ref{eq 76}), we obtain
\begin{eqnarray}\label{eq 80}
\V({\bar L}_n)\geq4\s^2\varepsilon
\sum_{i=1}^{n-1}i\,\left\{\P({H}_i = 0)-
\sup_{k}\P(Y_i = k)\P\left((\o Y_i)^2\leq\varepsilon i\right)
\right\}.
\end{eqnarray}
Using~(\ref{eq 62}) again, there exists $\cc{13}$ such that
$\P({H}_{i}=0) \geq \cs{13} i^{-d/2}$ for all $i$. It is also
well known that there exists $\cc{19}$ such that
$\sup_{k} \P(Y_{i}=k) \leq \cs{19} i^{-d/2}$. Thus,
\begin{eqnarray}\label{eq 82}
\V({\bar L}_n)\geq4\s^2\varepsilon
\sum_{i=1}^{n-1} i^{1-d/2}\left[\cs{13}-\cs{19}\,
\P\left(\left(\o\frac{Y_i}{\sqrt{i}}\right)^2\leq
\varepsilon\right)\right].
\end{eqnarray}
By the Central Limit Theorem, $Y_i/\sqrt{i}$ converges weakly
to a vector of i.i.d.~centered Gaussian random variables, say $V$.
Now $\o V=\sum_{j=1}^dV_j$ is a non degenerate Gaussian random variable,
so $(\o V)^2$ has a continuous distribution, and thus
the probability on the right hand side of~(\ref{eq 82})
converges to $\P((\o V)^2\leq\varepsilon)$ as $i\to\infty$.
Moreover,
$\P(\o V=0)=\P((\o V)^2=0)=0$. We conclude that there exists
$\varepsilon>0$ for which the expression in brackets
at the right hand side of~(\ref{eq 82}) is bounded below by a
positive constant $\cc{20}$ for all large enough $i$. Thus,
for some $\cc{51}$ and $\cc{52}$, for all $n$ and $d=1,2,3,4$,
\begin{equation}
\label{52}
\V({\bar L}_n)\geq4\cs{51}\varepsilon\sum_{i=1}^{n-1} i^{1-d/2}
\geq\cs{52}\,{\mathfrak O}(n,d).
\end{equation}
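Explicitly, the sum appearing in~(\ref{52}) is of the order of
\[
\left\{
\begin{array}{cc}
n^{3/2}, & \mbox{ if $d=1$,}\\
n, & \mbox{ if $d=2$,}\\
n^{1/2}, & \mbox{ if $d=3$,}\\
\log n, & \mbox{ if $d=4$.}
\end{array}
\right.
\]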
\vspace{.5cm}
By Proposition~2.3 of~\cite{ff},
\begin{eqnarray}\label{eq 905}
\V(\bar Z_n)=\sigma^2\sum_{j=0}^{n-1}\P(D_{j}=0).
\end{eqnarray}
Corollary~3.4 of the same paper states that
\begin{eqnarray}\label{eq 91}
\V({\bar Z}_n) \mbox{ is of order } \left \{
\begin{array}{cc}
n^{1/2}, & \mbox{ if $d=1$,}\\
\log n, & \mbox{ if $d=2$,}\\
\mbox{constant}, & \mbox{ if $d\geq 3$}.
\end{array}
\right.
\end{eqnarray}
Thus, from~(\ref{52}),
for all $d\ge 1$, $\V(\bar L_n)$ dominates $\V({\bar Z}_n)$.
Since, by the Cauchy--Schwarz inequality, the order of
${\mathbb C}\mbox{ov}({\bar L}_n,{\bar Z}_n)$ is intermediate between
the orders of $\V({\bar Z}_n)$ and $\V(\bar L_n)$, it is also
dominated by the latter. Thus, by~(\ref{eq 21}),
the order of $\V(L_n(x))$ is the same as
the order of $\V(\bar L_n)$ and, from~(\ref{52}) and~(\ref{eq upp}),
the proof of Theorem~\ref{fluc.main} is complete. \qed
\smallskip
Note that by~(\ref{eq up}) and~(\ref{eq 905}), if the
non degeneracy assumption on the $u$'s does not hold, then
$\V(\bar L_n)$ and $\V({\bar Z}_n)$ vanish identically
and thus so does $\V(L_n(x))$.
\begin{rmk}
\label{vm}
In the case of the voter model, $\g=\g(0,0)=1$ and thus $D\equiv0$.
From~(\ref{varl}), we get that
$\V({\bar L}_n)=\s^2\sum_{i=0}^{n-1}\E\left[(\o {S}_{i})^2\right]$,
which one then easily estimates to be of the order of $n^2$ in all
dimensions. From~(\ref{eq 905}), we have that $\V({\bar Z}_n)$ is
of the order of $n$ in all dimensions. Thus, by~(\ref{eq 21}),
$\V(L_n(x))$ is of the order of $n^2$ in all dimensions for all $x$.
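Indeed, for the first estimate: $D\equiv0$ forces $Y=\hat Y$, so that
$S_i=2Y_i$ and $\E[(\o S_i)^2]=4\E[(\o Y_i)^2]$; since $\o Y_i$ changes
by $\pm1$ at each step, symmetrically and independently of the past
(under the annealed law), $\E[(\o Y_i)^2]$ is of the order of $i$, whence
\[
\V({\bar L}_n)=\s^2\sum_{i=0}^{n-1}\E\left[(\o S_i)^2\right]
\]
is of the order of $\sum_{i=0}^{n-1}i$, that is, of $n^2$.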
\end{rmk}
%%%%%%%%%%%%%%%%% Invariant measure %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Invariant measure in high dimensions and its fluctuations}
\label{inv}
Since in $5$ and higher dimensions $\bar L_n-n$ is an $L_2$-bounded
martingale, it converges almost surely as $n\to\infty$. The same holds
for $\bar Z_n$ (see \cite{ff}, Lemma 2.2, and~(\ref{eq 91}) above).
It thus follows
from~(\ref{eq:decomp}) that $L_n-n$
converges almost surely as $n\to\infty$, say to $\tilde L$.
The distribution of $\tilde L$ is then invariant
for the surface dynamics corresponding to applying the random
averaging~(\ref{eq:ave}) and subtracting 1 at each site;
an argument similar to that for Proposition 5.2 in~\cite{ff}
establishes this, and we omit it here.
From these facts and~(\ref{eq:decomp}), we are able to argue
the following.
\begin{theo}
\label{theo:spa}
Suppose that $\o x/|x|\to\mu\in\R$ as $|x|\to\infty$. Then
the distribution of $(\tilde L(x)-(\o x)^2)/|x|$ converges weakly
to the distribution of $2\mu\tilde Z$ as $|x|\to\infty$,
where $\tilde Z$ is a non trivial random variable.
The variance of
$(\tilde L(x)-(\o x)^2)/|x|$
converges to the variance of the weak limit.
\end{theo}
\noindent{\bf Proof.\/}
Let $L'=\lim_{n\to\infty}\bar L_n-n$ and $Z'=\lim_{n\to\infty}\bar Z_n$.
Then, from~(\ref{eq:decomp}), $\tilde L(x)=L'(x)+2(\o x)Z'(x)+(\o x)^2$
and thus
$(\tilde L(x)-(\o x)^2)/|x|=(L'(x)/|x|)+2[(\o x)/|x|]Z'(x)$.
The distributions of both $L'(x)$ and $Z'(x)$ do not depend on $x$,
since they are the limits of distributions that do not depend on $x$,
and $L'(x)$ and $Z'(x)$ have finite second moments, by the $L_2$-Martingale
Convergence Theorem. We conclude that $L'(x)/|x|\to0$
as $|x|\to\infty$ in $L_2$ and in probability. We take $\tilde Z=Z'(0)$.
The non triviality of $\tilde Z$ follows from the positivity of the variance
of $Z'(0)$
(which equals $\s^2\sum_{i=0}^\infty\P(D_{i}=0)$; see~(\ref{eq 905}) above).
The result follows. \qed
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%% Section 5 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Remark on higher degrees}
The boundedness of the fluctuations in high dimensions, characteristic
of the cases of parabolic (studied above) and linear initial
conditions~\cite{ff},
does not necessarily occur for a polynomial initial condition
of higher degree. We illustrate with the case of
a cubic.
\begin{prop}\label{unstab}
Let $\hat L_n(x)=\E[(\o Y_n^x)^3 \mid {\cal F}_n],\,x\in\Z^d$.
For all high enough dimensions, there exists $\cc{22}$ such that
for all $n\geq1$
\begin{equation}
\label{eq:unstab}
\V(\hat L_n(0))\geq\cs{22}n^2.
\end{equation}
\end{prop}
\noindent{\bf Proof.\/}
\begin{eqnarray*}
\hat L_n(0)&=&\E[(\o Y_n)^3 \mid {\cal F}_n]\\
&=&\sum_{k\in \Z^d}\sum_{j=1}^d\sum_{\alpha=\pm
1}[\o(k+\alpha e_j)]^3u_n(k,k+\alpha
e_j)\P(Y_{n-1}=k \mid {\cal F}_{n-1})\\
&=&\sum_{k\in \Z^d}\sum_{j=1}^d\sum_{\alpha=\pm
1}[(\o k)+\alpha]^3u_n(k,k+\alpha
e_j)\P(Y_{n-1}=k \mid {\cal F}_{n-1})\\
&=&\sum_{k\in \Z^d}\sum_{j=1}^d\sum_{\alpha=\pm 1}
[(\o k)^3+3\alpha(\o k)^2+3(\o k)+\alpha]\\
&&\times \quad
u_n(k,k+\alpha e_j)\P(Y_{n-1}=k \mid {\cal F}_{n-1})\\
&=& \hat L_{n-1}+3\bar Z_{n-1}
+3\E[(\o Y_{n-1})^2(\o\theta_n(Y_{n-1})) \mid {\cal F}_{n}]
+\E[\o\theta_n(Y_{n-1}) \mid {\cal F}_{n}],
\end{eqnarray*}
where $\bar Z_{n}$ is as in the previous sections.
Since the distributions of the $u_n$'s are symmetric
and the $u_n$'s are independent of ${\cal F}_{n-1}$,
$\E(\hat L_n \mid {\cal F}_{n-1})=\hat L_{n-1}+3\bar Z_{n-1}$.
From this and the fact that $ \bar Z_{n}$ is a
martingale we get that
$\hat L_{n}-3n\bar Z_{n}$ is a martingale. So
$\hat L_{n}-3n\bar Z_{n}=3\sum_{i=1}^n
\E[(\o Y_{i-1})^2(\o\theta_i(Y_{i-1})) \mid {\cal F}_{i}]
+\sum_{i=1}^n\E[\o\theta_i(Y_{i-1}) \mid {\cal F}_{i}]$.
The latter sum equals $\bar Z_n$ (see~\cite{ff},~(2.24)).
We thus get that $\hat L_{n}-(3n+1)\bar Z_{n}$ is also a martingale,
with
$\hat L_{n}-(3n+1)\bar Z_{n}=3\sum_{i=1}^n \hat W_i$, where
$\hat W_i:=\E[(\o Y_{i-1})^2(\o\theta_i(Y_{i-1})) \mid {\cal F}_{i}]$.
The variance of $\hat W_i$ can be (roughly) estimated as follows
\begin{eqnarray*}
\V(\hat W_i)&=&
\sigma^2\E\sum_{k\in\Z^d}(\o k)^4
\P^2(Y_{i-1}=k \mid {\cal F}_{i-1})\\
&=&\frac{\sigma^2}{16}\E[(\o S_{i-1})^4;D_{i-1}=0]
\leq \sigma^2\,(i-1)^4\,\P(D_{i-1}=0),
\end{eqnarray*}
where in the above inequality we used
the fact that $S$ has jumps
of length at most $2$.
Thus $\V\big(3\sum_{i=1}^n \hat W_i\big)$
is bounded above by a constant times $\sum_{i=1}^\infty i^{4}\P(D_{i}=0)$.
From Lemma~\ref{l 3.1.1}, we can obtain an upper bound
for the latter sum by replacing $\P(D_{i}=0)$ in it
by constant times $\P(H_{i}=0)$. We conclude that the sum
is bounded by
constant times $\sum_{i=1}^\infty i^{4-d/2}$, which is
finite if $4-d/2<-1$, that is, if $d\geq11$.
On the other hand,~(\ref{eq 905}) and~(\ref{eq 91})
above tell us that the fluctuations
of $\bar Z_{n}$ are positive and bounded for
$d\ge 3$, and thus $\V(\hat L_{n})$ is of order
$n^2$ if $d\geq11$. \qed
\begin{rmk}
It is possible, with a similar estimation as the one done
in Sub-subsection~4.1.1,
to get a sharper estimate of the variance of $\sum_{i=1}^n \hat W_i$
and then obtain an upper bound of constant times
$\sum_{i=1}^n i^{2-d/2}$ for $\V(\hat L_{n})$.
This and~(\ref{eq 91}) above would imply
that the term $(3n+1)\bar Z_{n}$ gives the dominating contribution
to the variance of $\hat L_{n}$ in $d\geq2$ and thus that the conclusion
of Proposition~\ref{unstab} would hold for these dimensions.
(The case of $d=1$ would demand further analysis, since the contributions
of $(3n+1)\bar Z_{n}$ and $\sum_{i=1}^n \hat W_i$ would be of the same
order and one would have to rule out cancellations.) We chose not to
make an exhaustive analysis here, but rather to indicate the
phenomenon of unboundedness of the fluctuations
in all high dimensions as simply as possible.
\end{rmk}
\begin{rmk}
Unboundedness of the fluctuations in high dimensions should also
occur for higher degree initial polynomials such as
$X_0(x)=(\o x)^k$, $k\geq4$. Results similar to Proposition~\ref{unstab}
should hold for those cases, with similar arguments.
\end{rmk}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%% Section 6 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Central limit theorem}
In $d=1$, the following holds.
\begin{lm}\label{decay}
Let $D$ be as in the previous sections. Then
\begin{equation}
\P(D_{n}=0 \mid D_{0}=0)\simeq n^{-1/2}.
\end{equation}
\end{lm}
Here, and in what follows, $a_n\simeq b_n$ means that
$\lim_{n\to\infty}a_n/b_n$ exists and is positive.
\noindent {\bf Proof.} This is loosely argued in Remark 3.3
of~\cite{ff}. We repeat
that argument. The result follows from {\bf P20.2} of~\cite{s},
Lemma~\ref{l 3.1.1} and the fact that (1) of {\bf P20.2} of~\cite{s} holds
for $H$. We leave more details to the reader. \qed
We will also use the fact that $\P(D_{n}=0 \mid D_{0}=m)\leq\P(D_{n}=0 \mid D_{0}=0)$
for all $m$. (See proof of Lemma 3.1 in~\cite{ff} for an argument.)
\vspace{.5cm}
\noindent {\bf Proof of Theorem~\ref{clt}.}
It suffices to verify the two
conditions of the Corollary to Theorem~3.2 of~\cite{hh}. In the notation
of that reference,
$X_{ni}=\vn^{-1/2}(2W_i+1)$ and ${\cal F}_{ni}={\cal F}_i$ for all $i,n$.
We recall that
$W_i=\E[Y_{i-1}\theta_i(Y_{i-1}) \mid {\cal F}_i]=
\sum_{k}k\theta_i(k)\P(Y_{i-1}=k \mid {\cal F}_{i-1})$.
%%%%%%%%%%%%%%%%%%%%%%%% 1st condition %%%%%%%%%%%%%%%%%%%%%%%%
\subsection{First condition}
Since $\vn$ is of the order of $n^{3/2}$, it is enough to show that
\begin{eqnarray}
\label{cond1clt1}
\frac{1}{n^{3/2}}\sum_{i=1}^{n} \E(W_i^2;|W_i|>\eps n^{3/4} \mid
{\cal F}_{i-1})\to 0
\end{eqnarray}
in probability as $n\to \infty$ for any $\eps>0$. For that, it
suffices to prove that
\begin{equation}
\label{cond1clt2}
\frac{1}{n^{3/2}}\sum_{i=1}^{n} \E(W_i^2; |W_i|>\eps n^{3/4})\to 0
\end{equation}
as $n\to\infty$. By the Cauchy--Schwarz inequality, the following is
an upper bound for the expectation in~(\ref{cond1clt2}):
\begin{eqnarray}
\sqrt{\E(W_i^4)\,\P(|W_i|>\eps n^{3/4})}.
\end{eqnarray}
Now,
\begin{eqnarray*}
\lefteqn{\E(W_i^4)
=\E\Big\{\sum_{k}k\theta_i(k)\P(Y_{i-1}=k \mid {\cal F}_{i-1})\Big\}^4}\\
&=&\cc{27}\sum_{k}k^4\E(\P^4(Y_{i-1}=k \mid {\cal F}_{i-1}))\\
&& +\cs{27}\E\sum_{k< l}k^2 l^2
\P^2(Y_{i-1}=k \mid {\cal F}_{i-1})\P^2(Y_{i-1}=l \mid {\cal F}_{i-1})\\
&\le&\cs{27}\sum_{k}k^4\E(\P^2(Y_{i-1}=k \mid {\cal F}_{i-1}))
+\cs{27}\Big\{\sum_{k}k^2
\E(\P^2(Y_{i-1}=k \mid {\cal F}_{i-1}))\Big\}^2\\
&\le&\cc{28}\E(S_{i-1}^4; D_{i-1}=0)+
\cs{28}\E^2(S_{i-1}^2;D_{i-1}=0)\\
&\le&\cs{28}\sqrt{\E(S_{i-1}^8)}\sqrt{\P(D_{i-1}=0)}+
\cs{28}\E^2(S_{i-1}^2;D_{i-1}=0)\le \cc{29}\, i^{7/4}
\end{eqnarray*} for some $\cs{27}, \cs{28}, \cs{29}$.
For the last inequality, we used~(\ref{eq 38},\ref{eq upl}),
Lemma~\ref{decay} and $\E(S_{i-1}^8)\leq\cc{235}\E(Y_{i-1}^8)\simeq i^4$
for some $\cs{235}$. Now,
\begin{eqnarray*}
\P(|W_i|>\eps n^{3/4})
&\leq&
\eps^{-2}\, n^{-3/2}\E(W_i^2)=\eps^{-2}\,n^{-3/2}\V(W_i)\\
&\le& \eps^{-2}\,n^{-3/2}\s^2 i\,\P(D_{i-1}=0)
\leq\cc{30} n^{-3/2} i^{-1/2},
\end{eqnarray*}
where the second inequality is~(\ref{eq upl}), and the last one
follows from Lemma~\ref{decay}. Thus
\begin{eqnarray*}
&n^{-3/2}\sum_{i=1}^n\E[W_i^2;|W_i|>\eps n^{3/4}]
\le\sqrt{\cs{29}\cs{30}}\; n^{-3/2}\sum_{i=1}^n\sqrt{i^{7/4}}
\sqrt{n^{-3/2}i^{-1/2}}&\\
&\simeq n^{-9/4}\sum_{i=1}^n i^{5/8}\simeq n^{-5/8}\to 0&
\end{eqnarray*}
as $n\to\infty$. We have
proved~(\ref{cond1clt2}) and thus~(\ref{cond1clt1}).
%%%%%%%%%%%%%%%%%%%%%%%% 2nd condition %%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Second condition}
In the notation of~\cite{hh}, this condition can be written as
\begin{equation}
\label{4.1}
V_n^2:=\vn^{-1}\sum_{i=1}^n\Big(\E[(2W_i+1)^2 \mid {\cal F}_{i-1}]
-\E[(2 W_i+1)^2]\Big)\to 0
\end{equation}
in probability as $n\to\infty$. Since $\vn$ is of the order of $n^{3/2}$,
and also noting that $\E(W_i \mid {\cal F}_{i-1})=0$
for all $i\geq1$, it is enough to argue that
\begin{equation}
\label{v1}
n^{-3/2}\sum_{i=1}^n[\E(W_i^2 \mid {\cal F}_{i-1})-\E(W_i^2)]\to 0
\end{equation}
in probability as $n\to\infty$. We write the above expression as
\begin{eqnarray}
&&n^{-3/2}\sum_{i=1}^n
\Big(\E\{\E^2[Y_{i-1}\theta_i(Y_{i-1}) \mid {\cal F}_i] \mid {\cal F}_{i-1}\}
-\E[\E^2(Y_{i-1}\theta_i(Y_{i-1}) \mid {\cal F}_i)]\Big)\nonumber\\
\nonumber&=& \sigma^2n^{-3/2}\sum_{i=1}^n\sum_{k\in\Z}k^2
\{\P^2(Y_{i-1}=k \mid {\cal F}_{i-1})-\E[\P^2(Y_{i-1}=k \mid {\cal F}_{i-1})]\}\\
&=& \sigma^2n^{-3/2}\sum_{i=0}^{n-1}\Big(\E(S_i^2;D_i=0 \mid
{\cal F}_{i})-\E(S_i^2;D_i=0)\Big).\label{4.05}
\end{eqnarray}
We will argue that the variance of the latter expression tends
to $0$ as $n\to\infty$. We write the variance of the sum as
\begin{eqnarray}
\nonumber
&&\sum_{j=1}^{n-1}
\E\Big(\sum_{i=0}^{n-1}[\E(S_i^2;D_i=0 \mid {\cal F}_{i\wedge j})
-\E(S_i^2;D_i=0 \mid {\cal F}_{i\wedge(j-1)})]\Big)^2\nonumber\\
\label{4.4}
&=&\sum_{j=1}^{n-1}
\E\Big(\sum_{i=j}^{n-1}[\E(S_i^2;D_i=0 \mid {\cal F}_j)
-\E(S_i^2;D_i=0 \mid {\cal F}_{j-1})]\Big)^2.
\end{eqnarray}
Now the inner sums on the right hand side of~(\ref{4.4}) can be written as
\begin{eqnarray}\nonumber
\sum_{i=j}^{n-1}\!\!\!\!\!\!\!\!&&\sum_{k,m}\E(S_i^2;D_i=0 \mid S_j=k,D_j=m)
(\P(S_j=k,D_j=m \mid {\cal F}_j)\nonumber\\
&& -\P(S_j=k,D_j=m \mid {\cal
F}_{j-1}))\nonumber\\
\nonumber
=\sum_{i=j}^{n-1}\!\!\!\!\!\!\!\!\!\!&&\sum_{k,m,l,l'}
\E(S_i^2;D_i=0 \mid S_j=k,D_j=m)\,
\P( Y_{j-1}=l,\hat Y_{j-1}=l' \mid {\cal F}_{j-1} )\\
\label{4.6}
\times\!\!\!\!&&\!\!\!\!\!\!
[\P(S_j=k,D_j=m \mid Y_{j-1}=l,\hat Y_{j-1}=l',{\cal F}_j)\\
&&
- \P(S_j=k,D_j=m \mid Y_{j-1}=l, \hat Y_{j-1}=l')]. \nonumber
\end{eqnarray}
Let $u_{n,k}:=u_n(k,k+1)$. Note that all the possible values for the pair
$(k,m)$ are
$(l+l',\, l-l'-2)$, $(l+l'-2,\, l-l')$, $(l+l'+2,\, l-l')$, $(l+l',\, l-l'+2)$,
and
\begin{eqnarray*}
\P(S_j=l+l', D_j=l-l'-2 \mid Y_{j-1}=l, \hat Y_{j-1}=l',
{\cal F}_j)
&=&(1-u_{j, l})u_{j,l'}\\
\P(S_j=l+l'-2, D_j=l-l' \mid Y_{j-1}=l, \hat Y_{j-1}=l',
{\cal F}_j)
&=&(1-u_{j, l})(1-u_{j,l'})\\
\P(S_j=l+l'+2, D_j=l-l' \mid Y_{j-1}=l, \hat Y_{j-1}=l',
{\cal F}_j)
&=&u_{j, l}u_{j,l'}\\
\P(S_j=l+l', D_j=l-l'+2 \mid Y_{j-1}=l, \hat Y_{j-1}=l',
{\cal F}_j)
&=&(1-u_{j, l'})u_{j,l}.
\end{eqnarray*}
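As a consistency check, note that these four probabilities add up to one:
\[
(1-u_{j,l})u_{j,l'}+(1-u_{j,l})(1-u_{j,l'})+u_{j,l}u_{j,l'}
+u_{j,l}(1-u_{j,l'})
=(1-u_{j,l})+u_{j,l}=1.
\]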
Let $A_{j,l,l'}^n$ denote
\begin{eqnarray}
\label{A}
\lefteqn{\sum_{i=j}^{n-1}
\Big(
\E(S_i^2;D_i=0 \mid S_j=l+l',D_j=l-l'-2)}\\
& &- \E(S_i^2;D_i=0 \mid S_j=l+l'-2,D_j=l-l')
\Big).\nonumber
\end{eqnarray}
The right hand side of~(\ref{4.6}) can be rewritten as
\beq
\label{4.04}
\sum_{l,l'}[A_{j,l,l'}^n\overline{u_{j,l'}(1-u_{j,l})}-A_{j,l,l'-2}^n
\overline{u_{j,l'}}+A_{j,l+2,l'}^n
\overline{u_{j,l'}u_{j,l}}]
\P( Y_{j-1}=l, \hat Y_{j-1}=l' \mid {\cal F}_{j-1}),
\eeq
where bar means centering (that is, $\bar X=X-\E(X)$, if $X$ is an
integrable random variable).
Substituting~(\ref{4.04}) into~(\ref{4.4}), we get that the
expectation inside the first sum on the right hand side
there can be bounded above by a constant times the sum of three terms,
one of which has the following form:
\begin{equation}
\label{sumA}
V_{n,j}^2:=\E\Big(\sum_{l,l'}
A_{j,l,l'}^n\overline{u_{j,l'}(1-u_{j,l})}
\P(Y_{j-1}=l,\hat Y_{j-1}=l' \mid {\cal F}_{j-1})\Big)^2,
\end{equation}
and the other two are similar, with
$A_{j,l,l'}^n\overline{u_{j,l'}(1-u_{j,l})}$ replaced by
$A_{j,l,l'-2}^n\overline{u_{j,l'}}$
and $A_{j,l+2,l'}^n\overline{u_{j,l'}u_{j,l}}$, respectively.
We will analyze~(\ref{sumA}) only;
the other two terms can be treated analogously.
We go back to $A_{j,l,l'}^n$.
Observe that, from Lemma~\ref{l 2.5.4}, we can represent $S_n$
in one dimension
as $\sum_{i=1}^n\hat\xi_i\eta_i(D_{i-1})$, where
$\hat\xi_1,\hat\xi_2,\ldots$ are i.i.d.~with
$\P(\hat\xi_1=2)=\P(\hat\xi_1=-2)=1/2$ and
$\eta_i(l)\stackrel{d}{=}\1(\xi_i(l)\ne0)$ for all $i,l$, where
$\1(\cdot)$ is the indicator function.
Clearly, $\eta_i=\1(S_i\ne S_{i-1})$. Now, from the nearest neighbor character
of the jumps of $ Y,\hat Y$, we have that
$\1(S_i\ne S_{i-1})=\1(D_i=D_{i-1})$.
We can thus rewrite the summands of~(\ref{A}) as
\begin{eqnarray}
&&\E[(l+l'+S_i-S_j)^2;D_i=0 \mid D_j=l-l'-2]\nonumber\\
%
&&\quad\quad\quad\quad-\E[(l+l'-2+S_i-S_j)^2;D_i=0 \mid D_j=l-l']\nonumber\\
%
&=&
\E\Big[\Big(l+l'+\sum_{k=j+1}^i\hat\xi_k\eta_k(D_{k-1})\Big)^2;
D_i=0 \mid D_j=l-l'-2\Big]\nonumber\\
%
&&-\E\Big[\Big(l+l'-2+\sum_{k=j+1}^i
\hat\xi_k\eta_k(D_{k-1})\Big)^2;D_i=0 \mid D_j=l-l'\Big]
\nonumber\\
%
&=&\Big[(l+l')^2\P(D_i=0 \mid D_j=l-l'-2)-
(l+l'-2)^2\P(D_i=0 \mid D_j=l-l')\nonumber\\
%
&&+4\sum_{k=j+1}^i\Big\{\E[\eta_k(D_{k-1});D_i=0 \mid D_j=l-l'-2]
\nonumber\\
&&\quad\quad\quad -\E[\eta_k(D_{k-1});D_i=0 \mid D_j=l-l']\Big\}\Big]
\nonumber\\
%
&=&(l+l')^2[\P(D_i=0 \mid D_j=l-l'-2)-\P(D_i=0 \mid D_j=l-l')]\nonumber\\
%
&&+4(l+l'-1)\P(D_i=0 \mid D_j=l-l')\nonumber\\
%
&&+4\sum_{k=j+1}^i\Big[\P(D_i=0,D_k=D_{k-1} \mid D_j=l-l'-2)
\nonumber\\
&& \quad\quad\quad -\P(D_i=0,D_k=D_{k-1} \mid D_j=l-l')\Big]\label{clt1}.
\end{eqnarray}
We write the expression within brackets in the sum above as
\begin{eqnarray}
\lefteqn{\sum_{h\in\Z}
\P(D_i=0 \mid D_k=h)
\gamma(h,h)
\big[\P(D_{k-1}=h \mid D_j=l-l'-2)}\nonumber\\
&&\quad\quad\quad-\P(D_{k-1}=h \mid D_j=l-l')\big]\nonumber\\
&=&\bar\gamma\sum_{h\in\Z}\P(D_{i-1}=0 \mid D_{k-1}=h)
\big[\P(D_{k-1}=h \mid D_j=l-l'-2)\nonumber\\
&&\quad \quad\quad-\P(D_{k-1}=h \mid D_j=l-l')\big]\nonumber\\
&+&(\gamma-\bar\gamma)\P(D_i=0 \mid D_k=0)
\big[\P(D_{k-1}=0 \mid D_j=l-l'-2)\nonumber\\
&&\quad\quad\quad-\P(D_{k-1}=0 \mid D_j=l-l')\big]\label{clt2},
\end{eqnarray}
where, as before, $\gamma=\gamma(0,0)$ and $\bar\gamma$ is the common value of
$\gamma(h,h)$ for $h\ne0$.
We have also used time homogeneity in the
equality above. The first term of the right hand side of~(\ref{clt2})
thus becomes
\begin{equation}
\bar\gamma(\P(D_{i-1}=0 \mid D_j=l-l'-2)-\P(D_{i-1}=0 \mid D_j=l-l')).\label{clt3}
\end{equation}
Substituting~(\ref{clt3}) into~(\ref{clt2}) and this into~(\ref{clt1}),
we get that $A_{j,l,l'}^n$ is the sum over $i=j,\ldots,n-1$ of
\begin{eqnarray}
\lefteqn{(l+l')^2[\P(D_i=0 \mid D_j=l-l'-2)-
\P(D_i=0 \mid D_j=l-l')]}\nonumber\\
&+&\!\!\!4(l+l'-1)\P(D_i=0 \mid D_j=l-l')\nonumber\\
%
&+&\!\!\!4\bar\gamma(i-j)[\P(D_{i-1}=0 \mid
D_j=l-l'-2)-\P(D_{i-1}=0 \mid D_j=l-l')]
\nonumber\\
%
&+&\!\!\!4(\gamma-\bar\gamma)\sum_{k=j+1}^i\P(D_i=0 \mid D_k=0)
[\P(D_{k-1}=0 \mid D_j=l-l'-2)\nonumber\\
&&\quad-\P(D_{k-1}=0 \mid D_j=l-l')]\nonumber\\
%
&=&\!\!\!(l+l')^2a_{i,j,l,l'}+4(l+l'-1)
\P(D_i=0 \mid D_j=l-l')+4\bar\gamma(i-j)a_{i-1,j,l,l'}\nonumber\\
%
&+&\!\!\!4(\gamma-\bar\gamma)\sum_{k=j+1}^i
\P(D_i=0 \mid D_k=0)a_{k-1,j,l,l'},\label{A'}
\end{eqnarray}
where $a_{i,j,l,l'}:=\P(D_i=0 \mid D_j=l-l'-2)-\P(D_i=0 \mid D_j=l-l')$.
We have that
$\sum_{i=j}^n a_{i,j,l,l'}$ is uniformly bounded in $l,l'$, $j$ and $n$
(see, e.g., \cite{ff}, Lemma~4.3). So is
$\sum_{i=j}^n |a_{i-1,j,l,l'}|$, by the nearest neighbor
character of $D$ (in $2\Z$), which makes
$a_{i,j,l,l'}\geq0$ for all $i\geq j$ or
$a_{i,j,l,l'}\leq0$ for all $i\geq j$.
We have also that
$\sum_{i=j}^n\P(D_i=0 \mid D_j=l-l')\leq\sum_{i=j}^n\P(D_i=0 \mid D_j=0)\leq\cc{238}
\sqrt n$ for all $j,n$, by~(\ref{eq 91}) and the remark after the
proof of Lemma~\ref{decay} above. From these bounds, we get
$\sum_{i=j}^n\sum_{k=j+1}^i\P(D_i=0 \mid D_k=0)|a_{k-1,j,l,l'}|
\leq\cc{239} \sqrt n$. We conclude from these facts and~(\ref{A'})
that $|A_{j,l,l'}^n|$ is bounded above by constant times
\begin{equation}
\label{A''}
(l+l')^2+|l+l'-1|{\sqrt n}+n.
\end{equation}
Thus~(\ref{sumA}) can be bounded above by constant times the sum
of three terms, the first of which can be written as
%%%%%%%%%%%%%%%%%%%%%%%% 1st term %%%%%%%%%%%%%%%%%%%%%%%%
\begin{eqnarray}
&&\sum_{l,l'}\sum_{k,k'} (l+l')^2 (k+k')^2
|\E(\overline{u_{j,l'}(1-u_{j,l})}
\overline{u_{j,k'}(1-u_{j,k})})|\,
\nonumber\\
&&\quad\quad\quad\times
\P(Y_{j-1}=l,Y'_{j-1}=l',\hat Y_{j-1}=k,\tilde Y_{j-1}=k'),
\label{lk}
\end{eqnarray}
where $Y,Y',\hat Y$ and $\tilde Y$ are independent given $\cal F$.
The inner expectation
above does not vanish only if $\{l,l'\}\cap\{k,k'\}\ne
\emptyset$. Thus~(\ref{lk})
can be bounded above by constant times
\begin{eqnarray*}
&&\sum_{{l,l',k}}(l+l')^2 (k+l')^2
\P( Y_{j-1}=l,Y'_{j-1}=\tilde Y_{j-1}=l',\hat Y_{j-1}=k)\\
&=&\E((Y_{j-1}+Y'_{j-1})^2(\hat Y_{j-1}+\tilde Y_{j-1})^2;\hat D_{j-1}= 0),
\end{eqnarray*}
where $\hat D=Y'-\tilde Y$. The latter term above can be
upper bounded, using Cauchy-Schwarz (and the fact that
$\hat D\stackrel{d}{=}D$), by
\beq
\label{1st}
\E((Y_{j-1}+Y'_{j-1})^4)\sqrt{\P(D_{j-1}= 0)}
\simeq j^{7/4}.
\eeq
\noindent{\bf Second term.}
The second term of the bound to~(\ref{sumA}) is
\begin{eqnarray}
&&n\sum_{l,l'}\sum_{k,k'} |(l+l'-1)(k+k'-1)|
|\E(\overline{u_{j,l'}(1-u_{j,l})}
\overline{u_{j,k'}(1-u_{j,k})})|
\nonumber\\
&&\quad\times
\P( Y_{j-1}=l,Y'_{j-1}=l',\hat Y_{j-1}=k,\tilde Y_{j-1}=k').\label{lk2}
\end{eqnarray}
As in the reasoning for the first term, there exists
$\cc{213}$ such that~(\ref{lk2})
is bounded above by a constant times
\begin{eqnarray}
\nonumber
&&n\sum_{{l,l',k}}|(l+l'-1)(k+l'-1)|
\P( Y_{j-1}=l,Y'_{j-1}=\tilde Y_{j-1}=l',\hat Y_{j-1}=k)\\
\nonumber
&\le&n
\E(|(Y_{j-1}+Y'_{j-1}-1)(\hat Y_{j-1}+\tilde Y_{j-1}-1)|;\hat D_{j-1}=0)\\
\nonumber
&\le&n\sqrt{\E((Y_{j-1}+Y'_{j-1}-1)^2 (\hat Y_{j-1}+\tilde Y_{j-1}-1)^2)}
\sqrt{\P(D_{j-1}=0)}\\ \label{2nd}
&\le&n\sqrt{\E((Y_{j-1}+Y'_{j-1}-1)^4)}
\sqrt{\P( D_{j-1}=0)}\leq\cs{213}nj^{3/4}.
\end{eqnarray}
%%%%%%%%%%%%%%%%%%%%%%%% 3rd term %%%%%%%%%%%%%%%%%%%%%%%%
\noindent{\bf Third term.}
The third term of the bound to~(\ref{sumA}) is
\begin{eqnarray}
\lefteqn{n^2\sum_{l,l'}\sum_{k,k'}
|\E(\overline{u_{j,l'}(1-u_{j,l})}
\overline{u_{j,k'}(1-u_{j,k})})|}\nonumber\\
&&\times
\P( Y_{j-1}=l,Y'_{j-1}=l',\hat Y_{j-1}=k,\tilde Y_{j-1}=k').\label{lk3}
\end{eqnarray}
As in the reasoning for the first and second terms, there exists
$\cc{234}$ such that~(\ref{lk3})
is bounded above by a constant times
\beq
\label{3rd}
n^2\sum_{l,l',k}
\P(Y_{j-1}=l,Y'_{j-1}=\tilde Y_{j-1}=l',\hat Y_{j-1}=k)
\leq n^2\P(D_{j-1}=0)\leq \cs{234}n^2j^{-1/2}.
\eeq
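Summing the bounds~(\ref{1st}),~(\ref{2nd}) and~(\ref{3rd}) over $j$,
\[
\sum_{j=1}^{n-1}\left(j^{7/4}+nj^{3/4}+n^2j^{-1/2}\right)
\]
is of the order of $n^{11/4}+n\cdot n^{7/4}+n^2\cdot n^{1/2}$,
that is, of $n^{11/4}$.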
\vspace{.5cm}
From~(\ref{1st}),~(\ref{2nd}) and~(\ref{3rd}),
we conclude that there exists $\cc{36}$ such that
\beq
\label{v}
\sum_{j=1}^{n-1}V_{n,j}^2\leq \cs{36}\, n^{11/4},
\eeq
so that the variance of~(\ref{4.05}) is bounded above by a constant
times $n^{-3}n^{11/4}=n^{-1/4}\to 0$
as $n\to\infty$.
Theorem~\ref{clt} is thus proved. \qed
\vskip 5mm
%%%%%%%%%%%%%%%%%% Acknowledgments %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\noindent{\bf Acknowledgments.} We thank P.~Ferrari for many discussions
on these and related questions and models.
\vskip 5mm
\begin{thebibliography}{19}
\bibitem{a} E.~Andjel, Invariant measures and long time
behaviour of the smoothing process, {\it Ann.~Probab.}~{\bf 13}
(1985) 62--71.
\bibitem{d} R.~Durrett, Stochastic Spatial Models.
{\it PCMI Lecture Notes}, IAS, Princeton (1996).
\bibitem{ff} P.A.~Ferrari and L.R.G.~Fontes, Fluctuations
of a surface submitted to a random average process, {\it
Electronic Journal of Probability\/}, {\bf 3}(6) (1998) 1--34.
\bibitem{hh} P.~Hall and C.C.~Heyde, {\it Martingale limit theory and its
application.} Academic Press, New York (1980).
\bibitem{h} J.M.~Hammersley, Harnesses,
{\it Proc. Fifth Berkeley Sympos.~Mathematical Statistics and Probability}
(Berkeley, Calif., 1965/66), Vol.~III: Physical Sciences, 89--117,
Univ.~California Press, Berkeley, Calif.
\bibitem{ls} T.M.~Liggett and F.~Spitzer, Ergodic theorems for
coupled random walks and other systems with locally interacting components,
{\it Z.~Wahrsch.~Verw.~Gebiete} {\bf 56} (1981) 443--468.
\bibitem{l} T.M.~Liggett, {\it Interacting Particle Systems.}
Springer, Berlin (1985).
\bibitem{tese} D.P.~Medeiros, Processo de m\'edias
aleat\'orias com configura\c{c}\~ao inicial parab\'olica,
Uni\-ver\-si\-ty of S\~ao Paulo Ph.D.~thesis (2001) 151 pp.~(in Portuguese).\\
Version at http://www.ime.usp.br/\,$\tilde{}$\,lrenato/tese.ps,
135 pp.~(in Portuguese).
\bibitem{s} F.~Spitzer, {\it Principles of random walk},
Springer-Verlag, New York (1976).
\bibitem{t} A.~Toom, Tails in harnesses,
{\it J.~Statist.~Phys.}~{\bf 88} (1997) 347--364.
\end{thebibliography}
\vskip 5truemm
\parindent -20pt
\leftline{Instituto de Matem\'atica e Estat\'\i stica ---
Universidade de S\~ao Paulo}
\leftline{R.~do Mat\~ao 1010, Cidade Universit\'aria
--- 05508-900 S\~ao Paulo SP --- Brasil}
\leftline{lrenato@ime.usp.br, marina@ime.usp.br}
\vskip 5truemm
\parindent -20pt
\leftline{Instituto de Matem\'atica ---
Universidade Federal da Bahia}
\leftline{Av.~Ademar de Barros s/n, Campus de Ondina ---
40170-110 Salvador Ba --- Brasil}
\leftline{medeiros@ufba.br}
\end{document}