\input amstex
\documentstyle{amsppt}
\magnification=1200
\TagsOnRight
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\topmatter
\title
Limit Theorem for the Local Time of\\
Bond-True Self-Avoiding Walk on $\Bbb Z$
\endtitle
\rightheadtext{Bond-True Self-Avoiding Walk on $\Bbb Z$}
\author
B\'alint T\'oth
\footnote""{Work supported by the Hungarian National Foundation
for Scientific Research, grant No. 1902\hfill\hfill\hfill}
\footnote""{Submitted for publication to {\sl The Annals of
Probability}, 17 May 1993\hfill\hfill\hfill}
\endauthor
\affil
Mathematical Institute of the\\
Hungarian Academy of Sciences
\endaffil
\address
{B. T\'oth\newline
Mathematical Institute of the\newline
Hungarian Academy of Sciences\newline
POB 127\newline
H-1364 Budapest, Hungary\newline
e-mail: h1222tot\@huella.bitnet}
\endaddress
\abstract
{
The bond-true self-avoiding walk is a nearest neighbour random
walk on $\Bbb Z$, for which the probability of jumping along a
bond of the lattice is proportional to
$\exp(-g\cdot\text{number of previous jumps along that bond})$.
We prove a limit theorem for the distribution of the local time
process of this walk. A consequence of the main theorem is a
limit law for $n^{-3/2}T_n$ where $T_n$ is the first hitting
time of the lattice site at distance $n$ from the origin.
}
\endabstract
\endtopmatter
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\document
\bigskip
\noindent
{\bf 1. Introduction and Result}
\medskip
In the present paper we consider a `bond-true' self-avoiding
walk (BTSAW)
$X_i$ on the one-dimensional integer lattice
$\Bbb Z$, defined as follows: the walk starts from the origin of
the lattice and at time $i+1$ it jumps to one of the two
neighbouring sites of $X_i$, so that the probability of jumping
along a bond of the lattice is proportional to
$$
\exp (-g\cdot \text{number of previous jumps along that bond}).
$$
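To make the rule concrete: if the first two steps of the walk
happen to be $0\to1\to0$, then the bond $(0,1)$ has already been
traversed twice while the bond $(-1,0)$ is still unused, so the
third step is taken with probabilities
$$
\bold P\big(X_3=1\big)=
\frac{\text{e}^{-2g}}{1+\text{e}^{-2g}},\qquad
\bold P\big(X_3=-1\big)=\frac{1}{1+\text{e}^{-2g}}:
$$
the walk is pushed away from the bonds it has used before.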
More formally, for a nearest neighbour walk $\underline{x}_0^i=
(x_0,x_1,\dots,x_i)$ and a lattice site $y\in\Bbb Z$ we define
$$
r(y|\underline{x}_0^i)=\#\left\{\,0\le
j<i\,:\,\{x_j,x_{j+1}\}=\{y-1,y\}\,\right\},
\tag 1.1
$$
the number of previous jumps of the walk along the bond
$(y-1,y)$. The law of the walk described above is
$$
\align
\bold P\Big(X_{i+1}=x_i+1\Big|\!\Big|
\underline{X}_0^i=\underline{x}_0^i\Big)&=
\frac{\text{e}^{-g\,r(x_i+1|\underline{x}_0^i)}}
{\text{e}^{-g\,r(x_i|\underline{x}_0^i)}
+\text{e}^{-g\,r(x_i+1|\underline{x}_0^i)}},
\tag 1.2
\\
\bold P\Big(X_{i+1}=x_i-1\Big|\!\Big|
\underline{X}_0^i=\underline{x}_0^i\Big)&=
\frac{\text{e}^{-g\,r(x_i|\underline{x}_0^i)}}
{\text{e}^{-g\,r(x_i|\underline{x}_0^i)}
+\text{e}^{-g\,r(x_i+1|\underline{x}_0^i)}}.
\tag 1.3
\endalign
$$
Introducing the notation $\lambda=\text{e}^{-g}\in(0,1)$ and
writing $v(y|\underline{x}_0^i)=r(y|\underline{x}_0^i)$ for the
local time accumulated on the bond $(y-1,y)$, the law (1.2),
(1.3) becomes
$$
\align
\bold P\Big(X_{i+1}=x_i+1\Big|\!\Big|
\underline{X}_0^i=\underline{x}_0^i\Big)&=
\frac{\lambda^{v(x_i+1|\underline{x}_0^i)}}
{\lambda^{v(x_i|\underline{x}_0^i)}
+\lambda^{v(x_i+1|\underline{x}_0^i)}},
\tag 1.4
\\
\bold P\Big(X_{i+1}=x_i-1\Big|\!\Big|
\underline{X}_0^i=\underline{x}_0^i\Big)&=
\frac{\lambda^{v(x_i|\underline{x}_0^i)}}
{\lambda^{v(x_i|\underline{x}_0^i)}
+\lambda^{v(x_i+1|\underline{x}_0^i)}}.
\tag 1.5
\endalign
$$
Denote by $T_n$ the first hitting time of the lattice site $n$,
$$
T_n=\min\{\,i\ge0\,:\,X_i=n\,\},
\tag 1.6
$$
and define the local time process of the walk stopped at $T_n$,
the bonds being counted downwards from $n$:
$$
\align
S_n(t)&=\tfrac12\big(v(n-t|\underline{X}_0^{T_n})-1\big),
\qquad 0\le t\le n-1,
\tag 1.7
\\
S_n(t)&=\tfrac12\,v(n-t|\underline{X}_0^{T_n}),
\qquad\qquad\quad t\ge n
\tag 1.8
\endalign
$$
(for $1\le y\le n$ the bond $(y-1,y)$ is crossed an odd number
of times before $T_n$, for $y\le0$ an even number of times).
Finally, let
$$
\omega_n=\min\{t>n:S_n(t)=0\}.
\tag 1.9
$$
From the definition of $S_n(\cdot)$ it follows that $S_n(t)=0$
for $t\ge\omega_n$ and
$$
T_n=\sum_{t\ge0}v(n-t|\underline{X}_0^{T_n})=
2\sum_{t=0}^{\omega_n}S_n(t)+n.
\tag 1.10
$$
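The identity (1.10) can be checked by hand on the smallest
nontrivial example: take $n=1$ and the path
$(x_0,x_1,x_2,x_3)=(0,-1,0,1)$, so that $T_1=3$. The bond
$(-1,0)$ is crossed twice and the bond $(0,1)$ once, whence
$$
\sum_{t\ge0}v(1-t|\underline{X}_0^{T_1})=1+2=3=T_1,
$$
while the only nonzero local time value is $S_1(1)=1$ (the
single leftward crossing of the bond $(-1,0)$), giving
$2\sum_{t=0}^{\omega_1}S_1(t)+1=3$ as well.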
\proclaim{Proposition 1}
$$
\align
\omega_n&<\infty
\tag 1.11
\\
T_n&<\infty
\tag 1.12
\endalign
$$
almost surely.
\endproclaim
Let $W_s,\,s\in\Bbb R_+$ be a standard one-dimensional Wiener
process and $\omega$ the following stopping time:
$$
\omega=\inf\left\{\,s>1\,:\,W_s=0\,\right\}.
\tag 1.13
$$
Our main result is the following theorem.
\proclaim{Theorem 1}
$$
\left(
\frac{\omega_n}{n},
\frac{S_n([ns])}{\sigma\sqrt n}: 0\le s\le\frac{\omega_n}{n}
\right)
\Rightarrow
\left(
\omega, \left|W_{s}\right|:0\le s\le\omega
\right)
\tag 1.14
$$
in $\Bbb R_+\times D[0,\infty)$, where
$$
\sigma^2=\frac
{\sum_{x\in\Bbb Z}x^2\lambda^{x^2}}
{\sum_{x\in\Bbb Z}\lambda^{x^2}}.
\tag 1.15
$$
\endproclaim
A straightforward consequence of Proposition 1 and Theorem 1 is
the recurrence of the random walk considered:
\proclaim{Corollary 1}
The BTSAW visits infinitely often every lattice site, almost
surely.
\endproclaim
\noindent
From (1.10) and (1.14) the limit theorem for the first hitting
times follows directly:
\proclaim{Corollary 2}
$$
\frac{T_n}{2\sigma
n^{3/2}}\Rightarrow\int_0^{\omega}\left|W_s\right|\text{d}s.
\tag 1.16
$$
\endproclaim
This last limit law shows that the BTSAW indeed behaves {\it
over-diffusively\/}, the suggested rate of diffusion being
$X_t\sim t^{2/3}$.
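Although no numerics are needed anywhere in the paper, the
identity (1.10) is purely combinatorial and can be checked
mechanically on simulated paths. The following Python sketch
(the function name and the parameter values $n=7$, $g=0.5$ are
ours, chosen only for illustration) generates one BTSAW path up
to $T_n$ and verifies (1.10) pathwise.

```python
import math
import random

def btsaw_until_hit(n, g, seed=0):
    """Run a bond-true self-avoiding walk started at 0 until it first hits n.

    bond[y] counts the jumps made so far along the bond (y-1, y)."""
    rng = random.Random(seed)
    bond = {}
    x = 0
    path = [0]
    while x != n:
        w_left = math.exp(-g * bond.get(x, 0))       # weight of the bond (x-1, x)
        w_right = math.exp(-g * bond.get(x + 1, 0))  # weight of the bond (x, x+1)
        if rng.random() < w_right / (w_left + w_right):
            bond[x + 1] = bond.get(x + 1, 0) + 1
            x += 1
        else:
            bond[x] = bond.get(x, 0) + 1
            x -= 1
        path.append(x)
    return path, bond

n, g = 7, 0.5
path, bond = btsaw_until_hit(n, g, seed=42)
T = len(path) - 1   # T_n, the number of steps until the first hit of n
m = min(path)
# S[t] = number of leftward crossings of the bond (n-t-1, n-t):
# the bond (y-1, y) is crossed 2*S+1 times for 1 <= y <= n and 2*S times for y <= 0.
S = [(bond.get(n - t, 0) - (1 if n - t >= 1 else 0)) // 2 for t in range(n - m + 1)]
assert T == 2 * sum(S) + n   # the identity (1.10)
```

The assertion holds for every seed, not only for this one: it is
a deterministic statement about any nearest neighbour path
stopped at its first hit of $n$.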
The paper contains three more sections and an Appendix. In
section 2 we describe the local time process $S_n(\cdot)$ as a
random walk on $\Bbb Z_+$. In section 3 we investigate in more detail
an auxiliary Markov chain arising naturally in the previous
description. Finally in section 4 we prove the results stated
above, by coupling the process $S_n(\cdot)$ to an appropriately
constructed homogeneous reflected random walk. The Appendix is
devoted to the proof of a technical lemma.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bigskip
\noindent
{\bf 2. The Local Time Process as a Markov Chain}
\medskip
The key to the proof is the observation that, with $n$
fixed, the local time process $S_n(\cdot)$ defined in (1.7) is
a Markov chain on $\Bbb Z_+=\Bbb Z\cap[0,\infty)$. Apparently
this trick has its origin in F.B. Knight's paper [K] and has
been rediscovered several times since then (see e.g. [KKS]).
However, as opposed to the previous applications of this trick,
the Markov process arising in our case will be more complicated
than a branching process [K], or a branching process with random
offspring distribution [KKS].
A finite walk which hits $n$ for the first time at time $i$
$$
0=x_0,x_1,\dots,x_{i-1},x_i=n,\qquad\min\{j:x_j=n\}=i
\tag 2.1
$$
determines uniquely a finite sequence
$$
\align
L(0)&=0
\\ \\
\underline l(t)&=\left\{\matrix\format\l&\l&\r\\
\left(l_0(t),\dots,l_{L(t-1)}(t)\right)
\quad&\text{ if }\quad&1\le t\le n
\\ \\
\left(l_1(t),\dots,l_{L(t-1)}(t)\right)
\quad&\text{ if }\quad& t > n
\endmatrix\right.
\tag 2.2
\\ \\
L(t)&=\sum_{k=0(1)}^{L(t-1)}l_k(t)
\endalign
$$
where $l_k(t)$ is the number of steps $(n-t)\to(n-t-1)$ between
the $k$-th and $(k+1)$-th steps $(n-t)\to(n-t+1)$ performed by
the walk. We shall refer to the sequence (2.2) as the system of
left steps of the walk (2.1). Finiteness of the system of left
steps means that there exists a $t_{\max}>n$ such that
$L(t_{\max})=0$. This correspondence between finite walks hitting
$n$ and finite systems of left steps is {\it one-to-one}: given
the sequence (2.2) one can reconstruct the complete walk (2.1)
uniquely.
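The bookkeeping behind this bijection is worth recording
explicitly: every left step counted in $\underline l(t)$ crosses
the bond $(n-t-1,n-t)$, so the row sums of the system of left
steps reproduce the local time process of section 1,
$$
L(t)=S_n(t),\qquad t\ge0.
$$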
Let (2.1) be a finite walk hitting $n$ and (2.2) the
corresponding system of left steps. Since $T_n$ is a stopping
time, the probability that the BTSAW coincides with (2.1) till
$T_n$ is
$$
\bold
P\Big(\underline{X}_0^{T_n}=\underline{x}_0^i\Big)
=\prod_{j=0}^{i-1}
\frac{\lambda^{v((x_j+x_{j+1}+1)/2|\underline{x}_0^j)}}
{\lambda^{v(x_j|\underline{x}_0^j)}+
\lambda^{v(x_j+1|\underline{x}_0^j)}},
\tag 2.3
$$
cf (1.4), (1.5).
Written in terms of the $\underline l(t)$-s
$$
\bold
P\Big(\underline{X}_0^{T_n}=\underline{x}_0^i\Big)=
\left[\prod_{t=1}^{n-1}
\Cal{P}_{\!-}\big(L(t-1);\underline l(t)\big)\right]
\Cal{P}_{\!0}\big(L(n-1);\underline l(n)\big)
\left[\prod_{t>n}
\Cal{P}_{\!+}\big(L(t-1);\underline l(t)\big)\right],
\tag 2.4
$$
with the transition probabilities:
\allowbreak
$$
\align
\Cal{P}_{\!-}\big(L;\underline l^{\prime}\big)&=
\prod_{k=0}^{L}\left[\prod_{i=1}^{l^{\prime}_k}
\frac
{\dsize
\lambda^{2(\sum_{m=0}^{k-1}l^{\prime}_{m}+i-1)+1
}
}
{\dsize
\lambda^{2k}+
\lambda^{2(\sum_{m=0}^{k-1}l^{\prime}_{m}+i-1)+1
}
}
\right]
\frac
{\dsize
\lambda^{2k}
}
{\dsize
\lambda^{2k}+
\lambda^{2\sum_{m=0}^{k}l^{\prime}_{m}+1
}
}
\tag 2.5
\\
\Cal{P}_{\!0}\big(L;\underline l^{\prime}\big)&=
\prod_{k=0}^{L}\left[\prod_{i=1}^{l^{\prime}_k}
\frac
{\dsize
\lambda^{2(\sum_{m=0}^{k-1}l^{\prime}_{m}+i-1)
}
}
{\dsize
\lambda^{2k}+
\lambda^{2(\sum_{m=0}^{k-1}l^{\prime}_{m}+i-1)
}
}
\right]
\frac
{\dsize
\lambda^{2k}
}
{\dsize
\lambda^{2k}+
\lambda^{2\sum_{m=0}^{k}l^{\prime}_{m}
}
}
\tag 2.6
\\
\Cal{P}_{\!+}\big(L;\underline l^{\prime}\big)&=
\prod_{k=1}^{L}\left[\prod_{i=1}^{l^{\prime}_k}
\frac
{\dsize
\lambda^{2(\sum_{m=1}^{k-1}l^{\prime}_{m}+i-1)
}
}
{\dsize
\lambda^{2k-1}+
\lambda^{2(\sum_{m=1}^{k-1}l^{\prime}_{m}+i-1)
}
}
\right]
\frac
{\dsize
\lambda^{2k-1}
}
{\dsize
\lambda^{2k-1}+
\lambda^{2\sum_{m=1}^{k}l^{\prime}_{m}
}
}
\tag 2.7
\endalign
$$
Thus, the process $S_n(\cdot)$ is indeed a Markov chain on
$\Bbb Z_{+}$, homogeneous in the time intervals $0\le t<n$ and
$t>n$. \newline
{\sl Remark:\/} In the case of the STSAW (the walk with site,
rather than bond, repulsion) this rearrangement of the product,
i.e. the transcription of (2.3) into (2.4), cannot be
performed. This is the step where the proof of a similar result
for the STSAW fails.
The transition probabilities $\Cal{P}_{\!\pm,0}$ look quite
threatening at first sight, but we shall soon see that they
have a transparent interpretation and can be tamed.
In order to see this we define an auxiliary Markov chain on
$\Bbb Z$ which will help us to understand the random walk
$S_n(\cdot)$ better.
For $z\in\Bbb Z$ let
$$
p(z)=\frac{\lambda^{2z+1}}{1+\lambda^{2z+1}},\quad
q(z)=\frac{1}{1+\lambda^{2z+1}},
\tag 2.8
$$
and for $x,y\in\Bbb Z$
$$
P(x,y)=\left\{\matrix\format\c&\l\\
\dsize\prod_{z=x}^{y}p(z)q(y+1)\quad&\text{ if }\quad x-1\le y
\\ \\
0 \quad&\text{ if }\quad x-1 > y
\endmatrix\right.
\tag 2.9
$$
(When $x=y+1$, the empty product is by definition equal to 1.)
From $\prod_{z=x}^{\infty}p(z)=0$ it follows that $P$ is the transition
matrix of a Markov chain
$\eta_k,\,\,k=0,1,2,\dots$ on $\Bbb Z$.
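The normalization of $P(x,\cdot)$ can also be checked by a
telescoping sum: writing $A_y=\prod_{z=x}^{y}p(z)$, formula
(2.9) reads $P(x,y)=A_y\big(1-p(y+1)\big)=A_y-A_{y+1}$, whence
$$
\sum_{y\ge x-1}P(x,y)=A_{x-1}-\lim_{y\to\infty}A_y=1-0=1.
$$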
On the right hand side of (2.5) (resp. (2.7)) we have exactly the
probability distribution of the first $L+1$ steps (resp. the
first $L$ steps) of the Markov chain $\eta_{\cdot}$ starting
from $0$ (resp. starting from $-1$). More precisely
$$
\align
\Cal{P}_{\!-}(L;\underline l)&=\bold P\Big(
\eta_{k+1}-\eta_{k}=l_{k}-1,\,\,\,k=0,1,\dots,L \Big|\!\Big|
\eta_0=0\Big)
\tag 2.10
\\
\Cal{P}_{\!+}(L;\underline l)&=\bold P\Big(
\eta_{k}-\eta_{k-1}=l_{k}-1,\,\,\,k=1,\dots,L \Big|\!\Big|
\eta_0=-1\Big)
\tag 2.11
\endalign
$$
Thus, denoting by $\xi_n(t)$ the $t$-th
step of the random walk $S_n(\cdot)$, i.e.
$$
\xi_n(t)=S_n(t)-S_n(t-1),\qquad t\ge1
\tag 2.12
$$
we get
$$
\bold P\Big(\,\xi_n(t)=x\,\Big|\!\Big|\,S_n(t-1)=k\,\Big)=
\left\{\matrix\format\l&\l\\
P^{k+1}(0,x-1)\quad&\text{ for }\quad t<n
\\ \\
P^{k}(-1,x-1)\quad&\text{ for }\quad t>n.
\endmatrix\right.
\tag 2.13
$$
The transition probabilities for \ $t=n$ \ are a little
different,
but this single step will not have any effect on the limit law
of the local time process.
\demo{Proof of Proposition 1, first part}
As $S_n(0)=0$ and (2.13) are bona fide transition probabilities,
i.e. $\bold
P\left(\xi_n(t)\in[-k,\infty)|\!|S_n(t-1)=k<\infty\right)=1$,
the local times $S_n(t)\in\Bbb Z_+$ are almost surely finite.
$\omega_n<\infty$ will be proved in Proposition 2, section 4.
\enddemo
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bigskip
\noindent
{\bf 3. The Auxiliary Markov Chain: Technical Lemmas}
\medskip
In this section we summarize the properties of
the Markov chain $\eta_i$ needed for the proof of Theorem 1.
Denote by $\pi$ the following
probability distribution on $\Bbb Z$:
$$
\pi(x)=\frac{\lambda^{x^2}}
{\sum_{z\in\Bbb Z}\lambda^{z^2}},
\qquad x\in\Bbb Z
\tag 3.1
$$
\proclaim{Lemma 1}
The unique stationary distribution of the
Markov chain $\eta$ is
$$
\rho(x)=\pi(x+1), \qquad x\in\Bbb Z.
\tag 3.2
$$
There exist constants $C_1<\infty$ and $C_2>0$ such that
for $x=0$ or $x=-1$
the following exponential bound holds:
$$
\sum_{y\in\Bbb Z} \left|P^k(x,y)-\rho(y)\right|<C_1\exp(-C_2k).
\tag 3.3
$$
\endproclaim
\demo{Proof}
The identity
$$
\rho(x)P(x,y)=\left\{\matrix\format\c&\l\\
\dsize\rho(y)p(x)\prod_{z=x+1}^{y+1}q(z)\quad&\text{ if }\quad
x-1\le y
\\ \\
0 \quad&\text{ if }\quad x-1 > y
\endmatrix\right.
\tag 3.4
$$
is straightforward. Hence
$$
\sum_{x\in\Bbb Z}\rho(x)P(x,y)=
\left[\sum_{x\le y+1}p(x)\prod_{z=x+1}^{y+1}q(z)\right]\rho(y)=
\rho(y)
\tag 3.5
$$
which proves that $\rho$ is a stationary distribution indeed.
The proof of uniqueness and exponential convergence (3.3) is a
bit lengthy, but consists of standard procedures. First of all
notice that, due to the stationarity of the distribution
$\rho$, the inequality
$$
P^k(x,y)\le\frac{\rho(y)}{\rho(x)}.
\tag 3.6
$$
holds, which provides an over-exponential bound, uniform in $k$,
on the tails of the distributions $P^k(x,\cdot)$. In consequence
all the expectations below make sense.
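The bound (3.6) itself is a one line consequence of the
stationarity (3.5) of $\rho$:
$$
\rho(x)P^k(x,y)\le\sum_{x^{\prime}\in\Bbb Z}
\rho(x^{\prime})P^k(x^{\prime},y)=\rho(y).
$$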
Denote by $\theta_{\pm}$ and $\sigma$ the following stopping
times
$$
\align
\theta_+&=\min\{k\ge0:\eta_k\ge 0\}
\tag 3.7
\\
\theta_-&=\min\{k\ge0:\eta_k\le 0\}
\tag 3.8
\\
\sigma_{\phantom{+}}&=\min\{k\ge1:\eta_k= 0\}
\tag 3.9
\endalign
$$
According to Theorem 6.14 and Example 5.5(a) of [N] the
uniqueness of the stationary distribution and the exponential
convergence (3.3) follow from
$$
\bold E\Big(\exp(C_2\sigma)\Big|\!\Big|\eta_0=0\Big)<\infty,
\tag 3.10
$$
with $C_2>0$.
So, our goal is to prove (3.10). In the following expression we
use the fact that the Markov chain $\eta_{\cdot}$ doesn't jump
more than one lattice step to the left:
$$
\align
&\bold E\Big(\text{e}^{C_2\sigma}\Big|\!\!\Big|\eta_0=0\Big)=
\text{e}^{C_2}\sum_{y\ge 0}P(0,y)
\bold E\Big(\text{e}^{C_2\theta_-}\Big|\!\!\Big|\eta_0=y\Big)
\\
&\qquad+\text{e}^{C_2}P(0,-1)\sum_{y\ge 0}
\bold E\Big(\text{e}^{C_2\theta_+}1\!\!1(\eta_{\theta_+}=y)
\Big|\!\!\Big|\eta_0=-1\Big)
\bold E\Big(\text{e}^{C_2\theta_-}\Big|\!\!\Big|\eta_0=y\Big).
\tag 3.11
\endalign
$$
Next we use a special consequence of the structure (2.9) of the
transition kernel $P$. Namely, it is an easy computation to check
that given $\eta_0=x<0$ the random variables $\theta_+$ and
$\eta_{\theta_+}$ are independent and, for $y\ge 0$
$$
\bold E\Big(\text{e}^{C_2\theta_+}1\!\!1(\eta_{\theta_+}=y)
\Big|\!\!\Big|\eta_0=-1\Big)=
\frac{P(0,y)}{1-P(0,-1)}
\bold E\Big(\text{e}^{C_2\theta_+}
\Big|\!\!\Big|\eta_0=-1\Big).
\tag 3.12
$$
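The independence used in (3.12) comes from the product form
(2.9): for $x<0$ and $y\ge0$ we have
$P(x,y)=\big[\prod_{z=x}^{-1}p(z)\big]P(0,y)$, so, conditionally
on jumping from a negative site into $\Bbb Z_+$, the landing
site has the distribution
$$
\frac{P(0,y)}{\sum_{y^{\prime}\ge0}P(0,y^{\prime})}
=\frac{P(0,y)}{1-P(0,-1)},
$$
independently of the starting site and of the time of the jump.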
From (3.11) and (3.12):
$$
\align
\bold E\Big(\text{e}^{C_2\sigma}\Big|\!\Big|\eta_0=0\Big)=
\text{e}^{C_2}\sum_{y\ge 0}&P(0,y)
\bold E\Big(\text{e}^{C_2\theta_-}\Big|\!\!\Big|\eta_0=y\Big)
\\
&\times\left[1+
\frac{P(0,-1)}{1-P(0,-1)}
\bold E\Big(\text{e}^{C_2\theta_+}
\Big|\!\!\Big|\eta_0=-1\Big)\right].
\tag 3.13
\endalign
$$
From a dominated convergence argument we can see that
$$
P(-\infty,y)=\lim_{x\to-\infty}P(x,y)=
\prod_{z=-\infty}^{y}p(z)\,q(y+1)
\tag 3.14
$$
is a probability distribution on $\Bbb Z$.
Let $x_1\le x_2$, then for $y\ge x_2-1$
$$
P(x_1,y)=\left[\sum_{z\ge x_2-1}P(x_1,z)\right]P(x_2,y)
\tag 3.15
$$
which implies that the distributions $P(x,\cdot)$ are
stochastically ordered: for $x_1\le x_2$ and any $y\in\Bbb Z$
$$
\sum_{z\ge y}P(x_1,z)\le\sum_{z\ge y}P(x_2,z).
\tag 3.16
$$
From (3.16) it follows that
$$
\bold P\Big(\theta_+>k\Big|\!\Big|\eta_0=-1\Big)\le r^k,
\quad\text{ where }\quad r=\sum_{y<0}P(-\infty,y)<1.
\tag 3.17
$$
Hence
$$
\bold E\Big(\exp(C_3\theta_+)\Big|\!\Big|\eta_0=-1\Big)<\infty
\tag 3.18
$$
for $C_3<-\log r$.
On the other hand, since $p(z)>p(z+1)$, (see (2.8)),
the distributions $P(x,x+\cdot)$ are stochastically ordered in
the opposite sense: for $x_1\le x_2$ and any $y_0\ge0$
$$
\sum_{y\ge y_0}P(x_1,x_1+y)=\prod_{z=0}^{y_0}p(x_1+z)\ge
\prod_{z=0}^{y_0}p(x_2+z)=\sum_{y\ge y_0}P(x_2,x_2+y).
\tag 3.19
$$
Hence, for $y\ge0$
$$
\bold P\Big(\min_{0\le j\le k}\eta_j\le 0
\Big|\!\Big|\eta_0=y\Big)\ge
\bold P\Big(\min_{0\le j\le k}\sum_{i=1}^j\zeta_i\le-y\Big),
\tag 3.20
$$
where the $\zeta_i$-s are i.i.d. random variables with
distribution:
$$
\bold P\Big(\zeta_i=x\Big)=P(0,x).
\tag 3.21
$$
Now using the explicit form of the distribution $P(0,x)$ we find
that
$$
\forall\alpha\in\Bbb R:\qquad
\sum_{x\in\Bbb Z}P(0,x)\text{e}^{\alpha x}<\infty,
\tag 3.22
$$
and
$$
\sum_{x\in\Bbb Z}x\,P(0,x)<0
\tag 3.23
$$
for any $\lambda\in(0,1)$. From (3.20), (3.21), (3.22) and (3.23)
follows that, if $y\ge0$
$$
\bold E\Big(\text{e}^{C_4\theta_-}\Big|\!\!\Big|\eta_0=y\Big)
\le \text{e}^{\beta y}
\tag 3.24
$$
with some $C_4>0$ and $\beta<\infty$.
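The negativity (3.23) of the mean can be verified directly by
summation by parts: with $A_y=\prod_{z=0}^{y}p(z)$ (and
$A_{-1}=1$) we have $P(0,y)=A_y-A_{y+1}$, so
$$
\sum_{x\in\Bbb Z}x\,P(0,x)=-1+\sum_{y\ge0}A_y
<-1+\sum_{y\ge0}2^{-(y+1)}=0,
$$
since $p(z)<\frac12$ for every $z\ge0$ whenever
$\lambda\in(0,1)$.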
(3.10), with $C_2=\min\{C_3,C_4\}$, follows from (3.13),
(3.18), (3.22) and (3.24).
\hfill \hfill \qed
\enddemo
The next lemma establishes an over-exponential bound on the rate
of decay of the tails of the distributions $P^k(0,\cdot)$,
uniform in $k$. This bound is much stronger than what we
actually need in the proof of the Theorem.
\proclaim{Lemma 2}
There exists a constant $C_5<\infty$ such that for any $k\ge0$
and $x\ge0$
$$
P^k(0,x+1)\le C_5\lambda^xP^k(0,x).
\tag 3.25
$$
\endproclaim
\noindent
{\sl Remark:\/} A similar bound can be established for the left
tail of the distributions, too, but we do not need it in the
proof of the Theorem.
\demo{Proof}
We prove the lemma by induction on $k$ and $x$. (3.25) clearly
holds for $k=0$ and any $x\ge0$. Since, due to (3.1), (3.2) and
(3.3), $\lim_{k\to\infty}P^k(0,1)/P^k(0,0)=\lambda^3<\infty$,
$$
C_5=\sup_{k\ge0}\frac{P^k(0,1)}{P^k(0,0)}<\infty.
\tag 3.26
$$
Thus (3.25) holds for every $k\ge0$ and $x=0$. We proceed
now by induction. Given an arbitrary probability distribution $r$ on
$\Bbb Z$ the following identity holds:
$$
\big[rP\big](x+1)=\frac{p(x+1)}{q(x+1)}q(x+2)\big[rP\big](x)+
q(x+2)r(x+2).
\tag 3.27
$$
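The identity (3.27) follows by splitting off the $w=x+2$ term of
$$
\big[rP\big](x+1)=q(x+2)\sum_{w\le x+2}r(w)
\prod_{z=w}^{x+1}p(z)
=q(x+2)\Big[r(x+2)+p(x+1)
\sum_{w\le x+1}r(w)\prod_{z=w}^{x}p(z)\Big]
$$
and noting that the last sum equals $\big[rP\big](x)/q(x+1)$.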
Assume that (3.25) holds for $(k,x+2)$ and for $(k+1,x)$. Using
(3.27) we get
$$
\align
P^{k+1}(0,x+2)
&=\frac{p(x+2)}{q(x+2)}q(x+3)P^{k+1}(0,x+1)+q(x+3)P^k(0,x+3)
\\
&\le\frac{p(x+2)}{q(x+2)}q(x+3)C_5\lambda^xP^{k+1}(0,x)+
q(x+3)C_5\lambda^{x+2}P^k(0,x+2)
\\
&=C_5\lambda^{x+1}\frac{\lambda q(x+3)}{q(x+2)}\left[
\frac{p(x+1)}{q(x+1)}q(x+2)P^{k+1}(0,x)+
q(x+2)P^k(0,x+2)\right]
\\
&=C_5\lambda^{x+1}\frac{\lambda q(x+3)}{q(x+2)}P^{k+1}(0,x+1).
\tag 3.28
\endalign
$$
Since
$$
\frac{\lambda q(x+3)}{q(x+2)}=
\frac{\lambda(1+\lambda^{2x+5})}{1+\lambda^{2x+7}}<1,
\tag 3.29
$$
(3.28) yields (3.25) for $(k+1,x+1)$.\hfill\hfill\qed
\enddemo
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bigskip
\noindent
{\bf 4. General Formulation and Proofs}
\medskip
In the proofs we use only some of the qualitative features
(formulated in the lemmas of the previous section) and not the
explicit form of the transition probabilities of the random walk
$S_n(\cdot)$. We are going to formulate our main Theorem in
these more general terms.
Let
$$
S_n(t)=\sum_{s=1}^t\xi_n(s)
\tag 4.1
$$
be a space-time inhomogeneous random walk on $\Bbb Z_+$ with the
following law:
$$
\bold P\Big(\xi_n(t+1)=x\Big|\!\Big|S_n(t)=k\Big)=
\pi_n(x|k,t)
\tag 4.2
$$
$$
\sum_{x\ge-k}\pi_n(x|k,t)=1.
\tag 4.3
$$
{\sl Remark:\/} Compare (4.2), (4.3) with (2.13).
The time inhomogeneity and dependence on $n$ of the law of the
random walk is just a nuisance we have to live with. As all the
estimates below will be uniform in $t$ and $n$ this will cause
no real trouble. All the constants arising in various
inequalities below are absolute constants not depending on $n$
or $t$.
We assume the following behaviour of the step distributions
$\pi_n(\cdot|k,t)$: \newline
(1) The step distributions $\pi_n(\cdot|k,t)$ converge in
$\ell_1(\Bbb Z)$, uniformly in $n$ and $t$, exponentially fast
as $k\to\infty$, to an asymptotic distribution $\pi$:
$$
\sum_{x\in\Bbb Z}\left|\pi_n(x|k,t)-\pi(x)\right|<
C_1\exp(-C_2k).
\tag 4.4
$$
The asymptotic step distribution $\pi$ is symmetric and has
finite variance:
$$
\pi(-x)=\pi(x),\qquad x\in\Bbb Z,
\tag 4.5
$$
$$
\sigma^2=\sum_{x\in\Bbb Z}x^2\pi(x)<\infty.
\tag 4.6
$$
(2) The tails of the step distributions decay exponentially,
uniformly in $k$, $t$ and $n$: there are constants $C_6<\infty$
and $C_7>0$ such that
$$
\pi_n(x|k,t)\le C_6\exp(-C_7|x|).
\tag 4.7
$$
For technical purposes we assume even more about the decay of
the right tail of the distributions $\pi_n(\cdot|k,t)$: there is
an $x_0\in\Bbb Z$ \ independent of $k,\,\,t$ or $n$ such that for
$x\ge x_0$:
$$
\pi_n(x+1|k,t)<\text{e}^{-C_7}\pi_n(x|k,t).
\tag 4.8
$$
{\sl Remark:\/} Compare (4.7) with (3.6) and (4.8) with
(3.25). \newline
(3) The random walk is not trapped in a bounded domain or in a
domain away from the origin. There is a constant $C_8>0$
independent of $t$ or $n$ such that
$$
\align
&\forall\,\, k\ge0:\quad\sum_{x>0}\pi_n(x|k,t)>C_8>0.
\tag 4.9
\\
&\forall\,\, k >0: \quad\sum_{x<0}\pi_n(x|k,t)>C_8>0.
\tag 4.10
\endalign
$$
The $\omega$-stopping-time of the random walk $S_n(\cdot)$ is
$$
\omega_n=\inf\{t\ge n: S_n(t)=0\}.
\tag 4.11
$$
Denote by $\theta_{I}$ the exit time from the interval $I
\subset \Bbb Z_+$
$$
\theta_{I}=\min\{t\ge 0:S_n(t)\not\in I\}.
\tag 4.12
$$
\proclaim{Lemma 3} ({\it i.}) There exists a constant $C_9>0$,
such that for any $0\le a< k< b$ in $\Bbb Z_+$
$$
\bold P\Big(S_n(\theta_{(a,b)})\ge b\Big|\!\Big|S_n(0)=k\Big)
>C_9\frac{k-a}{b-a},
\tag 4.13
$$
and for any $0\le a< k< b$ in $\Bbb Z_+$
$$
\bold P\Big(S_n(\theta_{(a,b)})\le a\Big|\!\Big|S_n(0)=k\Big)
>C_9\frac{b-k}{b-a}.
\tag 4.14
$$
({\it ii.}) There exists a constant $C_{10}>0$, such that for any
$0\le k< b$ in $\Bbb Z_+$
$$
\bold E\Big(\theta_{[0,b)}\Big|\!\Big|S_n(0)=k\Big)
<C_{10}b^3.
\tag 4.15
$$
\endproclaim
The proof of Lemma 3 is postponed to the Appendix. The general
form of our main result is the following:
\proclaim{Theorem 2}
Under the conditions (4.2)--(4.10)
$$
\left(
\frac{\omega_n}{n},
\frac{S_n([ns])}{\sigma\sqrt n}: 0\le s\le\frac{\omega_n}{n}
\right)
\Rightarrow
\left(
\omega, \left|W_{s}\right|:0\le s\le\omega
\right)
\tag 4.16
$$
in $\Bbb R_+\times D[0,\infty)$.
\endproclaim
Theorem 1 is the special case corresponding to the transition
probabilities (2.13).
\proclaim{Proposition 2}
Under the same conditions
$$
\omega_n<\infty
\tag 4.17
$$
almost surely.
\endproclaim
We turn now to the coupling construction on which the proofs
rely. Fix a sufficiently small $\epsilon>0$ and set
$$
b_n=n^{\epsilon}.
\tag 4.18
$$
Let $\zeta_n(t)$, $t\ge1$, be i.i.d. random variables with
common distribution $\pi$, realized jointly with the steps
$\xi_n(t)$ in such a way that
$$
\bold P\Big(\xi_n(t)\not=\zeta_n(t)\Big|\!\Big|
S_n(t-1)=k\Big)\le C_1\exp(-C_2k);
\tag 4.19
$$
such a coupling exists due to the uniform $\ell_1$ convergence
(4.4). Denote by $\sigma_n(t)$ the number of time instants
before $t$ at which the walk stays above the level $b_n$:
$$
\sigma_n(t)=\#\{0\le s<t: S_n(s)>b_n\},
\tag 4.20
$$
and define the following sequence of sampling times:
$$
\matrix\format\l&\l\\
\tau_n(-1)&=-1
\\
\tau_n(t) &=\min\{s>\tau_n(t-1): S_n(s)>b_n\},\qquad 0\le
t<\infty.
\endmatrix
\tag 4.21
$$
$\tau_n$ is the inverse function of $\sigma_n$ in the sense that
$$
\sigma_n\left(\tau_n(t)\right)=t
\quad\text{ for }\quad 0\le t<\infty.
\tag 4.22
$$
Let
$$
\tilde\xi_n(t)=\xi_n(\tau_n(t-1)+1)
\tag 4.23
$$
and
$$
\tilde\zeta_n(t)=\zeta_n(\tau_n(t-1)+1).
\tag 4.24
$$
As the $\tilde\zeta_n$-s are selected according to an increasing
sequence of sampling times they are still i.i.d. random
variables with the common distribution $\pi$. On the other hand,
due to (4.19)
$$
\forall t\in\Bbb N:\qquad
\bold P\Big(\tilde\xi_n(t)\not=\tilde\zeta_n(t)\Big)<
C_1\exp(-C_2n^\epsilon).
\tag 4.25
$$
We define two `truncated' processes $\tilde S_n(\cdot)$ and
$\tilde Y_n(\cdot)$:
$$
\align
&\tilde S_n(0)=0,\quad
\tilde S_n(t)=\Big(\tilde S_n(t-1)+\tilde\xi_n(t)\Big)\bigvee0,
\tag 4.26
\\
&\tilde Y_n(0)=0,\quad
\tilde Y_n(t)=\Big(\tilde Y_n(t-1)+\tilde\zeta_n(t)\Big)\bigvee0.
\tag 4.27
\endalign
$$
And finally, let the random walk $Y_n$ be defined as follows:
$$
Y_n(0)=0,\quad
Y_n(t)=\left|Y_n(t-1)+\tilde\zeta_n(t)\right|.
\tag 4.28
$$
Due to the symmetry of the distribution $\pi$, $Y_n(\cdot)$ is a
homogeneous random walk reflected at the origin, and $\tilde
Y_n(\cdot)$ differs from $Y_n(\cdot)$ only by the cutoffs of the
overshootings of the origin: using the constructions (4.27) and
(4.28) one can easily check
$$
0\le Y_n(t)-\tilde Y_n(t)\le \max_{0\le s\le
t}\left|\tilde\zeta_n(s)\right|.
\tag 4.29
$$
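For completeness we indicate the check of (4.29): writing
$D(t)=Y_n(t)-\tilde Y_n(t)$, if
$Y_n(t-1)+\tilde\zeta_n(t)\ge0$ then $D(t)\le D(t-1)$, while if
$Y_n(t-1)+\tilde\zeta_n(t)<0$ then
$0\le Y_n(t)=-\big(Y_n(t-1)+\tilde\zeta_n(t)\big)
\le\left|\tilde\zeta_n(t)\right|$. In both cases
$$
0\le D(t)\le
\max\big\{D(t-1),\left|\tilde\zeta_n(t)\right|\big\},
$$
and (4.29) follows by induction on $t$.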
We also need the $\omega$-stopping times of the processes $\tilde
Y_n(\cdot)$ and $Y_n(\cdot)$:
$$
\align
\omega^{\prime}_n&=\inf\{t\ge n: \tilde Y_n(t)=0\},
\tag 4.30
\\
\omega^{\prime\prime}_n&=\inf\{t\ge n: Y_n(t)=0\}.
\tag 4.31
\endalign
$$
The standard invariance principle holds for $Y_n(\cdot)$
(see [Bi], [L]), and consequently:
$$
\left(
\frac{\omega^{\prime\prime}_n}{n},
\frac{Y_n([ns])}{\sigma\sqrt{n}}: 0\le s<\infty
\right)
\Rightarrow
\left(
\omega,\left|W_s\right|: 0\le s<\infty
\right)
\quad \text{ in } \quad \Bbb R_+\times D[0,\infty).
\tag 4.32
$$
Due to the closeness of the $\tilde Y_n(\cdot)$ and $Y_n(\cdot)$
paths, (4.29), the same convergence in distribution is easily
proved for the process $\tilde Y_n(\cdot)$:
$$
\left(
\frac{\omega^{\prime}_n}{n},
\frac{\tilde Y_n([ns])}{\sigma\sqrt{n}}: 0\le s<\infty
\right)
\Rightarrow
\left(
\omega,\left|W_s\right|: 0\le s<\infty
\right)
\quad \text{ in } \quad \Bbb R_+\times D[0,\infty).
\tag 4.33
$$
We omit the details of this straightforward step.
We shall prove the Theorem by proving
$$
n^{-1/2}\max_{0\le t\le n^{1+\epsilon}}
\left|S_n(t)-\tilde Y_n(t)\right|
\buildrel{\bold P}\over{\to}0
\tag 4.34
$$
and
$$
n^{-1}\left|\omega_n-\omega^{\prime}_n\right|
\buildrel{\bold P}\over{\to}0.
\tag 4.35
$$
Consider first (4.34):
$$
\align
&
\max_{0\le t\le n^{1+\epsilon}}
\left|S_n(t)-\tilde Y_n(t)\right|\le
\max_{0\le t\le n^{1+\epsilon}}
\left|S_n(t)-\tilde S_n(\sigma_n(t))\right|+
\\
&\qquad\qquad
\max_{0\le t\le n^{1+\epsilon}}
\left|\tilde S_n(\sigma_n(t))-\tilde Y_n(\sigma_n(t))\right|+
\max_{0\le t\le n^{1+\epsilon}}
\left|\tilde Y_n(\sigma_n(t))-\tilde Y_n(t)\right|.
\tag 4.36
\endalign
$$
In order to estimate the three terms on the right hand side of
(4.36) we define the following events:
$$
\align
\Cal A_n&=\Big\{\max_{1\le t\le
n^{1+\epsilon}}\left|\xi_n(t)\right|\le b_n\Big\},
\tag 4.37
\\
\Cal B_n&=\Big\{\tilde\xi_n(t)=\tilde\zeta_n(t)
\,\,\text{ for all }\,\,1\le t\le n^{1+\epsilon}\Big\},
\tag 4.38
\\
\Cal C_n&=\Big\{\max_{1\le t\le
n^{1+\epsilon}}\left|\tilde\zeta_n(t)\right|\le b_n\Big\},
\tag 4.39
\\
\Cal D_n&=\Big\{\#\{0\le s\le\omega_n\wedge n^{1+\epsilon}:
S_n(s)\le b_n\}\le n^{2/3}\Big\},
\tag 4.40
\\
\Cal F_n&=\Big\{\omega^{\prime}_n\le n^{1+\epsilon}\Big\},
\tag 4.41
\\
\Cal G_n&=\Big\{\omega^{\prime}_n\le\omega_n\Big\}.
\tag 4.42
\endalign
$$
From the exponential tail bound (4.7), the coupling estimate
(4.25) and Lemma 3 ({\it ii.}) it follows that the events
$\Cal A_n$, $\Cal B_n$, $\Cal C_n$ and $\Cal D_n$ all have
probabilities tending to $1$, and on their intersection each of
the three terms on the right hand side of (4.36) is
$o(n^{1/2})$; this proves (4.34). Turning to (4.35): from the
weak convergence (4.33) of $\omega^{\prime}_n/n$ clearly
follows
$$
\bold P\Big(\Cal F_n\Big)\to 1,
\tag 4.53
$$
as $n\to\infty$.
To prove
$$
\bold P\Big(\Cal G_n\Big)\to 1
\tag 4.54
$$
as $n\to\infty$, notice that, by construction of $\tilde
S_n(\cdot)$, $\tilde S_n(\sigma_n(\omega_n))=0$ and consequently,
on $\Cal B_n$ \ $\tilde Y_n(\sigma_n(\omega_n))=0$, too. Thus, either
$\omega^{\prime}_n\le\sigma_n(\omega_n)<\omega_n$ or
$\sigma_n(\omega_n)\le n<\omega_n$. The first of these
alternatives implies $\Cal G_n$, while from the second one it
follows that
$$
\Cal G_n^{\text{c}}\cap\Cal B_n\cap\Cal C_n\cap\Cal D_n\cap\Cal
F_n\subset \Big\{\tilde Y_n(n)\le b_n\Big\}.
\tag 4.55
$$
Due to (4.33) the probability of the event on the right hand
side tends to zero, so (4.54) is proved. On the intersection of
the events above, $\left|\omega_n-\omega^{\prime}_n\right|$ is
bounded by the time spent by $S_n(\cdot)$ below the level
$b_n$, which is $o(n)$ in probability; hence (4.35) holds and
the proof of the Theorem is complete.
\hfill\hfill\qed
\demo{Proof of Proposition 2}
From Lemma 3 ({\it i.}) and ({\it ii.}), (4.14) and (4.15), it
easily follows that
$$
\bold P\Big(
\theta_{(0,\infty)}<\infty\Big|\!\Big|S_n(0)=k\Big)=1
\tag 4.56
$$
for every $k\in\Bbb Z_+$. Applying this to the walk restarted
after time $n$ we obtain $\omega_n<\infty$ almost surely.
\hfill\hfill\qed
\enddemo
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bigskip
\noindent
{\bf Appendix}
\medskip
\demo{Proof of Lemma 3}
Throughout the proof we drop the subscript $n$, writing $S(t)$
for $S_n(t)$ and $\pi_k(\cdot)$ for $\pi_n(\cdot|k,t)$. Due to
(4.4), (4.5) and (4.7) there are constants $C_{11}<\infty$ and
$C_{12}>0$ such that
$$
\left|\sum_{x\in\Bbb Z}x\,\pi_k(x)\right|\le
C_{11}\exp(-C_{12}k)
\tag A.1
$$
and
$$
\sum_{x\in\Bbb Z}\pi(x)\text{e}^{-C_{12}x}>1.
\tag A.2
$$
Here once again we used the symmetry (4.5) of the distribution
$\pi$.
Due to (A.2) we can choose $a_1\in\Bbb Z_+$ and a finite constant
$C_{13}<\infty$ such that for $k\ge a_1$
$$
C_{13}\left(\sum_{x\in\Bbb
Z}\pi_k(x)\text{e}^{-C_{12}x}-1\right)\ge C_{11}.
\tag A.3
$$
It is straightforward to check that from (A.3) and (A.1)
follows that both
$$
\pm S(t)+C_{13}\exp(-C_{12} S(t)),\qquad 0\le t\le
\theta_{(a_1,\infty)}
\tag A.4
$$
are submartingales.
To prove (4.13) notice first that due to the strong
exponential bound (4.8) on the decay rate of
the right tails, for any $k\in(a,b)\subset\Bbb Z_+$
$$
\bold E\Big(S(\theta_{(a,b)})\Big|\!\Big|
[S(0)=k]\land[S(\theta_{(a,b)})\ge b]\Big)\le
(b+C_{14})
\tag A.5
$$
with some $C_{14}<\infty$. Let $a_2=\max\{a_1, C_{13},
C_{14}\}$. For $a_2\le a<k<b$, applying the optional stopping
theorem to the submartingale (A.4) and using (A.5) we get
$$
\align
k&\le\bold E\Big(S(\theta_{(a,b)})+C_{13}
\exp\big(-C_{12}S(\theta_{(a,b)})\big)\Big|\!\Big|S(0)=k\Big),
\tag A.6
\\
\frac{k-a-C_{13}}{b-a+C_{13}+C_{14}}&\le
\bold P\Big(S(\theta_{(a,b)})>b\Big|\!\Big|S(0)=k\Big)
\tag A.7
\endalign
$$
which implies (4.13) with some constant $C_{15}>0$ for
$a_2\le a< k< b$.
Changing the constant $C_{15}$ to a smaller one
$$
C_9=
\min_{0\le a^{\prime}<k^{\prime}\le a_2}\bold P\Big(
S\big(\theta_{(a^{\prime},a_2]}\big)> a_2
\Big|\!\Big|S(0)=k^{\prime}\Big)
C_{15}>0
\tag A.8
$$
the inequality extends to any $0\le a< k< b$ and (4.13) is
proved.
(4.14) is proved in a very similar way
considering now the submartingale
$$
-S(t)+C_{13}\exp(-C_{12}S(t))-\left[-b+C_{13}\exp(-C_{12}b)\right],
\qquad 0\le t\le \theta_{(a,b)},
\tag A.9
$$
with $a_1\le a<k<b$, and the resulting inequality is extended
to arbitrary $0\le a<k<b$ in the same way as in (A.8).
To prove part ({\it ii.}) notice that, due to (4.4), (4.9) and
(A.1), we can choose $\underline\sigma>0$ and $a_4\in\Bbb Z_+$
such that for $k>a_4$
$$
\align
\sum_{x\in\Bbb Z}x^2\pi_k(x)&\ge\underline\sigma^2,
\tag A.10
\\
2k\left|\sum_{x\in\Bbb Z}x\,\pi_k(x)\right|&\le
\frac{\underline\sigma^2}{2}.
\tag A.11
\endalign
$$
For $k>a_4$ we have
$$
\bold E\Big(S^2(t+1) - \frac{\underline\sigma^2}{2}(t+1)
\Big|\!\Big| S(t)=k\Big)\ge k^2-\frac{\underline\sigma^2}{2}t
\tag A.12
$$
and consequently
$$
S^2(t)-\frac{\underline\sigma^2t}{2}, \qquad
t=0,1,\dots,\theta_{(a_4,b)}
\tag A.13
$$
is a submartingale.
Hence
$$
\bold E\Big(S^2(\theta_{(a_4,b)})-
\frac{\underline\sigma^2}{2}\theta_{(a_4,b)}
\Big|\!\Big|S(0)=k\Big) \ge k^2.
\tag A.14
$$
From (4.8), similarly to (A.5), we get, with some
$C_{16}<\infty$,
$$
\bold E\Big(S^2(\theta_{(a,b)})\Big|\!\Big|
[S(0)=k]\land[S(\theta_{(a,b)})>b]\Big)\le
(b+C_{16})^2.
\tag A.15
$$
(A.14) and (A.15) imply
$$
\frac{\underline\sigma^2}{2}\bold E\Big(\theta_{(a_4,b)}
\Big|\!\Big|S(0)=k\Big) \le (b+C_{16})^2,
\tag A.16
$$
and, finally (4.13) and (A.16) lead directly to
$$
\bold E\Big(\theta_{[0,b)}\Big|\!\Big|S(0)=k\Big) \le
C_9^{-1}b\left(\max_{0\le k^{\prime}\le a_4}
\bold E\Big(\theta_{[0,a_4]}\Big|\!\Big|S(0)=k^{\prime}\Big) +
\frac{4}{\underline\sigma^2}(b+C_{16})^2\right).
\tag A.17
$$
Due to the non-trapping condition (4.9)
$\max_{0\le k^{\prime}\le a_4}
\bold E\Big(\theta_{[0,a_4]}\Big|\!\Big|S(0)=k^{\prime}\Big)$ is
finite. Thus (A.17) proves (4.15).
\hfill\hfill\qed
\enddemo
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\bigskip
\newpage
\noindent
{\bf Acknowledgement:}
A very pleasant and instructive discussion with Erwin
Bolthausen, about self-repelling random walks, led me to the
formulation of the problem. Helpful conversations with Endre
Cs\'aki and P\'eter Major during the preparation of this work
are also gratefully acknowledged.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bigskip
\noindent
{\bf References}
\parindent=1.5cm
\item{[APP]} Amit, D., Parisi, G., Peliti, L.: Asymptotic
behaviour of the `true' self-avoiding walk.
{\sl Phys. Rev. B\/} {\bf 27} 1635-1645 (1983).
\item{[Bi]} Billingsley, P.: {\sl Convergence of Probability
Measures.\/} J. Wiley \& Sons, New York, 1968.
\item{[Bo]} Bolthausen, E.: On self-repellent one-dimensional
random walks. {\sl Probab. Th. Rel. Fields\/}
{\bf 86} 423-441 (1990).
\item{[Br]} Breiman, L.: {\sl Probability.\/} Addison-Wesley,
Reading, MA, 1968.
\item{[BS]} Brydges, D., Spencer, T.: Self-avoiding walk in 5
or more dimensions. {\sl Commun. Math. Phys.\/}
{\bf 97} 125-148 (1985).
\item{[CM]} Cs\'aki, E., Mohanty, S.G.: Excursion and meander in
random walk. {\sl The Canadian J. of Stat.\/} {\bf
9} 57-70 (1981).
\item{[D]} Davis, B.: Reinforced random walk. {\sl Probab. Th.
Rel. Fields.\/} {\bf 84} 203-229 (1990).
\item{[HS]} Hara, T., Slade, G.: The lace expansion for
self-avoiding walk in five or more dimensions. {\sl
Reviews in Math. Phys.\/} {\bf 4} 235-327 (1992).
\item{[KKS]} Kesten, H., Kozlov, M.V., Spitzer, F.: A limit law
for random walk in random environment. {\sl
Compositio Mathematica\/} {\bf 30} 145-168 (1975).
\item{[K]} Knight, F.B.: Random walks and a sojourn density
process of Brownian motion. {\sl Transactions of
AMS\/} {\bf 109} 56-86 (1963).
\item{[L]} Lindvall, T.: Weak convergence of probability measures
and random functions in the function space
$D[0,\infty)$. {\sl J. Appl. Probab.\/} {\bf 10} 109-121
(1973).
\item{[N]} Nummelin, E.: {\sl General Irreducible Markov Chains
and Non-Negative Operators.\/} Cambridge Univ. Press,
1984.
\item{[P]} Pemantle, R.: Phase transition in reinforced random
walk and RWRE on trees. {\sl The Annals of Probab.\/}
{\bf 16} 1229-1242 (1988).
\medskip
\enddocument
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%