\magnification = 1200
\hfuzz=10pt
\hsize=4.8in
\vsize=7.3in
\baselineskip=18pt
\hoffset=0.35in
\voffset=0.1in
\parindent=3pt
\def\v{\par\noindent}
\def\di{\displaystyle}
\def\seq#1#2{(#1_0, #1_1, \ldots , #1_{#2})}
\def\rest#1#2{\sigma|_{\di [ #1, #2]}}
\def\R{I\!\!R}
\def\C{I\!\!\!\!C}
\def\N{I\!\!N}
\def\Q{I\!\!\!\!Q}
\def\Z{I\!\!\!\!Z}
\def\ui{[0,1]}
\def\O{\Omega_{\geq}}
\def\o{\omega}
\def\t{\theta}
\def\z{\zeta}
\def\limsup{\mathop{\overline{\rm lim}}}
\def\liminf{\mathop{\underline{\rm lim}}}
\def\ut{{\tilde u}}
\def\dz{\zeta^{\prime}}
\def\S{\Sigma_{\geq}}
\def\SE{\Sigma}
\def\s{\sigma}
\def\a{\alpha}
\def\k{\kappa}
\def\b#1#2{\exp ( #1 ( \log #2) ^\a )}
\def\eps{\epsilon}
\def\sp{\sigma^{\prime}}
\def\A{{\cal A}}
\def\L{{\cal L}}
\def\P{{\cal P}}
\def\I{{\cal I}}
\def\df{f^{\prime}}
\def\ddf{f^{\prime \prime}}
\def\dphi{\phi^{\prime}}
\def\dpsi{\psi^{\prime}}
\def\dg{g^{\prime}}
\centerline{\bf INFINITE INVARIANT MEASURES}
\centerline{\bf FOR NON-UNIFORMLY EXPANDING TRANSFORMATIONS OF $\ui$:}
\centerline{\bf WEAK LAW OF LARGE NUMBERS WITH ANOMALOUS SCALING.}
\vglue 0.2cm
\vglue 1.0cm
\centerline{ Massimo Campanino\footnote{$^1$}
{Work supported by EC grant
SC1-CT91-0695} }
\vglue 0.2cm\centerline{Stefano Isola}
\vglue 0.4cm
\centerline{\it Dipartimento di Matematica,
Universit\`a degli Studi di Bologna,}
\centerline{\it piazza di Porta S.Donato 5, I-40127 Bologna, Italy}
\vskip 1cm
\centerline {March 15, 1994}
\vskip 1cm
{\bf Abstract.} We consider a class of maps of $\ui$
which are expanding everywhere but at a
fixed point where the derivative has modulus one.
Using the invariant ergodic probability measure
of a suitable, everywhere expanding, induced transformation
we are able to study the infinite invariant measure
of the original map in some detail.
Given a continuous function with compact support in $]0,1]$,
we prove that its time averages satisfy a `weak law of large numbers'
with anomalous scaling $n/\log n$ and give an upper bound for the decay of
correlations.
\vfill \eject
{\bf 1. Introduction.}
\vskip 0.5cm
We consider a smooth
map $f$ of the interval $\ui$ into itself
with a
neutral fixed point, such as those modelling Pomeau-Manneville type 1
intermittency [M.P]. When an orbit
falls in the vicinity of this fixed point, it stays there for a time
that can be arbitrarily long before reaching the `turbulent
region' again. Due to this fact, the SRB measure of
this dynamical system is simply
the Dirac delta measure concentrated at the indifferent fixed
point.
However, even though the ordinary Ces\`aro average along a typical
orbit would converge to the above trivial measure,
the main result of this paper shows
that Ces\`aro averages rescaled by the logarithm of the time yield
convergence in measure to a $\s$-finite invariant absolutely continuous
measure which describes the statistical properties of intermittent
events. An $L^1$ convergence to this measure
was already proved by Collet and Ferrero in [C.F] with
different techniques.
On the other hand, Aaronson showed in [A] that
convergence almost everywhere cannot hold, even for a single observable
(see also below: the remark after the proof of Theorem 3.1),
so that our result provides, in a sense, a sort of
`optimal' ergodic theorem for this class of transformations.
In the next Section, we introduce the model and
define a suitable everywhere expanding induced map
that was proved in [C.I] to preserve an absolutely continuous probability measure
which is also ergodic and uniformly mixing (for a similar construction see also [C.G]).
Using the explicit relation between this measure and the infinite invariant
measure of the original map some properties of the latter are studied in detail.
In particular, in Section 3, by studying the typical occupation times of
subintervals belonging to a suitable partition,
we prove a weak law of large numbers for continuous observables with
compact support in $]0,1]$
and show that it cannot
be sharpened to a strong one (see also [A]).
An upper bound for the decay of correlations is also obtained.
\vfill \eject
\vskip 1cm
{\bf 2. The basic model and its induced version.}
\vskip 0.5cm
We first recall the basic setting.
Let $f$ be the map of the unit interval $\ui$ defined as follows:
\item{(i)} $f(0)=0, \; f(1)=1$;
\item{(ii)} $f$ is monotone non-decreasing on $I_0 = [0,{1\over
2}[$ and on $I_1=]{1\over 2},1]$;
\item {(iii)} For each $I_i, i=0,1$, $f$ extends to a $C^2$ function
$f_i$ on its closure which is onto $\ui$.
\item{(iv)} There are two numbers $\alpha > 1$ and $L > 0$ such that:
$$
\df_{|I_1} \geq \alpha, \;\; \df(0)=1,\;\; \df_{|]0,1/2[}
\geq 1, \;\; \sup_{x\in \ui}|\ddf(x)/\df(x)|\leq L.
$$
\item{(v)} $\ddf(0)\not= 0$, which together with (iv) implies $\ddf (0) >0$.
\vskip 0.2cm
Let us consider the sequence of points $c_k$, $k\geq 0$,
given by $$
c_0=1, \;\; c_k=f_0^{-1}(c_{k-1}),\;\;\; k\geq 1.
$$
This sequence generates a countable partition of
$\ui$ into the intervals $A_k=[c_{k},c_{k-1}]$, $k\geq 1$,
which is a Markov partition. In particular,
$f(A_k)=A_{k-1}$, $k\geq 1$.
\vskip 0.5cm
{\it Example.}
One simple example of such
maps is given by the following formula:
$$
f(x)= \cases{x/(1-x), &if $x\leq 1/2$; \cr
2x-1, &if $x>1/2$. \cr}
$$
In this example we find $c_k= 1/(k+1)$, $k\geq 0$, and thus
$|A_k|=1/\bigl(k(k+1)\bigr)$, $k\geq 1$, where $|I|$ denotes the length of $I$.
More generally, one can easily realize that if
$f(x)-x \sim x^{\delta}$ when $x\to 0$ (where, in order to
satisfy the fourth assumption, $\delta \geq
2$) then $c_k \sim k^{-\beta}$, where $\beta
=1/(\delta -1)$ and $a(k)\sim b(k)$ means $a(k)/b(k)\to 1$ when
$k\to \infty$ (see Lemma 2.4 below). However, if the fifth assumption above has to be
satisfied, then $\delta$ must be equal to $2$.
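The recursion $c_k=f_0^{-1}(c_{k-1})$ for the example above is easy to check numerically. The following sketch (Python, purely illustrative and not part of the original argument) iterates the inverse branch $y\mapsto y/(1+y)$ and verifies $c_k=1/(k+1)$ and $|A_k|=1/(k(k+1))$:

```python
# Markov partition points for the example map f(x) = x/(1-x) on [0, 1/2]:
# c_0 = 1 and c_k = f_0^{-1}(c_{k-1}), where f_0^{-1}(y) = y/(1+y).
def f0_inv(y):
    return y / (1.0 + y)

c = [1.0]
for _ in range(49):
    c.append(f0_inv(c[-1]))

# Closed forms: c_k = 1/(k+1) and |A_k| = c_{k-1} - c_k = 1/(k(k+1)).
for k in range(50):
    assert abs(c[k] - 1.0 / (k + 1)) < 1e-9
for k in range(1, 50):
    assert abs((c[k - 1] - c[k]) - 1.0 / (k * (k + 1))) < 1e-9
```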
\vskip 0.5cm
We now construct a coding. Let $\O$ be the set of one-sided
sequences $\o = (\o_0,\o_1,\dots )$, $\o_i\in\{1,2,\dots\}$
satisfying the compatibility condition: given $\o_i$ then either
$\o_{i-1}=\o_i +1$ or $\o_{i-1}=1$. Then, the map
$$
\phi :\o \rightarrow \phi(\o) =x\quad\hbox{according to}\quad
f^i(x)\in A_{\o_i},\;\; i\geq 0
$$
is a
bijection between $\O$ and the points of $\ui$ which are not
preimages of the origin. Moreover, $\phi$ conjugates the map
$f$ with the shift $\tau$ on $\O$.
For every integer $i\geq 1$ we denote by $x_i$ the projection on
the $i^{th}$ symbol, i.e. $x_i(\o)=\o_i$,
and define the ``free'' probability measure $\mu$ by
$$
\mu(\o_i) = |A_{\o_i}|,\;\; i\geq 1 \eqno(2.1)
$$
With slight abuse of language we shall again denote by $\mu$ the measure
$\mu\circ\phi^{-1}$, i.e. the Lebesgue measure on $\ui$.
\vskip 0.5cm
We now introduce the infinite sequence
$\tau_j$, $j\geq 1$, of successive entrance times in the state $1$:
$\tau_1(\o)=\inf\{i\geq 0\;:\; x_i(\o)=1\}$ and, for $j\geq 2$,
$\tau_j(\o)=\inf\{i>\tau_{j-1}\;:\; x_i(\o)=1\}$.
Furthermore, we define a sequence of integer valued random
variables by
$$
\s_j(\o)=\tau_{j+1}-\tau_j,\;\; j\geq 0\eqno(2.2)
$$
with the convention that $\tau_0=-1$.
\vskip 0.5cm
{\bf Remark.} If $f$
is affine on each interval $A_k$ of the Markov partition, with slope
$s_k=|A_{k-1}|/|A_k|$ (with $|A_0|=1$), then the stochastic process on
$\O$ given by $x_i(\o)=\o_i$, $i\geq 1$, is a Markov chain with conditional
probabilities
$p_{ij}= \mu(f_0^{-1}(A_j)\bigcap A_i)/\mu(A_i)$. As a consequence,
the random variables $\s_j$ are
independent and identically distributed. Their common law is
given by: ${\rm Prob}(\s_j=k)=\mu(k)$ for any $j\geq 0$ and $k\geq 1$.
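In this affine picture the anomalous scaling is already visible at the level of the common law: the mean of $\s_j$ is infinite, but only logarithmically so. A numerical sketch (Python, illustrative only) of the truncated mean $\sum_{k\le K}k\,\mu(k)=\sum_{k\le K}1/(k+1)\sim\log K$:

```python
import math

# For the affine model, P(sigma = k) = mu(k) = 1/(k(k+1)), k >= 1.
# The mean sum_k k*mu(k) = sum_k 1/(k+1) diverges, but only like log K:
# this logarithmic divergence is the source of the n/log n scaling.
def truncated_mean(K):
    return sum(k / (k * (k + 1.0)) for k in range(1, K + 1))

K = 10**5
s = truncated_mean(K)
# sum_{k=1}^{K} 1/(k+1) = H_{K+1} - 1 = log K + gamma - 1 + o(1)
assert abs(s - (math.log(K + 1) + 0.57721566 - 1.0)) < 0.01
```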
\vskip 0.5cm
{\bf Definition 2.1.} {\it The `first passage' map (on the interval $A_1$),
is the map $g\; :\;\ui\to \ui$
induced by $f$ in the following way}:
$$
x\rightarrow g(x) = f^{n(x)}(x)
\quad\hbox{where}\quad n(x)=1+\min \{n\geq 0 \;:\; f^n(x)\in
A_1\;\}\eqno(2.3)
$$
Notice that the usual return time function $r(x)$ in the interval $A_1$ is
then given by $r(x)=\min \{n\geq 1 \;:\; f^n(x)\in
A_1\;\}=n\circ f(x)$.
Equivalently, the map $g$ is given by
$$
x\rightarrow g(x) = g_k(x)=f^k(x)
\quad\hbox{if}\quad x\in A_k,\;\; k\geq 1.
\eqno(2.4)
$$
and defined arbitrarily on the countable set $c_k$, $k\geq 0$
(for similar constructions see also [T2],[P],[P.S],[C.G]).
This map is
expanding and surjective, i.e.:
\item{(1)} $\dg(x)_{|A_k}\geq\alpha>1$, for any $k\geq 1$;
\item{(2)} $g$ maps each $A_k$ monotonically onto $\ui$, more
precisely ${\overline {g_k([c_k,c_{k-1}])}}=\ui$, $k\geq1$.
Moreover, one can easily realize that the following property
holds:
\item{(3)} for any $\o\in \O$, let $x=\phi(\o)$. Then
$g^j(x)\in A_{\s_j}$, $j\geq 1$, where
the integers $\s_j=\s_j(\o)$ are defined in (2.2).
Furthermore, let $\S$ be the set of {\it all} one-sided
sequences $\s$ of the form $\s =(\s_0,\s_1,\dots )$,
$\s_j\in\{1,2,\dots\}$. Then, the map
$$
\pi :\s \rightarrow \pi(\s) =x\quad\hbox{according to}\quad
g^j(x)\in A_{\s_j},\;\; j\geq 0\eqno(2.5)
$$
is a
bijection between $\S$ and the points of $\ui$ which are not
preimages of zero. Moreover, $\pi$ conjugates the map
$g$ with the shift $\tau$ on $\S$.
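For the example map, the first passage map of Definition 2.1 and the passage time $n(x)$ can be written down directly; the sketch below (Python, illustrative only) uses the fact that on $A_k=[1/(k+1),1/k]$ one has $n(x)=k$:

```python
# First-passage map g(x) = f^{n(x)}(x) for the example map, where
# n(x) = 1 + min{ j >= 0 : f^j(x) in A_1 } and A_1 = [1/2, 1].
def f(x):
    return x / (1.0 - x) if x <= 0.5 else 2.0 * x - 1.0

def n(x):
    k = 0
    while not 0.5 <= x <= 1.0:
        x = f(x)
        k += 1
    return k + 1

def g(x):
    for _ in range(n(x)):
        x = f(x)
    return x

# x = 0.3 lies in A_3 = [1/4, 1/3], so n(0.3) = 3 and
# g(0.3) = f^3(0.3) = f(0.75) = 0.5.
assert n(0.3) == 3
assert abs(g(0.3) - 0.5) < 1e-9
```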
\vskip 1cm
In [C.I] we have constructed an invariant
ergodic probability measure $\rho$ for the dynamical system $(\ui, g)$ as a
Gibbs state on $\S$ for the function
$$
V(\s)=\log (\dphi_{\s_0}(\pi(\s_1\s_2\dots )))
$$
where $\phi_k : \ui \to [c_k,c_{k-1}]$ is the inverse function of $g_k$.
Furthermore, $\rho$ is proved to be
absolutely continuous with respect to the Lebesgue measure $\mu$
(with bounded Radon-Nikodym derivative) and
to satisfy the uniform mixing property (see [R]).
\vskip 0.5cm
We now prove the following:
\vskip 0.2cm
{\bf Lemma 2.1.} {\it Set $D_k = \bigcup_{l\geq k}A_l$. Then the measure
$\nu$ defined for any Borel subset $E$ of $\ui$ by
$$
\nu (E) = \sum_{k\geq 1} \rho (f^{-k+1}E \cap D_k) \eqno(2.6)
$$
is invariant under $f$. }
\vskip 0.2cm
{\sl Proof.} Clearly $D_k=D_{k+1}\cup A_k$. Therefore,
$$
\eqalign{\nu (f^{-1}E) &=
\sum_{k\geq 1} \rho (f^{-k+1}(f^{-1}E) \cap D_{k+1}) +
\sum_{k\geq 1} \rho (f^{-k+1}(f^{-1}E) \cap A_k) \cr
&= \sum_{k\geq 2} \rho (f^{-k+1}E \cap D_k) +
\sum_{k\geq 1} \rho (g^{-1}E \cap A_k) \cr
&= \sum_{k\geq 1} \rho (f^{-k+1}E \cap D_k) = \nu (E) \cr }
$$
Q.E.D.
\vskip 0.2cm
In particular, (2.6) yields
$$
\nu (A_k) = \sum_{l\geq k} \rho ( A_l) = \rho ( D_k)\eqno(2.7)
$$
and the mean return time in the interval $A_1$ can be computed using Kac's
formula and (2.7) as
$$
\nu (\ui) =\int_{A_1}r(x)d\nu(x)=
\int_0^1n(x)d\rho(x) = \sum_{k\geq 1}k\rho (A_k)\eqno(2.8)
$$
On the other hand, we know that $\rho(A_k) \sim C/k^2$ as
$k\to \infty$ for some constant $C>0$ (see Lemma 2.4 below).
Hence the above sum diverges logarithmically.
Let us now consider the transfer operators $\P$ and $\L$ defined by:
$$
{\P} h(x) = \sum_{y:f(y)=x}{h(y)\over |\df(y)|} \qquad\hbox{and}\qquad
{\L} h(x) = \sum_{y:g(y)=x}{h(y)\over |\dg(y)|}
$$
An important property of these operators is that their fixed points
must be the densities of the corresponding
absolutely continuous invariant measures.
Indeed, if we consider for instance the operator $\L$,
we know from [C.I] that $g$
leaves invariant the measure $\rho = h\mu$
where $\mu$ denotes the Lebesgue measure and $h$ is
the unique fixed point of $\L$ in the space of all real H\"older continuous functions
on $]0,1]$.
Thus,
$$
\rho (g^{-1}E) = \int_{g^{-1}E}h(x)d\mu(x) = \int_E (\L h)(x)d\mu (x)
= \int_E h(x)d\mu (x) = \rho (E)
$$
and conversely.
\vskip 0.2cm
{\bf Lemma 2.2.} {\it Let $e$ be such that ${\P}e=e$. Then
$$
e=h+\sum_{i=1}^{\infty}{\P}_0^i h\eqno(2.9)
$$
which is identical to (2.6) provided $\nu=e\mu$. }
\vskip 0.2cm
{\sl Proof.}
Let $\psi_k : \ui \to I_k$, $k=0,1$, be such that $\psi_k\circ
f_k(x)=x$. Then $\P$ can be decomposed as
$$
{\P} h(x)={\P}_0 h(x)+{\P}_1 h(x)=
h \circ \psi_0 (x) \cdot \dpsi_0 (x) +h \circ \psi_1 (x)
\cdot \dpsi_1 (x) \eqno(2.10)
$$
and moreover one can easily check that
$$
{\L} h(x) = \sum_{i=1}^{\infty} h \circ \phi_i (x) \cdot \dphi_i (x) =
{\P}_1 h(x) + \sum_{i=1}^{\infty}{\P}_0^i {\P}_1
h(x)\eqno(2.11)
$$
The statement follows immediately from (2.10) and (2.11). Q.E.D.
\vskip 0.2cm
{\bf Remark.} We point out that Lemmas 2.1 and 2.2 hold in a
slightly
more general setting: assumption (v) on
$f$ is not needed and assumption (iv) can be partially relaxed in such a
way as to include transformations $f$ such that
$f(x)-x \sim x^{\delta}$ when $x\to 0$ with any $\delta > 0$
(we recall that if $\delta < 2$ the invariant measure $\nu$ is
finite), as well as uniformly expanding maps such as:
$$
f(x)= \cases{(2-a)x/(1-ax), &if $x\leq 1/2$; \cr
2x-1, &if $x>1/2$. \cr}
$$
where $a=1-\epsilon$, $\epsilon >0$.
\vskip 0.2cm
We conclude this Section with the following estimates
(see [T1], [C.F] for related results):
\vskip 0.2cm
{\bf Lemma 2.3.} {\it Under the assumptions (i)-(v) above
$$
{C_1 \over x}\leq e(x) \leq {C_2 \over x} \eqno(2.12)
$$
where $C_1,C_2$ are positive constants.}
\vskip 0.2cm
{\sl Proof.}
We know from [C.I] that $h$ satisfies the bound
$d^{-1}\leq h \leq d$ for some positive constant $d$.
Then from Lemma 2.2
$$
d^{-1}\sum_{k=0}^{\infty}{\P}_0^k1(x) \leq e (x)
\leq d \sum_{k=0}^{\infty}{\P}_0^k1(x)\eqno(2.13)
$$
Furthermore,
$$
\sum_{k=0}^{\infty}{\P}_0^k1(x) =\sum_{k=0}^{\infty}(\psi_{0}^k)^{\prime}(x)
$$
where $\psi_{0}^k\circ f_0^k(x)=x$.
We now use a reasoning similar to that of [T1].
We have $\psi_{0}(x)<x$ on $]0,1]$ and, by assumptions (iv) and (v),
$$
x - \left( a + {u(x)\over x^2}\right) x^2 \;\leq\; \psi_0(x) \;\leq\;
x - \left( a - {v(x)\over x^2}\right) x^2
\eqno(2.15)
$$
where $a={1\over 2}\ddf(0)>0$ and $u$, $v$ are continuous functions defined on $\ui$
such that $u(x)/x^2$ and $v(x)/x^2$ tend to zero as $x\to 0$. Iterating (2.15)
one obtains, for every $k\geq 0$,
$$
{1\over \bigl( 1+ (a+u(x)/x^2)\, kx\bigr)^2} \;\leq\; \dpsi_{k0}(x) \;\leq\;
{1\over \bigl( 1+ (a-v(x)/x^2)\, kx\bigr)^2}
\eqno(2.16)
$$
where $\psi_{k0}=\psi_0^k$. Putting together (2.15)
and (2.16) and summing over $k$ we easily find
$$
{1\over x}\left( {1\over a+u(x)/x^2}\right) \le \sum_{k=0}^{\infty}{\P}_0^k1(x)
\le {1\over x}\left( {1\over a - v(x)/x^2}\right)
\eqno(2.17)
$$
which immediately yields the result.
Notice that for the paradigmatic example: $f_0(x)=x/(1-x)$, we simply have
$\psi_0(x)=(1+1/x)^{-1}$ and $\psi_{i0}(x)=(i+1/x)^{-1}$, so that
$\dpsi_{i0}(x) = (1+ix)^{-2}$ and (2.12) follows at once with
the choice $C_1^{-1}=C_2=d$.
Q.E.D.
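For the paradigmatic example the bound (2.12) can also be checked numerically: with $\dpsi_{i0}(x)=(1+ix)^{-2}$, the scaled sum $x\sum_{i\ge 0}(1+ix)^{-2}$ tends to $1$ as $x\to 0$, so that $e(x)$ is indeed of order $1/x$. A sketch (Python, illustrative only):

```python
# For f_0(x) = x/(1-x) one has psi_{i0}'(x) = (1+ix)^{-2}.
# The scaled sum x * sum_i (1+ix)^{-2} is a Riemann sum of
# int_0^infty (1+u)^{-2} du = 1, so the sum itself is of order 1/x.
def scaled_sum(x, terms=10**6):
    return x * sum((1.0 + i * x) ** -2 for i in range(terms))

assert 0.9 < scaled_sum(0.01) < 1.1
```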
\vskip 0.2cm
{\bf Lemma 2.4.} {\it Let $\mu$ be the free probability measure over $\N$ defined in
(2.1) and $\rho$ the probability measure invariant under $g$.
Then, under the assumptions (i)-(v) above,
$$
\mu(k) = |A_k| \sim {1\over k^2}\eqno(2.18)
$$
and
$$
\rho(A_k) \sim {C\over k^2}\eqno(2.19)
$$
where $C$ is a positive constant and $a(k)\sim b(k)$ means $a(k)/b(k)\to 1$ when $k\to \infty$.}
\vskip 0.2cm
{\sl Proof.}
We first prove that there are constants $C,D>0$ and $\gamma>0$ such that
for $k$ large enough,
$$
C - D c_{k-1}^{\gamma}\leq {\rho(A_k)\over |A_k|}
\leq C + D c_{k-1}^{\gamma}\eqno(2.20)
$$
Indeed, let $\{x_k\}_{k\ge 1}$ be the sequence defined by
$$
\rho(A_k) = \int_{A_k}h(x)dx = h(x_k)|A_k|\qquad x_k\in A_k,\;\;k\ge 1
$$
Obviously $c_{k}\le x_k\le c_{k-1}$.
We know from [C.I] that $h$ is $\gamma$-H\"older for some
$\gamma>0$, so that, for some $L>0$,
$$
|h(x_k) - h(x_{k+1})| \leq L |x_k - x_{k+1}|^{\gamma} \to 0\quad\hbox{as}
\quad k\to \infty
$$
Hence $\{h(x_k)\}_{k\ge 1}$ is a Cauchy sequence whose limit
we denote by $h_0$. Then we write,
$$
{\rho(A_k)\over |A_k|} = h_0 + (h(x_k)-h_0)
$$
The result follows immediately from the inequality
$|h(x_k)-h_0|\leq Lx_k^{\gamma}\leq Lc_{k-1}^{\gamma}$.
Now, from Lemma 2.3 we have that if $k$ is sufficiently large there exist constants
$0< M_1\le M_2$ such that
$$
M_1\log{(c_{k-1}/c_k)}\le
\nu(A_k)\equiv \int_{c_k}^{c_{k-1}} e(x) dx \le M_2\log{(c_{k-1}/c_k)}\eqno(2.21)
$$
On the other hand, we know from Lemma 2.1 that
$$
\nu(A_k)=\rho(D_k)=\sum_{l\ge k}\rho(A_l)\eqno(2.22)
$$
so that
$$
\nu(A_{k-1})-\nu(A_k)=\rho(A_k)\eqno(2.23)
$$
From (2.21) and (2.23) we get
$$
K_1\log{(c_{k-2}/c_k)}\le \rho(A_k) \le K_2\log{(c_{k-2}/c_k)}\eqno(2.24)
$$
for suitable constants $0<K_1\le K_2$. In particular the constant $h_0$ in (2.20)
is strictly positive, and (2.19) follows from (2.20) and (2.18) with $C=h_0$. Q.E.D.
\vfill \eject
{\bf 3. A weak law of large numbers with anomalous scaling.}
\vskip 0.5cm
Since the probability measure $\rho$ is invariant and ergodic for the induced map
$g$, the Birkhoff ergodic theorem gives, for any $u\in L^1(\rho)$,
$$
\lim_{N\to \infty}{1\over N}\sum_{s=0}^{N-1}u (g^sx)
=\rho(u)\quad\hbox{a.e.}\eqno(3.2)
$$
No such statement can hold for the original map $f$, the invariant measure $\nu$
being infinite. Our main result is the following substitute.
\vskip 0.2cm
{\bf Theorem 3.1.} {\it Let $u$ be a continuous function with compact support in
$]0,1]$. Then, for any $\epsilon >0$ and for any
${\tilde \nu}\ll \nu$, such that ${\tilde \nu}(\ui) < \infty$,
$$
\lim_{n\to \infty}{\tilde \nu}\biggl(\left\{ x: \left|
{1\over a_n}\sum_{k=0}^{n-1}u(f^k(x)) - \nu(u) \right| \geq \epsilon
\right\} \biggr) = 0 \eqno(3.5)
$$
where $a_n \sim c\, n/\log n$ for some constant $c>0$.}
\vskip 0.2cm
We prove Theorem 3.1 through a sequence of Lemmas.
\vskip 0.2cm
{\bf Definition 3.1.} {\it Let $u:\; ]0,1] \to \R$ be any real function.
Its} induced {\it version $\ut$ is defined by }
$$
\ut (x) = \sum_{s=0}^{n(x) -1}u(f^sx)\quad\hbox{where}\quad
n(x)=1+\min \{k\geq 0\; : \; f^k(x)\in A_1 \}\eqno(3.6)
$$
Notice that from (2.5) it follows immediately that
$n(g^k(x))=\s_k$ where $\s=(\s_0,\s_1,\dots)=\pi^{-1}(x)$.
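For the example map and a concrete observable, Definition 3.1 can be sketched as follows (Python, illustrative only; the observable $u$ is an arbitrary choice, not from the paper):

```python
# Induced version u~(x) = sum_{s=0}^{n(x)-1} u(f^s(x)) for the example map.
def f(x):
    return x / (1.0 - x) if x <= 0.5 else 2.0 * x - 1.0

def n(x):                       # n(x) = 1 + min{k >= 0 : f^k(x) in A_1}
    k = 0
    while not 0.5 <= x <= 1.0:
        x = f(x)
        k += 1
    return k + 1

def induced(u, x):
    total = 0.0
    for _ in range(n(x)):
        total += u(x)
        x = f(x)
    return total

u = lambda x: x                 # hypothetical observable, for illustration
# For x = 0.3 in A_3: n(x) = 3 and the orbit segment is 0.3, 3/7, 3/4.
assert abs(induced(u, 0.3) - (0.3 + 3.0 / 7.0 + 0.75)) < 1e-9
```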
\vskip 0.2cm
{\bf Lemma 3.1.} {\it For any $u$ as in Theorem 3.1,}
$$
\nu(u)=\rho(\ut) \eqno(3.7)
$$
{\sl Proof.} From Lemma 2.2 we have
$$\eqalign{
\nu(u) &= \int_{\ui}u(x)e(x)d\mu(x) =
\sum_{k=0}^{\infty}\int_{\ui} u(x) ({\P}_0^kh)(x) d\mu(x) \cr
&= \sum_{k=0}^{\infty}\int_{f_0^{-k}(\ui)} u(f_0^kx) h(x) d\mu(x) \cr }
$$
On the other hand, $x\in f_0^{-k}(\ui)$ if and only if $n(x)>k$, so that
$$
\sum_{k=0}^{\infty}\int_{f_0^{-k}(\ui)} u(f_0^kx) h(x) d\mu(x)
=\int_{\ui} \sum_{k=0}^{n(x)-1}u(f^kx)\, h(x) d\mu(x) = \rho (\ut)
$$
Q.E.D.
\vskip 0.2cm
Let us now consider an orbit $\{f^kx\}_{k=0}^{n-1}$, for some $x\in ]0,1]$, and
denote by $N(n,x)$ the number of its passages in $A_1$ or, in other terms,
the number of symbols in its (truncated) $\s$-coding $(\s_0,\s_1,\dots ,\s_{N(n,x)-1})$.
We can thus write:
$$
\sum_{k=0}^{n-1}u(f^kx) = \sum_{s=0}^{N(n,x)-1}\ut (g^sx) + R(n,x,u)
\eqno(3.8)
$$
where the remainder is given by
$$
R(n,x,u) = \sum_{s=m(n,x)}^{n-1}u(f^sx)\quad\hbox{with}\quad
m(n,x)=\sum_{k=0}^{N(n,x)-1}n(g^kx)\eqno(3.9)
$$
\vskip 0.2cm
{\bf Lemma 3.2.} {\it For any $u$ as in Theorem 3.1,}
$$
\lim_{n\to \infty}{1\over N(n,x)}\sum_{s=0}^{n-1}u (f^sx)
=\nu(u )\quad\hbox{a.e.}\eqno(3.10)
$$
{\sl Proof.} Obviously $N(n,x)\to \infty$ as $n\to \infty$
for any $x\in ]0,1]$.
Moreover,
since $u$ is bounded and compactly supported on $]0,1]$ then
$\ut$ is bounded as well and in particular is in $L^1$. Therefore we may
apply (3.2) so that,
$$
\lim_{n\to \infty}{1\over N(n,x)}\sum_{s=0}^{N(n,x)-1}\ut (g^sx)
=\rho(\ut )\quad\hbox{a.e.}\eqno(3.11)
$$
Finally, the remainder is a partial sum of $u$ along the single excursion starting
at $g^{N(n,x)}(x)$, so that
$$
|R(n,x,u)|\leq \sum_{s=0}^{n(g^{N(n,x)}x)-1}|u(f^s(g^{N(n,x)}x))|
$$
and the r.h.s. is uniformly bounded. This entails
$$
\lim_{n\to \infty}{R(n,x,u)\over N(n,x)} =0
$$
The assertion is now
an immediate consequence of (3.11) and Lemma 3.1. Q.E.D.
\vskip 0.2cm
We now prove the key Lemma:
\vskip 0.2cm
{\bf Lemma 3.3.} {\it For any $\epsilon >0$,
$$
\lim_{n\to \infty}\rho \biggl(\left\{ x:
1-\epsilon < {N(n,x)\over a_n}< 1+ \epsilon
\right\} \biggr) = 1 \eqno(3.12)
$$
where $a_n $ is as in Theorem 3.1.}
\vskip 0.2cm
{\sl Proof.}
We first consider the function $m(n,x)$ defined in (3.9)
and observe that for $x\in ]0,1]$
$$
{m(n,x)\over n} \rightarrow 1 \quad\hbox{as}\quad n\to \infty
\eqno(3.13)
$$
in measure. This implies that the sequence
$a_n$ of Theorem 3.1 can be replaced by the sequence $a_{m(n,x)}$
without altering (3.12).
We first consider the following inverse problem:
Let $N$ and $x\in ]0,1]$ be given and let $(\s_0, \dots ,\s_{N-1})$
be the first $N$ symbols of the sequence $\pi^{-1}(x)$.
We want to prove a `weak law of large numbers' for the sequence
$$
m(N,x)= \sum_{i=0}^{N-1} \s_i\eqno(3.14)
$$
We partition the non-negative integers into intervals
$J_k=[l_k, r_k]$, $k \geq 0$, defined below.
Correspondingly we define the events
$$
E^{(i)}_k =\{\s_i \in J_k(N)\},\qquad i\geq 0\eqno(3.15)
$$
and the sequences
$$
Z_k(N) = \sum_{i=0}^{N-1} \chi_{ E^{(i)}_k }\eqno(3.16)
$$
Thus, we have
$$\sum_{k=0}^{\infty} l_k Z_k(N) \le
m(N) \le
\sum_{k=0}^{\infty} r_k Z_k(N) \eqno(3.17)
$$
Set
$\varepsilon_N =\b {-c_1} N$ where $c_1$ is some positive constant.
Then, the intervals $J_k=J_k(N)=[l_k, r_k]$
are defined inductively as follows:
$l_0=0$, $r_0=1$ and
$$
l_k = r_{k-1}+1, \quad
r_k=[(1+\varepsilon_N)l_k]\quad\hbox{for}\quad k\geq 1\eqno(3.18)
$$
so that $r_k \le (1+\varepsilon_N) l_k$ for any $k\geq 1$\footnote{$^{1}$}{
Notice that a finite number of the $J_k$'s is made out of a single point.}.
Let $\a$ be a real constant $0 < \a < 1$.
We then introduce two integer-valued sequences
$\{g_N\}$ and $\{h_N\}$ defined by
$$
\eqalign{
g_N &= \left[ N \exp ( -c_3 ( \log N)^\a ) \right] \cr
h_N &= \left[ N \exp ( c_4 ( \log N)^\a ) \right] \cr}\eqno(3.19)
$$
where $c_3$, $c_4$ are positive constants to be specified later on,
and divide the intervals $J_k$'s into three categories
denoted by $\I_1$, $\I_2$, $\I_3$:
the intervals contained in $[1, g_N]$ belong to $\I_1$, those intersecting
$[g_N +1, h_N]$ belong to $\I_2$ and $\I_3$ contains the remainder.
We see at once that the intervals in $\I_3$ can be neglected
in our analysis.
Indeed, let the event $F^{(3)}$ be defined by
$$
F^{(3)}= \{ \max _{0 \le i \le N-1} \sigma _i > h_N \}\eqno(3.20)
$$
Then, for $N$ large enough, we can find a constant $G>0$ such that
$$
\rho \left(F^{(3)} \right) \le G {N \over h_N}\eqno(3.21)
$$
so that it goes to $0$ for $N \to \infty$.
Now, in order to exploit the exponential mixing property
of the random variables $\sigma_i$'s we decompose each
$Z_k$ as
$$
Z_k=Z_k^{(0)}+ \ldots +Z_k^{(s-1)}\eqno(3.22)
$$
where $s_N =\b {c_2} N$ with $c_2>0$ to be specified later, and
$$
Z_k^{(j)} = \sum_{\scriptstyle 0\leq i \leq N-1 \atop
\scriptstyle i = j({\rm mod}\, s)}\chi_{ E^{(i)}_k }\eqno(3.23)
$$
Set moreover
$$
p_k = \rho(E^{(0)}_k).\eqno(3.24)
$$
Now, by virtue of Lemma 2.4, we can find two positive constants $b$, $c$ such that,
for $k$ large enough,
$$
p_k \geq \sum_{n=l_k}^{r_k} { c \over n^2}
\geq b \left( {1 \over l_k} - {1 \over r_k} \right)
\eqno(3.25)
$$
We now want to estimate the probability of the event
$F^{(1)}_k$ defined for $k$ corresponding to an interval in $\I_1$ by
$$
F^{(1)}_k =
\left\{ | Z_k - N p_k |
\ge t \, s_N \, \sqrt {{ N p_k (1 - p_k)\over s_N} } \right\}
$$
where $t$ will be specified later.
On the other hand, if $J_k$ belongs to $\I_1$ and we
choose the constants $c_1, c_2, c_3$ in
such a way that $c_3 > c_1 + c_2$ then
$$
{ N \over s_N} p_k \ge R\exp ( \gamma ( \log N ) ^ {\a} )
$$
where $\gamma = c_3 - (c_1 + c_2) >0$ and $R$ is a
suitable positive constant.
Indeed, from (3.25) we have
$$
{N\over s_N} p_k \geq b {N \over s_N} \left(
{1 \over l_k} - { 1 \over r_k} \right) \ge
{b \over l_k}{N \over s_N} \left(1 - { 1 \over 1+\varepsilon_N } \right)
\ge R\exp ( \gamma ( \log N ) ^ {\a} )
$$
This allows us to apply the law of large numbers to
the intervals $J_k$'s in $\I_1$, in the following way:
for any $j\in \{1, \dots ,s_N\}$ and for any real positive $\beta$,
$$
\eqalign{
&\rho \left(
Z^{(j)}_k -{\di N p_k \over \di s_N}
\ge t \sqrt {{\di N p_k (1 - p_k) \over \di s_N} }\right) = \cr
&\rho \left(
\exp \beta\left( Z^{(j)}_k -{\di N p_k \over \di s_N} \right)
\ge \exp{ \beta t \sqrt {{\di N p_k (1 - p_k) \over \di s_N}} }\right)
\le \cr
& \rho \left( \prod_l e^{ \di \beta
( \chi _{\di E^{(l)}_k} -p_k ) } \right)
\exp \left( -\beta t\sqrt {{N p_k (1 - p_k)\over s_N} } \right) \le \cr
& \left( \rho ( e^{ \di \beta ( \chi_{E_k^{(0)}} -p_k) }) + e^{\beta}
R\eta^{\di s_N} \right)^{\di N/s_N}
\exp \left( -\beta t \sqrt {{N p_k (1 - p_k)\over s_N} } \right) \cr }
$$
where the product ranges over the
$l$'s of the sublattice corresponding to $Z^{(j)}_k$ and the term
involving $0<\eta <1$ and the constant $R>0$ accounts
for the exponential mixing property.
We now make the choices
$$
t = \exp ( c_5 ( \log N) ^\a ), \qquad
\beta = {c_6 t \over \sqrt{ {N p_k (1 - p_k) \over s_N} } }
$$
and, by expanding the generating function at $\beta =0$, we find
$$ \eqalign{
\rho \left(
Z^{(j)}_k -{ N p_k \over s_N}
\ge t \sqrt {{ N p_k (1 - p_k)\over s_N} } \right) &\le
Be^{- b t^2 /2} \cr
&= B\exp{( -b \exp ( 2c_5 ( \log N) ^\a ) /2) }\cr }
\eqno(3.26)
$$
for suitable positive constants $B$ and $b$.
In the same way we prove that
$$
\rho \left(
Z^{(j)}_k -{ Np_k \over s_N}
\le - t \sqrt {{Np_k (1 - p_k) \over s_N} } \right) \le
Be^{- b t^2 /2}\eqno(3.27)
$$
By adding over $j=1, \ldots, s_N$,
we finally obtain
$$
\rho \left( F^{(1)}_k \right) \le
2Bs_N \exp ( -b \exp ( 2c_5 ( \log N) ^\a ) /2)\eqno(3.28)
$$
Set
$$
F^{(1)} = \bigcup_{k\in \I_1} F^{(1)}_k
$$
where, with slight abuse of language, $k\in \I_1$ means that $k$
ranges over the values
corresponding to $J_k \in \I_1$. Using (3.28) we can
estimate the $\rho$-probability of the above event as
$$
\rho \left( F^{(1)}\right) = \sum_{k\in \I_1}
\rho \left( F^{(1)}_k\right) \leq B^{\prime} g_N
s_N \exp ( -b \exp ( 2c_5 ( \log N) ^\a ) /2)\eqno(3.29)
$$
so that
$$
\lim_{N\to \infty} \rho \left( F^{(1)}\right) = 0\eqno(3.30)
$$
Now, the contribution of $\I_1$ to $m(N,x)$,
which we denote by $m_1(N)$,
is
$$
\sum_{k\in \I_1} l_k Z_k(N) \le
m_1(N) \le
\sum_{k\in \I_1} r_k Z_k(N)
$$
and the results obtained above immediately imply that,
with probability approaching $1$ as $N\to \infty$, it is of order
$$
m_1(N) \sim \k_1 N \log g_N \sim \k N \log N \eqno(3.31)
$$
where $\k_1$, $\k$ are suitable positive constants.
\vskip 0.2cm
Let us now consider the intervals in $\I_2$.
For $J_k\in \I_2$
we want to estimate the $\rho$-probability of the event
$$
F^{(2)}_k = \{ Z_k \ge t N p_k \}\eqno(3.32)
$$
where this time we keep $t$ constant, say $t= 4$.
We perform the
same decomposition of $Z_k$ as in (3.22)
and estimate for any $j=1, \dots ,s$,
$$
\rho \left(
Z^{(j)}_k \ge t{N p_k \over s_N} \right)
$$
By proceeding as above with the choice $\beta=$constant, say $\beta =1$,
we obtain, for $N$ large enough,
$$
\rho \left(
Z^{(j)}_k \ge t{N p_k\over s_N} \right) \le
\left( \left( 1 + p_k ( e^{\beta} -1) + e ^{\beta} \eta^{s_N} \right)
e^{-\beta t p_k} \right)^{N/s_N} \le De^{(-dN/s_N)}
$$
for some constants $D,d>0$.
By adding over $j=1, \dots, s$ we find
$$
\rho ( F^{(2)}_k ) \le Ds_N e^{(-dN/s_N)}\eqno(3.33)
$$
and finally, setting
$$
F^{(2)} = \bigcup_{k\in \I_2} F^{(2)}_k
$$
we find
$$
\rho \left( F^{(2)}\right) = \sum_{k\in \I_2}
\rho \left( F^{(2)}_k\right) \leq D^{\prime} (h_N - g_N)
s_N e^{(-dN/s_N)}
$$
so that, again,
$$
\lim_{N\to \infty} \rho \left( F^{(2)}\right) = 0
$$
This implies that the contribution to $m(N,x)$ of $\I_2$, which satisfies
$$
m_2(N) \le
\sum_{k\in \I_2} r_k Z_k(N)
$$
is, with large probability, of order strictly smaller than
$N \log N$ and therefore can be neglected in the limit $N\to \infty$.
Indeed, using (3.32) and (3.33), it can be estimated as follows
$$
m_2(N) \leq r_1 N (\log h_N - \log g_N) \leq r_2 N (\log N)^\a
$$
for a suitable choice of the constants $r_1$, $r_2$.
To summarize, we have proved so far that there is a constant
$\k >0$ such that,
with probability approaching $1$ as $N\to \infty$,
$m(N,x) \sim \k N \log N$. Otherwise stated, for any $\eps >0$
$$
\rho\biggl(\left\{ x:
1-\eps < {m(N,x)\over \k N \log N}< 1+ \eps
\right\} \biggr) \to 1 \quad\hbox{as}\quad N\ \to \infty \eqno(3.34)
$$
This concludes the first part of the proof.
\vskip 0.2cm
We now proceed to reverse the above result.
Let $M$ be fixed and let us denote by ${\overline N_1} (M)$
the maximum integer for which the inequality
$$
(1+\eps) \k {\overline N_1} \log {\overline N_1} \le M
$$
is satisfied.
Similarly let ${\overline N_2} (M)$ be
the minimum integer satisfying
$$
(1-\eps) \k{\overline N_2} \log {\overline N_2} \ge M
$$
From (3.34) we have
$$
\lim_{M \to \infty}
\rho \left(\{\, x :
m({\overline N_1},x) \le M \le m({\overline N_2},x) \,\} \right) =1 \eqno(3.35)
$$
Finally, define
$$
\eqalign{
N_1(M,x) &=\max \{ N\, :\, m(N,x) \le M\} \cr
N_2(M,x) &=\min \{ N\, :\, m(N,x) \ge M\} \cr}
$$
so that
$$
N_1(M,x) \le N(M,x) \le N_2(M,x)\eqno(3.36)
$$
Then, from (3.34), (3.35) and (3.36) we have immediately that
$$
\lim_{M \to \infty}
\rho \left( \left\{ \, x : (1-\eps) { cM\over \log M} \le N(M,x)
\le (1+\eps) {cM \over \log M}\, \right\} \right) = 1
\eqno(3.37)
$$
with $c=1/\kappa$.
Lemma 3.3 now follows by putting together (3.37) and (3.13). Q.E.D.
\vskip 0.2cm
{\it Proof of Theorem 3.1.} Theorem 3.1 is now an immediate consequence
of Lemmas 3.1, 3.2, 3.3. Q.E.D.
\vskip 0.2cm
{\bf Remark.} For the particular
situation considered in this paper one can give a simple argument
showing that the above result cannot be sharpened (for a more general result see [A]).
Consider for instance
eq. (3.34). It tells us that if we set
$$
m_N=\sum_{i=1}^{N}\s_i\qquad \hbox{and} \qquad b_N= \k N \log N
$$
then
$$
\lim_{N\to \infty} {m_N\over b_N} = 1 \qquad\hbox{in probability}\eqno(3.38)
$$
One may then wonder if the above limit property
holds in a strong sense, i.e. if
$$
\rho \left(\lim_{N\to \infty} {m_N\over b_N}=1\right) = 1 \eqno(3.39)
$$
We now show that it does not and in fact
$$
\rho \left(\lim_{N\to \infty} {m_N\over b_N}=1\right) = 0 \eqno(3.40)
$$
Indeed, from Lemma 2.4 we have that for $a$ large enough one can choose a suitable
constant $c$ such that for any $N\ge 0$
$$
\rho \left(\s_N > a\right) \ge {c\over a}\eqno(3.41)
$$
Hence, for any constant $\ell > 1$ and $N$ sufficiently large
$$
\rho \left( \s_N > \ell \, b_N\right) \ge {c\over \ell \, b_N}
={c\over \ell \k N \log N}\eqno(3.42)
$$
and thus
$$
\sum_{N=1}^{\infty} \rho \left(\s_N > \ell \, b_N \right) = \infty \eqno(3.43)
$$
Therefore, from the extension of the Borel-Cantelli lemma to non-disjoint events
(see e.g. [B]) it follows that
$$
\rho \left( {\s_N \over b_N} > \ell\;\; \hbox{infinitely often}\,\right) = 1
$$
so that
$$
\rho \left( {m_N\over b_N} > \ell\;\; \hbox{infinitely often}\,\right) = 1\eqno(3.44)
$$
and consequently
$$
\rho \left(\limsup_{N\to \infty} {m_N\over b_N}=\infty\right) = 1 \eqno(3.45)
$$
which implies (3.40). We can actually say more. Since (3.38) implies the convergence a.e. on
a subsequence, (3.40) is valid for {\it every} sequence of constants $b_N$.
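The dichotomy between (3.38) and (3.40) is easy to observe in the i.i.d. picture of the Remark of Section 2 (Python sketch, illustrative only): if $U$ is uniform on $]0,1]$, then $\s=\lfloor 1/U\rfloor$ has exactly the law $\mu(k)=1/(k(k+1))$.

```python
import math
import random

# If U is uniform on ]0,1], then sigma = floor(1/U) satisfies
# P(sigma = k) = P(1/(k+1) < U <= 1/k) = 1/(k(k+1)), k >= 1.
def sample_sigma(rng):
    return int(1.0 // rng.uniform(1e-12, 1.0))

rng = random.Random(0)
N = 10**5
m_N = sum(sample_sigma(rng) for _ in range(N))
ratio = m_N / (N * math.log(N))

# The ratio is typically of order 1, but single huge excursions
# (responsible for the failure of the strong law) make it fluctuate.
assert 0.3 < ratio < 30.0
```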
\vskip 0.2cm
A further consequence of our construction is an upper bound on
the decay of correlations.
\vskip 0.2cm
{\bf Theorem 3.2.} {\it For any pair of bounded continuous
functions $u, \, v$ compactly supported on $]0,1]$ and such that
$\nu(u) = \nu(v) =0$, there is a constant $R >0$ such that}
$$
\left| \int_{\ui} u(f^tx)\,v(x)\, d\nu(x) \right| \leq {R \over t} \eqno(3.46)
$$
\vskip 0.2cm
{\it Proof.} By virtue of Lemma 3.1,
$$
\int_{\ui} u(f^tx)\,v(x)\, d\nu(x) =
\int_{\ui} \sum_{l=0}^{n(x)-1} u(f^{l+t}(x))\,v(f^l(x))\,d\rho(x)
\eqno(3.47)
$$
Now, for $x\in ]0,1]$ we write
$$
t = \sum_{s=0}^{\tau(t,x) -1} n(g^sx) + r(t,x) \eqno(3.48)
$$
where $r(t,x) < n(g^{\tau(t,x)}(x))$.
Then,
$$
\int_{\ui} u(f^tx)\,v(x)\, d\nu(x) =
\int_{\ui} \sum_{l=0}^{n(x)-1} u(g^{\tau(t,x)}(f^{l+r(t,x)}(x)))\,
v(f^l(x))\,d\rho(x)
\eqno(3.49)
$$
Furthermore, one can easily check that the r.h.s. of (3.49) can be rewritten as
$$\eqalign{
&\int_{\ui} \sum_{l=0}^{n(x)-1} u(g^{\tau(t,x)}(f^{l+r(t,x)}(x)))\,
v(f^l(x))\,d\rho(x)= \cr
&\sum_{k=0}^{\infty}
\int_{D_{k+1}} u(g^{\tau (t,x)} (f^{k+r(t,x)}(x)))\, v(f^{k}(x)) \, d\rho(x)
= \cr
&\sum_{k=0}^{\infty}
\int_{\ui} u(g^{\tau (t,x)} (f^{r(t,x)}(x))) \, v(x) \,
d(f_{0*}^k\rho)(x) \cr }
$$
where the sets $D_k$'s are as in Lemma 2.1 and
$\psi_*\rho$ denotes the measure obtained by pushing forward $\rho$ with $\psi$, i.e.
$\psi_*\rho(A) = \rho(\psi^{-1}(A))$.
On the other hand, since $u, \, v$
are compactly supported in $]0,1]$, the above sum is actually finite.
Hence, we can find a constant $M>0$ (depending on $u$ and $v$) such that
$$
\left| \int_{\ui} u(f^tx)\,v(x)\, d\nu(x) \right| \leq
M \sup_{0\le k\le M}\left| \int_{\ui} u(g^{\tau (t,x)}( f^{r(t,x)}(x))) \, v(x)
\, d(f_{0*}^k\rho)(x)\right|
\eqno(3.50)
$$
After a short reflection one easily realizes that an
upper bound to the integral in the r.h.s. of (3.50) can be found by estimating
the $f_{0*}^k\rho$-probability of those `bad' points $x\in ]0,1]$ such that
$\tau (t,x)$ grows at the smallest possible rate
(in particular smaller than the most probable rate
$c t/\log t$ prescribed by Lemma 3.3).
Now, the worst situation is attained when $\tau (t,x)$ is just a constant,
in particular equal to $0$. On the other hand it is easy to see that the above
probability is maximal for $k=0$. Indeed, we have
$$
\eqalign{
&f_{0*}^k\rho \{ x : \tau (t,x) = 0 \} = f_{0*}^k\rho \{ x : n (x) > t \}=
\rho \{ x : n (f_0^{k}x) > t \}= \cr
& \rho \{ x : n (x) > t+k \}
\leq \rho \{ x : n (x) > t \}= \rho \{ x : \tau (t,x) = 0 \}\cr }
\eqno(3.51)
$$
and, using Lemma 2.4, we find
for $t$ large enough,
$$
\rho \{ x : \tau (t,x) =0\} = \rho \{ x : n (x) > t \} \leq {C \over t}
\eqno(3.52)
$$
Moreover, it is not difficult to check that $\rho \{ x : \tau (t,x) = m \}$
is, for any $m\geq 1$, of order strictly smaller than (3.52).
Finally, using (3.50), (3.51), (3.52) and the boundedness of $u, \, v$,
one immediately obtains (3.46). Q.E.D.
\vskip 0.2cm
\vfill\eject
{\bf References.}
\vskip 1cm
\item{ [A]} Aaronson J., {\sl The asymptotic distributional behaviour of
transformations preserving infinite measures},
Journal d'analyse math\'ematique, {\bf 39}, 203-234 (1981)
\vskip 0.2cm
\item{ [B]} Billingsley P., {\sl Convergence of Probability Measures},
Wiley, New York, 1968
\vskip 0.2cm
\item{ [C.F]} Collet P. and Ferrero P., {\sl Some limit ratio
theorems related to a real endomorphism with a neutral fixed point},
Ann. Inst. H. Poincar\'e {\bf 52}, 283 (1990)
\vskip 0.2cm
\item{ [C.G]} Collet P. and Galves A., {\sl Statistics of close visits to the
indifferent fixed point of an interval map},
Jour. Stat. Phys. {\bf 72}, 459 (1993)
\vskip 0.2cm
\item{ [C.I]} Campanino M. and Isola S.,
{\sl Statistical properties of long return times in type I
intermittency}, Forum Math., to appear (1993).
\vskip 0.2cm
\item{ [L.M]} Lasota A. and Mackey M.C., {\sl Probabilistic
properties of deterministic systems},
Cambridge University Press, 1985
\vskip 0.2cm
\item{ [M.P]} Manneville P. and Pomeau Y., {\sl Intermittent transition
to turbulence in dissipative dynamical systems },
Commun. Math. Phys. {\bf 74}, 189 (1980)
\vskip 0.2cm
\item{ [P]} Pianigiani G., {\sl First return map and invariant measures},
Isr. Jour. of Math. {\bf 35}, 32-48 (1980)
\vskip 0.2cm
\item{ [P.S]} Prellberg T. and Slawny J., {\sl Maps of intervals with
indifferent fixed points: thermodynamic formalism and phase transitions},
Jour. of Stat. Phys. {\bf 66}, 503-514 (1992)
\vskip 0.2cm
\item{ [R]} Ruelle D., {\sl Thermodynamic Formalism},
Addison-Wesley Publ. Co., 1978
\vskip 0.2cm
\item{ [T1]} Thaler M., {\sl Estimates of the invariant densities of
endomorphisms with indifferent fixed points}, Isr. Jour. of Math. {\bf 37},
303-313 (1980)
\vskip 0.2cm
\item{ [T2]} Thaler M., {\sl Transformations in $\ui$ with infinite
invariant measures}, Isr. Jour. of Math. {\bf 46}, 67-96 (1983)
\vskip 0.2cm
\end