\magnification=\magstep1
\hfuzz=2pt
%%%%%%%%%%%%%%%%%%%%%%%%%%% page layout %%%%%%%%%%%%%%%%%%%%%%%%%%%%
\headline={\ifnum\pageno=1 \hfil\else\hss{\tenrm\folio}\hss\fi}
\footline={\hfil}
\hoffset=-.2cm
%%%%%%%%%%%%%%%%%%%%%% fonts %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\font\titlefont= cmbx10 scaled \magstep3
%\font\titlefont= cmbx10 scaled \magstep1
%\font\titlefont= ambx10
%\font\titlefont= ambx10 scaled \magstep1
%%%%%%%%%%%%%%%%%%% general math defs %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\def\integer{\mathchoice{\rm I\hskip-1.9pt N}{\rm I\hskip-1.9pt N}
{\rm I\hskip-1.4pt N}{\rm I\hskip-.5pt N}}
%\def\integer{\mathchoice{\bf N}{\bf N}{\bf N}{\bf N}}
\def\real{\mathchoice{\rm I\hskip-1.9pt R}{\rm I\hskip-1.9pt R}
{\rm I\hskip-.8pt R}{\rm I\hskip-1.9pt R}}
%\def\real{{\bf R}{\bf R}{\bf R}{\bf R}}
\def\Real{\real}
\def\Log{\mathop{\rm Log}}
\def\Romannumeral#1{\uppercase\expandafter{\romannumeral#1}}
\def\date{\line{\number\day/\number\month/\number\year\hfil}}
\def\expectation{\mathchoice{\rm I\hskip-1.9pt E}{\rm I\hskip-1.9pt E}
{\rm I\hskip-.8pt E}{\rm I\hskip-1.9pt E}}
\def\zinteger{{\rm Z\hskip-1.9pt\slash}}
%\def\zinteger{\mathchoice{{\bf Z}{\bf Z}{\bf Z}{\bf Z}}}
\def\Oun{{\cal O}(1)}
\def\oun{{\hbox{\sevenrm o}(1)}}
\def\proof{\noindent{\bf Proof. }}
\def\longto{\mathop{\longrightarrow}}
%%%%%%%%%%%%%%%%%%% definitions for the references %%%%%%%%%%%%%%%%%%%%%%%%%
\newskip\refskip\refskip=4em
\def\refsize{\advance\leftskip by \refskip}
\def\ref#1#2{\noindent\hskip -\refskip\hbox to
\refskip{[#1]\hfil}{\noindent #2\hfil}\medskip}
%%%%%%%%%%%%%%%%%%%%%%%%% local defs %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\def\proba{\mathchoice{\rm I\hskip-1.9pt P}{\rm I\hskip-1.9pt P}
{\rm I\hskip-.9pt P}{\rm I\hskip-1.9pt P}}
\def\logd{\log\log}
\def\shift{{\cal S}}
\def\bvnorm{|||}
\def\bvn{\bvnorm}
\def\bvnt{\bvnorm_{\theta}}
\def\exo{\par\noindent{\bf Exercise. }}
\vglue 3cm
\centerline{\titlefont ASYMPTOTIC DISTRIBUTION}
\vskip .5cm
\centerline{\titlefont OF ENTRANCE TIMES FOR }
\vskip .5cm
\centerline{\titlefont EXPANDING MAPS OF AN INTERVAL }
\vskip 2cm
\centerline{by}
\vskip 1cm
\centerline{P. Collet\footnote{${}^{1}$}{Centre de Physique Th\'eorique, Ecole
Polytechnique, F-91128 Palaiseau Cedex (France), Laboratoire UPR 14 du
CNRS.} and A. Galves\footnote{${}^{2}$}{Instituto de Matem\'atica e Estat\'\i
stica,
Universidade de S\~ao Paulo, BP 20570 (Ag. Iguatemi),
01498 S\~ao Paulo SP (Brasil).}}
\vskip 2cm
\noindent{\bf Abstract.} We prove that the entrance time into a small
interval is asymptotically exponentially distributed. We also prove
that under suitable hypotheses the sequence of visits to a small
interval converges to a Poisson point process.
\beginsection{I. Introduction}
In a dynamical system given by a transformation $f$ of a
space $X$ with an ergodic invariant measure $\mu$,
there is a natural time scale
associated to a set of positive measure, namely the inverse of its
measure. For example, the ergodic theorem tells us that the asymptotic
fraction of time that a typical trajectory spends in a given set is
precisely the measure of that set.
It is therefore natural to consider the events associated to a set
$K$ (contained in $X$) at this natural time scale $1/\mu(K)$. Examples
of such events are the first entrance time in $K$ of a trajectory, or
the number of
returns to $K$ in a given time interval. For a general set $K$ of
positive measure, one can only expect to derive rather simple results.
For example, if the set $K$ has positive measure, ergodicity implies
that the entrance time in $K$ is almost surely finite. One can hope
however to obtain more precise information by looking at the asymptotics
as the set $K$ becomes small, or in some large deviation regime
(see [Ke.] for such results).
In this paper we will derive several asymptotic results for small
intervals in the
case of piecewise regular expanding maps of the interval. We recall
that such maps $f$ of the interval $[0,1]$ are defined on
a finite number $r$ of disjoint open intervals $]a_{j},a_{j+1}[$
whose union
$$
\bigcup_{j=0}^{r-1}]a_{j},a_{j+1}[
$$
is dense in $[0,1]$. The restriction of the map $f$ to each of these
intervals is monotone, regular and extends to a regular map on each
closed interval. Moreover, the map is expanding, that is to say that
there is a number $\rho>1$ such that the slope of each branch of $f$
is in modulus larger than $\rho$.
It is well known that expanding maps of the interval have an
absolutely continuous invariant measure and we will assume that this
measure is ergodic and mixing. We will always denote by $\mu$ this measure and
by $h$ its density with respect to the Lebesgue measure denoted by
$l$. This function $h$ is known to be of bounded variation.
We refer to [C.] or [H.K.] and [L.Y.] for proofs of
the fundamental properties of these dynamical systems. In
particular, the mixing property implies the following exponential
mixing (also called exponential decay of correlations).
There is a positive constant $C$ and a positive number
$\gamma<1$ such that if $u_{1}$ is a function
of bounded variation, and $u_{2}$ an integrable function, we have
$$
\left|\int u_{1}(x)u_{2}(f^{n}(x))d\mu(x)-
\int u_{1}(x)d\mu(x)\int u_{2}(x)d\mu(x)\right|
$$
$$
\le C\gamma^{n}
\left(\vee u_{1}+\int |u_{1}(x)|dl(x)\right)\int |u_{2}|(x)dl(x)\;,\eqno(D.C)
$$
where $\vee u_{1}$ denotes the total variation of $u_{1}$.
The results of the present paper can be proven under more general
(or slightly different) hypotheses,
for example in cases with a countable number of branches
under a uniform distortion hypothesis, or for the case of
Gibbs measures over a subshift of finite type (see [W.], and [H.K.]
for the definitions and basic results about such systems). The proofs
in these general cases are almost word for word translations of the
proofs presented below, but we refrain from presenting the more general
setting, which is often obscured by unnatural abstract details. We also
conjecture that the present results are true for unimodal maps under
the standard hyperbolicity hypothesis on the critical orbit (see [K.N.]
and [L.S.Y.] for the mixing results).
We now introduce the definition of entrance time and recurrence time
in a set $K$.
\proclaim{Definition}. {The first entrance time of the trajectory of a
point $x$ into the set $K$ is the integer $\tau_{K}(x)$ given by
$$
\tau_{K}(x)=\min\{n\ge0\,|\,f^{n}(x)\in K\}\;.
$$}
As explained above, since $\mu$ is ergodic the integer valued function
$\tau_{K}$ is $\mu$ almost surely finite. Our first result can be
formulated as follows.
\proclaim{Theorem I}. {Let $\left(K_{j}\right)_{j\in\integer}$ be a
sequence of intervals such that $\mu(K_{j})>0$ for any integer $j$ and
$\lim_{j\to\infty}l(K_{j})=0$. Then there is a diverging sequence of
positive numbers $T_{j}$ such that for any positive real number $t$,
we have
$$
\lim_{j\to\infty}\mu(\{\tau_{K_{j}}>tT_{j}\})=e^{-t}\;.
$$}
In other words, the sequence of laws of the random variables
$\tau_{K_{j}}/T_{j}$ converges in the weak* topology to an
exponential distribution with parameter 1. From now on we will simply
say that a sequence of random variables converges in law instead of
saying that the sequence of laws converges in the weak* topology.
We recall that the
exponential distribution with parameter $\alpha>0$ is the measure on
$\real^{+}$ whose density is $\alpha e^{-\alpha x}$.
Note that at this point we have no estimate on the time scale $T_{j}$.
We will need another hypothesis to ensure that it behaves like
$1/\mu(K_{j})$ (see below for the precise conditions,
and [H.] for a counterexample).
We refer to [G.S.] for a version of this result adapted to large
deviation sets, and to [C.G.] for a related result under different
hypothesis.
The proof of this result can be extended to the case of sequences
of finite unions of intervals, provided the number of intervals does not
grow too fast.
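Although the arguments below are purely analytic, the statement of
Theorem I is easy to probe numerically. The following sketch is only an
illustration and plays no role in the proofs; it uses the expanding map
$f(x)=3x \bmod 1$, for which the absolutely continuous invariant
measure is Lebesgue measure, so that the natural time scale of an
interval $K$ is $1/l(K)$; the interval and the sample size are
arbitrary choices of ours.

```python
import math
import random

def f(x):
    # Example expanding map f(x) = 3x mod 1; Lebesgue measure is invariant.
    return (3.0 * x) % 1.0

def entrance_time(x, a, b, cap):
    # tau_K(x) = min{n >= 0 : f^n(x) in K} with K = [a, b), capped at `cap`.
    for n in range(cap):
        if a <= x < b:
            return n
        x = f(x)
    return cap

random.seed(0)
a, b = 0.3, 0.31              # a small interval K with mu(K) = l(K) = 0.01
T = 1.0 / (b - a)             # the natural time scale 1/mu(K)
samples = [entrance_time(random.random(), a, b, cap=int(50 * T))
           for _ in range(2000)]

# The empirical tail of tau_K / T should track exp(-t) up to Monte Carlo
# error, in agreement with Theorem I.
for t in (0.5, 1.0, 2.0):
    frac = sum(1 for s in samples if s > t * T) / len(samples)
    print(t, round(frac, 3), round(math.exp(-t), 3))
```

With these choices the empirical tail of $\tau_{K}/T$ should track
$e^{-t}$ up to Monte Carlo error.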
Our second result will give more precise information but this requires
stronger hypotheses. We will look at the sequence of successive times
at which the orbit of a point $x$ belongs to an interval $K$. This is
conveniently described by an atomic measure on the time axis which gives a
weight equal to one to the instants where the orbit is in $K$. We will
of course consider this measure on the time axis scaled by the natural time
scale $1/\mu(K)$, and it will be denoted by $N(x,K)$. In other words, we have
$$
N(x,K)=\sum_{n=0}^{\infty}\chi_{K}\circ f^{n}(x)\delta_{n\mu(K)}\;,
$$
where $\delta_{t}$ denotes as usual the Dirac measure at the point
$t$. We now observe that if we choose $x$ according to the invariant
measure $\mu$, we obtain a random atomic measure on the positive real
line, that is to say a point process on $\real^{+}$ (we refer to [N.]
for the definitions and basic properties). In other words, we have
defined a measure ${\cal M}_{K}$ on the positive $\sigma$-finite
measures over $\real^{+}$. By considering a sequence of intervals
$(K_{j})$ we construct a sequence of such measures
$({\cal M}_{K_{j}})$ and we can ask if
this sequence converges. This is indeed the case under the hypothesis
of the following theorem.
\proclaim{Theorem II}. {Let $\left(K_{j}\right)_{j\in\integer}$ be a
sequence of intervals satisfying the assumptions of Theorem I and
moreover, assume
that there is a non decreasing diverging sequence of integers
$\left(m_{j}\right)_{j\in\integer}$ such that for any $j$ we have
$$
f^{m}(K_{j})\cap K_{j}=\emptyset\qquad\hbox{\rm for}\quad 0<m\le m_{j}\;,
$$
and such that
$$
\liminf_{j\to\infty}\inf_{K_{j}}h>0\;.
$$
Then 1) the sequence $\left(T_{j}\right)$ of Theorem I satisfies
$\lim_{j\to\infty}\mu(K_{j})T_{j}=1$, and 2) the
sequence of measures $({\cal M}_{K_{j}})$ is tight, and the
limit is the random measure of a Poisson point process of parameter 1.
}
\bigskip
We recall that a Poisson point process of parameter $\alpha>0$ on
$\real^{+}$ is an increasing sequence of random variables whose
increments are independent and have exponential distribution with
parameter $\alpha$. We refer to [N.] for definitions and basic
properties of point processes, and to [S.] for some other
motivations and applications.
% We remark that this can be seen as a random atomic
%measure which is $\sigma$-finite
We observe that the previous hypotheses are typical for the measure
$\mu$. Indeed $\mu$ almost every point $x$ is a point of continuity
of $h$ such that $h(x)>0$, which is our second hypothesis. We can
moreover assume that the point $x$ is not periodic (since the
number of periodic points is countable).
For such points it is easy to construct a decreasing sequence of
intervals which converges to $x$ and satisfies the hypothesis of
Theorem II.
Note also that in our first hypothesis, we have not put any constraint
on the rate of divergence of the sequence
$\left(m_{j}\right)_{j\in\integer}$. Since the map is expanding, $m_{j}$
can be at most of the order of $-\log\mu(K_{j})$.
Part 1) of Theorem II gives the precise and natural time scale for the
events we are considering. We see that this natural time scale holds
almost everywhere, but not everywhere as follows from an example of
[H.].
Several interesting results follow easily from the above theorem;
some particular cases have already been reported in the literature with
more complicated proofs. One
can for example consider the succession of entrance times in a
set $K$ which can be defined recursively as follows. If
$\tau^{1}_{K}=\tau_{K}$, $\tau^{2}_{K},\cdots,\tau^{l}_{K}$ have
already been defined, we define $\tau^{l+1}_{K}$ by
$$
\tau^{l+1}_{K}(x)=\min\{n>0\,|\,
f^{n+\sum_{j=1}^{l}\tau_{K}^{j}(x)}(x)\in K\}\;.
$$
As for the first return time, all these numbers are $\mu$ almost
surely finite. We recall that for the measure $\mu|_{K}/\mu(K)$ these
numbers form a stationary sequence (see [B.]), however we are
concerned here with the measure $\mu$ itself. An easy consequence of
Theorem II is the following result (we refer to [N.] for the proof
using Theorem II).
\proclaim{Theorem III}. {Let $\left(K_{j}\right)_{j\in\integer}$ be a
sequence of intervals satisfying the assumptions of Theorem II.
Then the process $\mu(K_{j})\tau^{\cdot}_{K_{j}}$ converges to a Poisson
point process with parameter 1.
}
\bigskip
This was proven in [H.] for the case of maps possessing
the topological Markov property using the zeta function formalism.
In our more general case, which is not Markov, this follows at once
from Theorem II, whose proof is more direct and simpler.
An easy corollary gives the limit distribution of the number of
visits to the sets $(K_{j})$ in time intervals of the order of the
natural time scale. For any set $K$, and any positive number $t$, we
define the number of visits to $K$ during the period $[0,t]$ as the
random number
$$
N_{K}(t)=\sum_{0\le j\le t}\chi_{K}\circ f^{j}\;.
$$
We can now state our result.
\proclaim{Corollary IV}. {If the sequence of intervals $(K_{j})$
satisfies the hypotheses of Theorem II, then for any fixed
positive number $s$, the sequence of random variables
$(N_{K_{j}}(s/\mu(K_{j})))$ converges in law to a Poisson random variable
with parameter $s$.}
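As a numerical illustration of the counting statistics (again with the
example map $f(x)=3x\bmod 1$ and Lebesgue measure; all numerical
choices are ours, and this is not part of the proof): the mean number
of visits in the window $[0,s/\mu(K)]$ is close to $s$ by invariance of
$\mu$, while a Poisson limit predicts that the variance should in
addition be close to the mean.

```python
import random

def f(x):
    # Example expanding map with Lebesgue invariant measure (an
    # illustrative assumption; any mixing piecewise expanding map would do).
    return (3.0 * x) % 1.0

def visits(x, a, b, steps):
    # N_K(t) = sum_{0 <= j <= t} chi_K(f^j(x)) with K = [a, b), t = steps.
    count = 0
    for _ in range(steps + 1):
        if a <= x < b:
            count += 1
        x = f(x)
    return count

random.seed(1)
a, b = 0.3, 0.32           # mu(K) = 0.02
s = 2.0
steps = int(s / (b - a))   # observation window s / mu(K)
data = [visits(random.random(), a, b, steps) for _ in range(3000)]

mean = sum(data) / len(data)
var = sum((n - mean) ** 2 for n in data) / len(data)
print(round(mean, 2), round(var, 2))
```

The empirical mean is close to $s$ by invariance, and the variance is
close to the mean, as a Poisson limit would suggest.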
This result was proven in [P.] for the particular case of Markov
chains with a finite number of states using different techniques.
We mention that it is also possible to derive using our techniques
generalized versions of Theorem II and of its consequences. One can
for example weight the visits according to the point of landing in
the set $K$. This leads to the definition of the following point process.
Let $u$ be a real function of bounded variation
with compact support. For any point $x\in[0,1]$ and any positive real
number $\epsilon$, we define an atomic measure
$N_{\epsilon}(x)(\cdot)$ by
$$
N_{\epsilon}(x)=\sum_{n=0}^{\infty}
u\left(f^{n}(y)-x\over\epsilon\right)\delta_{n\epsilon} \;.
$$
The proof of Theorem II can be copied almost word for word to prove
that when $\epsilon\to0$ the family of random measures
$N_{\epsilon}(\cdot)$ is tight for $\mu$ almost every $x$,
and the limit distribution is that of
a random variable $X$ with characteristic function
$$
\expectation\left( e^{itX}\right)=e^{h(x)\int (e^{itu(y)}
-1)dy}\;.
$$
Finally we mention also that all our results can be proven if we
start with a non equilibrium measure which is absolutely continuous
with respect to $\mu$ and with a density of bounded variation. This is
due to the exponential decay of correlations ([H.K.], [C.]).
The rest of this paper is devoted to the proofs: the proof of Theorem I
is given in section II, and the proof of Theorem II is given
in section III.
\noindent{\bf Acknowledgment.} The authors are grateful to
M. Pollicott for pointing out the work of M. Hirata.
This work was partially supported by
FAPESP Projeto Tem\'atico grant number 90/3918-5. A.G. was partially
supported by CNPq grant 301301/79. We acknowledge the kind
hospitality of the Centre de Physique Th\'eorique of the Ecole
Polytechnique and of the Instituto de Matem\'atica e Estat\'\i stica
of the Universidade de S\~ao Paulo.
\beginsection{\Romannumeral{2}. Proof of Theorem I.}
The proof relies on a perturbation argument for the transfer operator
$P$. We recall that this operator is defined by
$$
Pu(x)=\sum_{f(y)=x}u(y)/|f'(y)|\;.
$$
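As a concrete illustration (an assumption of ours, not an ingredient of
the proofs), for the doubling map $f(x)=2x\bmod 1$ the preimages of $x$
are $x/2$ and $(x+1)/2$, each branch has $|f'|=2$, and $P$ simply
averages over the two branches; in particular $P$ fixes the constant
density and preserves the Lebesgue integral.

```python
def transfer(u, x):
    # Transfer operator of f(x) = 2x mod 1: the preimages of x are x/2
    # and (x + 1)/2, each branch having derivative of modulus 2.
    return 0.5 * u(x / 2.0) + 0.5 * u((x + 1.0) / 2.0)

# P fixes the invariant density (here h = 1) ...
assert all(abs(transfer(lambda t: 1.0, k / 64.0) - 1.0) < 1e-12
           for k in range(64))

# ... and preserves the Lebesgue integral of any u; both sides below are
# midpoint Riemann sums, so they agree up to discretization error.
n = 10 ** 4
u = lambda t: t * t
lhs = sum(transfer(u, (k + 0.5) / n) for k in range(n)) / n
rhs = sum(u((k + 0.5) / n) for k in range(n)) / n
print(abs(lhs - rhs))
```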
The main basic result about the operator $P$ is the following estimate
due to Lasota and Yorke [L.Y.]. Under our hypothesis on the map
$f$, there are two positive numbers $\Gamma$ and $\alpha<1$ such that
for any integer $q$ large enough (independently of $g$) and any
function $g$ of bounded variation, we have
$$
\vee P^{q}(g)\le \alpha^{q}\vee g+\Gamma \|g\|_{L^{1}(dl)}\;.\eqno(L.Y.)
$$
This is the basic estimate used in [L.Y.] to prove the existence of
the invariant and absolutely continuous measure $\mu$. It is also the
main ingredient for the spectral theory of the operator $P$ developed
in [H.K.] (or [C.]). We recall that under our mixing condition, in the
Banach space $\cal B$ of functions of bounded variation equipped with
the norm
$$
\bvnorm g\bvnorm=\vee g+\|g\|_{L^{1}(dl)}\;,
$$
the operator $P$ has a simple eigenvalue equal to 1 with eigenvector
$h$ and whose associated linear form is the integration over the Lebesgue
measure $l$. The rest of the spectrum is contained in a disk of radius
strictly smaller than one.
We now briefly explain how perturbation theory comes into play. Let
$K$ be a fixed interval, then we have for any positive integer $n$
$$
\chi_{\{\tau_{K}>n\}}=\prod_{j=0}^{n}\chi_{K^{c}}\circ f^{j}\;,
$$
where $K^{c}$ denotes the complement of a set $K$, and $\chi_{K}$ is
the characteristic function of that set.
Using the elementary properties of the operator $P$, it is easy to see
that the probability of the event $\{\tau_{K}>n\}$ is given by
$$
\mu(\{\tau_{K}>n\})=\int \chi_{K^{c}}\left(P\chi_{K^{c}}\right)^{n}h \;dl\;.
$$
It is intuitively clear that for $n$ large, the largest eigenvalue of
$P\chi_{K^{c}}$ (if there is one) will give the asymptotic behavior.
Moreover since $P\chi_{K^{c}}=P-P\chi_{K}$, one is tempted to try to
argue that the operator $P\chi_{K}$ is small. This seems to be
reasonable in the present context since $K$ will be a small interval.
However this is not so simple because a direct estimate involves the variation
norm of the function $\chi_{K}$, which is always equal to 2. Below
we will indeed develop this idea, using first a remark
on the preimages of small intervals, and then a change of norm.
This spectral problem is related to several questions
about the exit time of the
neighborhood of a repeller, and in particular to the exponential tail
of its distribution. We refer to [E.R.], [L.Y.2] and [Ke.] for more
details on these other topics.
As explained above, the main tool in the proofs will be a perturbative
control of the operator
$$
P\chi_{K^{c}}=P-P\chi_{K}\;.
$$
In order to control some constants we will instead consider the
operator
$$
\left(P\chi_{K^{c}}\right)^{q}\;
$$
where $q$ will be a fixed large integer chosen once and for all,
independently of $K$ (for $K$ small).
Let us call $\Delta_{q}$ the operator satisfying
$$
\left(P\chi_{K^{c}}\right)^{q}=P^{q}+\Delta_{q}\;,
$$
and we will prove that the operator $\Delta_{q}$ is small in some sense.
We start with a Lemma which gives a detailed control of the
operator $\Delta_{q}$.
\proclaim{Lemma 5}. {There is a positive constant $M$ such that for
any fixed integer $q$ and any small enough interval $K$, we have for
any function $g$ of bounded variation
$$
\vee(\Delta_{q}g)\le(6q^{2}\alpha^{q}+Mql(K))\vee g+
Mq(1+ql(K))\|g\|\;,
$$
and
$$
\|\Delta_{q} g\|\le M q^{2}l(K)\bvnorm g\bvnorm\;.
$$}
\proof At first sight, the number of terms in
$\Delta_{q}$ is $2^{q}-1$, each of them of order one. Our first goal
will be to prove that this large sum can be rearranged in a sum with
a much smaller number of terms.
After some easy algebra we get
$$
\left(P\chi_{K^{c}}\right)^{q}=
P^{q}-\sum_{j=0}^{q-1}P^{j}P\chi_{K}P^{q-j-1}+
\sum_{i+j\le q-2}P^{i}P\chi_{K}
\left(P\chi_{K^{c}}\right)^{q-i-j-2}P\chi_{K}P^{j}\;.
$$
If we assume $K$ small enough (i.e., its length is smaller than the
length of the smallest interval of monotonicity of $f^{q}$, and also
smaller than the minimal nonzero distance between the points of the
orbits up to time $q$ of the points $(a_{n})$), then
for any integer $l\le q$, and for any
point $x$ of the interval, there are at most two preimages of order $l$ of
$x$ in $K$.
%The second sum can be different from zero only if $K$ is in a small
%neighborhood of a periodic point of $f$ of period at most $m$. Using
%the return map to this small neighborhood, it is easy to verify that
This implies that for $K$ small enough, the set
$$
f^{-l}(K)\cap K
$$
is an interval (possibly empty) or the union of two intervals (this
special case can occur if two branches of $f^{l}$ of opposite slope have
an extreme point in common which is a periodic point of period at most
$q$). By
looking at the orbit up to time $l$ of these intervals, we conclude that
$$
f^{-l}(K)\cap\bigcap_{j=1}^{l-1}f^{-j}(K^{c})\cap K
$$
is also an interval or the union of two intervals.
This immediately implies that
$$
P^{i}P\chi_{K}\left(P\chi_{K^{c}}\right)^{q-i-j-2}P\chi_{K}P^{j}
=P^{q-j}\chi_{{\tilde K}_{i,j}}P^{j}
$$
where ${\tilde K}_{i,j}$ is an interval or the union of two intervals.
This implies that the above sum contains at most $2q^{2}$ terms which
can be written
$$
P^{l+1}\chi_{\tilde K}P^{q-l-1}
$$
where $l$ is an integer smaller than $q$, and $\tilde K$ is an
interval (or the union of two intervals) which depends on $l$, with length
at most the length of $K$. Moreover, for each
fixed $l$, there are at most $2q$ such terms.
We will now estimate the variation of a function $P^{r}\chi_{\tilde
K}P^{q-r}g$. Using formula (L.Y.) and the classical estimate
$$
\vee(g_{1}g_{2})\le \vee g_{1}\|g_{2}\|_{\infty}+\vee
g_{2}\|g_{1}\|_{\infty}
$$
for the variation of the product of two functions, we deduce
$$
\vee(P^{r}\chi_{\tilde K}P^{q-r}g)\le\alpha^{r}\vee(\chi_{\tilde
K}P^{q-r}g)+\Gamma\|\chi_{\tilde K}P^{q-r}g\|\le
$$
$$
\alpha^{r}(\vee(P^{q-r}g)+2\|P^{q-r}g\|_{\infty})+
\Gamma\|\chi_{\tilde K}P^{q-r}g\|
\le\alpha^{r}\vee(P^{q-r}g)+(2\alpha^{r}+\Gamma l(K))\|P^{q-r}g\|_{\infty}\;.
$$
Using again formula (L.Y.) and the bound
$$
\|g\|_{\infty}\le\vee g+\|g\|\;,
$$
we obtain
$$
\vee(P^{r}\chi_{\tilde K}P^{q-r}g)\le(3\alpha^{q}+\Gamma
l(K)\alpha^{q-r})\vee g+(\Gamma\alpha^{r}+(1+\Gamma)(2\alpha^{r}+
\Gamma l(K)))\|g\|\;.
$$
The first result follows by summing over $r$ and multiplying by $2q$.
The second result is even easier to obtain. We have
$$
\|P^{r}\chi_{\tilde K}P^{q-r}g\|\le \|\chi_{\tilde K}P^{q-r}g\|\le
l(\tilde K)\|P^{q-r}g\|_{\infty}\le l(K)(\alpha^{q-r}\vee
g+(1+\Gamma)\|g\|)\;,
$$
and the result follows by summation over $r$ and multiplication by
$2q$.
The main point in the above result is that the two estimates are very
asymmetric. The only large constant $Mq$ appears, so to speak, in an off
diagonal term. This is reminiscent of the numerical problems
associated to ill-balanced matrices, and suggests a natural change of
norms. We introduce a one parameter family of norms
$\bvnorm\,\bvnorm_{\theta}$ defined by
$$
\bvnorm g\bvnorm_{\theta}=\theta\vee g+\|g\|\;,
$$
where $\theta$ is a positive number to be chosen later on. We will
denote by ${\cal B}_{\theta}$ the associated Banach space. Of course
all these norms are equivalent, and for $0<\theta\le1$ we have in
particular the obvious inequalities
$$
\bvnorm\,\bvnorm_{\theta}\le\bvnorm\,\bvnorm\le\theta^{-1}
\bvnorm\,\bvnorm_{\theta}\;.
$$
The advantage of the new norm can be seen in the following Lemma which
is the technical ingredient needed for the perturbation theory. We
first recall the main spectral result on the transfer operator $P$
(see [H.K.] or [C.]).
Under the hypothesis of the previous section on the map $f$,
in the Banach space $\cal B$ (and also in ${\cal B}_{\theta}$), the
operator $P$ has a spectral decomposition
$$
P=P_{0}+R\;,
$$
where $P_{0}$ is a rank one operator with eigenvalue $1$, and whose
eigenvector is the density $h$ of the absolutely continuous invariant
measure $\mu$. The associated linear form $\phi$ is integration with
respect to the Lebesgue measure. There is also a positive number $C$
and a positive number $\gamma<1$ such that for any integer $n$
$$
\bvnorm R^{n}\bvnorm\le C\gamma^{n}\;.
$$
\proclaim{Lemma 6}. {Under the above hypothesis we have for any fixed
integer $q$ and $K$ small enough the estimates
$$
\bvn\Delta_{q}\bvnt\le6q^{2}\alpha^{q}+Mq[\theta+(q\theta^{-1}+1)l(K)]\;,
$$
$$
\bvn R^{q}\bvnt\le C\theta^{-1}\gamma^{q}\;,
$$
$$
\bvn P_{0}\bvnt\le 1+\Gamma\theta\;.
$$
}
\proof
The first inequality is an immediate consequence of Lemma 5.
The second inequality follows at once from the equivalence of the norms.
For the third inequality, we observe that for any integer $s$, we
have
$$
P_{0}=P^{s}-R^{s}\;.
$$
Therefore from the definition of the new norm and equation (L.Y.) we get
$$
\bvn P_{0}\bvnt\le \bvn P^{s}\bvnt +C\gamma^{s}\theta^{-1}\le
\alpha^{s}+1+\theta\Gamma+C\theta^{-1}\gamma^{s}\;,
$$
and the result follows by letting $s$ diverge.
We can now harvest the advantages of the change of norm. We can take
for example $\theta=\gamma^{q/2}$ and then $q$ large enough such that
the number $6q^{2}\alpha^{q}+q\gamma^{q/2}$ is small enough. Then we
take $K$ small enough accordingly and the above Lemma shows that the
perturbation is small. We collect the results from perturbation
theory in the following proposition.
\proclaim{Proposition 7}. {For $q$ large enough and, accordingly, $K$
small enough (with $\mu(K)>0$),
the operator $P\chi_{K^{c}}$ has a spectral decomposition
in the space ${\cal B}_{\theta}$ (and also in $\cal B$)
$$
P\chi_{K^{c}}=P_{K}+R_{K}\;,
$$
where $P_{K}$ is a rank one operator with a real positive eigenvalue
$\lambda(K)<1$, the corresponding
eigenvector $h_{K}$ is a function with uniformly bounded variation, and
it can be taken positive with integral $1$. The
associated linear form $\phi_{K}$ is then positive.
There is a positive constant $C_{1}$ and a number $\eta<1$ such that
uniformly in $K$ small enough, we have
$$
|1-\lambda(K)|\le C_{1}l(K)\;,
$$
and for any integer $n$
$$
\bvn R_{K}^{n}\bvnt\le C_{1}\eta^{n}\;.
$$
Finally, when $l(K)\to0$, $h_{K}$ converges to $h$ in $L^{1}$, and
$\phi_{K}$ converges weakly to $\phi$ (which is the integration over $dl$) in
${\cal B}_{\theta}'$.}
\proof
The fact that there is a simple largest eigenvalue for small enough
$K$ and the uniform estimate on the spectral decomposition is a direct
consequence of perturbation theory in the new norm (see [K.]). This is
however not enough to control the convergence when $l(K)\to0$, and we
do this directly.
We first observe that if $g$ is a non negative function, we have
$0\le P\chi_{K^{c}}g\le Pg$, and therefore the spectrum of $P\chi_{K^{c}}$
must be contained in the unit disk. Therefore, if $g$ is of bounded
variation, and $u$ is an integrable positive function, we have
$$
\phi_{K}(g)\int uh_{K}dl=\lim_{n\to\infty}\lambda(K)^{-n}\int u
\left(P\chi_{K^{c}}\right)^{n}g\,dl\;,
$$
which implies that $\lambda(K)$ is real and positive, that $h_{K}$ is
a non negative function, and that $\phi_{K}$ is a non negative linear
form.
From perturbation theory we have also
$$
h_{K}=h+r_{K}
$$
with $r_{K}$ uniformly bounded for $K$ small. The eigenvalue relation
$P\chi_{K^{c}}h_{K}=\lambda(K)h_{K}$ implies immediately the relation
$$
R^{q}r_{K}+\Delta_{q}(h+r_{K})=(\lambda(K)^{q}-1)h+\lambda(K)^{q}r_{K}\;.
$$
If we integrate over $dl$, and use $\phi\circ R^{q}=0$ and
$\phi(r_{K})=0$, we obtain
$$
1-\lambda(K)^{q}=-\int dl \Delta_{q}(h+r_{K})\;.
$$
Since $\Delta_{q}$ is a finite sum, we get
$$
|1-\lambda(K)|\le\Oun l(K)\;.
$$
From perturbation theory, it follows that when $l(K)\to0$, $h_{K}$ is
uniformly bounded in ${\cal B}$, and therefore precompact in $L^{1}$.
Similarly, $\phi_{K}$ is uniformly bounded and therefore weakly
precompact. Let $h_{*}$ and $\phi_{*}$ be two respective accumulation
points. If $g$ is of bounded variation and $u$ is in $L^{\infty}$, we
have for any integer $n$
$$
\int u (P\chi_{K^{c}})^{n}g\,dl=\phi_{K}(g)\lambda(K)^{n}\int
uh_{K}dl+{\cal O}(\eta^{n}\|u\|_{\infty}\bvn g\bvn)\;.
$$
For a fixed $n$, we let $l(K)$ tend to zero in such a way that $h_{K}$
converges to $h_{*}$ in $L^{1}$ and $\phi_K$ converges weakly to
$\phi_{*}$. We obtain
$$
\int u P^{n}g\,dl=\phi_{*}(g)\int
uh_{*}dl+{\cal O}(\eta^{n}\|u\|_{\infty}\bvn g\bvn)\;,
$$
and by letting $n$ diverge we conclude that $h_{*}=h$
and $\phi_{*}=\phi$. Moreover, since we have only one accumulation
point, we have convergence.
This ends the proof of the proposition.
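The estimate $|1-\lambda(K)|\le C_{1}l(K)$ of Proposition 7 can be
observed numerically. The sketch below is an illustration under
simplifying assumptions, not the construction of the proof: it
discretizes the transfer operator of the doubling map on $N$ equal
cells (Ulam's method, exact here because the operator preserves
functions that are constant on the cells), kills the columns
corresponding to a small hole $K$ chosen away from the shortest
periodic orbits, and extracts the leading eigenvalue $\lambda(K)$ by
power iteration.

```python
N = 100                       # number of Ulam cells (an arbitrary choice)
hole = {30, 31}               # K = [0.30, 0.32), so mu(K) = 0.02

def apply_op(v):
    # One step of the Ulam discretization of P chi_{K^c} for the
    # doubling map f(x) = 2x mod 1: cell j is sent onto cells 2j mod N
    # and 2j+1 mod N with half of its mass each; columns in the hole
    # are killed, which realizes the multiplication by chi_{K^c}.
    w = [0.0] * N
    for j in range(N):
        if j in hole:
            continue
        w[(2 * j) % N] += 0.5 * v[j]
        w[(2 * j + 1) % N] += 0.5 * v[j]
    return w

# Power iteration: the ratio of successive L1 masses converges to the
# leading eigenvalue lambda(K) of P chi_{K^c}.
v = [1.0 / N] * N
lam = 0.0
for _ in range(400):
    w = apply_op(v)
    lam = sum(w) / sum(v)
    v = w

mu_K = len(hole) / N
print(round(lam, 4), round(1.0 - lam, 4), mu_K)
```

One finds $1-\lambda(K)$ close to $\mu(K)$, in agreement with the
natural time scale discussed above.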
It is easy to verify that the linear functional
$$
u\longto\phi_{K}(uh_{K})
$$
initially defined on functions of bounded variation extends to a
positive measure which is invariant and ergodic under $f$.
This measure is supported by the invariant set of points whose orbit
does not meet $K$. It is often called (rather unfortunately) a quasi
invariant measure. In that direction one may wonder how large the
set $K$ can be for the same results to still hold (see
[Ke.]). One can however easily construct examples where the situation
is completely different. Consider the map of the interval
$$
f(x)=\cases{2x,& if $x<1/2$,\cr 2-2x,& if $x\ge1/2$.}
$$
It is easy to verify that if $K$ is the set $[0,2/3]$, then either
$x\in K$, or $f(x)\in K$. In other words,
$$
(P\chi_{K^{c}})^{2}=0\;.
$$
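The dynamical fact behind this example, namely that every point of
$[0,1]$ enters $K=[0,2/3]$ after at most one step, can be checked
mechanically; the grid check below is purely illustrative.

```python
def f(x):
    # The tent map of the example.
    return 2.0 * x if x < 0.5 else 2.0 - 2.0 * x

def in_K(x):
    # K = [0, 2/3].
    return x <= 2.0 / 3.0

# Check on a fine grid that every point enters K after at most one
# step, i.e. tau_K(x) <= 1 for every x.
n = 10 ** 5
worst = min(max(in_K(x), in_K(f(x))) for x in (k / n for k in range(n + 1)))
print(worst)   # True: each grid point is in K or maps into K in one step
```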
We now have all the necessary tools for the proof of Theorem I.
\noindent{\bf Proof}\ (of Theorem I).
Let $T_{j}=-1/\log\lambda(K_{j})$; then, if $[a]$ denotes the integer part
of a positive real number $a$, we have for any $t>0$, as explained in the
introduction
$$
\mu(\{\tau_{K_{j}}>tT_{j}\})=\int dl \chi_{K^{c}_{j}}
\left(P\chi_{K^{c}_{j}}\right)^{[tT_{j}]}h\;.
$$
Using proposition 7, we obtain immediately
$$
\mu(\{\tau_{K_{j}}>tT_{j}\})=\lambda(K_{j})^{[tT_{j}]}\phi_{K_{j}}(h)
\int\chi_{K^{c}_{j}} h_{K_{j}}dl+{\cal O}(\eta^{tT_{j}})\;.
$$
The result follows immediately from Proposition 7 since
$\lim_{j\to\infty}T_{j}=\infty$.
Note that the above proof works also with any sequence of numbers
$(T_{j})$ such that
$$
\lim_{j\to\infty}\left(-T_{j}\log\lambda(K_{j})\right)=1\;.
$$
\beginsection{\Romannumeral{3}. Proof of Theorem II}
We will denote by $K$
one of the intervals of the sequence $(K_{j})$ with $j$ large enough.
Let $\tilde m_{K}=\inf\{m_{K},1/\sqrt{\mu(K)}\}$, where we have denoted by
$m_{K}$ the number $m_{j}$ if $K=K_{j}$.
Using again the decomposition $h_{K}=h+r_{K}$ as in the proof of
proposition 7, the eigenvalue relation
$P\chi_{K^{c}}h_{K}=\lambda(K)h_{K}$, and the hypothesis $f^{n}(K)\cap
K=\emptyset$ for $0<n\le\tilde m_{K}$, we can estimate
$1-\lambda(K)^{\tilde m_{K}}$. From proposition 7, we have for $j$ large
enough a uniform bound on
the numbers $\vee r_{K_{j}}$, and also the sequence
$(\|r_{K_{j}}\|_{1})$ converges to zero. Therefore, we have
$$
1-\lambda(K_{j})^{\tilde m_{j}}=\tilde m_{j}\mu(K_{j})+\mu(K_{j})
\Oun(1+\tilde m_{j}\|r_{K_{j}}\|_{1})\;,
$$
and the first result follows by letting $j$ diverge.
We now come to the proof of the second part.
From Theorem I.16 in [N.], the result
follows from the convergence of the Laplace transform of the measures
$({\cal M}_{K_{j}})$. In other words, it is enough to verify that for
any function $g$ in $C_{0}^{+}(\real^{+})$, we have
$$
\lim_{j\to\infty}\int e^{-\sum_{n=0}^{\infty}\chi_{K_{j}}(f^{n}(x))
g(n\mu(K_{j}))}
d\mu(x)=e^{-\int(1-e^{-g(y)})dy}\;.
$$
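The right hand side above is the Laplace functional of a Poisson point
process of intensity $dy$. As a sanity check, independent of the
dynamics, one can sample such a process and compare the empirical value
of the functional with this prediction; the test function $g$ below is
an arbitrary choice of ours.

```python
import math
import random

random.seed(2)

def poisson_points(rate, horizon):
    # Sample a Poisson point process of parameter `rate` on [0, horizon]
    # through i.i.d. exponential inter-arrival times.
    t, pts = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t > horizon:
            return pts
        pts.append(t)

def g(y):
    # A compactly supported test function (an arbitrary choice).
    return 2.0 if 0.0 <= y < 1.0 else 0.0

trials = 20000
emp = sum(math.exp(-sum(g(p) for p in poisson_points(1.0, 1.0)))
          for _ in range(trials)) / trials

# Predicted Laplace functional exp(-int (1 - e^{-g(y)}) dy): the
# integrand equals 1 - e^{-2} on [0, 1) and vanishes elsewhere.
pred = math.exp(-(1.0 - math.exp(-2.0)))
print(round(emp, 3), round(pred, 3))
```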
We observe that this is of course a statement about the sequence of random
variables
$$
N_{j}(g)(x)=\sum_{n=0}^{\infty}\chi_{K_{j}}(f^{n}(x))g(n\mu(K_{j}))\;,
$$
and we will prove that the sequence of characteristic functions of these random
variables converges. More precisely, we claim that for any complex
number $z$ we have
$$
\lim_{j\to\infty}\int e^{-z\sum_{n=0}^{\infty}\chi_{K_{j}}(f^{n}(x))
g(n\mu(K_{j}))}
d\mu(x)=e^{-\int(1-e^{-zg(y)})dy}\;.
$$
We now observe that on both sides of the above formula we have entire
functions of $z$. This is obvious for the function on the right-hand side;
for the function on the left-hand side this follows from the fact that
the sum over the integer $n$ is finite since $g$ has compact support.
This implies that the convergence of the characteristic functions
follows from the convergence of the moments (we refer to [B.] for a proof).
We have for the moment of order $r$ of $N_{j}(g)$
$$
\expectation(N_j(g)^{r})=\sum_{0\le n_{1},\cdots,0\le n_{r}}
\int h(x) dl(x)\prod_{l=1}^{r}g(\mu(K_{j})n_{l})\chi_{K_{j}}(f^{n_{l}}(x))\;.
$$
We now observe that from our hypothesis,
$$
\chi_{K_{j}}\;\chi_{K_{j}}\circ f^{n}=0\qquad\hbox{\rm if}\qquad 1\le
n\le m_{j}\;.
$$
This suggests rearranging the sum in the
expression of $\expectation(N_j(g)^{r})$ into a sum over
distinct indices,
and by the above remark these distinct indices must differ by more
than $m_{j}$. We get
$$
\eqalign{&\expectation (N_j(g)^{r})=\cr
&\sum_{l=1}^{r}\,
\sum_{\scriptstyle 0<t_{1},\cdots,0<t_{l},\;t_{1}+\cdots+t_{l}=r\atop
\scriptstyle 0\le n_{1}<\cdots<n_{l},\;|n_{p}-n_{q}|>m_{j}\;
\hbox{\sevenrm for}\; p\neq q}
{r!\over t_{1}!\cdots t_{l}!}\,
\int h(x)dl(x)
\prod_{s=1}^{l}g^{t_{s}}(n_{s}\mu(K_{j}))\chi_{K_{j}}(f^{n_{s}}(x))\;.\cr}
$$
At this point we would like to apply recursively the decay of
correlations (D.C.). This is where we use our second hypothesis
$$
\liminf_{j\to\infty}\inf_{K_{j}}h>0\;.
$$
Let $a$ be a positive number such that for $j$ large enough
$\inf_{K_{j}}h>a$. If $B$ is a measurable set contained in $K_{j}$,
then $l(B)\le a^{-1}\mu(B)$, and the decay of correlations (D.C.)
implies, since $\vee \chi_{K_{j}}=2$,
$$
\left|\int\chi_{K_{j}}(x)\chi_{B}(f^{n}(x))d\mu(x)-
\mu(K_{j})\int \chi_{B}(x)d\mu(x)\right|
\le 3C\gamma^{n}a^{-1}\int \chi_{B}(x)d\mu(x)\;,
$$
or in other words
$$
(\mu(K_{j})- 3C\gamma^{n}a^{-1})\int\hskip -1pt \chi_{B}(x)d\mu(x)\le
\int\hskip -1pt\chi_{K_{j}}(x)\chi_{B}(f^{n}(x))d\mu(x)
\le(\mu(K_{j})+ 3C\gamma^{n}a^{-1})\int\hskip -1pt \chi_{B}(x)d\mu(x)\;.
$$
Using this estimate recursively, and the fact that for any integer $p$
$$
\lim_{\epsilon\to0^{+}}\epsilon\sum_{n=0}^{\infty}g^{p}(n\epsilon)=\int
g^{p}(y) dy\;,
$$
we derive that the moment of order $r$ of the random variable
$N_j(g)$ converges when $j$ diverges to the number
$$
\sum_{l=1}^{r}\quad
\sum_{\scriptstyle 0<t_{1},\cdots,0<t_{l}\atop
\scriptstyle t_{1}+\cdots+t_{l}=r}
{r!\over l!\,t_{1}!\cdots t_{l}!}\,
\prod_{s=1}^{l}\int g^{t_{s}}(y)\,dy\;,
$$
which is precisely the moment of order $r$ of the corresponding
functional of a Poisson point process of parameter 1. This proves the
claim, and Theorem II follows.