\def\Q{{\rm I}\!\!\!{\rm Q}} \def\R{{\rm I}\!{\rm R}} \def\Z{{\rm Z}\!\!{\rm Z}} \def\N{{\rm I}\!{\rm N}} \def\a{\alpha } \def\f{\varphi } \def\y{\psi } \def\k{{\bf C}} \def\j{\theta } \def\C{C^{\infty }} \def\W{\Omega } \def\L{\Lambda } \def\w{\omega } \def\e{\varepsilon } \def\d{\delta } \def\n{\widetilde } \def\l{\lambda } \def\q{\partial } \def\O{{\cal O}} \def\@{\infty } \def\&{\rightarrow } \def\V{\forall } \def\d{\partial } \font\un = cmbx10 at 14pt \null \vskip 2cm \centerline {\un THERMODYNAMIC LIMITS FOR A QUANTUM CRYSTAL} \medskip \centerline {\un BY HEAT KERNEL METHODS.} \vskip 1cm \centerline {\bf L. AMOUR, C. CANCELIER, P. LEVY-BRUHL and J. NOURRIGAT} \medskip \centerline {D\'epartement de Math\'ematiques, UMR CNRS 6056} \centerline {Universit\'e de Reims. B.P. 1039. 51687 Reims Cedex 2. France} \bigskip \centerline {Abstract} \medskip We consider a $d$-dimensional quantum anharmonic crystal, where the interaction between the ions satisfies hypotheses based on the idea that the ions are not too far from the points of $\Z ^d$, and that the interaction between them decreases exponentially with their distance. Under these conditions, we carefully study the heat kernel of the Hamiltonian associated with each finite subset of $\Z^d$, with constants in the inequalities that are independent of this set (thus improving an earlier result of Sj\"ostrand). Then, taking the limit as the finite set `tends to $\Z^d$', we define as usual a Gibbs state on the algebra of quasilocal observables, proving the convergence in norm, with an exponential rate, for the usual limit defining the state. Then, proving first an exponential decay of the correlations for finite sets and passing to the limit, we prove some properties (mixing, triviality at infinity when restricted to a suitable subalgebra) of our Gibbs state. The decay of correlations relies on the study of the heat kernel, and is itself used for estimating the rate of convergence in the thermodynamic limit. Another consequence of this decay of correlations is the continuity of the mean energy per site with respect to the temperature. However, all the results in this paper are valid under some inequalities between the temperature, the coupling constant and Planck's constant, which make it impossible to let the temperature tend to $0$ when the other parameters are fixed. \vskip 1cm \noindent {\bf 1. Introduction.} \medskip Let us consider a quantum $d$-dimensional lattice of particles, each of them moving in ${\R}^{p}$.
For each finite subset $\Lambda $ of ${\Z}^d$, we denote by $H_{\Lambda }(\varepsilon)$ the following differential operator in $({\R}^{p })^{\Lambda }$, depending on the Planck's constant $h$ and on another small parameter $\varepsilon$ (measuring the decay of interactions between particles of the lattice): $$H_{\Lambda }(\varepsilon ) \ = \ -{h^2\over 2} \ \sum _{\lambda \in \Lambda } \Delta _{x_{\lambda }} \ +\ V_{\Lambda , \varepsilon }(x) \leqno (1.1)$$ where $x= (x_{\lambda })_{\lambda \in \Lambda }$ denotes the variable of $({\R}^{p })^{\Lambda }$, each variable $x_{\lambda }$ being in ${\R}^{p}$. We suppose that the potential $V_{\Lambda , \varepsilon }\in C^{\infty } (({\R}^{p})^{\Lambda })$ is of the following form, with $A\in C^{ \infty } ( {\R}^{p }, \R )$ and $B_{\lambda }\in C^{ \infty } ({\R} ^{2p },\R )$ $$V_{\Lambda ,\varepsilon} (x)\ = \ \sum _{\lambda \in \Lambda } A(x_{\lambda } )\ +\ \ \sum _{{\lambda , \mu \in\Lambda \atop \lambda \not=\mu }} \varepsilon ^{\vert \lambda -\mu \vert } B_{\lambda - \mu }(x_{\lambda },x_{\mu }) \hskip 1cm x = (x_{\lambda })_{\lambda \in \Lambda }. \leqno (1.2)$$ We assume that, for some $C>0$, $C^{-1} | x | \leq A(x) \leq C | x | $ for $ | x |\geq C$, that all the $B_{\lambda }$ are uniformly bounded, and that, for each integer $q\geq 1$, the derivatives of order $q$ of $A$ and $B_{\lambda }$ are uniformly bounded. In (1.2), $\vert \lambda \vert $ is the $\ell ^{\infty }$ norm in ${\Z}^d$. \bigskip The first part of this paper (Sect. 2 to 4) will be devoted to the description of the integral kernel of $e^{-tH_{\Lambda}(\varepsilon )}$, with inequalities where the constants are independent of $\Lambda $ and $\varepsilon $. Sj\"ostrand began this study in [22]. \bigskip We denote by $\nabla _ {x_{\lambda} }$ the partial differential with respect to the variable $x_{\lambda }\in \R^p$, and by $\nabla _ {\lambda }$ with respect to $(x_{\lambda }, y_{\lambda })\in \R^{2p}$ ($\lambda \in \Lambda$) of a $\C $ function $f$ on $(\R^{2p})^{\Lambda }$. The norm of $\nabla _{\lambda}f(x)$ is its norm in $(\R^{2p})^{\star}$. We denote by ${\rm diam}(A)$ the diameter, for the norm $ \ell ^{\infty }$, of a subset $A$ of $\Z^d$. \bigskip \noindent {\bf Theorem 1.1.} {\it Under the previous hypotheses, the integral kernel $U_{\Lambda }(x, y, t, h, \varepsilon )$ of $e^{-tH_{\Lambda }(\varepsilon )}$ can be written under the form $$U_{\Lambda }(x, y, t, h, \varepsilon ) \ =\ (2\pi th^2)^{-p \vert \Lambda \vert /2} \ e^{-{ | x-y | ^2\over 2th^2}} \ e^{-\psi_{\Lambda }(x, y, t, h, \varepsilon )}. \leqno (1.3)$$ where $\psi_{\Lambda } $ is a $C^{\infty }$ function in $(\R^p)^{\Lambda } \times (\R^p)^{\Lambda } \times [0, + \infty [$, depending on the parameters $h\in ]0, 1]$ and $\varepsilon \in [0, 1]$. Moreover, there exists a constant $\sigma _0>0$ and, for each integer $m$, there exists a constant $C_m>0$ and a function $\gamma \rightarrow \varepsilon _m (\gamma)$ from $]0, 1[$ in itself, with the following properties. For each finite subset $\Lambda $ of $\Z ^d $, and for each points $\lambda ^{(1)}$, ... $\lambda ^{(m)}$ in $\Lambda $, we have $$\Vert \nabla _{\lambda ^{(1)}} ... \nabla _{\lambda ^{(m)}}\psi _{\Lambda} \Vert _{L^{\infty }((\R^{2p})^{\Lambda })} \leq t C_{m}\ \varepsilon ^{\gamma \ {\rm diam } (\{ \lambda ^{(1)}, ...\lambda ^{(m)}\} )} \hskip 1cm if \ \ \ h t \leq \sigma _0 \ \ \ and \ \ \ \varepsilon \leq \varepsilon _m(\gamma ). 
\leqno (1.4)$$} \bigskip Let us recall that Sj\"ostrand proved in [22] that, near the diagonal, the integral kernel $U_{\Lambda }$ can be written in the form (1.3), and he proved that an approximation modulo ${\cal O}(h^{\infty })$ of the function $\psi_{\Lambda } $ satisfies inequalities which are probably equivalent to (1.4), but are written in a different form, using the concept of $0$-standard function, introduced in the same article. \bigskip \bigskip The first application of our results on the heat kernel is a construction of Gibbs states, as thermodynamic limits, on the $C^{\star }$-algebra ${\cal A}$ of quasi-local observables on the lattice, and an estimation of the rate of convergence. Let us recall the definition of ${\cal A}$ (see B. Simon [21], section II.1, for more details). To each finite set $\Lambda $ of $\Z^d$, we associate the Hilbert space ${\cal H}_{\Lambda }= L^2( (\R^p)^{\Lambda })$. If $\Lambda _1 \subseteq \Lambda _2$, we have a natural embedding of ${\cal L}({\cal H}_{\Lambda _1})$ into ${\cal L}({\cal H}_{\Lambda _2})$. Classically, ${\cal A}$ is the closure of the union of (the equivalence classes of) all these ${\cal L}({\cal H}_{\Lambda })$. On this algebra ${\cal A}$, we shall define Gibbs states by the usual limit (1.6) below, taking a sequence $\Lambda _n$ of finite subsets `tending to $\Z^d$'. For simplicity, we shall consider, for each $n\geq 1$, $$\Lambda _n= \{ -n, \ldots , n \}^d . \leqno (1.5)$$ The existence of limits like (1.6) below has been proved, in various other situations, by many authors. Sometimes it is necessary to take a subsequence of $\Lambda _n$, sometimes the RHS of (1.6) converges for each fixed $A$, but not in norm, and often probabilistic methods are used. In Theorem 1.2, the new points are the rate of convergence in norm and the method of proof, which relies (via Theorem 1.5 below) on the estimates of the heat kernel given by Theorem 1.1. \bigskip Recently, Minlos, Pechersky, Verbeure and Zagrebnov ([16] and [17]) proved the existence of Gibbs states on ${\cal A}$ by similar limits, but for potentials of a very different type, and by Feynman integral techniques. S. Albeverio, Kondratiev, Pasurek and R\"ockner ([1] and [2]) also constructed Gibbs states by probabilistic methods. \bigskip We tried to give results as general as possible, but we need, as do many authors (cf. Minlos [15]), some inequalities between our three parameters: $t$ (the inverse of the absolute temperature), $h$ and $\varepsilon$. We set $M(t) = \sup (t, 1)$. \bigskip \noindent {\bf Theorem 1.2.} {\it Let $Q_0$ be a finite subset of $\Z^d $, and let $A$ be an element of ${\cal L}({\cal H}_{Q_0})$ (a local observable). (Thus, for $n$ large enough, $\Lambda _n$ contains $Q_0$, and $A$ can be considered as an element of ${\cal L}({\cal H}_{\Lambda _n})$). Then, with our hypotheses on the potentials, for each integer $N$, there exists a function $\gamma \rightarrow \varepsilon _0 (\gamma, N)$ from $]0, 1[$ into itself such that the following limit exists: $$\omega (A) \ =\ \lim _{n \rightarrow + \infty } { tr \ (e^{-tH_{\Lambda _n}(\varepsilon )} A) \over \ tr \ (e^{-tH_{\Lambda _n}(\varepsilon )}) } \leqno (1.6) $$ if the following conditions are satisfied $$ht \leq \sigma _0, \hskip 1cm 0 < \varepsilon \leq \varepsilon _0 (\gamma , \sharp (Q_0)) \ \ \ \ 0< \gamma < 1 \hskip 1cm M(t) \varepsilon ^{\gamma} \leq {1 \over 2}\leqno (1.7)$$ where $\sigma _0$ is the constant of Theorem 1.1.
Moreover, there exists a function $(h, t) \rightarrow K(t, h, N)$, bounded on each compact set of $]0, + \infty [ \times ]0, + \infty [$, such that, under the conditions (1.7), $$\left |\omega (A) - { tr \ (e^{-tH_{\Lambda _n}(\varepsilon )} A) \over \ tr \ e^{-tH_{\Lambda _n}(\varepsilon )} }\right | \leq K(t, h, {\rm diam}(Q_0)) \ (M(t) \varepsilon ^{\gamma } )^{{ n \over 10}} \Vert A \Vert \leqno (1.8)$$ } \bigskip Let us remark that $|\omega (A)|\leq \Vert A\Vert$, and therefore that the linear form $A \rightarrow \omega (A)$, defined on the union of all the ${\cal L}({\cal H}_Q)$, can be extended to a state on ${\cal A}$. (It satisfies $\omega (I)=1$ and $\omega (A^{\star }A )\geq 0$). \bigskip Our technique can be extended to observables which are not necessarily bounded, and therefore do not belong to the $C^{\star}$-algebra. We shall restrict ourselves to the case where $A$ is the multiplication by a polynomially bounded function. \bigskip \noindent {\bf Theorem 1.3.} {\it With the notations of Theorem 1.2, let $f \in C^{\infty }((\R^p)^{Q_0})$ be a function satisfying, for some positive integer $m$ and some constant $N_m(f)$, $$|f(x_{Q_0})| \ \leq \ N_m(f)\ (1 + |x_{Q_0}|)^m \hskip 1cm \forall x_{Q_0} \in (\R^p)^{Q_0}, \leqno (1.9)$$ and let $A$ be the operator (in $L^2((\R^p)^{Q_0})$) of multiplication by $f$. (Then, if $\Lambda $ contains $Q_0$, the operator $e^{-t H_{\Lambda }(\varepsilon )}A$ is well defined in ${\cal L}({\cal H}_{\Lambda })$, and of trace class). Then there exist a constant $\sigma _1>0$ and a function $\gamma \rightarrow \varepsilon _0 ( \gamma , m, \sharp (Q_0))$ from $]0, 1[$ into itself such that the limit (1.6) exists if $$h^2 (t + t^2) \leq \sigma _1,\ \ \ \ \ 0< \varepsilon \leq \varepsilon _0(\gamma , m, \sharp (Q_0)), \ \ \ \ \ \ \gamma \in ]0, 1[, \ \ \ \ \ \ M(t) \varepsilon ^{\gamma } \leq {1 \over 2}.$$ } \bigskip For example, if $d=1$, if $Q_0= \{ 1, 2 \} $, and if $A$ is the multiplication by the function $f(x_1 , x_2 ) = x_2 - x_1$, we can think that $\omega (A)$ is related to the dilation coefficient of the crystal. \bigskip With our hypotheses, J. Sj\"ostrand [22] proved that the following limit (free energy) exists $$P (t, h)\ =\ \lim _{n\rightarrow + \infty} \ {1\over \sharp ( \Lambda _n ) } \ \ln \left ( tr \ \left ( e^{-tH_{\Lambda _n}(h)} \right ) \right ) \leqno (1.10) $$ He also proved that $P (t, h)$ has an expansion in powers of $h$ when $h\rightarrow 0$, while $\varepsilon>0$ is fixed. In the literature on solid state physics (Kittel [13] or Ashcroft-Mermin [3]), it appears that the partial derivative of $P (t, h)$ with respect to $t$ is supposed to exist and to represent the mean energy $U(1/t, h)$ of the crystal (per site) at the temperature $1/t$. Therefore it seemed interesting to prove mathematically that this derivative exists and that $${\partial P (t, h)\over \partial t} \ =\ - \lim _{n\rightarrow + \infty} \ {1\over \sharp ( \Lambda _n ) } \ { tr \left ( H_{\Lambda _n}(h)e^{-tH_{\Lambda _n}(h)} \right ) \over tr \left ( e^{-tH_{\Lambda _n}(h)} \right )} \leqno (1.11) $$ Another application of our techniques is the proof of the following \bigskip \noindent {\bf Theorem 1.4.
} {\it With the preceding hypotheses, there exist constants $\varepsilon _0>0$ and $\sigma _1>0$ such that, if the positive parameters $t$, $ h$ and $ \varepsilon $ satisfy $$h^2(t+ t^2) \leq \sigma _1, \ \ \ \ \ 0 < \varepsilon < \varepsilon _0, \ \ \ \ M(t) \varepsilon ^{1 \over 5} \leq {1 \over 2}, \leqno (1.12)$$ then $P (t, h, \varepsilon)$ is differentiable with respect to $t$, the derivative is given by (1.11), and it is continuous as a function of its first variable in a neighborhood of $t$. Moreover, there exists a function $F(t)$, bounded on each compact set of $]0, + \infty [$, such that, under the conditions (1.12), $$ \left \vert {\partial P(t, h)\over \partial t} \ + \ {1\over \sharp ( \Lambda _n ) } \ { tr \left ( H_{\Lambda _n}(h)e^{-tH_{\Lambda _n}(h)} \right ) \over tr \left ( e^{-tH_{\Lambda _n}(h)} \right )} \right \vert \leq { F(t)\over n} \leqno (1.13)$$ } \bigskip \bigskip This estimate shows that, for each $h$ and $\varepsilon $ (the latter being small enough), the mean energy is a continuous function of the temperature ${1\over t}$ in the interval in which (1.12) is satisfied. \bigskip The main step of the proof of Theorems 1.2, 1.3 and 1.4 is a result on the decay of the quantum correlation of two local observables. Let us define this notion. For each finite set $\Lambda $ of $\Z^d$, and for each (bounded, or satisfying other suitable hypotheses) operator $A$ in ${\cal H}_{\Lambda}$, we can define the `mean value' of $A$ as $$E_{ \Lambda , \varepsilon }(A) = { tr\ \left ( e^{-tH_{\Lambda }(\varepsilon) }A \right ) \over tr\ \left ( e^{-tH_{\Lambda }(\varepsilon) } \right )}\ . \leqno (1.14)$$ If $E_1$ and $E_2$ are disjoint subsets of $\Lambda$, and if $A$ (resp. $B$) is an operator in ${\cal H}_{E_1}$ (resp. in ${\cal H}_{E_2}$), we can consider $A$ and $B$ as commuting operators in ${\cal H}_{\Lambda }$, and define their quantum correlation as $$cov _{ \Lambda , \varepsilon } \ (A, B)= E_{ \Lambda , \varepsilon }(AB)-E_{ \Lambda , \varepsilon }(A) E_{ \Lambda , \varepsilon }(B) \leqno (1.15)$$ \bigskip We denote by $\delta (E, F)$ the distance, for the $\ell ^{\infty }$ norm, between two subsets $E$ and $F$ of $\Z^d$. \bigskip \noindent {\bf Theorem 1.5. } {\it With the previous notations, we can find, for all integers $N_1$ and $N_2$, a function $(t, h) \rightarrow K(t, h, N_1, N_2)$, bounded on each compact set of $]0, + \infty [ \times ]0, + \infty [$, and a function $\gamma \rightarrow \varepsilon _0(\gamma , N_1 , N_2 )$ from $]0, 1[$ to itself such that, for all disjoint subsets $E_1$ and $E_2$ of any box $\Lambda$ of $\Z^d$, for each $A\in {\cal L}({\cal H}_{E_1})$ and $B\in {\cal L}({\cal H}_{E_2})$, we have $$\vert cov _{\Lambda , \varepsilon } (A, B) \vert \leq K \big ( t, h, {\rm diam}(E_1), {\rm diam} (E_2) \big ) \ \ \big ( M(t) \varepsilon ^ { \gamma} \big )^{{1 \over 5} \delta ( E_1, E_2)} \Vert A \Vert \ \Vert B\Vert, \leqno (1.16)$$ (where $ M(t)= \sup (t, 1)$), if $$ht \leq \sigma _0, \ \ \ \ \ \ \ 0< \varepsilon \leq \varepsilon _0(\gamma , \sharp (E_1) , \sharp (E_2) ), \ \ \ \ \ \ \ M(t) \varepsilon ^{\gamma} \leq {1 \over 2}. \leqno (1.17)$$ If $B$ is the multiplication by a bounded $C^{\infty }$ function, $K$ may be chosen independent of ${\rm diam}(E_2)$ and $ \varepsilon _0$ may be chosen independent of $\sharp (E_2)$. If both $A$ and $B$ are the multiplications by bounded $C^{\infty }$ functions, $\varepsilon _0$ depends only on $\gamma$ and we can take $K= 4 \inf (t, 1)\ \inf ( \sharp (E_1), \sharp (E_2))$.
} \bigskip If, in the definition of $H_{\Lambda }(\varepsilon) $, the potential $V_{\Lambda , \varepsilon }$ is replaced by a family, depending on an auxiliary parameter, such that the hypotheses of the introduction are satisfied uniformly, then the constants in Theorem 1.5 can be chosen independent of this parameter. \bigskip As a consequence, we see that the state $\omega $ defined in (1.6) satisfies the following mixing property, in the sense of B. Simon [21], Chapter III, Definition (III.1.21), adapted to the quantum case. For each pair of local observables $A\in {\cal L}({\cal H}_{E_1})$ and $B\in {\cal L}({\cal H}_{E_2})$, where $E_1$ and $E_2$ are finite subsets of $\Z^d$, for each $t$, $h$ and $\varepsilon$ satisfying (1.7) (with $Q_0 = E_1 \cup E_2$), and (1.17), if $K(t,h, N_1, N_2)$ is the function of Theorem 1.5, we have $$\vert \omega (AB)- \omega (A) \omega (B)\vert \leq K(t, h, {\rm diam}(E_1), {\rm diam} (E_2)) \ (M(t) \varepsilon ^ { \gamma})^{{1 \over 5} \delta ( E_1, E_2)} \Vert A \Vert \ \Vert B\Vert. \leqno (1.18) $$ If $A$ and $B$ are multiplications by bounded functions, the constant $K$ depends only on $\inf ( \sharp (E_1), \sharp (E_2))$. Therefore, for each $A$ and $B$ in the algebra ${\cal A}$ of quasilocal observables, we have, denoting by $\tau _h$ the translation with respect to $h\in \Z^d$, $$\lim _{\vert h \vert \rightarrow + \infty } \Big [ \omega ( A \circ \tau _h (B)) - \omega (A) \omega (\tau _h(B)) \Big ] = 0,$$ which seems to be the natural analogue of Definition (III.1.21) of the {\it mixing property} in [21]. \bigskip Another property, also defined in [21], is the {\it triviality at infinity}, which is characterized in [21], Theorem IV.1.4 (due to Lanford-Ruelle; see the reference in [21]). If we restrict the state $\omega$ to the subalgebra ${\cal A}^{(0)}$ of ${\cal A}$, constructed in the same way as ${\cal A}$, but taking only, for each finite set $\Lambda $, the algebra ${\cal A}_{\Lambda}^{(0)}$ of multiplications by bounded functions on $(\R^p)^{\Lambda }$, and then taking the union and completion as before, then $\omega$ satisfies the condition of the Lanford-Ruelle theorem for this subalgebra ${\cal A}^{(0)}$: for each $A\in {\cal A}^{(0)}$ and for each $\varepsilon >0$, there exists a finite set $\Lambda $ of $\Z^d$ such that, for each finite set $E$ of $\Z^d \setminus \Lambda $ and for each $B$ in ${\cal A}_E^{(0)}$, we have $$ \vert \omega ( AB ) - \omega (A) \omega (B) \vert \leq \varepsilon \Vert B \Vert . $$ \bigskip Similar results are proved in Helffer-Sj\"ostrand [12] in the case of classical mechanics, with very different hypotheses on the potential and by a different method. (See also [8], [9], [10], and Bach-M\"oller [4]). Theorems 1.2 to 1.4 are consequences of the last two statements of Theorem 1.5, and the proof of this theorem uses Theorem 1.1.
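\bigskip \noindent {\it Remark.} Let us make explicit the elementary step leading from (1.16) to (1.18); it is only a rewriting of the definitions (1.14) and (1.15), under the conditions on $t$, $h$ and $\varepsilon$ stated above. Since $A$, $B$ and $AB$ are local observables (associated with $E_1$, $E_2$ and $E_1 \cup E_2$), Theorem 1.2 applies to each of them, and therefore $$\omega (AB) - \omega (A)\, \omega (B) \ =\ \lim _{n \rightarrow + \infty } \Big [ E_{\Lambda _n , \varepsilon }(AB) - E_{\Lambda _n , \varepsilon }(A)\, E_{\Lambda _n , \varepsilon }(B) \Big ] \ =\ \lim _{n \rightarrow + \infty } cov _{\Lambda _n , \varepsilon } (A, B).$$ The bound (1.16) holds uniformly with respect to $n$, as soon as $\Lambda _n$ contains $E_1 \cup E_2$, and is therefore preserved in the limit, which gives (1.18).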
\bigskip In section 2, we shall prove the global existence of $\psi _{\Lambda } $ such that (1.3) is satisfied, in other words, such that $\psi _{\Lambda } $ is the solution of the Cauchy problem in $(\R^p)^{\Lambda }$ $${\partial \psi _{\Lambda } \over \partial t}\ + \ {x-y \over t}\ .\ \ \nabla _x \psi _{\Lambda } - {h^2 \over 2 } \Delta \psi _{\Lambda } \ =\ V_{\Lambda , \varepsilon }(x) \ -\ {h^2 \over 2} \vert \nabla \psi _{\Lambda } \vert ^2 \leqno (1.19)$$ $$\psi _{\Lambda }(x, y, 0, h, \varepsilon )= 0\leqno (1.20)$$ We shall also prove preliminary estimates, in which the constants still depend on $\sharp (\Lambda)$, and a variant of the maximum principle, which are used in later sections. Section 4 is devoted to the proof of Theorem 1.1 (estimates (1.4)). In sections 5 and 6, we shall decompose $\psi _{\Lambda }$ as a sum of terms associated to clusters (cubes) of $\Lambda$, with an estimate of each term. In section 7, we study the case where $\Lambda $ is the union of two disjoint subsets $\Lambda _1$ and $\Lambda _2$. In section 8, we study the behavior at infinity of $\psi _{\Lambda}$. In Sections 9 and 10, we prove Theorem 1.5, and, in section 9, a variant in which $A$ and $B$ are multiplications by polynomially bounded functions. In Section 11, we prove Theorems 1.2 to 1.4. \medskip In the case of a quadratic potential, our results follow from the explicit computations of C. Royer [19]. \bigskip We are very grateful to C. G\'erard and V. Zagrebnov for useful discussions. \bigskip \noindent {\bf 2. Some Lemmas on non linear parabolic equations. } \bigskip Here, we shall study Cauchy problems of the form (1.19), (1.20). For the results given here, the parameter $\varepsilon $ and the fact that the variables are indexed by the points of a cube play no role, and therefore we may rewrite the Cauchy problem with simpler notations, in which the space dimension will be denoted by $n$. $${\partial \psi \over \partial t}\ + \ {x-y \over t}\ .\ \ \nabla _x \psi - {h^2 \over 2 } \Delta \psi \ =\ V(x) \ -\ {h^2 \over 2} \vert \nabla \psi \vert ^2 \leqno (2.1)$$ $$\psi (x, y, 0, h )= 0\leqno (2.2)$$ where $y$ is a point of $ {\bf R} ^n $, which plays the role of a parameter, like $h\in ]0, 1]$. We assume that $V$ is a $C^{\infty }$ function on ${\bf R}^n$, such that all its derivatives of order $\geq 1$ are bounded. In this section the constants in the inequalities may depend on the dimension $n$, except when they are explicit, but they are independent of $y$. More precisely, the aim of this section is the following result. \bigskip \noindent {\bf Theorem 2.1. } {\it Assume that $V$ is a real-valued $\C $ function on $\R^n$, such that all its derivatives of order $\geq 1$ are bounded. Then there exists a unique global classical solution $\psi$ to (2.1), (2.2). Moreover, for all $t,h>0$, $$ \left\Vert{\partial\psi\over\partial x_j}(\cdot,\cdot,t,h)\right\Vert_\infty\leq \ {t\over 2} \ \sup_{j\in\{1,\dots,n\}} \left\Vert{\partial V\over\partial x_j}\right\Vert_{\infty}, $$ and, for all $T,h>0$ and all $\alpha\in\N^n$ with $\vert\alpha\vert\geq 2$, there exists $M>0$ (which may depend on the dimension $n$ and on $T$) such that $$ \left\Vert\partial^\alpha_x\psi(\cdot,\cdot,t,h)\right\Vert_\infty\leq M \hskip 1cm \forall\,t\in(0,T]. $$ } \bigskip The first two subsections are devoted to preliminaries to the proof, and subsection 2.3 to the end of the proof. \bigskip \noindent {\it 2.1.
Linearized equation.} \bigskip As a first step, we shall solve explicitly the linearized equation, with a given right hand side $f$, $${\partial \psi \over \partial t}\ + \ {x-y \over t} \nabla _x \psi - {h^2 \over 2 } \Delta \psi \ = \ f(x, t) \ \ \ t>0 \hskip 1cm \psi (x, y, 0)= 0\leqno (2.3)$$ We shall also consider the analogous problem, with an initial data $g$ at a time $t_0>0$, $${\partial \psi \over \partial t}\ + \ {x-y \over t} \nabla _x \psi - {h^2 \over 2 } \Delta \psi \ = \ f(x, t) \ \ \ t>t_0 \hskip 1cm \psi (x, y, t_0)= g(x)\leqno (2.4)$$ \bigskip For each $s$ and $t$ such that $0<s<t$, we set $$m(x, y, s, t)\ =\ y\ +\ {s \over t}\, (x-y), \hskip 1cm a(s, t)\ =\ {t \over s(t-s)}. \leqno (2.5)$$ For each $h>0$, we set $$G_h(x, x', y, s, t)\ =\ (2\pi h^2)^{-n/2} \ a(s, t)^{n/2}e^{ -{a(s, t) \over 2 h^2} | x' - m(x, y, s, t) | ^2} \leqno (2.6)$$ \bigskip \noindent {\bf Proposition 2.2.} {\it The solution of the Cauchy Problem (2.3) is given by $$\psi _h(x, y, t)\ =\ \int _{{x'\in {\bf R} ^n \atop 0<s<t}} G_h(x, x', y, s, t)\, f(x', s)\, dx'\, ds,$$ and the solution of (2.4) is obtained by replacing the lower limit $0$ by $t_0$ and adding the term $\int _{{\bf R}^n} G_h(x, x', y, t_0, t)\, g(x')\, dx'$. We have, for each $0<s<t$ and each $h>0$, $$\int _{ {\bf R} ^n } G_h(x, x', y, s, t) \ dx' = \ 1\leqno(2.7)$$ There exists $C>0$, independent of all the parameters, such that $$ \int _{ {\bf R} ^n }| \partial _{x_j} G_h(x, x', y, s, t) | dx' \ \leq \ {C\over h}\ \sqrt {{s\over t(t-s)}}\leqno (2.8)$$ } \bigskip \noindent {\it 2.2. A maximum principle.} \bigskip \noindent {\bf Proposition 2.3.} {\it Let $y\in {\bf R}^n$ and $T>0$, let $a_1, \ldots , a_n$ be continuous bounded functions on ${\bf R}^n \times ]0, T]$, and let $u$ be a function in $C({\bf R}^n \times [0, T]) \bigcap C^2 ({\bf R}^n \times ]0, T])$ such that $u$ and ${\partial u \over \partial x_j}$ $(1\leq j \leq n)$ are bounded in ${\bf R}^n \times ]0, T]$ and $u(x,0)=0$. Assume that the function $f$ defined by the following equality (where $h>0$) is bounded $${\partial u \over \partial t } \ +\ {x-y \over t} . \nabla u \ - \ {h^2 \over 2} \Delta u \ + \ \sum _{k=1}^n a_k(x, t) {\partial u \over \partial x_k} = f(x, t)\leqno (2.10)$$ Then we have, for each $t\in [0, T]$ $$\Vert u(.\ , t)\Vert _{\infty } \ \leq \ \int _0^t \Vert f(., \ s)\Vert _{\infty } \ ds\leqno (2.11)$$ where $\Vert \ \ \Vert _{\infty }$ denotes the norm of $L^{\infty } ({\bf R}^n)$. } \bigskip \noindent {\it Proof.} Here $y$ is fixed and it is often omitted from the notations. Let $\chi\in C^\infty([0,+\infty))$ satisfy $\chi(x)=0$ if $x\geq 2$, $\chi(x)=1$ if $x\leq 1$, and $\Vert \chi\Vert_\infty$, $\Vert \chi'\Vert_\infty$, $\Vert \chi''\Vert_\infty\leq 1$. For $R\geq 1$, define $\chi_R(x,t)=\chi \left( \left ( {\Vert x-y\Vert\over Rt}\right )^2 \right)$, for all $x\in\R^n$ and $t>0$. Let $L={ x-y\over t}\cdot \nabla_x - { h^2\over 2}\Delta_x+ \sum_{k=1}^na_k(x,t){\partial\over\partial x_k}$. We have $$\left({\partial\over\partial t}+L\right)\chi_Ru= f_R,\quad{\rm on}\ \R^n\times [0,T], \leqno(2.12)$$ where $$ f_R = \chi _R f - h^2\left( \nabla u\cdot\nabla\chi_R +{1\over 2}u\,\Delta \chi_R \right) +u\,a\cdot\nabla \chi_R + u{\partial \chi_R\over\partial t} + u{ x-y\over t}\cdot \nabla\chi_R.$$ Our choice of $\chi_R$ implies that the sum of the last two terms vanishes. We define $K(x,y,s,t)$, $0\leq s\leq t\leq T$, $(x,y)\in\R^n\times\R^n$, by $$ \left({\partial\over\partial t}+L\right)K(s,t)=0,\qquad K(s,s)=f_R(s). \leqno(2.13) $$ Fix $t_0\in (0,T]$. From (2.12) and (2.13), we get, for $t\in[t_0,T]$, $$ (\chi_Ru)(t)-(\chi_Ru)(t_0)=\int_{t_0}^t K(s,t)\,ds. $$ Let $U=\{x\in\R^n,\ \Vert x-y\Vert<\sqrt {2} RT\}$. Thus, $U$ is an open bounded set of $\R^n$, with a smooth boundary. One may look at (2.12) and (2.13) with $x\in U$ instead of $x\in \R^n$. Since the coefficients of $L$ are bounded and continuous on $U\times [t_0,T]$, the standard maximum principle applies to $(2.13)$. One gets $$ \Vert K(\cdot,y,s,t)\Vert_{L^\infty(\R^n)}\leq \Vert f_R(\cdot,y,s)\Vert_{L^\infty(\R^n)}.
$$ Consequently, $$ \Vert \chi_Ru(\cdot,y,t)\Vert_{L^\infty(\R^n)}- \Vert \chi_Ru(\cdot,y,t_0)\Vert_{L^\infty(\R^n)}\leq \int_{t_0}^t \Vert f_R(\cdot,y,s)\Vert_{L^\infty(\R^n)}\,ds. \leqno(2.14) $$ Furthermore, $$ \int_{t_0}^t \Vert f_R(\cdot,y,s)-\chi_R f(\cdot,y,s)\Vert_{L^\infty(\R^n)}\,ds \leq Cn \int_{t_0}^t \left( {h^2\Vert \chi'\Vert\over Rs}\sup_k\left\Vert{\partial u\over\partial x_k}\right\Vert_\infty +{h^2\Vert \chi''\Vert\over 2R^2s^2}\Vert u\Vert_\infty +{\Vert \chi'\Vert\over Rs}\sup_k\Vert a_k\Vert_\infty\right)ds. $$ Thus, one gets the inequality stated in Proposition 2.3 by first taking the limit in (2.14) as $R\rightarrow \infty$, and then taking the limit as $t_0\rightarrow 0$. \bigskip \noindent {\it 2.3. End of the proof of Theorem 2.1.} \medskip In order to prove Theorem 2.1, we shall be interested in the following Cauchy Problem. $$ {\partial \psi\over\partial t}+{x-y\over t}\cdot \nabla_x\psi-{h^2\over 2}\Delta\psi=V (x)- {h^2\over 2}\vert\nabla_x\psi\vert^2,\ s\leq t\leq s+\tau,\leqno(2.15) $$ $$\psi(x,y,s)=g(s). $$ In the sequel, $y$ is a parameter and all the estimates are uniform in this parameter. Therefore, $y$ is now omitted in the notations. Formally, $(2.15)$ is equivalent to the following integral equation $$ \psi_h(\cdot,t+s)=G_h(s,s+t)g(s)+\int_0^tG_h(s+\sigma,s+t)\left[V-{h^2\over 2} \vert\nabla_x\psi(s+\sigma)\vert^2\right]d\sigma,\ 0\leq t\leq \tau.$$ Set $u_j={\partial \psi\over\partial x_j}$. Let $\alpha\in\N^n$ with $\vert\alpha\vert\geq 1$, then write $\alpha=(0,\dots,1,\dots,0)+\delta$ (the choice of $\delta$ does not matter, only $\vert\delta\vert$ is relevant in the estimates) and set $u^{(\alpha)}=\partial^{\alpha}_x\psi= \partial^{\delta}_x\partial_{x_j}\psi$. Formally, the $u^{(\alpha)}$'s, $\vert\alpha\vert\geq 1$, satisfy the following integral equations for $t\in [0,\tau]$, $$ u^{(\alpha)}(t+s)=\left({s\over s+t}\right)^{\vert\alpha\vert} G_h(s,s+t) {\partial^{\alpha}_x g}(s) +\int_0^t \left({\sigma+s\over t+s}\right)^{\vert\alpha\vert} G_h(s+\sigma,s+t){\partial^{\alpha}_x V}d\sigma -\cdots$$ $$\cdots{h^2\over 2}c_\alpha\sum_{l=1}^n\int_0^t\left({\sigma+s\over t+s}\right)^\delta{\partial G_h\over\partial x_j}(s+\sigma,s+t) \Big[ u_l(\sigma+s){\overline u_l}^{(\delta)}(\sigma+s)+\cdots\leqno(2.16) $$ $$ \cdots\sum_{\beta+\gamma=\delta,1\leq\vert\beta\vert,\vert\gamma\vert\leq\vert\delta\vert-1} \pmatrix{\gamma\cr \beta} u_l^{(\beta)}(\sigma+s){\overline u_l}^{(\gamma)}(\sigma+s)\Big] d\sigma,$$ where $c_{\alpha }=1$ if $\alpha = 0$ and $c_{\alpha}=2$ if $\alpha \not=0$. In this subsection, $C$ always denotes the real positive constant given in subsection 2.1. \bigskip \noindent {\bf Lemma 2.4. } {\it Fix $k\geq 2$ and $T>0$. Let $s\in[0,T]$. Let $\tau>0$ be such that $s+\tau\in[0,T]$ and $$0<\tau< \left(16CnhT\sup_{\alpha\in\N^n,1\leq\vert\alpha\vert\leq k}\left\Vert {\partial^{\alpha}_x V}\right\Vert_{L^\infty(\R^n)}\right)^{-2}.$$ Define the finite sequence of real numbers $(M_l)$ by $M_1=1$, $M_2\geq 1$, and, for $3\leq l\leq k$, $$M_l=\sup_{\vert\delta\vert\leq l-1} \sum_{\beta+\gamma=\delta,\ \vert\beta\vert,\vert\gamma\vert\not =0} \pmatrix{\gamma\cr \beta} M_{\vert\beta\vert+1}M_{\vert\gamma\vert+1}.
\leqno (2.17)$$ Suppose that $$\forall\,\alpha\in\N^n\ with\ 1\leq\vert\alpha\vert\leq k,\ \left\Vert {\partial^{\alpha}_x g(s)}\right\Vert_{L^\infty(\R^n)}\leq sM_{\vert\alpha\vert} \sup_{\beta\in\N^n,1\leq\vert\beta\vert\leq k}\left\Vert {\partial^{\beta}_x V}\right\Vert_{L^\infty(\R^n)}, $$ then there exist $(u^{(\alpha)})_{1\leq\vert\alpha\vert\leq k}$, solutions to (2.16) on $[s,s+\tau]$. Moreover, we have $$\left\Vert { u^{(\alpha)}(\cdot,t)}\right\Vert_{L^\infty(\R^n)}\leq 2tM_{\vert\alpha\vert} \sup_{\beta\in\N^n,1\leq\vert\beta\vert\leq k}\left\Vert {\partial^{\beta}_x V}\right\Vert_{L^\infty(\R^n)},\ \forall t\in[s,s+\tau]. \leqno (2.18)$$ } \bigskip \noindent {\it Proof of Lemma 2.4.} Here $n,T,k,s,\tau$ are fixed. We shall prove Lemma 2.4 by induction on $\vert\alpha\vert\in\{1,\dots,k\}$. First consider (2.16) for $\vert\alpha\vert=1$. In particular, $\delta=0$ and (2.16) is a nonlinear system where the unknown functions are the $u^{(\alpha)}$'s for $\vert\alpha\vert=1$. Define $$ E_{s,\tau}=\left\{u=(u^{(\alpha)})_{\vert\alpha\vert=1}/ \ u^{(\alpha)}\in C^0(\R^n\times [s,s+\tau]) \ {\rm and}\ \Vert u\Vert_{E_{s,\tau}}<\infty\right\}, $$ where $$ \Vert u\Vert_{E_{s,\tau}}=\sup_{(x,t)\in\R^n\times [s,s+\tau],\vert\alpha\vert=1}\,t^{-1}\vert u^{(\alpha)}(x,t)\vert. $$ For $u\in E_{s,\tau}$, let $F^{(\alpha)}(u)$ be the r.h.s. of $(2.16)$ and set $F=(F^{(\alpha)})_{\vert\alpha\vert=1}$. Define $ V_k=\sup_{1\leq\vert\beta\vert\leq k}\left\Vert{\partial^{\beta}_x V}\right\Vert_{L^\infty(\R^n)}$. Let $B$ be the ball of $E_{s,\tau}$ with radius $2V_k$ and centered at the origin. We shall prove that $F$ has a unique fixed point in $B$. We first verify that $B$ is stable under $F$. Suppose that, $\forall\,\vert\alpha\vert=1$, $\left\Vert u^{(\alpha)}(\cdot,t)\right\Vert_{L^\infty(\R^n)} \leq 2t V_k$. From $(2.16)$, we get, $\forall\,x\in\R^n$, $\forall\,\vert\alpha\vert=1$, $$ \vert F^{(\alpha)}(u)(x,s+t)\vert\leq (s+t)V_k+{nh^2\over 2}\int_0^t\sqrt {{s+\sigma\over (s+t)(t-\sigma)}}{C\over h} \left(2V_k(s+\sigma)\right)^2d\sigma $$ $$ \leq (s+t)V_k+4CnhTV_k^2(t+s)t^{1\over 2} $$ $$ \leq 2(t+s)V_k , $$ if $0\leq t\leq\tau$. This implies that $\Vert F(u)\Vert_{E_{s,\tau}}\leq 2V_k.$ Consequently, $F$ maps $B$ into itself. Next, we show that $F$ is a contraction in $B$. Take $u,v\in E_{s,\tau}$. We have $$ (F^{(\alpha)}(u)-F^{(\alpha)}(v))(t+s)={h^2\over 2}\sum_{l=1}^n\int_0^t {\partial G\over \partial x_j}(\sigma+s,t+s)((v_l-u_l)(u_l+v_l))(s+\sigma)d\sigma. $$ Assume that $u,v\in B$. Then, $$ \Vert (F^{(\alpha)}(u)-F^{(\alpha)}(v))(\cdot,t+s)\Vert_{L^\infty(\R^n)}\leq {C n hV_k\Vert u-v\Vert_{E_{s,\tau}}} \int_0^t{(s+\sigma)^2\over \sqrt{t-\sigma}}d\sigma $$ $$ \leq 2ChnTt^{1/2}V_k\Vert u-v\Vert_{E_{s,\tau}}(t+s) $$ $$ \leq A\Vert u-v\Vert_{E_{s,\tau}}(t+s) $$ with $A<1$, if $t\leq\tau$. Consequently, there is $A<1$ such that $$ \Vert F(u)-F(v)\Vert_{E_{s,\tau}}\leq A \Vert u-v\Vert_{E_{s,\tau}},\ \forall\,u,v\in B. $$ This implies the existence of $u\in B$ satisfying $F(u)=u$, that is to say, there are $u^{(\alpha)}(t)$, $\vert\alpha\vert=1$, solutions to $(2.16)$ with $\vert\alpha\vert=1$, $\forall\,t\in[s,s+\tau]$, satisfying $(2.18)$ when $\vert \alpha\vert=1$. Next, let $1\leq N\leq k$ and suppose that Lemma 2.4 holds with $k$ replaced by $N-1$. We shall prove that it is also true with $k$ replaced by $N$. Thus, $(2.16)$ with $\vert\alpha\vert=N$ is a linear system with unknown functions $u^{(\alpha)}$, $\vert\alpha\vert=N$.
Set $$ E'_{s,\tau}=\left\{v=(v^{(\alpha)})_{\vert\alpha\vert=N}/\ v^{(\alpha)}\in C^0(\R^n\times [s,s+\tau])\ {\rm and}\ \Vert v\Vert_{E'_{s,\tau}}<\infty\right\}, $$ where $$ \Vert v\Vert_{E'_{s,\tau}}=\sup_{(x,t)\in\R^n\times [s,s+\tau],\vert\alpha\vert=N}\,t^{-1}\vert v^{(\alpha)}(x,t)\vert. $$ For $v\in E'_{s,\tau}$, define $G^{(\alpha)}(v)$ as the r.h.s. of $(2.16)$ and set $G=(G^{(\alpha)})_{\vert\alpha\vert=N}$. Let $B'$ be the ball of $E'_{s,\tau}$ with radius $2M_NV_k$, centered at the origin, where $M_N$ is arbitrary if $N=2$ and $M_N$ is given in the statement of Lemma 2.4 if $N\geq 3$. Take $v\in B'$, i.e., $\Vert v^{(\alpha)}(\cdot,t)\Vert_{L^\infty(\R^n)}\leq 2M_NV_k t,\ \forall\,t\in[s,s+\tau]$. Knowing that $\Vert u^{(\beta)}(\cdot,t)\Vert_{L^\infty(\R^n)}\leq 2M_{\vert\beta\vert}V_k t$, $\vert\beta\vert\in\{1,\dots,N-1\}$, $\forall\,t\in[s,s+\tau]$, we see that ($\vert\delta\vert=N-1$), $$ \vert G^{(\alpha)}(v)(x,s+t)\vert\leq M_NV_k (s+t) +{4ChnV_k^2}\int_0^t{(s+\sigma)^2\over \sqrt{t-\sigma}} \times\cdots $$ $$ \dots\times\left( M_N+ \sum_{\beta+\gamma=\delta,\ \vert\beta\vert,\vert\gamma\vert\not =0} \pmatrix{\delta\cr \beta} M_{\vert\beta\vert+1}M_{\vert\gamma\vert+1} \right)d\sigma $$ $$ \leq M_NV_k(s+t)+16M_NCnht^{1/2}(s+t)TV_k^2 $$ $$ \leq 2M_NV_k(t+s), $$ if $0\leq t\leq\tau$. Consequently, $G$ maps $B'$ into itself. Similarly (since $G$ is affine), $G$ is a contraction in $B'$ (and also in $E'_{s,\tau}$). Therefore, $G(v)=v$ for some $v\in B'$. Equivalently, there are $u^{(\alpha)}$, $\vert\alpha\vert=N$, solutions to $(2.16)$ with $\vert\alpha\vert=N$ and $(2.18)$ with $k=N$, on $[s,s+\tau]$. The proof of Lemma 2.4 is complete. \bigskip \noindent {\it Proof of Theorem 2.1.} Fix $k\geq 2$ and $T>0$. Apply Lemma 2.4 with $s=0$, $g=0$ and $M_2=1$. This provides $u^{(\alpha)}$, $1\leq\vert\alpha\vert\leq k$, defined for $t\in[0,\tau]$. Then, define $\psi$ by $$ \psi_h(t)=\int_0^tG_h(\sigma,t)\left[V-{h^2\over 2} \sum_{\vert\alpha\vert=1}\vert u^{(\alpha)}\vert^2(\sigma)\right]d\sigma,\ \forall\,t\in[0,\tau]. \leqno(2.19) $$ Using the integrability properties of the kernels $G$ and $\nabla_x G$, one may check that $\psi$ is a classical solution to $(2.15)$, and $\psi$ satisfies $$\forall\,\alpha\in\N^n,\ 1\leq\vert\alpha\vert\leq k,\ {\partial^{\alpha}_x\psi}(\cdot,t)= u^{(\alpha)}(\cdot,t),\quad \forall\,t\in[0,\tau]. $$ In particular, $\Vert{\partial^{\alpha}_x\psi}(\cdot,t)\Vert_{L^\infty(\R^n)} \leq 2M_{\vert\alpha\vert}V_k t$. By differentiating the equation (2.15) with respect to $x_j$, we see that the function $u_j=t{\partial \psi\over\partial x_j}$ satisfies $$ {\partial u_j\over\partial t}+{x-y\over t}\cdot\nabla u_j-{h^2\over 2}\Delta u_j +\sum_{k=1}^n h^2{\partial \psi\over\partial x_k}{\partial u_j\over\partial x_k} =t{\partial V\over\partial x_j}.$$ Since $u_j$ and ${\partial u_j\over\partial x_k}\ (1\leq k\leq n)$ are bounded in $\R^n\times [0,\tau]$, Proposition 2.3 shows that $ \Vert u_j(\cdot,t)\Vert_\infty\leq \int_0^t s\Vert{\partial V\over \partial x_j} \Vert_\infty \, ds.$ This yields the estimate $$ \Vert{\partial\psi\over\partial x_j}(\cdot,t)\Vert_{L^\infty(\R^n)}\leq {t\over 2}\Vert {\partial V\over\partial x_j}\Vert_\infty . $$ We emphasize the dependence of $M_l$ on $M_2$ by writing $M_l=f_l(M_2)$. Next, set $s=\tau$, $g(\tau)=\psi(\tau)$, $M_2=2$. Since $f_l(2)\geq 2 f_l(1)$, $g(\tau)$ satisfies the hypothesis of Lemma 2.4, and we obtain $(u^{(\alpha)})$ defined on $[\tau,2\tau]$.
We define $\psi$ on $]\tau,2\tau]$ by $$ \psi_h(\tau+t)=G_h(\tau,\tau+t) \psi (\tau)+\int_0^tG_h(\tau+\sigma,\tau+t)\left[V-{h^2\over 2} \sum_{\vert\alpha\vert=1}\vert u^{(\alpha)}\vert^2(\tau+\sigma)\right]d\sigma,\ \forall\,t\in[0,\tau]. \leqno(2.20) $$ Moreover, $\Vert{\partial^{\alpha}_x\psi}(\cdot,t)\Vert_{L^\infty(\R^n)} \leq 2M_{\vert\alpha\vert}V_k t$, for all $t\in ]\tau,2\tau]$, for all $\vert\alpha\vert\in\{1,\dots,k\}$, where $M_1=1,M_2=2,M_l=f_l(2)$, $3\leq l\leq k$. Observe that $(2.20)$ is a smooth extension of $(2.19)$. To see this, apply Lemma 2.4 on $[\tau/2,3\tau/2]$ and use the uniqueness of the fixed point in the proof of Lemma 2.4. Consequently, $(2.19)$ and $(2.20)$ define a solution $\psi$ on $[0,2\tau]$ to $(2.15)$. This solution satisfies $$ \left\Vert{\partial^{\alpha}_x\psi}(\cdot,t)\right\Vert_{L^\infty(\R^n)} \leq 2 M_{\vert\alpha\vert}V_k t,\ 1\leq\vert\alpha\vert\leq k,\ \forall t\,\in[0,2\tau]. $$ Iterate this process to find a solution $\psi$ to $(2.1)$ on $[0,T]$ verifying, for some $M$ depending only on $T,k,n,h$, $$ \left\Vert{\partial\psi\over\partial x_j}(\cdot,t)\right\Vert_{L^\infty(\R^n)}\leq {t\over 2}\Vert {\partial V\over\partial x_j}\Vert_{L^\infty(\R^n)},$$ and $$ \left\Vert{\partial^{\alpha}_x\psi}(\cdot,t)\right\Vert_{L^\infty(\R^n)} \leq M V_k ,\ \forall t\,\in[0,T],\ 2\leq\vert\alpha\vert\leq k. $$ Since $T$ and $k$ are arbitrary, the proof of Theorem 2.1 is complete. \bigskip \noindent {\bf 3. A lemma about linear evolution equations.} \bigskip For each finite subset $\Lambda $ of $\Z^d$, we are studying a function $\varphi \in C^{\infty } ((\R ^p) ^{\Lambda } \times (\R ^p) ^{\Lambda }\times [0, + \infty [)$, depending of course on $\Lambda$, and also on two parameters $h>0$ and $\varepsilon >0$. We assume that $\varphi $ satisfies $${\partial \varphi \over \partial t}\ + \ {x-y \over t}\ .\ \ \nabla _x \varphi - {h^2 \over 2 } \Delta _x \varphi + h^2 (\nabla _x a, \nabla _x \varphi ) \ =\ F(x, y , t, h , \varepsilon ) \ \leqno (3.1)$$ $$\varphi (x, y, 0, h, \varepsilon )= 0\leqno (3.2)$$ $$ \varphi (x, y, t, h, \varepsilon ) = \varphi (y, x, t, h, \varepsilon ),\leqno (3.3)$$ where $a$ and $F$ are also functions in $C^{\infty } ((\R ^p) ^{\Lambda } \times (\R ^p) ^{\Lambda }\times [0, + \infty [)$, which depend on $\Lambda$ and may depend on $h$ and $\varepsilon$. \bigskip We assume that we already know that, for each fixed $\Lambda $, $t$, $h$ and $\varepsilon$, all the derivatives of order $\geq 1$ in $x$ and $y$ of $\varphi $ are bounded in $(\R ^p) ^{\Lambda } \times (\R ^p) ^{\Lambda }$. We want to obtain more precise bounds for the derivatives, assuming similar bounds for the derivatives of $a$ and $F$. \bigskip We assume that there exists a constant $\sigma _0>0$ and that, for each $m\geq 1$, there exist a constant $K_m>0$ and a function $\gamma \rightarrow \varepsilon _m (\gamma)$ from $]0, 1[$ into itself, such that, for each sequence $\lambda ^{(1)}, ... , \lambda ^{(m)}$ in $\Lambda $, we have, if $ht \leq \sigma _0$ and $0<\varepsilon \leq \varepsilon _m (\gamma )$, $$ \Vert \nabla _{\lambda ^{(1)}}... \nabla _{\lambda ^{(m)}}a \Vert \leq K_m t \varepsilon ^{\gamma {\rm diam } (\{ \lambda ^{(1)} , ... , \lambda ^{(m)} \})}\leqno (3.4)$$ \bigskip We assume that we can associate to each finite subset $E$ of $\Z ^d$ a function $\rho (E)\geq 0$ such that, in the right hand side of (3.1), the function $F$ satisfies the following estimates.
For each $m\geq 1$, and for each $\gamma \in ]0, 1[$, there exist $C_m(F)(t, h)>0$ and $\varepsilon _m (\gamma) \in ]0, 1[$ such that, for each sequence $\lambda ^{(1)}, ... , \lambda ^{(m)}$ in $\Lambda $, we have, if $ht \leq \sigma _0$ and $0<\varepsilon \leq \varepsilon _m (\gamma )$, $$ \Vert \nabla _{\lambda ^{(1)}}... \nabla _{\lambda ^{(m)}}F \Vert \leq C_m(F)(t, h) \varepsilon ^{\gamma \rho (\{ \lambda ^{(1)}, ... , \lambda ^{(m)} \})},\leqno (3.5)$$ where $C_m(F)(t, h)$ may depend on other parameters, according to the applications. \bigskip We also assume that the function $\rho (E) $ has the following property. For each $\gamma _1 $ and $\gamma _2$ such that $0< \gamma _1 < \gamma _2 <1$, there exists $\varepsilon _0(\gamma _1 , \gamma _2)\in ]0, 1 [$ such that, for all finite subsets $A$ and $B$ of $\Z ^d$ ($A \not= \emptyset $), we have $$ \sum _{\mu \in \Z^d} \varepsilon ^{ \gamma _2 {\rm diam }(A \cup \{ \mu \} ) + \gamma _1 \rho (B \cup \{ \mu \} )} \leq 4^d \varepsilon ^{\gamma _1 \rho (A \cup B)} \hskip 1cm {\rm if} \ \ \ \ \varepsilon \leq \varepsilon _0(\gamma _1 , \gamma _2). \leqno (3.6)$$ {\it Examples.} The function $\rho (E)= {\rm diam }(E)$ satisfies (3.6), and so does the function $\rho (E) = \sup _{\lambda \in E} \delta ( \lambda , S)$, where $S$ is a given subset of $\Z^d$. \bigskip \noindent {\it Notations.} If $I$ is a subset of $\N$, $(X^{(i)})_{(i\in I)}$ a map from $I$ to $\R^{2p}$, and $(\lambda ^{(i)})_{(i\in I)}$ a map from $I$ to $\Z^d$, we set $$X^I . \nabla _{\lambda _I} = \prod _{i\in I} (X^{(i)} . \nabla _{\lambda ^{(i)}}).$$ If $I= \emptyset $, $(X^I .\nabla _{x_{\lambda _I}})$ is the identity. We define similarly $X^I . \nabla _{x_{\lambda _I}}$ if $X^{(i)}\in \R ^p$. We also denote by $\lambda _I$ the set of the elements $\lambda ^{(i)}$ $(i\in I)$. \bigskip \noindent {\bf Proposition 3.1.} {\it With these notations, for each integer $m \geq 1$, there exist another constant $K_m>0$ and another function $\varepsilon _m (\gamma )\in ]0, 1[$ such that, for each finite set $\Lambda $ of $\Z^d$, for each $t>0$, $h>0$, $\varepsilon >0$ and $\gamma \in ]0, 1[$ satisfying $$ ht \leq \sigma _0, \hskip 1cm \varepsilon \leq \varepsilon _m (\gamma ),\leqno (3.7)$$ for each sequence $(\lambda ^{(1)}, ... , \lambda ^{(m)})$ in $\Lambda$, a function $\varphi$ satisfying (3.1), (3.2) and (3.3) corresponding to these parameters satisfies $$ \Vert \nabla _{\lambda ^{(1)}}... \nabla _{\lambda ^{(m)}} \varphi (., ., t) \Vert \leq K_m \ \varepsilon ^{\gamma \rho (\{ \lambda ^{(1)}, ...
, \lambda ^{(m)} \} )}\sum _{j=1}^m \int _0^t C_j(F)(s, h) ds \leqno (3.8)$$ If the hypothesis (3.5) is also satisfied for $m=0$, we have also, under similar conditions, $$\Vert \varphi (., ., t)\Vert \leq \varepsilon ^{ \gamma \rho (\emptyset )} \int _0^t C_0(F)(s, h) ds \leqno (3.8')$$} \bigskip Let us set, for all integers $m$ and $m'$, $$N_{m, m'}(t, \varepsilon, \gamma ) = \sup _{ (\lambda ^{(1)}, \ldots , \lambda ^{(m)})\in \Lambda ^m \atop (\mu ^{(1)} , \ldots , \mu ^{(m')} )\in \Lambda ^{m'}} { \Vert \nabla _{x_{\lambda ^{(1)}}} \ldots \nabla _{x_{\lambda ^{(m)}}}\ \nabla _{y_{\mu ^{(1)}}}\ldots \nabla _{y_{\mu ^{(m')}}}\varphi (., ., t) \Vert \over \varepsilon ^{\gamma \rho (\{ \lambda ^{(1)}, \ldots , \lambda ^{(m)}, \mu ^{(1)} , \ldots , \mu ^{(m')} \} ) } }\leqno (3.9) $$ For the proof of Proposition 3.1, we have to prove that, if $m+ m' \geq 1$, for some constant $\varepsilon _{m, m'}(\gamma)$, $$N_{m, m'}(t, \varepsilon, \gamma ) \leq K_m \sum _{j=1}^m \int _0^t C_j(F)(s, h) ds \hskip 1cm {\rm if}\ \ \ ht \leq \sigma _0 \ \ \ \ {\rm and} \ \ \ 0<\varepsilon \leq \varepsilon _{m, m'} (\gamma )\leqno (P_{m, m'})$$ \bigskip \noindent {\it First step: $m\geq 1$ and $m'=0$.} We assume that $m'=0$ and either that $m=1$, or that $m\geq 2$ and that $(P_{j, 0})$ is proved for $1\leq j\leq m-1$, and we shall prove $(P_{m, 0})$. Let $(\lambda ^{(1)}, ... \lambda ^{(m)})$ be a sequence of points in $\Lambda $ and $(X^{(1)} , ... , X^{(m)})$ a sequence of vectors of $\R^p$. Let $u$ be the function defined by $$u(x, y, t)= (X^{\{ 1, ... ,m\} } . \nabla _{x_{\lambda _{\{ 1 , ... ,m\} }}}) \varphi(x, y, t, h, \varepsilon). \leqno (3.10)$$ This function satisfies $${\partial (t^m u) \over \partial t}\ + \ {x-y \over t}\ .\ \ \nabla _x (t^mu) - {h^2 \over 2 } \Delta _x (t^m u) + h^2 (\nabla _x a(x), \nabla _x (t^m u) ) \ =\ \Phi _m \leqno (3.11)$$ where the right hand side $\Phi _m$ can be written $\Phi _m = t^m (F_m - h^2G_m) $ with $$F_m = (X^{\{ 1, ... ,m\} } . \nabla _{x_{\lambda _{\{ 1 , ... ,m\} }}}) F,\leqno (3.12)$$ and, denoting by ${\cal P}_m$ the set of couples $(I, J)$ such that $(I, J)$ is a partition of $\{ 1, ..., m \}$, and $I \not= \emptyset$, $$G_m = \sum _{(I, J) \in {\cal P}_m}\ \sum _{\mu \in \Lambda } \Big ( \nabla _{x _{\mu } } (X^{I} .\nabla _{x _{\lambda _I}}) a\ , \nabla _{x _{\mu } }(X^{J} .\nabla _{x _{\lambda _J}})\varphi \Big ).\leqno (3.13)$$ By our hypotheses (3.5) on $F$, we can write $$\Vert F_m \Vert \leq \prod _{j=1}^m |X^{(j)}| \ C_m(F)(t, h) \varepsilon ^{\gamma \rho (\{ \lambda ^{(1)}, ... , \lambda ^{(m)} \})}$$ if $ht \leq \sigma _0$ and $0<\varepsilon \leq \varepsilon _m(\gamma )$. Under similar conditions, we have also, by the hypotheses (3.4) on $a$ and by the definition of $N_{m, 0}(t, \varepsilon, \gamma)$, $$\Vert G_m\Vert \leq K_m\ t\ \prod _{j=1}^m |X^{(j)}| \sum _{(I, J) \in {\cal P}_m}\ N_{|J|+1, 0}(t, \varepsilon, \gamma) \sum _{\mu \in \Lambda } \varepsilon ^{ { \gamma +1 \over 2} {\rm diam }( \lambda _I \cup \{ \mu \} ) + \gamma \rho ( \lambda _J \cup \{ \mu \} )}$$ By our hypothesis (3.6) on the function $\rho $, since $I \not= \emptyset$ if $(I, J) \in {\cal P}_m$, it follows that, under similar conditions, $$\Vert G_m\Vert \leq K_m \ t \ \prod _{j=1}^m |X^{(j)}| \sum _{(I, J) \in {\cal P}_m}\ N_{|J|+1, 0}(t, \varepsilon, \gamma) \varepsilon ^{ \gamma \rho ( \{ \lambda ^{(1)}, ... , \lambda ^{(m)} \} )}.$$ For all $(I, J) \in {\cal P}_m$, we have $|J|+1 \leq m$. If $|J|+1\leq m-1$, we can apply our induction hypothesis $(P_{|J|+1 , 0})$.
We obtain, by a combination of these inequalities, the following bound for the RHS of (3.11), under conditions of type (3.7) $$\Vert \Phi _m \Vert \leq K_m\ t^m\ \ \prod _{j=1}^m |X^{(j)}|\ \varepsilon ^{ \gamma \rho ( \{ \lambda ^{(1)}, ... , \lambda ^{(m)} \} )} \ \left [ \sum _{j=1}^{m-1} t h^2 \int _0^t C_j (F)(s, h) ds + C_m(F) + th^2 N_m(t, \varepsilon, \gamma )\right ].$$(By convention, the sum in the RHS is replaced by $0$ if $m=1$). By Proposition 2.3, it follows that $$\Vert t^m u(., ., t)\Vert \leq K_m \ \prod _{j=1}^m |X^{(j)}|\ \varepsilon ^{ \gamma \rho ( \{ \lambda ^{(1)}, ... , \lambda ^{(m)} \} )} \ \int _0^t s^m \ \Big [ \sum _{j=1}^{m-1} s h^2 \int _0^s C_j (F)(s', h) ds' + ... $$ $$... + C_m(F)(s, h) + sh^2 N_m(s, \varepsilon, \gamma )\Big ]ds.$$ In other words $$ N_m(t, \varepsilon, \gamma )\leq K_m \ \int _0^t \ \left [ \sum _{j=0}^{m-1} h^2 {t^2 -s^2 \over 2 } C_j (F)(s, h) + C_m(F)(s, h) + sh^2 N_m(s, \varepsilon, \gamma )\right ]ds.$$ The property $(P_{m, 0})$ follows by Gronwall's Lemma, using the condition $ht \leq \sigma _0$ (the constants $K_m$ depends on $\sigma _0$, which is fixed). \medskip \noindent {\it Second step : $m=0$ and $m'\geq 1$.} In this case, $(P_{0, m'})$ follows from the symmetry (3.3). \medskip \noindent {\it Third step: $m\geq 1$ and $m'\geq 1$. } Given $m\geq 1$ and $m'\geq 1$, we assume that $P_{j, k}$ has been proved for all $j$ and $k$ such that $1 \leq j+k \leq m+m'$ and $j \leq m-1$ or $k\leq m'-1$, and we shall prove $P_{m, m'}$. For that, we introduce, for each points $\lambda ^{(1)}$, $\ldots $ , $\lambda ^{(m)}$, $\mu ^{(1)}$ , $\ldots$ , $\mu ^{(m')}$ in $\Lambda $, for each vectors $X^{(1)}$, $\ldots$, $X^{(m)}$, $Y^{(1)}$, $\ldots$, $Y^{(m')}$ of $\R^p$, the following function $$u (x,y, t, \varepsilon) = (X^{\{ 1, ... ,m\} } . \nabla _{x_{\lambda _{\{ 1 , ... ,m\} }}}) \ (Y^{\{ 1, ... ,m'\} } . \nabla _{y_{\mu _{\{ 1 , ... ,m'\} }}})\varphi (x, y, t, h, \varepsilon) \leqno (3.14)$$ This function satisfies $$ {\partial (t^m u ) \over \partial t}\ +\ {x-y \over t} \ .\ \nabla _x (t^m u ) \ -\ {h^2\over 2} \Delta _x (t^m u ) + h^2(\nabla _x a\ .\ \nabla _x (t^m\ u) ) \ = \Phi _{m, m'}\leqno (3.15)$$ where the right hand side $\Phi _{m, m'}$ can be written on the form $\Phi _{m, m'} = \ t^m F_{m, m'} + t^{m-1} \Psi _{m, m'} -{ h^2 \over 2} t^m G_{m, m'}$ with the following notations: $$F_{m, m'} =(X^{\{ 1, ... ,m\} } . \nabla _{x_{\lambda _{\{ 1 , ... ,m\} }}}) \ (Y^{\{ 1, ... ,m'\} } . \nabla _{y_{\mu _{\{ 1 , ... ,m'\} }}})F\leqno (3.16) $$ $$\Psi _{m, m'}= \sum _{k=1}^{m'} (X^{\{ 1, ... ,m\} } . \nabla _{x_{\lambda _{\{ 1 , ... ,m\} }}}) \ (Y^{(1)}. \nabla _{y_{\mu ^{(1)}}}) \ldots (Y^{(k)}. \nabla _{x_{\mu ^{(k)}}}) \ldots (Y^{(m')}. \nabla _{y_{\mu ^{(m')}}}) \varphi (x, y, t) \leqno (3.17)$$ and, denoting now by ${\cal P}_{m, m'}$ the set of $(I, I', J, J')$ such that $(I, J)$ is a partition of $\{ 1, ... , m \}$, $(I', J')$ a partition of $\{ 1, ... , m' \}$ and $I \cup I' \not= \emptyset$: $$ G_{m, m'} = \sum _{(I, I', J, J')\in {\cal P}_{m, m'} }\ \ \sum _{\nu \in \Lambda } \Big ( \nabla_{x_{\nu}} (X^{I} .\nabla _{x _{\lambda _I}}) (Y^{I'} .\nabla _{y_{\mu _{I'}}})a , \nabla _{x_{\nu}} (X^{J} .\nabla _{x _{\lambda _J}}) (Y^{J'} .\nabla _{y _{\mu _{J'}}}) \varphi \Big ). \leqno (3.18)$$ By the hypothesis (3.5) on $F$, we can write $$\Vert F_{m, m'} \Vert \leq C_{m+m'}(F) \ \prod _{i\leq m} |X^{(i)}| \ \prod _{j\leq m'} |Y^{(j)}| \varepsilon ^{\gamma \rho (\{ \lambda ^{(1)}, ... , \lambda ^{(m)}, \mu ^{(1)}, ... 
, \mu ^{(m')} \} )}$$ if $ht \leq \sigma _0$ and $\varepsilon \leq \varepsilon _{m, m'}(\gamma )$. By definition, we can write $$\Vert \Psi _{m, m'} \Vert \leq m'\ \prod _{i\leq m} |X^{(i)}| \ \prod _{j\leq m'} |Y^{(j)}|\ N_{m+1, m'-1}(t, \varepsilon, \gamma ) \varepsilon ^{ \gamma \rho (\{ \lambda ^{(1)}, ... , \lambda ^{(m)}, \mu ^{(1)}, ... , \mu ^{(m')} \} )}$$ By the hypotheses (3.4) on $a$, we can write, if $ ht \leq \sigma _0 $ and $\varepsilon \leq \varepsilon _{m+m'} ({1 + \gamma \over 2})$, $$ \Vert \nabla_{x_{\nu}} (X^{I} .\nabla _{x _{\lambda _I}}) (Y^{I'} .\nabla _{y_{\mu _{I'}}})a \Vert \leq K_{m+m'} t\ \prod _{i\in I} |X^{(i)}|\ \prod _{i\in I'}|Y^{(i)}| \varepsilon ^{ { \gamma +1 \over 2} {\rm diam } ( \lambda _I \cup \mu _{I'} \cup \{ \nu \} )}$$ By definition of $N_{m, m'} (t, \varepsilon, \gamma)$, we can write $$\Vert \nabla _{x_{\nu}} (X^{J} .\nabla _{x _{\lambda _J}}) (Y^{J'} .\nabla _{y _{\mu _{J'}}}) \varphi \Vert \leq \prod _{i\in J} |X^{(i)}\vert \ \ \prod _{i\in J'}|Y^{(i)}|\ N_{ |J| +1, |J'|}(t, \varepsilon, \gamma) \varepsilon ^{ \gamma \rho ( \lambda _J \cup \mu _{J'} \cup \{ \nu \})}$$ By the hypothesis (3.6) on the function $\rho $, it follows, if moreover $\varepsilon \leq \varepsilon _0(\gamma , {\gamma +1\over 2})$, that $$\Vert G_{m, m'} \Vert \leq 4^d K_{m+m'} t \ \prod _{i=1}^m |X^{(i)}|\ \prod _{i=1}^{m'} |Y^{(i)}| \sum _{(I, I', J, J')\in {\cal P}_{m, m'} }\ \ N_{ |J| +1, |J'|}(t, \varepsilon, \gamma) \varepsilon ^{ \gamma \rho (\{ \lambda ^{(1)}, ... , \lambda ^{(m)}, \mu ^{(1)}, ... , \mu ^{(m')} \} )}$$ For each $(I, I', J, J')$ in ${\cal P}_{m, m'}$, we have $|J| + 1 + |J'| \leq m+m'$ and either $|J| \leq m-1$ or $|J'| \leq m'-1$. Therefore, we can apply the induction hypothesis to all the terms $H_{I, I', J, J'}$, except when $|J|+1=m$ and $|J'| = m'$. Thus we obtain, for the RHS of (3.15), with some constants $K_{m, m'}$ (depending only on $m$, $m'$ and $d$) and $\varepsilon _{m, m'} (\gamma)$: $$\Vert \Phi _{m, m'}\Vert \leq K_{m, m'} t^m \prod _{i \leq m} |X^{(i)}|\ \prod _{j\leq m'}|Y^{(j)}|\ \varepsilon ^{ \gamma \rho (\{ \lambda ^{(1)}, ... , \lambda ^{(m)}, \mu ^{(1)}, ... , \mu ^{(m')} \} )}\ ... $$$$... \ \left [ \sum _{j=1}^{m+m'-1}h^2 t \int _0^t C_j(F)(s, h) ds + C_{m+m'}(F)(t, h) + th^2 N_{m, m'} (t, \varepsilon , \gamma) \right ] $$ if $ht\leq \sigma _0$ and $\varepsilon \leq \varepsilon _{m, m'}(\gamma)$. By the maximum principle (Proposition 2.3), the function $u$ defined in (3.14) satisfies, under these conditions, $$ \Vert t^m u(., . , t)\Vert \leq \ K_{m, m'} \prod _{i \leq m} |X^{(i)}|\ \prod _{j\leq m'}|Y^{(j)}|\ \varepsilon ^{ \gamma \rho (\{ \lambda ^{(1)}, ... , \lambda ^{(m)}, \mu ^{(1)}, ... , \mu ^{(m')} \} )} ... $$ $$... \ \int _0^t s^m\ \left [ \sum _{j=1}^{m+m'-1}sh^2 \int _0^s C_j(F)(s', h) ds' + C_{m+m'}(F) (s, h) + sh^2 N_{m, m'} (s, \varepsilon , \gamma) \right ]\ ds$$ In other words, under the same conditions, $$ N_{m, m'} (t, \varepsilon , \gamma) \leq \ K_{m, m'} \int _0^t \ \left [ \sum _{j=1}^{m+m'-1}h^2 {t^2 - s^2 \over 2} C_j(F)(s, h) + C_{m+m'}(F) (s, h) + sh^2 N_{m, m'} (s, \varepsilon , \gamma) \right ]\ ds$$ The property $(P_{m, m'})$ follows by Gronwall's lemma. The Proposition is proved for $m\geq 1$. \medskip \noindent {\it Case $m=0$.} Now, we assume that the hypothesis on the RHS $F$ of (3.1) is also satisfied for $m=0$, i.e. $\Vert F\Vert \leq C_0(F) \varepsilon ^{\gamma \rho (\emptyset )}$ under the previous conditions.
Then it follows directly from Proposition 2.3 that, under the same conditions, we have (3.8'). The Proposition is now proved. \bigskip \noindent {\bf 4. Proof of Theorem 1.1}. \bigskip With the results of Section 2, we know that, for each finite subset $\Lambda $ of $\Z^d$, and for $h$ and $\varepsilon $ small enough, the solution $\psi _{\Lambda}$ of the Cauchy problem (1.19), (1.20) exists globally. Then the function $U_{\Lambda }$ defined in (1.3) is the integral kernel of the operator $e^{-tH_{\Lambda}(\varepsilon )}$. We have, for the first order derivatives of $\psi _{\Lambda }$, the estimate given by Theorem 2.1. By the hypotheses on the potential $V_{\Lambda , \varepsilon}$, we have, for some constant $C>0$ depending only on the constants in the hypotheses, $\Vert \nabla _{x_{\lambda}}V_{\Lambda , \varepsilon}\Vert \leq C$ if $0<\varepsilon \leq 2^{-d}$. Since $H_{\Lambda }$ is self-adjoint, it follows that, for each finite set $\Lambda$, for each $\lambda \in \Lambda$, we have $$\Vert \nabla _{\lambda} \psi _{\Lambda }(., ., t, h, \varepsilon )\Vert \leq Ct \hskip 1cm {\rm if }\ \ 0<\varepsilon \leq 2^{-d}.\leqno (4.1)$$ Therefore, the statement of Theorem 1.1 is already proved for $m=1$. The next step is the proof for $m=2$, before the induction. We shall use the notation $\nabla _{x_{\lambda }}$ of the introduction. \bigskip \noindent {\bf Lemma 4.1.} {\it There exist $\sigma _0>0$ and $C>0$ such that, for each $\gamma \in ]0, 1[$, for each $\lambda $ and $\mu $ in $\Lambda$, we have $$\Vert \nabla _{x_{\lambda }}\nabla _{x_{\mu }}\psi _{\Lambda }(., ., t, h, \varepsilon ) \Vert \leq Ct \varepsilon ^{\gamma \vert \lambda - \mu \vert }\hskip 1cm {\rm if } \ \ \ ht \leq \sigma _0\ \ \ \ \ {\rm and } \ \ \ \ \varepsilon ^{1 - \gamma } \leq {1 \over 2^d}\leqno (4.2)$$} \bigskip \noindent {\it Proof.} For each $t$, $h$, $\varepsilon $ and $\gamma \in ]0, 1[$, let us denote by $S(t, h, \varepsilon, \gamma)$ the best constant such that, for each vector $X\in \R^p$, for each $\lambda \in \Lambda $ and for each sequence $(Y_{\mu })_{\mu \in \Lambda }$ of vectors in $\R^p$, we have $$ \sum _{\mu \in \Lambda } { \Vert (X. \nabla _{x_{\lambda }}) (Y_{\mu }.\nabla_{x_{\mu}}) \psi _{\Lambda }(., ., t, h, \varepsilon ) \Vert \over \varepsilon ^{ \gamma |\lambda - \mu |}} \ \leq \ \ S(t, h, \varepsilon, \gamma) \ |X| \ \sup _{\mu \in \Lambda } |Y_{\mu }|$$ This function is well defined by the results of section 2. For each $\lambda \in \Lambda $, for each sequence $(Y_{\mu })_{\mu \in \Lambda }$, for each $\varepsilon $ and $\gamma \in ]0, 1[$, the function $$ \varphi = \sum _{\mu \in \Lambda } {(X. \nabla _{x_{\lambda }}) (Y_{\mu }. \nabla _{x_{\mu }}) \psi _{\Lambda }(., ., t, h, \varepsilon ) \over \varepsilon ^{ \gamma |\lambda - \mu |}}$$ satisfies the equation $$ {\partial (t^2 \varphi ) \over \partial t}\ +\ {x-y \over t} \ .\ \nabla _x (t^2 \varphi ) \ -\ {h^2\over 2} \Delta _x (t^2 \varphi ) + h^2(\nabla _x \psi _{\Lambda })\ .\ (\nabla _x t^2\varphi) \ = \ t^2 F - h^2 t^2G $$ where $$F = \sum _{\mu \in \Lambda } {(X. \nabla _{x_{\lambda }}) (Y_{\mu }. \nabla _{x_{\mu }}) V _{\Lambda }(., \varepsilon ) \over \varepsilon ^{ \gamma |\lambda - \mu |}}$$ $$G = \sum _{(\mu , \nu ) \in \Lambda ^2 } {(\nabla _{x_{\nu}} (X. \nabla _{x_{\lambda }})\psi _{\Lambda }(., \varepsilon ) \ , \ \nabla _{x_{\nu}} (Y_{\mu }.
\nabla _{x_{\mu }}) \psi _{\Lambda }(., \varepsilon ))\over \varepsilon ^{ \gamma |\lambda - \mu |}}$$ We see, by our hypotheses on $V$, that $$|F| \leq C \ |X| \ \sup _{\mu \in \Lambda } |Y_{\mu }|$$ if $\varepsilon ^{1 - \gamma } \leq {1 \over 2^d}$. By the definition of $S$, we have $$ |G| \leq S(t, h, \varepsilon, \gamma)^2 |X| \ \sup _{\mu \in \Lambda } |Y_{\mu }|$$ Therefore, by the maximum principle (Proposition 2.3), $$\Vert t^2 \varphi(.,., t) \Vert \leq \ |X| \ \sup _{\mu \in \Lambda } |Y_{\mu }| \ \int _0^t [ Cs^2 + h^2s^2 S(s, h, \varepsilon, \gamma)^2 ] \ ds$$ In other words $$t^2 S(t, h,\varepsilon, \gamma) \leq \int _0^t s^2 [ C+ h^2 S(s, h, \varepsilon, \gamma)^2 ] \ ds \leqno (4.3)$$ We shall prove, by a variant of Gronwall's lemma, that $$ S(t, h, \varepsilon, \gamma) \leq 2Ct \hskip 1cm {\rm if }\ \ \varepsilon ^{1 - \gamma } \leq {1 \over 2^d}\ \ \ \ {\rm and } \ \ 4Ch^2 t^2 \leq 1\leqno (4.4)$$ If this were false, let us denote by $t_0$ the infimum of the set of $t>0$ such that $S(t, h, \varepsilon, \gamma) \geq 2Ct$. The only point which needs details is that $t_0>0$. By the results of section 2, there exists $K_{\Lambda }(\varepsilon)$, perhaps depending on $\Lambda$ and $\varepsilon$, such that $S(t, h, \varepsilon, \gamma) \leq K_{\Lambda }(\varepsilon)$; therefore, by (4.3), $ S(t, h,\varepsilon, \gamma) \leq t (C + h^2 K_{\Lambda }(\varepsilon)^2)$ and, applying (4.3) again, $ S(t, h,\varepsilon, \gamma) \leq Ct + h^2 {t^3 \over 3} (C + h^2 K_{\Lambda }(\varepsilon)^2)^2$, which proves that the infimum $t_0$ cannot be $0$. Since the inequality on the left of (4.4) holds for $0<t<t_0$, we obtain from (4.3), if $4Ch^2t_0^2 \leq 1$, that $$t_0^2\, S(t_0, h, \varepsilon, \gamma) \ \leq \ \int _0^{t_0} s^2 \big [ C + 4C^2h^2s^2 \big ] \ ds \ \leq \ {C t_0^3 \over 3} + {C t_0^3 \over 5} \ < \ 2C t_0^3,$$ which contradicts the definition of $t_0$ (since, $S$ being continuous in $t$, we have $S(t_0, h, \varepsilon, \gamma) \geq 2Ct_0$). This proves (4.4), and Lemma 4.1 follows. \bigskip For the derivatives of higher order, we now define $N_{m, m'}(t, \varepsilon, \gamma)$ by (3.9), with $\varphi = \psi _{\Lambda }$ and $\rho (E) = {\rm diam }(E)$, and we shall prove by induction the following property $(P_{m, m'})$, for all integers $m$ and $m'$ with $m+m'\geq 1$: {\it there exists a constant $C_{m, m'}>0$ and, for each $\gamma \in ]0, 1[$, there exists $\varepsilon _{m, m'}(\gamma)\in ]0,1[$ such that $$ N_{m, m'} (t, \varepsilon, \gamma ) \leq C_{m, m' }t \hskip 1cm {\rm if} \ \ ht \leq \sigma _0 \ \ \ \ \ {\rm and}\ \ \varepsilon \leq \varepsilon _{m, m'}(\gamma) $$ } \bigskip \noindent {\it First step : $m\geq 3$ and $m' = 0$.} Let $m\geq 3$ and $\gamma \in ]0, 1[$. We assume that $(P_{k, 0})$ has been proved for all $k\leq m-1$. We define, for all points $\lambda ^{(1)}$, ... $\lambda ^{(m)}$ in $\Lambda $ and all vectors $X^{(1)}$, ... $X^{(m)}$ in $\R^p$, with the notations introduced before Proposition 3.1, a function $u (x, t, \varepsilon)$ as in (3.10) (with $\varphi = \psi _{\Lambda , \varepsilon }$). We remark that this function satisfies (for a fixed $y$) the equation (3.11), with $a={1 \over 2} \varphi ={1 \over 2} \psi _{\Lambda , \varepsilon }$, and $\Phi _m = t^m ( F_m - h^2 G_m)$ where $F_m$ is defined as in (3.12) with $F= V_{\Lambda,\varepsilon }$, and $G_m$ as in (3.13), with the same value of $a$. \medskip We remark that our hypotheses on the potential $V_{\Lambda , \varepsilon }$ imply that, for each integer $m\geq 1$, there exists $C_m>0$, independent of all the parameters, such that, for all points $\lambda ^{(1)}$, ... $\lambda ^{(m)}$ in $\Lambda $, $$ \Vert \nabla_{x_{\lambda ^{( 1)}}}\ldots \nabla_{x_{\lambda ^{( m)}}} V_{\Lambda , \varepsilon }\Vert \leq C_m \varepsilon ^{{\rm diam} (\{ \lambda ^{(1)}, ...\lambda ^{(m)}\} )}\hskip 1cm {\rm if } \ \ \ \ 0< \varepsilon \leq 2^{-d}\leqno (4.5)$$ (The last condition is for the case where all the $\lambda ^{(i)}$ are the same).
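\bigskip \noindent {\it Remark.} To make the origin of (4.5) explicit, consider the simplest non trivial case $m=2$, with two distinct points $\lambda \not= \mu $ of $\Lambda $; the only constants involved are those of the hypotheses on (1.2). In the sum (1.2), the terms containing $A$, and the pair terms not involving both sites $\lambda $ and $\mu $, are killed by the mixed derivative, so that $$\nabla _{x_{\lambda }}\nabla _{x_{\mu }} V_{\Lambda , \varepsilon }(x) \ =\ \varepsilon ^{\vert \lambda - \mu \vert }\ \nabla _{x_{\lambda }}\nabla _{x_{\mu }} \Big ( B_{\lambda - \mu }(x_{\lambda }, x_{\mu }) + B_{\mu - \lambda }(x_{\mu }, x_{\lambda }) \Big ),$$ and therefore $\Vert \nabla _{x_{\lambda }}\nabla _{x_{\mu }} V_{\Lambda , \varepsilon }\Vert \leq C_2\, \varepsilon ^{{\rm diam }(\{ \lambda , \mu \} )}$, where $C_2$ is twice the uniform bound on the second derivatives of the $B_{\nu }$. When all the points $\lambda ^{(i)}$ coincide, the diameter is $0$, and one only needs the sum $\sum _{\mu } \varepsilon ^{\vert \lambda - \mu \vert }$ coming from the pair terms to be bounded, which is the role of the condition $0< \varepsilon \leq 2^{-d}$.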
Therefore, there exists $K_m>0$ such that the function $F_m$ defined like in (3.12) with $F= V_{\Lambda,\varepsilon }$ satisfies $$|F_m(x, \varepsilon)|\leq K_m |X^{(1)}|....|X^{(m)}| \varepsilon ^{ {\rm diam} (\{ \lambda ^{(1)}, ...\lambda ^{(m)}\})}\hskip 1cm {\rm if } \ \ \ 0< \varepsilon \leq \ {1 \over 2}. $$ \medskip Using the fact that $a= {1 \over 2} \varphi $, we can write $${1 \over 2 } (\nabla _x \varphi , \nabla _x u)\ +\ G_m = (\nabla _x \varphi , \nabla _x u)\ +\ G'_m\leqno (4.6)$$ where, denoting by ${\cal P}'_m$ the set of couples $(I, J)$ of subsets of $\{ 1, ... ,m \}$ such that $(I, J)$ is a partition of $ \{ 1, ... ,m \}$ and $I\not= \emptyset$, $J\not= \emptyset$, we define $G'_m$ like $G_m$ in (3.13), but with ${\cal P}_m$ replaced by ${\cal P}'_m$ $$G'_m = {1 \over 2} \sum _{(I, J) \in {\cal P}'_m}\ \sum _{\mu \in \Lambda } \Big ( \nabla _{x _{\mu } } (X^{I} .\nabla _{x _{\lambda _I}}) \psi _{ \Lambda , \varepsilon } \ , \nabla _{x _{\mu } }(X^{J} .\nabla _{x _{\lambda _J}})\psi _{ \Lambda , \varepsilon } \Big ).$$ By (4.6), the equation, similar to (3.11), satisfied by $u$ defined like (3.10) can be rewritten $${\partial (t^m u) \over \partial t}\ + \ {x-y \over t}\ .\ \ \nabla _x (t^mu) - {h^2 \over 2 } \Delta _x (t^m u) + 2h^2 (\nabla _x a(x), \nabla _x (t^m u) ) \ =\ \Phi '_m \leqno (4.7)$$ where $ \Phi '_m = t^m ( F_m - h^2 G'_m)$. By the definition (3.9) of $N_{m, 0}$, we have, using also the notations before Proposition 3.1, $$\Vert G'_m \Vert \leq \prod _{j=1}^m |X^{(j)}|\ \Bigg [ \sum _{ (I, J) \in {\cal P}'_m } \ N_{|I|+1, 0}(t, \varepsilon , \gamma ) N_{|J|+1 , 0 }(t, \varepsilon , {1 + \gamma \over 2}) \sum _{\mu \in \Z ^d} \varepsilon ^{ \gamma {\rm diam} (\lambda _I \cup \{ \mu \} ) + {1 + \gamma \over 2} {\rm diam} (\lambda _J \cup \{ \mu \} )} \Bigg ] $$ We remark that, for each $(I,J)\in {\cal P}'_{m}$, ($m\geq 3$), we have $|I|+1\leq m$, $|J|+1\leq m$, and that we cannot have simultaneously $|I|+1= m$ and $|J|+1= m$. For each term such that $\vert I \vert +1 \leq m-1$ or $\vert J \vert + 1 \leq m-1$, we can apply the induction hypothesis. We obtain, using also (3.6), satisfied by the function $\rho (E)= {\rm diam }(E)$, if $ht \leq \sigma _0$ and $\varepsilon \leq \varepsilon _{m-1, 0} ({ 1 + \gamma \over 2} ) \leq \varepsilon _{m-1, 0} (\gamma) $, and if $\varepsilon \leq \varepsilon _0( \gamma , {1 + \gamma \over 2})$, $$\Vert G'_m \Vert \leq 4^d \ \prod _{j=1}^m |X^{(j)}|\ \varepsilon ^{ \gamma {\rm diam } (\{ \lambda ^{(1)} , \ldots , \lambda ^{(m )} \} ) } \ \Bigg [ \sum _{ (I, J) \in {\cal P}'_m \atop \vert I \vert +1 \leq m-1 , \vert J \vert +1 \leq m-1} C_{m-1, 0}^2 t^2 +... 
$$ $$\ldots + 2 \sum _{ (I, J) \in {\cal P}'_m \atop \vert I \vert +1 = m , \vert J \vert +1 \leq m-1} C_{m-1, 0} t N_{m, 0, } (t, \varepsilon , \gamma ) \Bigg ] $$ Therefore, with some other constant $K_0$, we have, for the right hand side of (4.7), under conditions similar to those of $(P_{m})$, $$\Vert \Phi ' _m \Vert \leq K_0 t^m \ \prod _{j=1}^m |X^{(j)}|\ \varepsilon ^{\gamma {\rm diam} (\{ \lambda ^{(1)}, ...\lambda ^{(m)}\} ) } \Big (1 + h^2 t^2 + h^2 t N_{m, 0}(t, \varepsilon , \gamma) \Big ) $$ By the maximum principle (Proposition 2.3) (which can be applied since, by the results of section 2, its hypotheses are satisfied), it follows that, under these conditions, $$t^m \Vert \varphi (., ., t) \Vert \leq K_0 \ \prod _{j=1}^m |X^{(j)}|\ \varepsilon ^{\gamma {\rm diam} (\{ \lambda ^{(1)}, ...\lambda ^{(m)}\} ) } \int _0^t s^m \ \Big ( 1 + h^2 s^2 + h^2 s N_{m, 0}(s, \varepsilon , \gamma) \Big ) ds $$ In other words $$t^m N_{m, 0}(t, \varepsilon , \gamma) \leq K_0 \int _0^t s^m \Big ( 1 + h^2 s^2 + h^2 s N_{m, 0}(s, \varepsilon , \gamma) \Big ) ds $$ The property $P_{m, 0}$ follows by Gronwall's Lemma, and therefore is proved for all $m$. \bigskip \noindent {\it Second step : $m' \geq 1$.} If $m=0$ and $m'\geq 1$ the property $P_{0, m'}$ follows from $P_{m', 0}$ since $\psi _{\Lambda , \varepsilon }$ is symmetric. We organize the double induction in a following way. Given $m\geq 1$ and $m'\geq 1$, we assume that $P_{j, k}$ has been proved for all $j$ and $k$ such that $j+k \leq m+m'$ and $k\leq m'-1$, and we shall prove $P_{m, m'}$. For that, we introduce, for each points $\lambda ^{(1)}$, $\ldots $ , $\lambda ^{(m)}$, $\mu ^{(1)}$ , $\ldots$ , $\mu ^{(m')}$ in $\Lambda $, for each vectors $X^{(1)}$, $\ldots$, $X^{(m)}$, $Y^{(1)}$, $\ldots$, $Y^{(m')}$ of $\R^p$, the function $u$ defined like in (3.14), with $\varphi = \psi _{\Lambda }$. We remark that this function satisfies (for a fixed $y$) the equation (3.15), with $a= {1 \over 2} \psi _{\Lambda }$, and, since $V$ is independent of $y$, $\Phi _{m, m'}= t^{m-1} \Psi _{m , m'} - h^2 t^m G_{m , m'}$, where $\Psi _{m , m'}$ is defined in (3.17) and $G_{m , m'}$ in (3.18). Using the fact that $a= {1 \over 2} \varphi $, we can write $${1 \over 2 } (\nabla _x \varphi , \nabla _x u)\ +\ G_{m , m'} = (\nabla _x \varphi , \nabla _x u)\ +\ G'_{m , m'}\leqno (4.8)$$ where, denoting by ${\cal P}'_{m , m'}$ the set of $(I, I', J, J')$ such that $(I, J)$ is a partition of $ \{ 1, ... ,m \}$, $(I', J')$ is a partition of $ \{ 1, ... ,m' \}$, $I \cup I' \not= \emptyset$, $J \cup J' \not= \emptyset$, we define $G'_{m , m'}$ like $G_{m , m'}$ in (3.18), but with ${\cal P}_{m , m'}$ replaced by ${\cal P}'_{m, m'}$, i.e. $$ G'_{m, m'} = {1 \over 2} \sum _{(I, I', J, J')\in {\cal P}'_{m, m'} }\ \ \sum _{\nu \in \Lambda } \Big ( \nabla_{x_{\nu}} (X^{I} .\nabla _{x _{\lambda _I}}) (Y^{I'} .\nabla _{y_{\mu _{I'}}}) \psi _{\Lambda } , \nabla _{x_{\nu}} (X^{J} .\nabla _{x _{\lambda _J}}) (Y^{J'} .\nabla _{y _{\mu _{J'}}}) \psi _{\Lambda } \Big ). \leqno (4.9)$$ By (4.8), the equation, similar to (3.15) satisfied by $u$ defined in (3.14) can be rewritten on the form (4.7), where the RHS is now $\Phi '_{m , m'}= t^{m-1} \Psi _{m, m'}- h^2 t^m G' _{m , m'}$. 
\smallskip By definition, we have $$\Vert \Psi _{m, m'} \Vert \leq m' \prod _{i\leq m} |X^{(i)}| \ \prod _{j\leq m'} |Y^{(j)}| \ N_{ m+1, m'-1}(t, \varepsilon, \gamma) \varepsilon ^{\gamma {\rm diam} (\{ \lambda ^{(1)}, \ldots , \lambda ^{(m)}, \mu ^{(1)} , \ldots , \mu ^{(m')} \} ) } $$ and, by our induction hypothesis, we have, for some $K_0>0$ and $\varepsilon _0 (\gamma )>0$, $$\Vert \Psi _{m, m'} \Vert \leq K_0 t \prod _{i\leq m} |X^{(i)}|\ \prod _{j\leq m'} |Y^{(j)}| \varepsilon ^{\gamma {\rm diam} (\{ \lambda ^{(1)}, \ldots , \lambda ^{(m)}, \mu ^{(1)} , \ldots , \mu ^{(m')} \} ) } $$ if $ht \leq\sigma _0$ and $\varepsilon \leq \varepsilon _0(\gamma)$. \smallskip By the definition of $N_{m , m'}$, using also the condition (3.6) satisfied by $\rho (E) = {\rm diam }(E)$, if $\varepsilon \leq \varepsilon _0( {1 + \gamma \over 2})\leq \varepsilon _0(\gamma)$, $$\Vert G' _{m, m'} \Vert \leq \prod _{i\leq m} |X^{(i)}| \ \prod _{j\leq m'} |Y^{(j)}| \Bigg [ \sum _{(I, I', J, J')\in {\cal P}'_{m, m'} }\ \ N_{|I|+1, |I'|}(t, \varepsilon , \gamma )\ N_{|J|+1, |J'|}(t, \varepsilon ,{ 1 + \gamma \over 2 } )\ldots $$ $$\ldots \varepsilon ^{ \gamma {\rm diam }( \{ \lambda ^{(1)}, \ldots , \lambda ^{(m)}, \mu ^{(1)} , \ldots , \mu ^{(m')} \} ) } \Bigg ] $$ For all $(I, I', J, J')$ in ${\cal P}'_{m, m'}$, we have $|I| + |I'|+1 \leq m+m'$, $|J| + |J'|+1 \leq m+m'$, $ |I'| + |J'| = m' \geq 1$, hence $|I'|$ or $|J'|$ is $\leq m'-1$. For each term such that $\vert I'\vert \leq m'-1$ or $\vert J' \vert \leq m'-1$, we can apply the induction hypothesis, and the property $(P_{m , m'})$ follows as in the first step, and therefore is proved for all $m$ and $m'$. \bigskip \noindent {\bf 5. Cluster decomposition of a function on the lattice.} \bigskip Let $\Lambda = \prod _{j=1}^d [a_j, b_j]$ be a box of $\Z ^d$. Let $E$ be a finite dimensional vector space. We shall associate to each box $Q$ contained in $\Lambda $ (which may be all of $\Lambda $, and may also be reduced to a single point) an operator $T_Q$ in $C^{\infty }(E^{\Lambda } \times E^{\Lambda })$ in the following way. First, we define a linear map $\pi _Q $ in $E^{\Lambda } \times E^{\Lambda }$ by $$\Big ( \pi _Q (x, y)\Big )_{\lambda } = \left \{ \matrix { (x_{\lambda }, y_{\lambda })&if &\lambda \in Q \cr (0, y_{\lambda } - x_{\lambda })&if &\lambda \notin Q\cr } \right . \hskip 1cm \forall (x, y) \in E^{\Lambda } \times E^{\Lambda }.\leqno (5.1)$$ For each $j\in \{ 1, ... d \}$, we define two linear operators $P^{(j, +)}_{Q }$ and $P^{(j, -)}_{Q }$ in $E^{\Lambda } \times E^{\Lambda }$ in the following way. If the box $Q \subseteq \Lambda $ is defined by $$Q = \prod _{j=1}^d [\alpha_j, \beta _j], \leqno (5.2)$$ we set: $$\Big ( P^{(j, +)}_{Q }(x, y) \Big ) _{\lambda } = \left \{ \matrix { (x_{\lambda }, y_{\lambda })&if &\lambda _j \not = \beta_j \cr (0, y_{\lambda } - x_{\lambda })&if &\lambda _j = \beta _j \cr } \right . \hskip 1cm \forall (x, y) \in E^{\Lambda } \times E^{\Lambda }. $$ $$\Big ( P^{(j, -)}_{Q }(x, y) \Big ) _{\lambda } = \left \{ \matrix { (x_{\lambda }, y_{\lambda })&if &\lambda _j \not = \alpha _j \cr (0, y_{\lambda } - x_{\lambda })&if &\lambda _j = \alpha _j \cr } \right . \hskip 1cm \forall (x, y) \in E^{\Lambda } \times E^{\Lambda }. $$ Then we define our operator $T_Q$, for each function $f$ in $C^{\infty }(E^{\Lambda } \times E^{\Lambda })$, by $$(T_Qf) (x, y) = \sum _{ \sigma \in \{ 0, 1 \} ^d \atop \tau \in \{ 0, 1 \} ^d } (-1)^{|\sigma |_1 + |\tau |_1} f \Big ( \left ( P^{(1, +)} _{Q}\right )^{\sigma _1} ...
\left ( P^{(d, +)} _{Q} \right )^{\sigma _d} \ \left ( P^{(1, -)} _{Q}\right ) ^{\tau _1} ... \left ( P^{(d, -)} _{Q} \right )^{\tau _d} \ \pi _Q \ (x, y) \Big ) $$ Here $|\sigma |_1$ denotes the $\ell ^1$ norm of the vector $\sigma \in \Z^d$. \bigskip Of course, we may have $P^{(j, +)}_Q=P^{(j, -)}_Q$. In particular, if $Q$ is reduced to a single point, we have $$(T_Qf)(x, y) = f( \pi _Q(x, y))- f(0, y-x)\leqno (5.3)$$ \bigskip \noindent {\bf Proposition 5.1.} {\it For each function $f$ in $C^{\infty }( E^{\Lambda } \times E^{\Lambda })$, we have \smallskip \noindent i) $T_Qf$ depends only on $x-y$ and on the variables $x_{\lambda }$ and $y_{\lambda }$ such that $\lambda \in Q$. $$f(x, y)- f(0, y-x) = \sum _{Q \subseteq \Lambda } (T_Qf)(x, y) \leqno ii)$$ iii) If $f$ depends in a smooth way on a parameter $\theta$, we have $T_Q ({\partial f \over \partial \theta})= {\partial (T_Qf) \over \partial \theta}$.} \bigskip The next Proposition will be useful later to estimate the functions $T_Qf$ and their derivatives, when $f$ will be the solution of one of the Cauchy problems studied in the previous sections. \medskip For each finite subset $\Lambda $ of $\Z ^d$, for each function $f\in \C ((\R ^p)^{\Lambda }\times (\R ^p)^{\Lambda })$, for each vector $u = (u_{\lambda })_{\lambda \in \Lambda }$ in $(\R ^p)^{\Lambda }$, we set $$(S_uf) (x, y) = f(x+u, y+u) - f(x, y) \hskip 1cm \forall f\in C^{\infty }( E^{\Lambda } \times E^{\Lambda }) \ \ \ \ \forall (x, y) \in E^{\Lambda } \times E^{\Lambda }\leqno (5.4)$$ and we shall denote by $\sigma (u)$ the following set $$\sigma (u)= \{ \lambda \in \Lambda , \ \ \ \ \ \ u_{\lambda } \not = 0 \} .\leqno (5.5)$$ In the next section, we shall apply the operators $T_Q$ to the function $\psi _{\Lambda }$ of Theorem 1.1, and we shall see that the diameter of $Q$ for the $\ell ^{\infty }$ norm, in other words, the greatest of the $\beta _j - \alpha _j$, if $Q$ is defined by (5.2), will play an important role. We shall apply the following proposition with the corresponding integer $j$ . For a box $Q$ defined as (5.2), and for each $j \in \{ 1, ... d\}$, we shall set $$B_-^{(j)}(Q)= \{ \lambda \in Q , \ \ \ \ \ \ \lambda _j= \alpha _j\}, \hskip 1cm B_+^{(j)}(Q)= \{ \lambda \in Q , \ \ \ \ \ \ \lambda _j= \beta_ j\}\leqno (5.6)$$ \bigskip \noindent {\bf Proposition 5.2.} {\it Let $\lambda ^{(1)}$, ... $\lambda ^{(m)}$ be a finite sequence of points in $\Lambda $. Let $j$ be an integer such that $1\leq j \leq d$. Then : \smallskip \noindent i) If, in this sequence, there is no point in $B_+^{(1)}(Q) \cup B_-^{(1)}(Q)$ (in particular, if this sequence is void), we have $$\Vert \nabla _{\lambda ^{(1)}} ... \nabla _{\lambda ^{(m)}} T_Qf \Vert _{\infty } \leq 4^d \sup _{\sigma (u) \subset B_+^{(1)}(Q) \atop \sigma ( v) \subset B_-^{(1)}(Q)} \Vert \nabla _{\lambda ^{(1)}} ... \nabla _{\lambda ^{(m)}}S_u S_v f\Vert _{\infty }\leqno (5.7)$$ ii) If one at least of the $\lambda ^{(k)}$ is in $B_+^{(1)}(Q)$, but none of them is in $B_-^{(1)}(Q)$, we have $$\Vert \nabla _{\lambda ^{(1)}} ... \nabla _{\lambda ^{(m)}} T_Qf \Vert _{\infty } \leq 4^d \sup _{ \sigma (v) \subset B_-^{(1)}(Q)} \Vert \nabla _{\lambda ^{(1)}} ... \nabla _{\lambda ^{(m)}}S_v f\Vert _{\infty }\leqno (5.8)$$ iii) If one at least of the $\lambda ^{(k)}$ is in $B_+^{(1)}(Q)$ and another one (perhaps the same) is in $B_-^{(1)}(Q)$, then we have $$\Vert \nabla _{\lambda ^{(1)}} ... \nabla _{\lambda ^{(m)}} T_Qf \Vert _{\infty } \leq 4^d \Vert \nabla _{\lambda ^{(1)}} ... 
\nabla _{\lambda ^{(m)}} f \Vert _{\infty }\leqno (5.9)$$ } \bigskip \noindent {\bf Proposition 5.3.} {\it Let $V_{\Lambda , \varepsilon}(x)$ be of the form (1.2), where $A$ and $B_{\lambda }$ satisfy the hypotheses of the introduction. We identify $V_{\Lambda , \varepsilon}(x)$ with a function of $(x, y)$ independent of $y$, and then $T_Q V_{\Lambda , \varepsilon}(x)$ is also independent of $y$. Then, we can write, with $C_m>0$ independent of all the parameters, for each sequence $(\lambda ^{(1)}, \ldots , \lambda ^{(m)})$ of points of $\Lambda $, $$|\nabla _{x_{\lambda ^{(1)}}}\ldots \nabla _{x_{\lambda ^{(m)}}} T_Q V_{\Lambda , \varepsilon }(x, t)| \leq \left \{ \matrix { C_m \varepsilon ^{\gamma {\rm diam Q}} &{\rm if } & {\rm diam }(Q) >0 &and &\varepsilon ^{1 - \gamma} \leq 2^{-d}\cr C_m (1 + |x_Q|) &{\rm if } & {\rm diam }(Q) =0 &and & \varepsilon \leq 2^{-d} \cr } \right . \leqno (5.10)$$ } \bigskip \noindent {\it Proof.} If ${\rm diam}(Q)>0$, (5.10) follows, by Proposition 5.2, from (4.5) if $\varepsilon \leq 2^{-d}$ in the case iii). For the case ii), we can write, for some constant $K_m>0$, $$\Vert \nabla _{x_{\lambda ^{(1)}}}... \nabla _{x_{\lambda ^{(m)}}}S_u V_{\Lambda , \varepsilon } \Vert \leq K_m \varepsilon ^{ \gamma \sup _{j\leq m}\delta (\lambda ^{(j)}, \sigma (u)) } \hskip 1cm {\rm if } \ \ \ \varepsilon ^{1 - \gamma } \leq 2^{-d}\leqno (5.11)$$ and apply (5.8). For the case i), we shall prove that, for some constant $K_m>0$, for each $\gamma \in ]0, 1[$, we have, for each sequence $\lambda ^{(1)}$, ...$\lambda ^{(m)}$ in $\Lambda $, $$ \Vert \nabla _{x_{\lambda ^{(1)}}}... \nabla _{x_{\lambda ^{(m)}}}S_uS_v V_{\Lambda , \varepsilon } \Vert \leq K_m \varepsilon ^{\gamma \delta (\sigma (u), \sigma (v)) } {\rm inf }(\sharp \sigma (u), \sharp \sigma (v)) \hskip 1cm {\rm if } \ \ \ \varepsilon ^{1-\gamma } \leq 2^{-d}\leqno (5.12)$$ and if $\sigma (u) \cap \sigma (v)= \emptyset$. Under this condition, we have $S_u S_v A (x_\lambda) = 0$ for all $\lambda$, and we see that $S_u S_v B_{\lambda - \mu } (x_{\lambda} , x_{\mu }) = 0$ unless $\lambda $ or $\mu $ is in $\sigma (u)$ and $\lambda $ or $\mu $ is in $\sigma (v)$, which implies, in our case, that $\lambda \in \sigma (u)$ and $\mu \in \sigma (v)$, or the opposite. Then we remark that $$\sum _{\lambda \in \sigma (u), \mu \in \sigma (v)} \varepsilon ^{\vert \lambda - \mu \vert } \leq 4^d \varepsilon ^{\gamma \delta (\sigma (u), \sigma (v))} {\rm inf } (\sharp \sigma (u), \sharp \sigma (v)) \hskip 1cm {\rm if } \ \ \ \varepsilon ^{1-\gamma } \leq 2^{-d}.$$ and (5.12) follows, and (5.10) also, if we apply (5.7). When $Q$ is reduced to a single point, (5.10) follows from (5.3) (which means here $(T_{ \{ \lambda \} } V)(x) = V(x'^{(\lambda )}) - V(0)$, where $x'^{(\lambda )}_{\mu } $ is equal to $ x_{\lambda }$ if $\mu = \lambda $ and to $0$ if not). \bigskip \noindent {\bf 6. Cluster decomposition of the function $\psi _{\Lambda }$ .} \bigskip The aim of this section is the proof of the following Proposition. \bigskip \noindent {\bf Proposition 6.1.} {\it For each integer $m\geq 0$, and for any $\gamma \in [0, 1[$, there exists $C_m>0$ and $\varepsilon _m (\gamma) \in ]0, 1[$ with the following properties. \smallskip \noindent i) If $m\geq 1$, for each finite subset $\Lambda $ of $\Z ^d $, for each points $\lambda ^{(1)}$, ... $\lambda ^{(m)}$ in $\Lambda $, for each box $Q \subseteq \Lambda $, we have, $$\Vert \nabla _{\lambda ^{(1)}} ... 
\nabla _{\lambda ^{(m)}} \Big ( T_Q \psi _{\Lambda} \Big ) \Vert \leq tC_m \varepsilon ^{\gamma {\rm diam } (Q )} \hskip 1cm if \ \ \ h t \leq \sigma _0 \ \ \ and \ \ \ \varepsilon \leq \varepsilon _m (\gamma ) \leqno (6.1)$$ \smallskip \noindent ii) If $m=0$, this result is also valid if ${\rm diam}(Q) \not=0$. \smallskip \noindent iii) If one at least of the points $\lambda ^{(j)}$ is not in $Q$, we have also $$\Vert \nabla _{\lambda ^{(1)}} ... \nabla _{\lambda ^{(m)}} \Big ( T_Q \psi _{\Lambda} \Big ) \Vert \leq tC_m \varepsilon ^{\gamma \sup _{j \leq m } \delta (\lambda ^{(j)}, Q)} \hskip 1cm {\rm if} \ \ \ h t \leq \sigma _0 \ \ \ {\rm and} \ \ \ \varepsilon \leq \varepsilon _m (\gamma ) \leqno (6.2)$$ } \bigskip The proof relies on the two following Lemmas. \bigskip \noindent {\bf Lemma 6.2.} {\it For each integer $m\geq 1$, and for each $\gamma \in ]0, 1[$, there exists $C_m>0$ and $\varepsilon _m (\gamma) \in ]0, 1[$ such that, for each $u$ in $(\R ^p)^{\Lambda }$, for each points $\lambda ^{(1)}$, $\ldots $ $\lambda ^{(m)}$ in $\Lambda $, the solution $\psi _{\Lambda } $ of (1.16), (1.17) satisfies, if $S_u$ is the operator defined in (5.4) $$ \Vert \nabla _{\lambda ^{(1)}}\ldots \nabla _{\lambda ^{(m)}} S_u\psi _{\Lambda} \Vert \leq C_m \ t \ \varepsilon ^{ \gamma \sup _{j\leq m} \delta (\lambda ^{(j)}, \sigma (u)) } \hskip 1cm if\ \ \ ht \leq \sigma _0 \ \ \ \ \ \varepsilon \leq \varepsilon _m (\gamma ) \leqno (6.3)$$ } \bigskip \noindent {\it Proof.} The function $\varphi = S_u \psi _{\Lambda }$ satisfies the following equation $${\partial \varphi \over \partial t}\ + \ {x-y \over t}\ .\ \ \nabla _x \varphi - {h^2 \over 2 } \Delta _x \varphi + h^2 (\nabla _x a, \nabla _x \varphi ) \ =\ S_u V_{\Lambda , \varepsilon }$$ where $$a(x, y, t) = {1 \over 2} \ \Big ( \psi _{\Lambda } (x+u, y+u, t) + \psi _{\Lambda } (x, y, t)\Big ). \leqno (6.4)$$ The function $\varphi $ satisfies also (3.2) and (3.3) and, by Theorem 1.1, the function $a$ satisfies the hypothesis (3.4). We define, for each non void finite subset $E$ of $\Z^d$, $\rho (E)$ by $$\rho (E)= \sup _{\lambda \in E} \delta ( \lambda , \sigma (u)).$$ We already know that this function satisfies (3.6), (it was given as an example just after (3.6)). Using also (5.11), we see that all the hypotheses of Proposition 3.1 are satisfied, and Lemma 6.2 follows from this Proposition. \bigskip \noindent {\bf Lemma 6.3.} {\it For each integer $m\geq 0$, and for each $\gamma \in ]0, 1[$ , there exists $C_m>0$ and $\varepsilon _m(\gamma )\in ]0, 1[$, such that, for each $u$ and $v$ in $(\R ^p)^{\Lambda }$ such that $\sigma (u) \cap \sigma (v) = \emptyset $, for each points $\lambda^{(1)}$, $\ldots $ $\lambda ^{(m)}$ in $\Lambda $, the solution $\psi _{\Lambda } $ of (1.16), (1.17) satisfies $$ \Vert \nabla _{\lambda ^{(1)}}\ldots \nabla _{\lambda ^{(m)}} S_u S_v \psi _{\Lambda} \Vert \leq C_m \ t \ \varepsilon ^{ \gamma \delta (\sigma (u), \sigma (v)) } {\rm inf } \ \ (\sharp \sigma (u), \sharp \sigma (v)) \hskip 1cm if\ \ \ h t \leq \sigma _0 \ \ \ \ \ \varepsilon \leq \varepsilon _m(\gamma). 
\leqno (6.5) $$ } \bigskip \noindent {\it Proof.} The function $\varphi = S_u S_v \psi _{\Lambda }$ satisfies $${\partial \varphi \over \partial t}\ + \ {x-y \over t}\ .\ \ \nabla _x \varphi - {h^2 \over 2 } \Delta _x \varphi + h^2 (\nabla _x a, \nabla _x \varphi ) \ =\ S_ u S_v V_{\Lambda , \varepsilon } - h^2 (\nabla _x S_u \psi _{\Lambda } , \nabla _x S_v \psi _{\Lambda })\leqno (6.6)$$where $$a(x, y, t)= {1 \over 2} \Big ( \psi _{\Lambda }(x+u+v, y+u+v, t) + \psi _{\Lambda } (x +u, y+u, t) + \psi _{\Lambda } (x+v, y+v, t) - \psi _{\Lambda } (x, y, t) \Big ).$$ The function $\varphi $ satisfies also (3.2) and (3.3). By Theorem 1.1, the function $a$ satisfies the estimations (3.4) under conditions of type (3.7). Now, the function $\rho (E)$ of Section 3 is independent of the subset $E$ of $\Z^d$: we set $\rho (E) = \delta (\sigma (u), \sigma (v))$ for all $E \subset \Z^d$, and the condition (3.6) is obviously satisfied. Now we estimate the derivatives of the RHS of (6.6). (These inequalities are valid also for $m=0$ (not like in Lemma 6.2), and therefore, by Proposition 3.1, (6.5) will be valid also for $m=0$). The first term of the RHS is bounded by (5.12). For the second term, we can write, for each sequence $\lambda ^{(1)}$, ...$\lambda ^{(m)}$ in $\Lambda $, $$\nabla _{\lambda ^{(1)}} ... \nabla _{\lambda ^{(m)}} (\nabla _x S_u \psi _{\Lambda } , \nabla _x S_v \psi _{\Lambda })= \sum _{(I, J)\in {\cal P}''_m} \sum _{\nu \in \Lambda } (\nabla _{x_{\nu }} \nabla ^I S_u \psi _{\Lambda } , \nabla _{x_{\nu}} \nabla ^J S_v \psi _{\Lambda })$$where, here ${\cal P}''_m$ denotes the set of all $(I, J)$ such $(I, J)$ is a partition of $\{ 1, ... , m \}$, (one of them, $I$ or $J$, may be empty), and for each subset $I$ of $\{ 1, ... , m \}$, we set $\nabla ^I = \prod _{i\in I} \nabla _{\lambda ^{(i)} }$. By Lemma 6.2, there exists $K_m>0$ and a function $\varepsilon _m(\gamma)$ such that $$\Vert \nabla _{x_{\mu }} \nabla ^I S_u \psi _{\Lambda } \Vert \ \Vert \nabla _{x_{\mu}} \nabla ^J S_v \psi _{\Lambda })\Vert \leq K_m t^2 \ \varepsilon ^{ {1 +\gamma \over 2} [ \delta ( \mu , \sigma (u))+ \delta (\mu , \sigma (v)) ]} $$ if $\varepsilon \leq \varepsilon _m({1 +\gamma \over 2} )$ and if $h t\leq \sigma _0$. We see that $$\sum _{\nu \in \Z ^d} \varepsilon ^ {{1+ \gamma \over 2}[\delta (\nu , \sigma (u)) +\delta (\nu , \sigma (v)) ] } \leq 4^d \varepsilon ^ { \gamma \delta (\sigma (u) , \sigma (v) )}\ \inf ( \sharp \sigma (u), \sharp \sigma (v)) \hskip 1cm {\rm if}\ \ \ \varepsilon ^{1 - \gamma } \leq 2^{-2d}.\leqno (6.7)$$ It follows that, if all the preceding conditions are satisfied, with another $K_m$, $$\Vert \nabla _{\lambda ^{(1)}} ... \nabla _{\lambda ^{(m)}} (\nabla _x S_u \psi _{\Lambda } , \nabla _x S_v \psi _{\Lambda })\Vert \leq K_m t^2 \varepsilon ^ { \gamma \delta (\sigma (u) , \sigma (v) )}.$$ \smallskip Therefore, the condition (3.5) on the RHS $F$ of (6.6) is satisfied with our function $\rho (E)$, and $C_m(F)= K_m [ \inf ( \sharp \ \sigma (u), \sharp \ \sigma (v)) + h^2 t^2]$, where $K_m$ is independent of $u$ and $v$. Since we may assume that $ht \leq 1$ and that $ \inf ( \sharp \ \sigma (u), \sharp \ \sigma (v)) \geq 1$ (otherwise the Lemma is trivial), we can omit the second term in the expression of $C_m(F)$. All the hypotheses of Proposition 3.1 are satisfied, and Lemma 6.3 is a consequence of it. 
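\smallskip Let us indicate, for instance, how the summation inequality (6.7) can be obtained. For each $\nu \in \Z^d$, the triangle inequality for the distance $\delta $ gives $\delta (\nu , \sigma (u)) + \delta (\nu , \sigma (v)) \geq \delta (\sigma (u), \sigma (v))$, and therefore $${1+ \gamma \over 2}\Big [ \delta (\nu , \sigma (u)) +\delta (\nu , \sigma (v)) \Big ] \ \geq \ \gamma \ \delta (\sigma (u) , \sigma (v) ) \ +\ {1 - \gamma \over 2}\ \delta (\nu , \sigma (u)).$$ Hence the left hand side of (6.7) is bounded by $\varepsilon ^{\gamma \delta (\sigma (u), \sigma (v))} \sum _{\nu \in \Z ^d} \varepsilon ^{{1 - \gamma \over 2} \delta (\nu , \sigma (u))}$, and the last sum is not greater than $\sharp \sigma (u) \sum _{\mu \in \Z ^d} \varepsilon ^{{1 - \gamma \over 2} \vert \mu \vert }$, which is bounded by $4^d$ if $\varepsilon ^{1 - \gamma } \leq 2^{-2d}$. Exchanging the roles of $u$ and $v$ gives the factor $\inf (\sharp \sigma (u), \sharp \sigma (v))$ in (6.7).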
\bigskip \noindent {\it End of the proof of Proposition 6.1.} For the proof of i) and ii), we may assume that $Q$ is defined by (5.2) and that ${\rm diam}(Q)= \beta _1 - \alpha _1$. If the sequence $(\lambda ^{(1)}, ... , \lambda ^{(m)})$ is in the case iii) of Proposition 5.2 (with $j=1$), the estimate (6.1) is a direct consequence of Theorem 1.1 since $\delta (B_+^{(1)} (Q), B_-^{(1)} (Q))= {\rm diam }(Q)$. In the case ii) of Proposition 5.2, or in the obviously similar case where the signs $+$ and $-$ are permuted, (6.1) follows from Lemma 6.2. In the case i) of Proposition 5.2, (6.1) follows from Lemma 6.3 since $\sharp B_{\pm }^{(1)} (Q) \leq ({\rm diam } (Q))^{d-1}$. For the point ii), we apply the point i) of Proposition 5.2 and Lemma 6.3 with $m=0$. (We may apply it since, if ${\rm diam}(Q)\not= 0$, we have $\beta _1 - \alpha _1 >0$ and therefore $B_+^{(1)} (Q) \cap B_-^{(1)} (Q)= \emptyset$.) For the proof of iii), we may assume that $Q$ is defined by (5.2) and that $\sup _j \delta (\lambda ^{(j)}, Q) = \lambda _1 ^{(1)} - \beta _1>0$. In the cases i) and ii) of Proposition 5.2, (6.2) follows from Lemma 6.2, and in the case iii), (6.2) follows from Theorem 1.1. \bigskip \noindent {\bf 7. Study of the union of two disjoint subsets.} \bigskip If $\Lambda \subset \Z ^d$ is the union of two disjoint subsets $\Lambda _1 $ and $\Lambda _2$, we may identify $(\R^p)^{\Lambda }$ with $(\R^p)^{\Lambda _1} \times (\R^p)^{\Lambda _2}$, denoting the variable of $(\R^p)^{\Lambda }$ by $(x^{(1)}, x^{(2)})$, where $x^{(j)}= (x_{\lambda })_{\lambda \in \Lambda _j}$. Thus, a function $f$ (resp. $g$) in $C^{\infty }((\R^p)^{\Lambda _1})$ (resp. in $C^{\infty }((\R^p)^{\Lambda _2})$) is identified with a function in $C ^{\infty }((\R^p)^{\Lambda})$, independent of the variable $x^{(2)}$ (resp. of $x^{(1)}$). As in [], we set $(f \oplus g)(x^{(1)}, x^{(2)}) = f(x^{(1)}) + g(x^{(2)})$. For each $\theta \in [0, 1]$, we set $$V_{\Lambda, \varepsilon}(\theta ) = \theta V_{\Lambda , \varepsilon} + (1 - \theta ) (V_{\Lambda _1 , \varepsilon} \oplus V_{\Lambda _2 , \varepsilon}).\leqno (7.1)$$ We denote by $H_{\Lambda }(\varepsilon , \theta )$ the Hamiltonian defined as in (1.1), with $V_{\Lambda , \varepsilon}$ replaced by $V_{\Lambda , \varepsilon}(\theta )$. Since this new potential is of the same type as $V_{\Lambda , \varepsilon}$, and satisfies the same hypotheses, we can associate to the new Hamiltonian a function $\psi _{\Lambda }(x, y, t, h, \varepsilon , \theta)$, solution of a Cauchy problem analogous to (1.16), (1.17) in $\Lambda $, but with $V_{\Lambda , \varepsilon}$ replaced by this new potential, i.e. satisfying $${\partial \psi _{\Lambda }(\theta) \over \partial t}\ + \ {x-y \over t}\ .\ \ \nabla _x \psi _{\Lambda }(\theta) - {h^2 \over 2 } \Delta _x \psi _{\Lambda }(\theta) \ =\ V_{\Lambda , \varepsilon }(\theta )(x) \ -\ {h^2 \over 2} \vert \nabla_x \psi _{\Lambda }(\theta) \vert ^2 \leqno (7.2)$$ and $\psi _{\Lambda }(x, y, 0, h, \varepsilon , \theta )= 0$. We are now interested in the derivative of this function with respect to $\theta$. \bigskip \noindent {\bf Proposition 7.1.} {\it For each integer $m\geq 1$, there exists a constant $C_m>0$, independent of all the sets and parameters, such that, with the previous notations, we have, for all points $\lambda ^{(1)}$, . . .
$\lambda ^{(m)}$ in $\Lambda ^{(1)}$, $$ \Vert \nabla_{\lambda ^{(1)}} \ldots \nabla_{\lambda ^{(m)}} \psi _{\Lambda }(\theta ) \Vert \leq tC_m\ \varepsilon^{\gamma {\rm diam } (\{ \lambda _1, ...\lambda _m\} )} \ \ \ \ \ \ {\rm if} \ \ \ h t \leq \sigma _0, \ \ \ \varepsilon \leq \varepsilon _m(\gamma)\ \ \ \ {\rm and } \ \ \ \theta \in [0, 1] \leqno (7.3)$$and, under similar conditions, for each $u$ and $v$ in $(\R^p )^{\Lambda}$, $$ \Vert \nabla_{\lambda ^{(1)}} \ldots \nabla_{\lambda ^{(m)}}S_u \psi _{\Lambda }(\theta ) \Vert \leq tC_m\ \ \varepsilon ^{ \gamma \sup _{j\leq m} \delta (\lambda ^{(j)}, \sigma (u))}\leqno (7.3')$$ $$ \Vert \nabla_{\lambda ^{(1)}} \ldots \nabla_{\lambda ^{(m)}}S_u S_v \psi _{\Lambda }(\theta ) \Vert \leq tC_m\ \ \varepsilon ^{ \gamma \delta (\sigma (u), \sigma (v)) }\leqno (7.3'')$$ $$ \Vert \nabla_{\lambda ^{(1)}} \ldots \nabla_{\lambda ^{(m)}} { \partial \psi _{\Lambda }(\theta ) \over \partial \theta } \Vert \leq t C_m \ \varepsilon^{\gamma {\rm diam } (\{ \lambda _1, ...\lambda _m\} )} \leqno (7.4)$$ $$ \Vert \nabla_{\lambda ^{(1)}} \ldots \nabla_{\lambda ^{(m)}} { \partial \psi _{\Lambda }(\theta ) \over \partial \theta } \Vert \leq t C_m \varepsilon ^{\gamma \sup _{j\leq m} \delta (\lambda ^{(j)},\Lambda _2)} \leqno (7.5)$$ } \bigskip \noindent {\it Proof of (7.3), (7.3') and (7.3'').} The potential $V_{\Lambda , \varepsilon}(\theta )$ is of the same type that $V_{\Lambda , \varepsilon}$, and satisfies the same hypotheses, with constants independent of $\theta \in [0, 1]$. Therefore, all the proofs of Theorem 1.1 and Lemmas 6.2 and 6.3 can be applied to the Cauchy problem (7.2), (1.17), and leads to the same bounds for its solution, i.e. (7.3), (7.3') and (7.3''). \smallskip \noindent {\it Proof of (7.4).}We see that $\varphi = { \partial \psi _{\Lambda }(\theta ) \over\partial \theta }$ satisfies $${\partial \varphi \over \partial t}\ + \ {x-y \over t}\ .\ \ \nabla _x \varphi - {h^2 \over 2 } \Delta _x \varphi \ +\ h^2 \ ( \nabla_x \psi _{\Lambda }(\theta), \nabla_x \ \varphi ) =\ \sum _{\lambda \in \Lambda _1, \mu \in \Lambda _2} \varepsilon ^{|\lambda - \mu |}B_{\lambda - \mu }(x_{\lambda }, x_{\mu}) \leqno (7.6) $$This equation is of the form (3.1), and ${\partial \psi _{\Lambda }(\theta ) \over\partial \theta }$ satisfies also (3.2) and (3.3). By (7.3), the function $a= \psi _{\Lambda }(\theta) $ satisfies the condition (3.4). We put, for each finite non void subset $E$ of $\Z^d$, $\rho (E)= {\rm diam}(E)$. We already know that this function satisfies the condition (3.6). The RHS $F(x)$ of (7.6) satisfies the condition (3.5) with this function $\rho$, just like $V_{\Lambda , \varepsilon}$. Therefore ${\partial \psi _{\Lambda }(\theta ) \over\partial \theta }$ satisfies all the hypotheses of Proposition 3.1, and therefore, under conditions like (3.7), satisfies (3.8), which is equivalent, with our function $\rho$, to (7.4). \smallskip \noindent {\it Proof of (7.5).} The proof is almost the same, but we put now, for each finite non void subset $E$ of $\Z^d$, $$\rho (E)= \sup _{\lambda \in E} \delta ( \lambda , \Lambda _2).$$This function satisfies the condition (3.6). This follows from the inequality (6.4), applied now to $S= \Lambda _2$. The RHS $F(x)$ of (7.6) satisfies also the condition (3.5) with this new function $\rho$ and, by Proposition 3.1, ${\partial \psi _{\Lambda }(\theta ) \over\partial \theta }$, under conditions like (3.7), satisfies (3.8), which is equivalent, with our new function $\rho$, to (7.5). 
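\smallskip Let us note that the equation (7.6) used above is simply obtained by differentiating (7.2) with respect to $\theta $: by (7.1), we have ${\partial \over \partial \theta } V_{\Lambda , \varepsilon }(\theta ) = V_{\Lambda , \varepsilon } - V_{\Lambda _1 , \varepsilon } \oplus V_{\Lambda _2 , \varepsilon }$, which, by (1.2), contains only the interaction terms coupling a point of $\Lambda _1$ with a point of $\Lambda _2$, i.e. the right hand side of (7.6).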
\bigskip Now, we shall be interested to the cluster decomposition of ${\partial \psi _{\Lambda }(\theta ) \over\partial \theta }$. \bigskip \noindent {\bf Proposition 7.2.} {\it For each integer $m\geq 0$, and for each $\gamma \in ]0, 1[$, there exists $C_m>0$ and $\varepsilon _m (\gamma) \in ]0, 1[$ with the following properties. For each finite subset $\Lambda $ of $\Z ^d $, for each points $\lambda ^{(1)}$, ... $\lambda ^{(m)}$ in $\Lambda $, for each box $Q \subseteq \Lambda $, we have, $$\Vert \nabla _{\lambda ^{(1)}} ... \nabla _{\lambda ^{(m)}} \Big (T_Q {\partial \psi _{\Lambda} \over \partial \theta } \Big ) \Vert \leq tC_m \varepsilon ^{\gamma {\rm diam} (Q )}\ \ \ \ \ \ \ {\rm if} \ \ \ h t \leq \sigma _0, \ \ \ \varepsilon \leq \varepsilon _m (\gamma ) \ \ \ \ {\rm and } \ \ \ \theta \in [0, 1] \leqno (7.7) $$ If one at least of the points $\lambda _j$ is not in $Q$, (hence $m\geq 1$) we have also, under similar conditions, $$\Vert \nabla _{\lambda ^{(1)}} ... \nabla _{\lambda ^{(m)}} \Big ( T_Q {\partial \psi _{\Lambda} \over \partial \theta } \Big ) \Vert \leq tC_m \varepsilon^{\gamma \sup _{j \leq m } \delta (\lambda ^{(j)} , Q)} \leqno (7.8)$$ If moreover $$\Lambda _1 = \prod _{j=1}^d [a_j, b_j] \hskip 1cm \Lambda _2 \subset \{ \lambda \in \Z^d , \ \ \ \lambda _1 > b_1 \} \hskip 1cm Q = \prod _{j=1}^d [\alpha _j, \beta _j ]\subseteq \Lambda _1\leqno (7.9)$$ then we have, under similar conditions, where $m$ can be $0$ $$\Vert \nabla _{\lambda ^{(1)}} ... \nabla _{\lambda ^{(m)}} \Big ( T_Q {\partial \psi _{\Lambda} \over \partial \theta } \Big ) \Vert \leq tC_m \varepsilon ^{\gamma (b_1 - \beta _1)} \leqno (7.10)$$ } \bigskip The proof relies on the two following Lemmas. \bigskip \noindent {\bf Lemma 7.3.} {\it For each integer $m\geq 0$, and for each $\gamma \in ]0, 1[$, there exists $C_m>0$ and $\varepsilon _m (\gamma) \in ]0, 1[$ such that, for each $u$ in $(\R ^p)^{\Lambda }$, for each points $\lambda ^{(1)}$, $\ldots $ $\lambda ^{(m)}$ in $\Lambda $,we have, if $m\geq 1$, $$ \Vert \nabla _{\lambda ^{(1)}}\ldots \nabla _{\lambda ^{(m)}} S_u {\partial \psi _{\Lambda}\over \partial \theta } \Vert \leq C_m \ t \ \varepsilon ^{\gamma \sup _{j\leq m} \delta (\lambda ^{(j)}, \sigma (u)) } \hskip 1cm if\ \ \ ht \leq \sigma _0 \ \ \ \ \ \varepsilon \leq \varepsilon _m (\gamma ). \leqno (7.11) $$ If $\sigma (u) $ is contained in $\Lambda _1$, we have, under the previous conditions, (but here $m$ can be $0$)$$\Vert \nabla _{\lambda ^{(1)}} ... \nabla _{\lambda ^{(m)}} S_u {\partial \psi _{\Lambda }(\theta ) \over \partial \theta }\Vert \leq C t \varepsilon ^{\gamma \delta (\sigma (u) , \Lambda _2)} \sharp \sigma (u)\leqno (7.12)$$ } \bigskip \noindent {\it Proof of (7.11).} The function $\varphi = S_u {\partial \psi _{\Lambda}\over \partial \theta }$ satisfies $${\partial \varphi \over \partial t}\ + \ {x-y \over t}\ .\ \ \nabla _x \varphi - {h^2 \over 2 } \Delta _x \varphi \ +\ h^2 \ ( \nabla_x \psi _{\Lambda }(\theta), \nabla_x \ \varphi ) =\Phi \leqno (7.13)$$ where $$\Phi = S_u V_{\Lambda _1 , \Lambda _2} - h^2 \Big ( \nabla _x S_u \psi _{\Lambda} (\theta ), \nabla _x {\partial \psi _{\Lambda}\over \partial \theta } (x+u, y+u, t)\Big ) \ \ \ \ \ \ V_{\Lambda _1 , \Lambda _2} = \sum _{\lambda \in \Lambda _1, \mu \in \Lambda _2} \varepsilon ^{|\lambda - \mu |}B_{\lambda - \mu }(x_{\lambda }, x_{\mu}) $$This equation is of the form (3.1), and $\varphi $ satisfies also (3.2) and (3.3). By (7.3), the function $a= \psi _{\Lambda }(\theta) $ satisfies the condition (3.4). 
We put, for each finite non void subset $E$ of $\Z^d$, $\rho (E)= \sup _{\lambda \in E} \delta (\lambda , \sigma (u))$. We already know that it satisfies (3.6). By the form of $V_{\Lambda _1 , \Lambda _2}$, we can write, for each sequence $\lambda ^{(1)}$, ... , $\lambda ^{(m)}$ in $\Lambda$, under conditions similar to those of (7.11), $$\Vert \nabla _{x_{\lambda ^{(1)}}}... \nabla _{x_{\lambda ^{(m)}}}S_u V_{\Lambda _1 , \Lambda _2}\Vert \leq K_m \varepsilon ^{ \gamma \sup _{j \leq m} \delta (\lambda ^{(j)} , \sigma (u))}. \leqno (7.14)$$ By (7.3'), we can write, for each point $\nu $ in $\Lambda $ and for each sequence $\lambda ^{(1)}$, ... , $\lambda ^{(k)}$ in $\Lambda $, under the conditions of (7.11) $$\Vert \nabla _{x_{\nu}} \nabla _{\lambda ^{(1)}}... \nabla _{\lambda ^{(k)}}S_u \psi _{\Lambda }(\theta )\Vert \leq C_k t \varepsilon ^{ \gamma \rho ( \{ \lambda ^{(1)} , ..., \lambda ^{(k)}, \nu \} )}\leqno (7.15)$$ and by (7.4), under the same type of condition, for each sequence $\mu ^{(1)}$, ... $\mu ^{(\ell )}$ in $\Lambda $ $$\Vert \nabla _{x_{\nu}} \nabla _{\mu ^{(1)}}... \nabla _{\mu ^{(\ell )}}{\partial \psi _{\Lambda }(\theta )\over \partial \theta } \Vert \leq C_k t \varepsilon ^{{ \gamma +1\over 2} {\rm diam} ( \{ \mu ^{(1)}, ..., \mu ^{(\ell )}, \nu \})}\leqno (7.16)$$By taking the product, using (3.6), and the summation on $\mu$, we obtain, for the RHS $\Phi$ of (7.13), for each sequence $\lambda ^{(1)}$, ... , $\lambda ^{(m)}$ in $\Lambda$, $$\Vert \nabla _{\lambda ^{(1)}}... \nabla _{\lambda ^{(m)}}\Phi \Vert \leq C_m (1 + h^2 t^2)\varepsilon ^{ \gamma \sup _{j \leq m} \delta (\lambda ^{(j)} , \sigma (u))}.$$Therefore, (7.11) follows from Proposition 3.1. \bigskip \noindent {\it Proof of (7.12).} We have only to change the bounds for the RHS $\Phi $ of (7.13). Our new function $\rho (E)$ is independent of $E$. We set $\rho (E) = \delta (\sigma (u), \Lambda _2)$, and (3.6) is obviously satisfied. We can write, instead of (7.14), if $\sigma (u) \subset \Lambda _1$, under conditions similar to those of (7.11), $$\Vert \nabla _{x_{\lambda ^{(1)}}}... \nabla _{x_{\lambda ^{(m)}}}S_u V_{\Lambda _1 , \Lambda _2}\Vert \leq K_m \varepsilon ^{ \gamma \delta (\sigma (u), \Lambda _2)}\sharp \sigma (u).$$The estimate (7.15), with $\gamma $ replaced by ${\gamma +1 \over 2}$, implies $$\Vert \nabla _{x_{\nu}} \nabla _{\lambda ^{(1)}}... \nabla _{\lambda ^{(k)}}S_u \psi _{\Lambda }(\theta )\Vert \leq C_k t \varepsilon ^{ {\gamma + 1 \over 2} \delta (\nu , \sigma (u))}.$$By (7.5), we can write, instead of (7.16)$$\Vert \nabla _{\nu} \nabla _{\mu ^{(1)}}... \nabla _{\mu ^{(\ell )}}{\partial \psi _{\Lambda }(\theta )\over \partial \theta } \Vert \leq C_k t \varepsilon ^{\gamma \delta (\nu , \Lambda _2)}$$We remark that $$\sum _{\nu \in \Z^d} \varepsilon ^{ {\gamma + 1 \over 2} \delta (\nu , \sigma (u))+ \gamma \delta (\nu , \Lambda _2)} \leq 4^d \varepsilon ^{ \gamma \delta ( \sigma (u), \Lambda _2)}\sharp \sigma (u) \hskip 1cm {\rm if } \ \ \ \ \varepsilon^{{1 - \gamma \over 2 }} \leq 2^{-d}$$ The end of the proof is the same as for (7.11). 
\bigskip \noindent {\bf Lemma 7.4.} {\it With the notations of Lemma 7.3, (but here $m$ can be $0$), we have, for each $u$ and $v$ in $(\R^p)^{\Lambda }$, under conditions similar to those of (7.11), $$ \Vert \nabla _{\lambda ^{(1)}}\ldots \nabla _{\lambda ^{(m)}} S_u S_v {\partial \psi _{\Lambda}\over \partial \theta } \Vert \leq C_m \ t \ \varepsilon ^{\gamma \delta (\sigma (u), \sigma (v)) } {\rm inf } \ \ (\sharp \sigma (u), \sharp \sigma (v))\leqno (7.17) $$ } \bigskip \noindent {\it Proof.} The new function $\varphi = S_uS_v { \partial \psi _{\Lambda }(\theta ) \over \partial \theta }$ satisfies an equation very similar to (7.13) $${\partial \varphi \over \partial t}\ + \ {x-y \over t}\ .\ \ \nabla _x \varphi - {h^2 \over 2 } \Delta _x \varphi \ +\ h^2 \ ( \nabla_x \psi _{\Lambda }(\theta), \nabla_x \ \varphi ) =\Phi \leqno (7.18)$$ but the new RHS $\Phi $ is now defined by $\Phi = \Phi _1 - \Phi _2 - \Phi _3 - \Phi _4$, where $$\Phi _1 = S_u S_v V_{\Lambda _1, \Lambda _2} \hskip 1cm \Phi _2 = h^2 \Big ( \nabla _x S_u S_v \psi _{\Lambda }(\theta ), \nabla _x { \partial \psi _{\Lambda }(\theta ) \over \partial \theta } (x+ u +v, y+ u +v , t) \Big ) \leqno (7.19)$$ $$ \Phi _3 = h^2 \Big ( \nabla _x S_u \psi _{\Lambda }(\theta ) , \nabla _x S_v { \partial \psi _{\Lambda }(\theta ) \over \partial \theta } (x+u, y+u, t) \Big ) \hskip 1cm \Phi _4 = h^2 \Big (\nabla _x S_v \psi _{\Lambda }(\theta ) , \nabla _x S_u { \partial \psi _{\Lambda }(\theta ) \over \partial \theta } (x+v , y+v, t ) \Big )$$We have only to estimate the new $\Phi$, setting now $\rho (E) = \delta (\sigma (u), \sigma (v))$. For each sequence $\lambda ^{(1)}$, ... , $\lambda ^{(m)}$ in $\Lambda$, ($m\geq 0$), we can write, under the conditions of (7.11), $$\Vert \nabla _{x_{\lambda ^{(1)}}}... \nabla _{x_{\lambda ^{(m)}}}S_u V_{\Lambda _1 , \Lambda _2}\Vert \leq K_m \varepsilon ^{\gamma \delta ( \sigma (u), \sigma (v))} \ \inf ( \sharp \sigma (u), \sharp \sigma (v)). \leqno (7.20)$$By (7.3') and (7.3''), we can write, for each point $\nu $ in $\Lambda $ and for each sequence $\lambda ^{(1)}$, ... , $\lambda ^{(k)}$ in $\Lambda $, under the conditions of (7.11), $$\Vert \nabla _{x_{\nu}} \nabla _{\lambda ^{(1)}}... \nabla _{\lambda ^{(k)}}S_u S_v \psi _{\Lambda }(\theta )\Vert \leq C_k t \ {\rm inf } \left [ \varepsilon ^{ { 1 + \gamma \over 2} \delta (\nu , \sigma(u))} \ , \ \varepsilon ^{ { 1 + \gamma \over 2} \delta (\sigma (u), \sigma (v))} \right ] \leq C_k t \varepsilon^{\gamma \delta (\sigma (u), \sigma (v)) + {1 - \gamma \over 2} \delta ( \nu , \sigma (u) )}.$$By (7.4), under the previous conditions, for each $\nu \in \Lambda $ and for each sequence $\mu ^{(1)}$, ... $\mu ^{(\ell )}$ in $\Lambda $ $$\Vert \nabla _{x_{\nu}} \nabla _{\mu ^{(1)}}... \nabla _{\mu ^{(\ell )}}{\partial \psi _{\Lambda }(\theta )\over \partial \theta } \Vert \leq C_k t$$ By multiplication and summation on $\nu \in \Lambda $, remarking that $$ \sum _{\nu \in \Z ^d} \varepsilon^{{1 - \gamma \over 2} \delta ( \nu , \sigma (u) )} \leq 4^d \sharp (\sigma (u)) \hskip 1cm {\rm if } \ \ \ \varepsilon ^{1- \gamma } \leq 2^{-2d} ,$$ and applying the same argument to the vector $v$, we obtain, under the usual conditions, for each sequence $\lambda ^{(1)}$, ... , $\lambda ^{(m)}$ in $\Lambda $, $$\Vert \nabla _{x_{\lambda ^{(1)}}}... 
\nabla _{x_{\lambda ^{(m)}}}\Phi _2 \Vert \leq K_m t^2 h^2 \varepsilon ^{\gamma \delta ( \sigma (u), \sigma (v))} \ \inf ( \sharp \sigma (u), \sharp \sigma (v)) \leqno (7.21)$$ By (7.3'), we can write, for each point $\nu $ in $\Lambda $ and for each sequence $\lambda ^{(1)}$, ... , $\lambda ^{(k)}$ in $\Lambda $, under the usual conditions, $$\Vert \nabla _{x_{\nu}} \nabla _{\lambda ^{(1)}}... \nabla _{\lambda ^{(k)}}S_u \psi _{\Lambda }(\theta )\Vert \leq C_k t \varepsilon ^{ {\gamma +1\over 2} \delta (\nu , \sigma (u)) }$$By (7.11), we can write, for each $\nu \in \Lambda $ and for each sequence $\mu ^{(1)}$, ... $\mu ^{(\ell )}$ in $\Lambda $ $$\Vert \nabla _{x_{\nu}} \nabla _{\mu ^{(1)}}... \nabla _{\mu ^{(\ell )}}{\partial \psi _{\Lambda }(\theta )\over \partial \theta } \Vert \leq C_k t \varepsilon ^{{ \gamma +1\over 2} \delta ( \nu , \sigma (v))}$$By multiplication of the last two inequalities , and summation on $\nu \in \Lambda $, using (6.7), we obtain for $\Phi _3$, and, permuting $u$ and $v$, for $\Phi _4$, estimations similar to (7.21). Then Lemma 7.4 follows from Proposition 3.1. \bigskip \noindent {\it End of the proof of Proposition 7.2.} We may assume that $Q$ is defined like in (5.2). \smallskip \noindent {\it Proof of (7.7).} We may assume that ${\rm diam} (Q) = \beta _1 - \alpha _1$. If the sequence $(\lambda ^{(1)}, ...,\lambda ^{(m)})$ is in the case iii) of Proposition 5.2, (7.7) follows from (5.9) and (7.4) since $\delta (B_+^{(1)}(Q), B_-^{(1)}(Q)) = {\rm diam} (Q)$. If we are in the case ii) of Proposition 5.2, (7.7) follows from (5.8) and (7.11). In the case i) of Proposition 5.2, (7.7) follows from (5.7) and Lemma 7.4.(We remark that the sets $B_{\pm }^{(1)}(Q)$ of (5.6) satisfy $\sharp (B_{\pm }^{(1)}(Q)) \leq ({\rm diam}(Q))^{d-1}$ and we have to change the coefficient $\gamma$). \smallskip \noindent {\it Proof of (7.8).} We may assume that $\sup _{j\leq m} \delta (\lambda ^{(j)}, Q) = \delta (\lambda ^{(1)}, B_+^{(1)}(Q))$. If we are in the case iii) of Proposition 5.2, (7.8) follows from (5.9) and (7.4). In the case ii), it follows from (5.8) and (7.4), and in the case i), from (5.7) and (7.11). \smallskip \noindent {\it Proof of (7.10).} In the case iii) of Proposition 5.2, (7.10) follows from (5.9) and (7.5), in the case ii) from (5.8) and (7.5), and in the case i) from (5.7) and (7.12) since, in the situation (7.9), we have $\delta (B_+^{(1)}(Q), \Lambda _2)\geq b_1 - \beta _1$. \bigskip \noindent {\bf 8. Semi-classical approximation and global behavior of the heat kernel} \bigskip Our first goal is to determine the first term of a possible semiclassical asymptotic expansion of the function $\psi _{\Lambda }(x, y, t, h, \varepsilon )$ of Theorem 1.1. For each finite set $\Lambda $ of $\Z^d$, we consider the following function $$\psi _{\Lambda }^{(0)}(x, y, t, \varepsilon) = t \int _0^1 V_{\Lambda , \varepsilon }(y +\theta (x-y))d\theta. \leqno (8.1)$$We shall prove that it is the first term of the semi-classical approximation of $\psi _{\Lambda }$, and give a bound of derivatives of the error term. \bigskip \noindent {\bf Proposition 8.1.} {\it There exists $C>0$ with the following properties. 
For each finite set $\Lambda $ of $\Z^d$, if $\psi_{\Lambda }$ is the solution of (1.16), (1.17), and $\psi _{\Lambda }^{(0)}$ the function defined above, we have:$$\sup _{\lambda \in \Lambda } \Vert \nabla _{\lambda } (\psi_{\Lambda} -\psi_{\Lambda}^{(0)})\Vert \leq C h^2 (t^2 + t^3) \hskip 1cm {\rm if }\ \ \ ht \leq \sigma _0 \ \ \ \ \ {\rm and } \ \ \varepsilon \leq 4^{-d} \leqno (8.2)$$where $\sigma _0$ is the constant of Theorem 1.1. } \bigskip \noindent {\it Proof.} We see that $${\partial \psi_{\Lambda}^{(0)} \over \partial t} + {x-y\over t} \cdot \nabla _x \psi_{\Lambda}^{(0)} = V_{\Lambda , \varepsilon}\leqno (8.3)$$ and therefore that the function $\varphi = \psi_{\Lambda} -\psi_{\Lambda}^{(0)}$ satisfies the equation $$ {\partial \varphi \over \partial t} + {x-y\over t} \cdot \nabla _x \varphi -{h^2 \over 2 } \Delta _x \varphi = F\hskip 1cm F= {h^2 \over 2 } \Delta _x \psi_{\Lambda}^{(0)} - {h^2 \over 2 } |\nabla _x \psi_{\Lambda} |^2 $$We see that, for each $\lambda $ in $\Lambda $, we can write, by Theorem 1.1, applied with $m\leq 2$ and, for example, $\gamma = {1 \over 2}$, $\Vert \nabla _{\lambda } F(.,t) \Vert \leq K_m h^2 (t + t^2)$ if the conditions of (8.2) are satisfied. Therefore, applying Proposition 3.1 with $\rho (E)=0$ and $m=1$, (then we use the hypothesis on $F$ only for $m=1$), we obtain (8.2). \bigskip If $E$ is a subset of $\Lambda $, we denote by $x_E $ the family of variables $(x_{\lambda })_{\lambda \in E}$, and we denote obviously the variable of $(\R^{p})^{\Lambda }$ by $x = (x_E , x_{\Lambda \setminus E})$. \bigskip In the rest of this section, we shall be sure that, for $t>0$, the integral kernel $U_{\Lambda} (x, y, t)$ is in ${\cal S}( (\R^p)^{\Lambda} \times (\R^p)^{\Lambda} )$ (which is certainly already well known), and then we shall estimate expectations of the form (1.14), where $A$ is the multiplication by a polynomially bounded function, depending only, with the previous notations, of $x_E$, where $E$ is a subset of $\Lambda$ with a fixed number of elements, (while $\sharp (\Lambda)$ may tend to $+ \infty$). \smallskip In the following Proposition, the $\ell ^1$ norm of a vector $x$, denoted by $|x|_1$, will play an important role. The answer to our question will be given by the following proposition. \bigskip \noindent {\bf Proposition 8.2.} {\it With these notations, if $\varepsilon $ is small enough, for each $t>0$ and $h>0$, the function $U_{\Lambda } (., ., t)$ is in ${\cal S}((\R^p)^{\Lambda}\times (\R^p)^{\Lambda})$. Moreover, there exist $\sigma _1>0$, $\varepsilon _1>0$ and, for each integers $m$ and $N$, a function $t\rightarrow C(t, m, N)>0$, bounded on each compact of $]0, + \infty [$, such that, for each subset $E$ of a box $\Lambda $, for each $t>0$, $h>0$ and $\varepsilon >0$ such that $h ^2(t + t^2) \leq \sigma _1$ and $ \varepsilon \leq \varepsilon _1$, we have $$ \int _{(\R^{p})^{\Lambda}} (1 + \vert x_E\vert _1 )^{m} U_{\Lambda} (x , x, t, h, \varepsilon ) dx \leq C (t, m, \sharp E) {\rm Tr } (e^{-tH_{\Lambda }(\varepsilon )}).\leqno (8.4)$$ } \bigskip The proof of this Proposition relies on the following Lemma. \bigskip \noindent {\bf Lemma 8.3.} {\it There exists $\kappa >0$, $C>0$, $\varepsilon _1>0$ and $\sigma _1>0$ such that, for each subset $E$ of $\Lambda$, we have $$\psi _{\Lambda } (x, y, t)\geq \psi _{\Lambda }(0 , x_{\Lambda \setminus E}, 0 , y_{\Lambda \setminus E}, t)+ \kappa t (| x_E|_1 + | y _E |_1 ) - C t (\sharp E) \leqno (8.5)$$ if $\varepsilon \leq \varepsilon _0$, $h^2(t + t^2) \leq \sigma _1$. 
} \bigskip \noindent {\it Proof of Lemma 8.3. First step.} We shall prove the analogue of (8.5) for $\psi _{\Lambda }^{(0)}$, i.e. that there exists $\kappa >0$, $C>0$ and $\varepsilon _0>0$, such that, for each subset $E$ of a box $\Lambda$ of $\Z^d$, we have $$\psi_{\Lambda }^{(0)} (x, y, t)\geq \psi _{\Lambda }^{(0)}(0 , x_{\Lambda \setminus E}, 0 , y_{\Lambda \setminus E}, t)+ 2 \kappa t (| x_E|_1 + | y _E |_1 ) - C t (\sharp E) \hskip 1cm {\rm if} \ \ \varepsilon \leq \varepsilon _0. \leqno (8.6)$$ If $A$ and $B_{\lambda } $ are the functions appearing in (1.2), we set $$\widetilde A (x, y) = \int _0^1 A(y + \theta (x-y))\ d \theta \hskip 1cm \widetilde {B_{\lambda }} (x, y) = \int _0^1 B_{\lambda } (y + \theta (x-y))\ d \theta \leqno (8.7)$$ (in the second formula, $x$ and $y$ are in $\R ^{2p}$). We can then write: $$ \psi _{\Lambda }^{(0)} (x, y, t) - \psi _{\Lambda }^{(0)} (0, x_{\Lambda \setminus E}, 0,y _{\Lambda \setminus E}, t) =t(F_1 + F_2 )\leqno (8.8)$$ where $$F_1 = \sum _{\lambda \in E} [ \widetilde A(x_{\lambda }, y_{\lambda }) -\widetilde A (0, 0)]\leqno (8.9)$$ $$F_2 = \sum _{ \lambda \in E , \ \mu \in \Lambda \setminus E}\varepsilon ^{|\lambda - \mu |} \Big ( \widetilde B_{\lambda - \mu } (x_{\lambda }, x_{\mu } , y_{\lambda}, y_{\mu }) - \widetilde B_{\lambda - \mu } (0, x_{\mu } ,0, y_{\mu }) \Big ) + ... \leqno (8.10)$$ $$ + \sum _{ \lambda \in \Lambda \setminus E , \ \mu \in E}\varepsilon ^{|\lambda - \mu |} \Big ( \widetilde B_{\lambda - \mu }(x_{\lambda }, x_{\mu } , y_{\lambda}, y_{\mu }) - \widetilde B_{\lambda - \mu } (x_{\lambda }, 0 ,y_{\lambda } , 0)\Big ) + ... $$ $$... + \sum _{ \lambda , \mu \in E \atop \lambda \not = \mu} \varepsilon ^{|\lambda - \mu |} \Big ( \widetilde B_{\lambda - \mu }(x_{\lambda }, x_{\mu } , y_{\lambda}, y_{\mu }) - \widetilde B_{\lambda - \mu }(0)\Big ) $$ There exists some $K$, depending only on $p$, such that $$\vert x\vert + \vert y\vert \leq K \int _0^1 \vert y + \theta (x-y)\vert d\theta \hskip 1cm \forall (x, y) \in \R^p \times \R^p . \leqno (8.11) $$ By the hypotheses on $A$, we can write, for some constants $C_1>0$ and $C_2>0$, $|x| \leq C_1 (A(x) - A(0)) + C_2$ for all $x\in \R^p$, and therefore, by (8.7), (8.9) and (8.11): $$|x_E|_1 + |y_E|_1 \leq K \big [ C_1 F_1(x, y) + C_2 \sharp (E)\big ].\leqno (8.12) $$ By (8.10), there exists $C_3>0$ such that $|F_2(x, y)| \leq C_3 \ \varepsilon (|x_E|_1 + |y_E|_1)$ if $\varepsilon \leq 1/2$. Therefore, by (8.8) and (8.12), under the same condition: $$t(|x_E|_1 + |y_E|_1) \leq KC_1 \Big [ \psi _{\Lambda }^{(0)} (x, y, t) - \psi _{\Lambda }^{(0)} (0, x_{\Lambda \setminus E}, 0,y _{\Lambda \setminus E}, t) \Big ] + KC_1 C_3 \varepsilon t(|x_E|_1 + |y_E|_1 ) + K C_2 t (\sharp E ).$$ If $ KC_1 C_3 \varepsilon \leq {1 \over 2} $, the inequality (8.6) follows. \bigskip \noindent {\it Second step.} If $\psi _{\Lambda }^{(0)}$ is the semi-classical approximation defined in (8.1), we have, by Proposition 8.1 (inequality (8.2)), if $ht \leq \sigma _0$ and $\varepsilon \leq 4^{-d}$, $$\left \vert (\psi _{\Lambda } - \psi _{\Lambda }^{(0)} ) (x_{E}, x_{\Lambda \setminus E} ,y_{E}, y_{\Lambda \setminus E}, t) - (\psi _{\Lambda } - \psi _{\Lambda }^{(0)} ) (0, x_{\Lambda \setminus E}, 0, y _{\Lambda \setminus E}, t) \right \vert \leq C h^2 (t^2 + t^3) (| x_{E}|_1 + | y_{E}|_1 ).$$ The lemma follows from (8.6) and from this inequality. \bigskip \noindent {\it Proof of Proposition 8.2. First claim.} For any $h>0$ and $\varepsilon \leq \varepsilon _0$, we can find $t_1>0$ such that the conditions of Lemma 8.3 are satisfied if $0< t \leq t_1$.
Therefore, by (1.3), by Theorem 1.1 and Lemma 8.3, the function $U_{\Lambda }(., .,t)$ is in ${\cal S}((\R^p)^{\Lambda}\times (\R^p)^{\Lambda})$ if $0< t \leq t_1$, and therefore for every $t>0$, since $e^{-tH_{\Lambda }(\varepsilon )} = \big ( e^{-(t/n)H_{\Lambda }(\varepsilon )} \big )^n$ for every integer $n\geq 1$ and the composition of two kernels of ${\cal S}$ is in ${\cal S}$. \smallskip \noindent {\it Second claim.} Let us denote by $I$ the left hand side of (8.4). On the diagonal, (1.3) gives $U_{\Lambda }(x, x, t, h, \varepsilon ) = (2\pi th^2)^{-(p\sharp \Lambda )/2} e^{-\psi _{\Lambda }(x, x, t)}$, and therefore, by Lemma 8.3, $$I \leq (2\pi th^2)^{-(p\sharp \Lambda ) /2} \int _{(\R^{p})^{\Lambda }} (1 + \vert x_E \vert _1)^m \ e^{Ct \sharp (E) - \kappa t \vert x_E \vert _1} \ e^{-\psi _{\Lambda }(0, x_{\Lambda \setminus E}, 0, x_{\Lambda \setminus E}, t)} \ dx.$$ By Theorem 1.1, there exists $\kappa _1>0$ such that $$|\psi _{\Lambda }(x, x, t) - \psi _{\Lambda}(0 , x_{\Lambda \setminus E}, 0 , x_{\Lambda \setminus E}, t ) | \leq \kappa _1 t |x_E|_1.\leqno (8.13)$$ If we set $$ \Psi _1 (t, m, \sharp(E)) = \int _{ (\R^{p})^E} (1 + |x_E|_1 ) ^m e^{Ct \sharp (E) - \kappa t |x_E|_1 } dx_E \hskip 1cm \Psi _2 (t, \sharp (E)) = \int _{ (\R^{p})^E} e^{ - \kappa_1 t |x_E|_1 } dx_E, \leqno (8.14)$$ we can write $$I \leq (2\pi th^2)^{-(p\sharp \Lambda ) /2} \ { \Psi _1 (t, m, \sharp (E)) \over \Psi _2(t, \sharp (E))} \ \int _{(\R^{p})^{\Lambda }} e^{-\psi _{\Lambda }(0,x_{\Lambda \setminus E} , 0,x_{\Lambda \setminus E}, t)- \kappa _1 t |x_E| } dx .$$ By (8.13), it follows that $$I \leq (2\pi th^2)^{-(p\sharp \Lambda ) /2} \ {\Psi_1(t, m, \sharp (E)) \over \Psi _2 (t, \sharp (E))} \ \int _{(\R^{p})^{\Lambda}} e^{-\psi _{\Lambda }(x,x, t)} dx ={\Psi_1(t, m, \sharp (E)) \over \Psi _2 (t, \sharp (E))}\ \ {\rm Tr } (e^{-tH_{\Lambda }(\varepsilon )}).$$ The proposition is proved. \bigskip \bigskip \noindent {\bf 9. Quantum correlation of local multiplicative observables.} \bigskip The aim of this section is the proof of Theorem 1.5 in the case where $A$ and $B$ are multiplications by bounded or, more generally, polynomially bounded functions. More precisely, for each box $\Lambda $ in $\Z^d$, for all disjoint finite subsets $E_1$ and $E_2$, if $A $ is the multiplication by a function $f \in C^{\infty }( (\R ^p)^{E_1})$, and $B$ is the multiplication by a function $g \in C^{\infty }( (\R ^p)^{E_2})$, satisfying, for some constants $N(f)$ and $N(g)$, $$\vert f(x) \vert \leq N(f) (1+\vert x_{E_1} \vert )^{m_1} \hskip 1cm \vert g(x) \vert \leq N(g) (1+\vert x_{E_2} \vert )^{m_2}, \leqno (9.1)$$ we shall estimate the correlation $cov _{\Lambda , \varepsilon } (A, B) $ defined in (1.10), (1.11). \medskip We shall distinguish the case $m_1=m_2=0$ because the constants are then more explicit. We set $m(t) = \inf (t, 1)$ and $M(t) = \sup (t, 1)$. \bigskip \noindent {\bf Proposition 9.1.} {\it With the previous notations, there exists a function $\gamma \rightarrow \varepsilon _0(\gamma)$ from $]0, 1[$ into itself, such that, \smallskip \noindent a) if the conditions (9.1) are satisfied with $m_1=m_2=0$, and if $\gamma \in ]0, 1[$, we have $$cov _{\Lambda , \varepsilon } (A, B) \leq 4 \ m(t)\ \inf ( \sharp (E_1), \sharp (E_2))\ ( M(t) \varepsilon ^ {\gamma})^{ \delta ( E_1, E_2)} \ N(f) \ N(g)\leqno (9.2)$$ if the following conditions are satisfied: $$ht \leq \sigma _0, \ \ \ \ \ \ \ \varepsilon \leq \varepsilon _0 (\gamma) \ \ \ \ \ \ M(t) \varepsilon^{\gamma} \leq {1 \over 2}, \leqno (9.3)$$ where $\sigma _0$ is the constant of Theorem 1.1.
\smallskip \noindent b) There exists $\sigma _1>0$, independent of all the parameters, and, for each integers $m_1$, $m_2$, $N_1$ and $N_2$, a function $t \rightarrow K( m_1, m_2, N_1, N_2, t)$, bounded on each compact set of $]0, + \infty [$, such that, if the conditions (9.1) are satisfied with $m_1\geq 0$ and $m_2\geq 0$, if $\gamma \in ]0, 1[$, we have $$cov _{\Lambda , \varepsilon } (A, B) \leq m(t)\ K(m_1, m_2, \sharp (E_1), \sharp (E_2), t)\ ( M(t) \varepsilon ^ {\gamma})^{ \delta ( E_1, E_2)} N(f) N(g)\leqno (9.4)$$ if the following conditions are satisfied: $$h^2(t +t^2) \leq \sigma _1, \ \ \ \ \ \ \ \ \varepsilon \leq \varepsilon _0(\gamma), \ \ \ \ \ \ \ \ M(t) \varepsilon ^ {\gamma} \leq {1 \over 2}.\leqno (9.5)$$ If $m_1 = 0$ and $m_2>0$, we can take $K$ independent of $\sharp E_1$.} \bigskip If, in the definition of $H_{\Lambda \varepsilon } $, the potential $V_{\Lambda \varepsilon }$ is replaced by a family, depending on an auxiliary parameter, such that the hypotheses of the introduction are satisfied uniformly, then the constants in Proposition 9.1 can be chosen independent of this parameter. \bigskip The first step of the proof of Proposition 9.1, which will also used in Section 10 for the case where $A$ and $B$ may be arbitrary bounded operators, will be an expression of the quantum correlation defined in (1.11), using operators in $L^2((\R^p)^{\Lambda } \times (\R^p)^{\Lambda })$. \medskip Denoting by $X = (x', x'')$ the variable of $(\R^p)^{\Lambda } \times (\R^p)^{\Lambda }$, for each operator $A$ in $L^2((\R^p)^{\Lambda })$, we denote by $A'$ (resp. $A''$ ) the operator $A$, seen as an operator in $L^2((\R^p)^{\Lambda } \times (\R^p)^{\Lambda })$, acting only on the variable $x'$ (resp. on $x''$). We denote by $\widetilde H_{\Lambda, \varepsilon }$ the operator in $(\R^p)^{\Lambda } \times (\R^p)^{\Lambda }$ defined by $\widetilde {H_{\Lambda, \varepsilon }}= H'_{\Lambda, \varepsilon } + H''_{\Lambda, \varepsilon } $. Then, an easy computation shows that $$cov _{\Lambda , \varepsilon } (A, B) = { 1\over 2} \ { Tr \ \left ( e^{-t \widetilde H_{\Lambda , \varepsilon } } (A' -A'') (B' - B'') \right ) \over Tr \ \left ( e^{-t \widetilde H_{\Lambda , \varepsilon } } \right ) } \leqno (9.6)$$ \bigskip In the second step, we shall replace, in the numerator, the operator $ e^{-t \widetilde H_{\Lambda , \varepsilon } }$ by another one, denoted by $T(E_1,E_2)$, which will satisfy better estimates. In order to define this new operator, we need some notations. \bigskip \noindent {\it Notations.} For each disjoint subsets $E_1$ and $E_2$ of $\Lambda $, let $G_{+ }(E_1, E_2)$ and $G_{- }(E_1, E_2)$ the sets of maps $\tau $ from $\Lambda$ to $\{ 0, 1\}$ defined by $$\matrix { G_{+ }(E_1, E_2) \ = \ \{ \tau : \Lambda \longrightarrow \{0, 1 \}&| \tau (\lambda )=0\ \ \forall \lambda \in E_1, & \tau (\lambda )=0\ \ \forall \lambda \in E_2\} \cr G_{- }(E_1, E_2) \ = \ \{ \tau : \Lambda \longrightarrow \{0, 1 \},&| \tau (\lambda )=0\ \ \forall \lambda \in E_1, & \tau (\lambda )=1\ \ \forall \lambda \in E_2\}. \cr } \leqno (9.7)$$ For each $\tau \in G_{\pm }(E_1, E_2)$, we denote by $\phi _{\tau }$ the map in $(\R^p)^{\Lambda } \times (\R^p)^{\Lambda }$ defined by $$ \Big ( \phi _{\tau } (x', x'') \Big ) _{\lambda } =\left \{ \matrix { (x'_{\lambda } , x''_{\lambda } ) & if &\tau (\lambda )=0\cr & & \cr (x''_{\lambda } , x'_{\lambda } )&if&\tau (\lambda )=1 \cr } \right . 
\ \ \ \ \ \ \ \forall X =(x', x'') \in (\R^p)^{\Lambda } \times (\R^p)^{\Lambda } \leqno (9.8)$$ Let $\Phi _{\tau }$ be the corresponding operator in $L^2((\R^p)^{\Lambda } \times (\R^p)^{\Lambda })$. For each operator $S$ in $L^2((\R^p)^{\Lambda } \times (\R^p)^{\Lambda })$, for each function $F(X)$ on $\Big ( (\R^p)^{\Lambda }\Big )^2$ or $G(X, Y)$ on $\Big ( (\R^p)^{\Lambda }\Big ) ^4$, and for each $\tau \in G_{\pm}$, we set $$S^{(\tau )} = ( \Phi _{\tau })^{-1} S \Phi _{\tau }, \ \ \ \ \ F^{(\tau)} (X)= F(\phi _{\tau }(X)), \ \ \ \ \ G^{(\tau)} (X, Y)= G(\phi _{\tau }(X), \phi _{\tau }(Y)) .\leqno (9.9)$$ If $U_{\Lambda }(x, y, t)$ is the heat kernel of Theorem 1.1, and $ \psi _{\Lambda }(x, y, t)$ the function appearing in its expression (1.3), we set, if $X = (x', x'')$, $Y = (y', y'')$, $$\widetilde U_{\Lambda } (X, Y, t) = U_{\Lambda }(x', y', t) \ U_{\Lambda }(x'', y'', t)\hskip 1cm \widetilde \psi _{\Lambda }(X, Y, t)= \psi _{\Lambda }(x', y', t) + \psi _{\Lambda }(x'', y'', t).$$ Therefore we have $$\widetilde U _{\Lambda } (X, Y, t) = (2 \pi t h^2)^{- p \sharp (\Lambda)} e^{-{|X-Y|^2 \over 2th^2}} e^{-\widetilde \psi _{\Lambda }(X, Y, t)}\leqno (9.10)$$ \bigskip \noindent {\bf Proposition 9.2.} {\it We can write, if $E_1$ and $E_2$ are disjoint subsets of a box $\Lambda $ of $\Z^d$, if $A$ and $B$ are either multiplications by functions depending only on $x_{E_1}$ and $x_{E_2}$ and satisfying (9.1), or bounded operators in ${\cal L}( {\cal H}_{E_1})$ and ${\cal L}( {\cal H}_{E_2})$: $$cov _{\Lambda , \varepsilon } (A, B) = { 1\over 2} \ { Tr \ \Big ( T(E_1, E_2) (A' -A'') (B' - B'') \Big ) \over Tr \ \Big ( e^{-t \widetilde H_{\Lambda , \varepsilon } } \Big ) } \leqno (9.11)$$ where $$T(E_1, E_2) = {1\over 2 \ \sharp (G_+(E_1, E_2))} \left [ \sum _{\tau \in G_+(E_1, E_2)} e^{-t \widetilde H^{\tau }} \ - \sum _{\tau \in G_-(E_1,E_2)} e^{-t\widetilde H^{\tau }} \right ]. \leqno (9.12) $$ Moreover, the integral kernel $K_{E_1, E_2}$ of the operator $T(E_1, E_2)$ defined in (9.12) can be written $$K_{E_1, E_2}(X, Y, t) = {1 \over 2\ \sharp (G_+(E_1, E_2))} \sum _{\tau \in (G_+ \cup G_- )(E_1,E_2)}{\rm sgn} (\tau) \widetilde U_{\Lambda }^{(\tau )} (X, Y, t)\leqno (9.13)$$ where ${\rm sgn}(\tau ) = \pm 1 $ if $\tau \in G_{\pm } (E_1, E_2)$.} \bigskip \noindent {\it Proof.} The numerator of the fraction in (9.6), i.e. $$ N(\varepsilon ):= {\rm Tr }\ \Big ( e^{-t \widetilde H_{\Lambda , \varepsilon } } (A' -A'') (B' - B'') \Big )$$ satisfies $$N(\varepsilon ) = {\rm sgn} (\tau ) Tr \ \left ( e^{-t \widetilde H_{\Lambda , \varepsilon } ^{\tau } } (A' -A'') (B' - B'') \right ) \hskip 1cm \forall \tau \in ( G_{+}\cup G_-)(E_1, E_2)$$ and therefore, since $\sharp \big ( G_+(E_1,E_2)\big ) =\sharp \big ( G_-(E_1,E_2)\big )$, $$N(\varepsilon )= {\rm Tr }\ \Big ( T(E_1, E_2) (A' -A'') (B' - B'') \Big ). \leqno (9.14)$$ The equality (9.11), with $T(E_1, E_2)$ defined by (9.12), follows, and the expression (9.13) of the integral kernel is direct since $\widetilde U_{\Lambda } ^{(\tau )} (X, Y, t)$ is the heat kernel of ${\rm exp} (-t \widetilde H _{\Lambda , \varepsilon } ^{\tau })$. \bigskip The next step will be another expression of $K_{E_1, E_2}(X, X, t)$, the restriction to the diagonal of the integral kernel (9.13) of the operator $T(E_1, E_2)$ defined in (9.12), using the cluster decomposition of Section 5. For that, we need again some notations.
\bigskip \noindent {\it Notations.} For each box $Q\subset \Lambda $, we write obviously $$(T_Q \widetilde \psi _{\Lambda } )(X, Y, t) = (T_Q \psi _{\Lambda }) (x', y', t)+ (T_Q \psi _{\Lambda }) (x'', y'', t) \hskip 1cm {\rm if } \ \ \ \ X=(x', x''), \ \ \ \ \ \ Y=(y', y'').$$For each box $Q $, let us set $$M_Q(t) = \sup _{X\in (\R^p)^{Q}}T_Q \widetilde \psi _{\Lambda } (X, X, t) \hskip 1cm g_Q (X, t) = e^{ M_Q(t)-(T_Q\widetilde \psi _{\Lambda } )(X, X, t) } -1 \leqno (9.15)$$ We denote by ${\cal A} (\Lambda)$ the following set: the elements of ${\cal A} (\Lambda)$ are the (possibly empty) sets of boxes, contained in $\Lambda$, and of diameter $\geq 1$. For each set $A \in {\cal A} (\Lambda)$, we set $$G_A(X, t)= \prod _{Q\in A} g_Q (X, t).\leqno (9.16)$$ If the set $A$ is empty, we set $G_A(X, t)=1$. We set $$\Phi _0(X, t) = \widetilde \psi _{\Lambda }(0, 0, t) + \sum _{{\rm diam} (Q)=0 } (T_Q \widetilde \psi _{\Lambda })(X, X, t) \ + \sum _{{\rm diam} (Q) \geq 1 } M_Q (t),\leqno (9.17)$$ where the sums are taken on all boxes contained in $\Lambda$. \medskip Now, if $E_1$ and $E_2$ are subsets of a box $\Lambda $ of $\Z^d$, we denote by ${\cal A}_0(E_1, E_2, \Lambda)$ the following set: the elements of ${\cal A}_0(E_1, E_2, \Lambda)$ are the (possibly empty) sets of boxes contained in $\Lambda$, with diameter $\geq 1$, containing a sequence $(Q_1, ... , Q_p)$ ($p\geq 1$) of boxes connecting $E_1$ and $E_2$, i.e. such that $E_1\cap Q_1 \not = \emptyset$, $Q_1 \cap Q_2 \not = \emptyset$, . . . $Q_{p-1}\cap Q_p\not = \emptyset$, and $Q_p \cap E_2\not = \emptyset$. \bigskip \noindent {\bf Proposition 9.3. } {\it Under the hypotheses of Proposition 9.2, and with these notations, we have $$K _{E_1, E_2} (X, X, t) = {(2 \pi t h^2)^{-p|\Lambda | } e^{-\Phi _0(X, t)} \over 2\ \sharp (G_+(E_1, E_2))} \sum _{\tau \in (G_+ \cup G_- )(E_1,E_2)} {\rm sgn} (\tau) \sum _{A\in {\cal A}_0(E_1,E_2, \Lambda)}(G_A)^{(\tau )} (X, t) .\leqno (9.18)$$} \bigskip \noindent {\it Proof. } Since each function $(T_Q\psi _{\Lambda })(x, x, t)$ depends only on the variables $x_{\lambda }$ such that $\lambda \in Q$, (by Proposition 5.1 i)), we have the following implication, if $\tau $ and $\tau '$ are maps from $\Lambda $ to $\{ 0, 1 \}$, with the notation (9.9), $$ \left . \matrix { \tau '(\lambda ) = \tau (\lambda ) \ \ \forall \lambda \in Q \cr or \cr \tau '(\lambda ) = 1 - \tau (\lambda ) \ \ \forall \lambda \in Q\cr } \right \} \Longrightarrow (T_Q\widetilde \psi _{\Lambda } )^{(\tau)} ( X,X, t) =(T_Q\widetilde \psi _{\Lambda } )^{(\tau ' )} ( X,X, t)\leqno (9.19)$$ By (9.10), (9.15) and Proposition 5.1 (point ii), we can write $$\widetilde U_{\Lambda } (X, X, t) = (2 \pi t h^2)^{-p\sharp (\Lambda )} e^{-\Phi _0(X, t)} \prod _{ {\rm diam Q} \geq 1} (1 + g_Q(X, t))\leqno (9.20)$$ It follows that $$\widetilde U_{\Lambda } (X, X, t) = (2 \pi t h^2)^{-p\sharp (\Lambda )} e^{-\Phi _0(X, t)} \sum _{A \in {\cal A}(\Lambda) } G_A (X, t)\leqno (9.21)$$ where ${\cal A}(\Lambda)$ is the set defined before the statement of the proposition. 
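\medskip \noindent (The passage from (9.20) to (9.21) is simply the expansion of the finite product: with two boxes $Q_1$ and $Q_2$, for instance, $(1 + g_{Q_1})(1 + g_{Q_2}) = 1 + g_{Q_1} + g_{Q_2} + g_{Q_1} g_{Q_2}$, and the four terms correspond to the four possible sets $A \subseteq \{ Q_1, Q_2 \}$, with the convention $G_{\emptyset } = 1$.)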
By (9.19) and Proposition 5.1 i), $\Phi _0$, defined in (9.17), is not modified by the action of $\phi _{\tau }$ $(\tau \in G_{\pm }(E_1, E_2))$, and we can write, by (9.13) and (9.21), $$K_{E_1, E_2 }(X, X, t)={(2 \pi t h^2)^{-p\ \sharp (\Lambda )} e^{-\Phi _0(X, t)} \over 2\ \sharp (G_+(E_1, E_2))} \sum _{\tau \in (G_+ \cup G_- )(E_1,E_2)} {\rm sgn} (\tau) \sum _{A\in {\cal A}(\Lambda)} (G_A)^{(\tau )} (X, t)\leqno (9.22)$$ Let us show that, if $A\in {\cal A}(\Lambda )$ and $ A \notin {\cal A}_0(E_1, E_2, \Lambda )$, we have $$\sum _{\tau \in (G_+ \cup G_- )(E_1,E_2)} {\rm sgn} (\tau) (G_A)^{(\tau )} (X, t) = 0.\leqno (9.23)$$ Let us denote by $\widehat E_2(A)$ the set of all points $\lambda \in \Lambda $ which are either in $E_2$, or connected to $E_2$ by a sequence of boxes in $A$, in other words, such that there exists a sequence of boxes $Q_1\in A $, . . . $Q_p \in A$ ($p\geq 1$), such that $E_2 \cap Q_1 \not = \emptyset $, $Q_j \cap Q_{j+1} \not = \emptyset $ ($1 \leq j \leq p-1$), and $\lambda \in Q_p$. For each $\tau \in G_+(E_1,E_2)$, we define a map $b_A(\tau )$ from $\Lambda $ to $\{ 0, 1 \} $ by $$\Big ( b_A(\tau ) \Big ) (\lambda ) = \left \{ \matrix { 1 - \tau (\lambda ) &if &\lambda \in \widehat E_2(A) \cr \tau (\lambda ) &if &\lambda \notin \widehat E_2(A) \cr } \right .$$ Let us prove that, with our hypotheses on $A$, for each $\tau \in G_+(E_1,E_2)$, we have $$G_A^{(\tau) } ( X, t)=G_A ^{ ( b_A(\tau ) )} ( X, t)\leqno (9.24)$$ For each box $Q\in A$, either $Q$ is disjoint from $\widehat E_2 (A)$, and we have $(b_A(\tau )) (\lambda ) = \tau (\lambda )$ for all $\lambda \in Q$, or it is contained in $\widehat E_2 (A)$, and we have $(b_A(\tau )) (\lambda ) = 1 - \tau (\lambda )$ for all $\lambda \in Q$. By (9.19), for each box $Q \in A $, we have $g_Q^{ ( \tau )} ( X, t)=g_Q ^{ (b_A(\tau ) )}( X, t)$, and (9.24) follows. If $\tau \in G_+ (E_1, E_2)$, $b_A(\tau )$ is in $G_- (E_1, E_2)$, since $E_2$ is contained in $\widehat E_2(A)$ and $E_1$ is disjoint from $\widehat E_2(A)$ (otherwise $A$ would be in ${\cal A}_0(E_1,E_2, \Lambda)$). Since the map $b_A$ is a bijection between $G_+ (E_1,E_2)$ and $G_- (E_1, E_2)$, the equality (9.23) follows from (9.24). Therefore, the Proposition follows from (9.22) and from (9.23), which is valid if $A\in {\cal A}(\Lambda ) \setminus {\cal A}_0(E_1, E_2, \Lambda)$. \bigskip For the proof of Proposition 9.1, we need estimates on $K ^{\Delta }_{E_1, E_2}(X, t)=K _{E_1, E_2} (X, X, t)$; in the next section, we shall also need estimates on its derivatives. The next Proposition gives the needed estimates, with the following notations. \bigskip For each subset $E$ of a box $\Lambda $ of $\Z^d$, for each function $F$, $C^{\infty }$ on $(\R^{2p}) ^{\Lambda }$, and for each integer $m\geq 0$, we set $$ |F|^{(m)}_E (X) = \sup _{0 \leq k \leq m} \ \sup _{ (\lambda ^{(1)}, ... ,\lambda ^{(k)})\in E^k} | \nabla _{\lambda ^{(1)}} ... \nabla _{\lambda ^{(k)}}F (X)|\leqno (9.25)$$ and, if $F$ and all its derivatives are bounded, $$ \Vert F \Vert ^{(m)}_E = \sup _{X \in (\R^{2p})^{\Lambda} } |F|^{(m)}_E (X) \leqno (9.26)$$ Let us recall that $m(t)= \inf (t, 1)$ and $M(t) = \sup (t, 1)$. 
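\medskip The combinatorial mechanism behind the cancellation (9.23), namely the flip $b_A$ which exchanges $G_+(E_1, E_2)$ and $G_-(E_1, E_2)$ as soon as $A$ does not connect $E_1$ to $E_2$, can also be tested on a small example. The following sketch (in Python; the one-dimensional toy lattice, the particular boxes and all the names are purely illustrative) builds $\widehat E_2(A)$, checks that it does not meet $E_1$, and verifies that $b_A$ maps $G_+$ bijectively onto $G_-$; it illustrates only the bijection, not the analytic estimates.

from itertools import product

# hypothetical one-dimensional toy lattice; boxes are the integer intervals (a, b) with a <= b
Lam = list(range(6))                      # Lambda = {0, ..., 5}
E1, E2 = {0}, {5}
A = [(1, 2), (4, 5)]                      # a set A of boxes which does NOT connect E1 to E2

def points(Q):
    return set(range(Q[0], Q[1] + 1))

def hat_E2(A, E2):
    # widehat E_2(A): points of E2, together with those reachable from E2 through boxes of A
    reached = set(E2)
    changed = True
    while changed:
        changed = False
        for Q in A:
            if points(Q) & reached and not points(Q) <= reached:
                reached |= points(Q)
                changed = True
    return reached

hatE2 = hat_E2(A, E2)
assert not (hatE2 & E1)                   # A is not in A_0(E1, E2, Lambda): E1 is not reached

def G(sign):
    # G_+ : tau = 0 on E1 and on E2 ;  G_- : tau = 0 on E1 and tau = 1 on E2
    out = []
    for vals in product((0, 1), repeat=len(Lam)):
        tau = dict(zip(Lam, vals))
        if all(tau[l] == 0 for l in E1) and all(tau[l] == (0 if sign == '+' else 1) for l in E2):
            out.append(tau)
    return out

def b_A(tau):
    # flip tau on widehat E_2(A), keep it elsewhere
    return {l: 1 - tau[l] if l in hatE2 else tau[l] for l in Lam}

Gp, Gm = G('+'), G('-')
images = {tuple(sorted(b_A(tau).items())) for tau in Gp}
assert images == {tuple(sorted(tau.items())) for tau in Gm}   # b_A maps G_+ onto G_-
print(len(Gp), len(Gm))                   # 16 16 : equal cardinalities, as used in Proposition 9.2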
\bigskip \noindent {\bf Proposition 9.4.} {\it For all integers $m\geq 0$ and $N\geq 0$, there exist a constant $C_{m, N}$ and a function $\gamma \rightarrow \varepsilon _0(\gamma , m, N)$ from $]0, 1[$ into itself such that, for all subsets $E$, $E_1$ and $E_2$ of a finite box $\Lambda $ of $\Z^d$ such that $E_1$ and $E_2$ are disjoint, we can write $$ \vert K^{\Delta } _{E_1,E_2}\vert ^{(m)} _E (X, t) \leq ... \leqno (9.27)$$ $$\ldots \leq m(t)\ M(t)^m \ \ C_{m, \sharp (E)} \big ( M(t) \varepsilon ^{\gamma}\big ) ^{ \delta (E_1, E_2)} \ {\inf ( \sharp ( E_1), \sharp ( E_2)) \over 2 \ \sharp (G_+(E_1,E_2)) } \ \ \sum _{\tau \in (G_+\cup G_-)(E_1,E_2)} \widetilde U_{\Lambda }^{\tau } (X,X, t, h, \varepsilon)$$ if $$ht \leq \sigma _0, \ \ \ \ M(t) \varepsilon^{\gamma} \leq 1/2, \ \ \ \ \varepsilon \leq \varepsilon_0 (\gamma , m, \sharp (E)). \leqno (9.28)$$ } \bigskip \noindent {\it Notations.} Let us denote by ${\cal S}(E_1, E_2, \Lambda )$ the set of $A \in {\cal A}(\Lambda)$ whose elements can be ordered in a finite sequence $Q_1$, .. $Q_p$ ($p\geq 1$), satisfying $E_1\cap Q_1 \not = \emptyset$, $Q_1 \cap Q_2 \not = \emptyset$, . . . $Q_{p-1}\cap Q_p\not = \emptyset$, and $Q_p \cap E_2\not = \emptyset$. (Therefore, a set $A\in {\cal A}(\Lambda)$ is in ${\cal A}_0(E_1, E_2, \Lambda )$ if and only if it contains a set in ${\cal S}(E_1, E_2, \Lambda )$.) We denote by ${\cal B}(E, \Lambda)$ the set of $A \in {\cal A}(\Lambda)$ all of whose elements have a nonempty intersection with $E$, and by ${\cal C}(E, \Lambda)$ the set of $A \in {\cal A}(\Lambda)$ all of whose elements have an empty intersection with $E$. We shall denote by $K_m$ a constant such that: $$|uv|^{(m)}_E(X) \leq K_m |u|^{(m)}_E(X) |v|^{(m)}_E(X).$$ \bigskip \noindent {\it Proof of Proposition 9.4.} By (9.18), the proposition will follow if we prove that, for some constant $C_{m, \sharp (E)}$ and $\varepsilon _0 (\gamma , m, \sharp (E))$, independent of $\tau $, we have, for each $\tau \in (G_+ \cup G_-)(E_1, E_2)$, under the conditions (9.28), $$(2 \pi t h^2)^{-p\ \sharp (\Lambda )} \sum _{A\in {\cal A}_0(E_1,E_2)} \vert e^{-\Phi _0(., t)} (G_A)^{(\tau )} (. , t) \vert ^{(m)} _E (X) \leq \leqno (9.29)$$ $$ \leq C_{m, \sharp (E)} \varepsilon ^{\gamma \delta (E_1, E_2)} \ \inf ( \sharp ( E_1), \sharp ( E_2)) \ \widetilde U_{\Lambda } ^{(\tau )} (X, X, t). $$ For all sets $A\in {\cal S}(E_1, E_2)$, $B\in {\cal B}(E)$ and $C\in {\cal C}(E)$, the set $A \cup B \cup C$ is in $ {\cal A}_0(E_1,E_2)$. All the elements of $ {\cal A}_0(E_1,E_2)$ can be obtained in this form, perhaps in several ways, and we have $G_{A \cup B \cup C} = G_A G_B G_C$ if $A$, $B$ and $C$ are disjoint. Remarking that all the functions $G_A$ are $\geq 0$, and that, if $C\in {\cal C}(E)$, $G_C^{(\tau ) } (X, t)$ is independent of the variable $X_E$, it follows that $$ \sum _{A\in {\cal A}_0(E_1,E_2)} \vert e^{-\Phi _0(. , t)}(G_A)^{(\tau )} \vert ^{(m)}_E(X, t) \leq \leqno (9.30) $$ $$ \leq K_m^2 \vert e^{-\Phi _0(. , t)} \vert ^{(m)} _E (X) \sum _{A, B, C} \vert (G_A)^{(\tau )} \ \vert ^{(m)}_E(X, t) \ |(G_B)^{(\tau )}|^{(m)}_E(X, t) \ (G_C)^{(\tau )} (X, t) , $$ where the sum is over all the triples $(A, B, C)$, where $A$, $B$ and $C$ are disjoint sets of boxes in $\Lambda$ such that $A$ is in ${\cal S}(E_1,E_2 )$, $B$ in ${\cal B}(E)$ and $C$ in ${\cal C}(E)$. \medskip By Proposition 6.1, we can write $\vert e^{-\Phi _0(. , t)} \vert ^{(m)} _E (X) \leq C_m \ M(t)^m\ e^{-\Phi _0(X, t)}$. 
By Proposition 6.1, point i), and by the definition (9.16) of $G_A$, for any sequence of boxes ($Q_1$, .. $Q_N$) contained in $\Lambda$, not reduced to single points, we can write, if $ht \leq \sigma _0$ and $\varepsilon \leq \varepsilon _m({ 3 + \gamma \over 4} )$, for some other $C_m >1$: $$ \vert \big ( g_{Q_1} \ldots g_{Q_N} \big )^{(\tau )}(., t)|^{(m)}_ E (X) \leq (C_m)^N t^N \varepsilon ^ {{3 +\gamma \over 4} \sum _{j=1}^N {\rm diam } (Q_j) } {\rm exp }\left [ \sum _{j=1}^N \big ( M_{Q_j}(t)-(T_{Q_j}\widetilde \psi _{\Lambda } ) ^{(\tau )}(X, X, t) \big ) \right ] .$$ In other words, denoting, for each set $A$ of boxes contained in $\Lambda$, and of diameter $\geq 1$, by $L(A)$ the sum of their diameters, for the $\ell ^{\infty }$ norm, we have $$ \vert G_A^{(\tau )} \ (., t)|^{(m)}_ E (X) \leq m(t)\ L (A)^m \ \left (C_m M(t) \varepsilon ^{3 + \gamma \over 4 } \right ) ^ {L(A) } \ {\rm exp } \left [ {\sum _{Q \in A} \big ( M_{Q}(t)-(T_{Q}\widetilde \psi _{\Lambda } )^{(\tau )} \ (X, X, t) \big )} \right ] .$$ It follows, if $C_m \varepsilon ^{1 - \gamma \over 4 } \leq 1/2$, that, with another $K_m$, $$ \vert G_A^{(\tau )} (., t)|^{(m)}_ E (X) \leq \ m(t) K_m \ (M(t) \varepsilon ^{ {1 + \gamma \over 2}})^{L(A)} \ {\rm exp } \left [ {\sum _{Q \in A} \big ( M_{Q}(t)-(T_{Q}\widetilde \psi _{\Lambda } )^{(\tau )} (X, X, t) \big )} \right ] .$$ Therefore $$ \sum _{A\in {\cal A}_0(E_1,E_2)} \vert e^{-\Phi _0(. , t)}(G_A)^{(\tau )} \vert ^{(m)}_E(X, t) \leq K'_m e^{-\Phi _0(X , t)}\ M(t)^m \ldots $$ $$ \sum _{A\in {\cal S}(E_1,E_2 ) , B \in {\cal B}(E) \atop A \cap B = \emptyset } (M(t) \varepsilon ^{ {1 + \gamma \over 2} })^{ (L(A)+ L(B))} {\rm exp } \left [ \sum _{Q \in A \cup B } \big ( M_{Q}(t)-(T_{Q}\widetilde \psi _{\Lambda } )^{(\tau )} (X, X, t) \big ) \right ] \ \ \sum _{C \cap ( A \cup B) = \emptyset } G_C^{(\tau )} (X, t) , $$ where the last sum is taken over all the (possibly empty) sets $C$ of boxes in $\Lambda$, of diameter $\geq 1$, not belonging to $A \cup B$. We remark that $$ \sum _{ C \cap ( A \cup B) = \emptyset } (G_C)^{(\tau )} (X, t) \ =\ {\rm exp } \left [ \sum _{Q \notin A \cup B } \big ( M_{Q}(t)-(T_{Q}\widetilde \psi _{\Lambda } )^{(\tau )} \ (X, X, t) \big ) \right ].$$ Therefore $$ \sum _{A\in {\cal A}_0(E_1,E_2)} \vert e^{-\Phi _0}G_A^{(\tau )} \vert ^{(m)}_E(X, t) \leq m(t) K''_m \ e^{-\widetilde \psi _{\Lambda } ^{(\tau )} (X, X, t) } \ \sum _{A\in {\cal S}(E_1,E_2 )} (M(t) \varepsilon ^{1 + \gamma \over 2})^{ L(A)} \ \sum _{ B \in {\cal B}(E) } (M(t) \varepsilon ^{1 + \gamma \over 2})^{ L(B)}.$$ The estimation (9.29), and therefore the Proposition, will follow from the next Lemma, applied to $\rho = M(t) \varepsilon ^{{1+\gamma \over 2}}$ (remarking that the condition $\rho \leq {1 \over 2K}$ is satisfied if $M(t) \varepsilon ^{\gamma} \leq {1 \over 2}$ and $K \varepsilon ^{{1-\gamma \over 2}}\leq 1$). \bigskip \noindent {\bf Lemma 9.5.} { \it There exists $K>1$, depending only on the dimension $d$, such that we have the following implication, for all finite subsets $E$, $E_1$ and $E_2$ of $\Z ^d$: $$0 < \rho \leq {1 \over 2K} \ \Longrightarrow \ \sum _{ A\in {\cal S}(E_1,E_2 , \Z^d ) } \rho ^{L(A)} \ \leq 2 \ (K \rho ) ^{\delta (E_1, E_2)}\ \inf ( \sharp (E_1), \sharp (E_2) ) \leqno (9.31)$$ $$ 0 < \rho \leq {1 \over 2K} \ \Longrightarrow \ \sum _{ B \in {\cal B}(E, \Z^d ) } \rho ^{L(B)} \ \leq 2 \sharp (E)\leqno (9.32)$$} \bigskip \noindent {\it Proof of (9.32).} Let ${\cal B}_L(E)$ be the set of $ B \in {\cal B}( E , \Z^d )$ such that $L (B)= L$. 
We shall prove that there exists $K>0$, depending only on the dimension $d$, such that, for each finite subset $E$ of $\Z^d$, for each integer $L \geq 0$, we have $$\sharp({\cal B}_ L (E) ) \ \leq \sharp (E)\ \ K^L\leqno (9.33)$$ Let us consider first the case where $E$ is reduced to a single point $\lambda $. There are at most $ \left ( {R(R+1)\over 2}\right )^{d}\leq R^{2d}$ possible boxes, of greatest side $R$, which contain $\lambda$. If $R_1$, ..., $R_p$ ($p\geq 1$) is a finite sequence of integers $\geq 1$, the number of possible sequences $(Q_1, ... Q_p)$ of boxes such that $\lambda \in Q_j$ and ${\rm diam }(Q_j)=R_j$ ($1\leq j \leq p$) is $\leq \prod _{j=1}^p R_j^{2d} $. If we choose $K>1$ such that $R^{2d} \leq K^R$ for all $R\geq 1$, and if we remember that the number of sequences ($R_1$ ,...$R_N$) such that $R_j \geq 1$ and $R_1 + ... + R_N = L$ is $2^{L -1}$,we see that $$\sharp({\cal B}_L(\{ \lambda \} ) ) \ \leq (2K)^L$$ and (9.33), and therefore (9.32), follow easily. \medskip \noindent {\it Proof of (9.31).} Let ${\cal S}_L(E_1,E_2)$ be the set of $ A \in {\cal S}(E_1,E_2, \Z^d )$ such that $L (A) = L$, ($L(A)$ being the sum of the diameters, for the $\ell ^{\infty } $ norm, of all the boxes in $A$). We remark that, if $A \in {\cal S}_L(E_1,E_2)$, we have $L(A) \geq \delta (E_1, E_2)$. We shall prove that there exists $K>0$, depending only on the dimension $d$, such that, for each disjoint subsets $E_1$ and $E_2$ of $\Z^d$, and for any integer $L \geq \delta (E_1, E_2)$, we have $$\sharp({\cal S}_L(E_1, E_2) ) \ \leq K^{ L} \inf ((\sharp E_1), (\sharp E_2)) \leqno (9.34) $$ Suppose that $\sharp (E_1) \leq \sharp (E_2)$. For each point $a\in E_1$, for each sequence $(R_1, ... , R_p)$ of integers $\geq 1$, let ${\cal S}(a; R_1, ... R_p)$ be the set of sequences of boxes $(Q_1, ... , Q_p)$ contained in $\Lambda$ such that $a\in Q_1$, $Q_1 \cap Q_2 \not= \emptyset$, ...$Q_{p-1} \cap Q_p \not= \emptyset$ and ${\rm diam} (Q_j) = R_j$ ($1 \leq j \leq p$). If we remark that the number of boxes with greatest side $R$ intersecting a given box of greatest side $R_0$ is less than $R^d ( R_0 + R)^d \leq (R_0 + R)^{2d}$, we see that $$\sharp ({\cal S}(a; R_1, ... R_p)) \leq R_1^{2d}\ (R_1 + R_2)^{2d} ... (R_{p-1} + R_p)^{2d} \leq e^{4d(R_1 +... R_p)}.$$Given $L\geq 1$, the number of sequences $(R_1, ... , R_p)$ such that $p\geq 1$, $R_j \geq 1$ and $R_1 + ... + R_p = L$ is $2^{L-1}$. Therefore, the number of sequences of boxes $(Q_1, ... , Q_p)$ such that $a\in Q_1$, $Q_1 \cap Q_2 \not= \emptyset$, ...$Q_{p-1} \cap Q_p \not= \emptyset$ and $\sum {\rm diam } (Q_j) = L$ is $\leq (2 e^{4d}) ^L$, and $\sharp ( {\cal S}_L(E_1,E_2)) \leq \sharp (E_1) K^L$, with $K= 2 e^{4d}$. The case where $\sharp (E_2) \leq \sharp (E_1)$ being similar, (9.34) is proved, and (9.31) follows easily. 
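\medskip \noindent In the two counting arguments above, the factor $2^{L-1}$ is simply the number of compositions of $L$, that is, of ordered sequences of integers $\geq 1$ with sum $L$. For instance, for $L = 3$, these sequences are $(3)$, $(1, 2)$, $(2, 1)$ and $(1, 1, 1)$, and there are indeed $2^{3-1} = 4$ of them.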
\bigskip \noindent {\it End of the proof of Proposition 9.1.} \medskip \noindent If both $A$ and $B$ are multiplications by polynomially bounded functions $f$ and $g$ satisfying (9.1), the numerator $N(\varepsilon )$ of (9.6), calculated in (9.14), classically satisfies $$N(\varepsilon )= \int _{(\R^{2p})^{\Lambda }} K_{E_1, E_2}(X, X) \Big ( f(x'_{E_1}) - f(x''_{E_1}) \Big ) \Big ( g(x'_{E_2}) - g(x''_{E_2}) \Big )\ dX $$ Therefore, $$\vert N(\varepsilon)\vert \leq 4 N( f ) N(g) \int _{(\R ^{2p})^{\Lambda }} \vert K_{E_1, E_2}(X, X)\vert \ (1 + |X_{E_1}|)^{m_1}(1 + |X_{E_2}|)^{m_2} dX$$ By Proposition 9.4, applied to $m=0$ and $E= \emptyset$, if the conditions (9.28) are satisfied, we have $$\vert N(\varepsilon)\vert \leq {4 N( f ) N(g)\ \inf (\sharp (E_1) , \sharp (E_2)) \over 2 \ \sharp (G_+(E_1,E_2))}\ m(t) \ ( M(t) \varepsilon ^{\gamma})^{ \delta (E_1, E_2)} \ \sum _{\tau \in (G_+ \cup G_-)(E_1,E_2)} I_{\tau }(\varepsilon, t)\leqno (9.35)$$ where $$I_{\tau }(\varepsilon, t)= \int _{(\R^{2p})^{\Lambda }} \widetilde U_{\Lambda }^{\tau } (X,X, t, h, \varepsilon) (1 + |X_{E_1}|)^{m_1}(1 + |X_{E_2}|)^{m_2} \ dX .$$ If $m_1= m_2 = 0$, we have $I_{\tau } (\varepsilon, t) = {\rm Tr} \Big ( e^{ -t\widetilde H_{\Lambda }^{(\tau)} (\varepsilon )} \Big )$ and the point a) of Proposition 9.1 is proved. In the other cases, by Proposition 8.2, there exist $\sigma _1>0$, $\varepsilon _0>0$, and, for all integers $m_1$, $m_2$, $N_1$, $N_2$, a function $t\rightarrow C(t, m_1, m_2, N_1, N_2)$ such that, for all $h$, $t$ and $\varepsilon $ satisfying $h^2 (t + t^2) \leq \sigma _1$ and $\varepsilon \leq \varepsilon _0$, we have $$ I_{\tau }(\varepsilon, t) \leq C(t, m_1, m_2, \sharp E_1, \sharp E_2) {\rm Tr} \Big ( e^{ -t\widetilde H_{\Lambda }^{(\tau)} (\varepsilon )} \Big ) . \leqno (9.36)$$ If $m_j= 0$, $C$ is independent of $\sharp (E_j)$ ($j=1, 2$). The Proposition follows from (9.35) and (9.36). \bigskip \noindent {\bf 10. Quantum correlation: the general case.} \bigskip The aim of this section is the proof of Theorem 1.5 in the general case. For that, we need some notations. If $T$ is an integral operator in $L^2((\R^{p})^{\Lambda })$ with an integral kernel $K_T(x, y)$ in ${\cal S}((\R^{2p})^{\Lambda })$, and if $E$ is a subset of $\Lambda $, let us denote, with the notation $x = (x_E, x _{\Lambda \setminus E}) $ already introduced, for each $x_{\Lambda \setminus E}$ in $(\R^{p})^{\Lambda \setminus E}$, by $T(x_{\Lambda \setminus E})$ the operator (``partial trace'') in $L^2((\R^{p})^{E})$ with integral kernel $(x_E, y_E)\rightarrow K_T(x_E, x_{\Lambda \setminus E}, y_E, x_{\Lambda \setminus E})$, in other words the operator defined by: $$\Big ( T(x_{\Lambda \setminus E})f \Big ) (x_E) = \int _{(\R^p)^E} K_T(x_E, x_{\Lambda \setminus E}, y_E, x_{\Lambda \setminus E}) f(y_E) dy_E \hskip 1cm \forall f\in L^2((\R^{p})^{E}), \ \ \ \ \ \forall x_E\in (\R^{p})^{E}. \leqno (10.1) $$ If $S$ is a trace class operator in $L^2((\R^{p})^{E})$, we denote by $\Vert S\Vert ^{(tr)}_E$ its trace norm. 
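\medskip In a finite dimensional analogue, where $L^2((\R^{p})^{E})$ and $L^2((\R^{p})^{\Lambda \setminus E})$ are replaced by finite dimensional spaces and the integral kernel by a matrix, the ``partial trace'' (10.1) and the trace-norm bound recalled in (10.2) below can be checked numerically. The following sketch (in Python with NumPy; the dimensions and the random data are purely illustrative) computes the slices $T(x_{\Lambda \setminus E})$ and verifies the corresponding discrete inequality.

import numpy as np

rng = np.random.default_rng(1)
dE, dF = 3, 4                            # finite dimensional stand-ins for the factors indexed by E and by Lambda\E

# an operator T on the product space, given by its kernel K_T[(xE, xF), (yE, yF)]
K = rng.standard_normal((dE, dF, dE, dF))
T = K.reshape(dE * dF, dE * dF)

# L: a bounded operator acting only on the E factor, identified with L tensor I on the product
L = rng.standard_normal((dE, dE))
L_full = np.kron(L, np.eye(dF))

# discrete analogue of (10.1): T(x_F) has kernel (x_E, y_E) -> K_T(x_E, x_F, y_E, x_F)
T_slices = [K[:, xF, :, xF] for xF in range(dF)]

lhs = abs(np.trace(T @ L_full))
op_norm = np.linalg.norm(L, 2)                                              # operator norm of L
tr_norms = sum(np.linalg.svd(S, compute_uv=False).sum() for S in T_slices)  # trace norms of the slices
print(lhs <= op_norm * tr_norms + 1e-10)                                    # discrete analogue of (10.2): True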
Let us recall that, with the previous notations, if $E$ is a subset of a finite set $\Lambda $ of $\Z ^d$, if $L$ is a bounded operator in $L^2((\R^{p})^{E})$, (identified to a bounded operator in $L^2((\R^{p})^{\Lambda})$, we have: $$\vert {\rm Tr} (T \circ L) \vert \leq \Vert L \Vert \int _{ (\R^{p})^{\Lambda \setminus E}} \Vert T( x_{\Lambda \setminus E}) \Vert ^{(tr)}_E \ d x_{\Lambda \setminus E}\leqno (10.2)$$ \bigskip Theorem 1.5 will be a consequence of the following proposition, in which we set $X_E= (x'_E, x''_E)$: \bigskip \noindent {\bf Proposition 10.1.} {\it With the previous notations, for each integer $N\geq 1$, there exist a function $(t, h) \rightarrow C(t, h, N)$, bounded on each compact set of $]0, + \infty [ \times ]0, + \infty [ $, and a function $\gamma \rightarrow \varepsilon _0 (\gamma , N)$ on $]0, 1[$, with the following properties. With the notations of Theorem 1.5, for each disjoint subsets $E_1$ and $E_2$ of any box $\Lambda$ of $\Z^d$, for each set $E \subseteq E_1 \cup E_2$, one can find two disjoint subsets $F_1$ and $F_2$ of $\Lambda$, containing $E_1$ and $E_2$, such that if $T(F_1, F_2)$ is the operator defined like in (9.12), with $E_j$ replaced by $F_j$, if $\gamma \in ]0, 1[$, we have $$\int_ {(\R^{2p})^{\Lambda \setminus E}} \Vert T(F_1 , F_2) (X_{ \Lambda \setminus E} ) \Vert ^{(tr )}_E dX_{ \Lambda \setminus E} \leq \leqno (10.3)$$ $$\leq C(t, h, \sharp (E)) \ (1 + {\rm diam}(E_1) + {\rm diam}(E_2))^d\ (M(t) \varepsilon ^{\gamma }) ^{{1 \over 5} \delta (E_1, E_2)} Tr \ \left ( e^{-t \widetilde H_{\Lambda , \varepsilon } } \right )$$ if $$ht \leq \sigma _0, \ \ \ \ \ \ 0< \varepsilon \leq \varepsilon _0 (\gamma , \sharp (E)) \ \ \ \ \ \ \ M(t)\varepsilon ^{\gamma } \leq {1 \over 2}. \leqno (10.4)$$ If $E \cap E_j = \emptyset$, we can omit ${\rm diam }(E_j)$ in the sum in the RHS $(j=1, 2)$. } \bigskip \noindent {\it Proof that Theorem 1.5 is a consequence of Proposition 10.1.} If $A$ and $B$ are operators in ${\cal L} ({\cal H} _{E_1})$ and ${\cal L} ({\cal H} _{E_2})$ like in the Theorem 1.5, we can use the expression given by the Proposition 9.2 for their correlation, and we can replace, in (9.11), the operator $T(E_1, E_2)$ by $T(F_1, F_2)$ if $F_1$ and $F_2$ are any disjoint sets containing $E_1$ and $E_2$. We make the choice of Proposition 10.1. In the general case of Theorem 1.5, we shall apply Proposition 10.1 to the operator $T(F_1, F_2)$ and to the set $E= E_1 \cup E_2$. By (10.2), applied with the operator $T = T(F_1, F_2)$, to the set $E = E_1 \cup E_2$, and to the operator $L = (A'-A'')(B'-B'')$, we have $$\vert {\rm Tr }\ \Big ( T(F_1, F_2) (A' -A'') (B' - B'') \Big ) \vert \leq 4 \Vert A \Vert \ \Vert B \Vert \ \int _{(\R^{2p})^{\Lambda \setminus E}} \Vert T(F_1 , F_2) (X_{ \Lambda \setminus E} ) \Vert ^{(tr)}_E dX_{ \Lambda \setminus E} $$ By Proposition 10.1, it follows that, for some function $K$, the estimation (1.16) is satisfied under the conditions (1.17). \smallskip If $B$ is the multiplication by a bounded $C^{\infty }$ function $g$ on $(\R^p)^{E_2}$, we apply Proposition 10.1, also with the operator $T(F_1, F_2)$, but with the set $E = E_1$. 
By (10.2), applied with the operator $\widetilde T = T(F_1, F_2) \circ (B'-B'')$, the set $E=E_1$, and the operator $L =A'-A''$, we have $$\vert {\rm Tr }\ \Big ( T(F_1, F_2) (A' -A'') (B' - B'') \Big ) \vert \leq 2 \Vert A \Vert \int _{ {(\R^{2p})^{\Lambda \setminus E}}} \Vert \big ( T(F_1 , F_2) \circ (B'-B'' ) \big ) (X_{ \Lambda \setminus E} ) \Vert ^{(tr)}_E dX_{ \Lambda \setminus E}$$ Since $g$ depends only on $x_{E_2}$, and therefore is independent of $x_E$, we have, for each $X_{\Lambda \setminus E}$, $ \big ( T(F_1 , F_2) \circ (B'- B'') \big ) (X_{ \Lambda \setminus E} ) = \big ( g(x'_{\Lambda \setminus E}) - g(x''_{\Lambda \setminus E}) \big ) \ T(F_1 , F_2) (X_{ \Lambda \setminus E}) $ and therefore $$\vert {\rm Tr }\ \Big ( T(F_1, F_2) (A' -A'') (B' - B'') \Big ) \vert \leq 4 \Vert A \Vert \ \Vert g \Vert _{L^{\infty }} \int _{(\R^{2p})^{\Lambda \setminus E}} \Vert T(F_1 , F_2) (X_{ \Lambda \setminus E} ) \Vert ^{(tr)}_E dX_{ \Lambda \setminus E}$$ and Proposition 10.1 gives again the estimation (1.13), where the function $K$ is independent of the diameter of $E_2$. \bigskip For the proof of Proposition 10.1, we shall make the following choice: $$ F_j = \{ \lambda \in \Lambda , \ \ \ \ \ \ \delta (\lambda , E_j) \leq {2\over 5} \delta (E_1, E_2) \} \hskip 1cm j=1, 2.\leqno (10.5)$$ These sets are disjoint. \bigskip In order to estimate the trace norms, we shall need the following elementary result (cf. D. Robert [18] for a proof), in which we use the notations (9.25) and (9.26), with $X= (x', x'')$ replaced here by $(x, y)$. \bigskip \noindent {\bf Lemma 10.2.} {\it If $S$ is an operator in $L^2((\R^{p})^{E})$, with an integral kernel $K_S$ in ${\cal S} ((\R^{p})^{E} \times (\R^{p})^{E})$, then the operator $S$ is trace class, and there exists a constant $C(n)$, depending only on $n= p (\sharp E)$, such that $$\Vert S \Vert^{(tr)}_E \leq C(n)\ \int _{(\R^{p})^{E} \times (\R^{p})^{E}} (1 + \vert x_E\vert + \vert y_E \vert )^{2n+2} | K_S|^{ (3n + 3)} _E (x_E, y_E) dx_E dy_E\leqno (10.6)$$ } \bigskip By this Lemma and Proposition 8.2, Proposition 10.1 will be a consequence of the following one, in which we shall use the notations (9.25) and (9.26). \bigskip \noindent {\bf Proposition 10.3.} {\it With the notations of Proposition 10.1, we have, with the choice (10.5) of $F_1$ and $F_2$, if $K_{F_1 , F_2}(X, Y, t)$ is the integral kernel of $T(F_1 , F_2)$, if $X_{\Lambda \setminus E} = Y_{\Lambda \setminus E}$, under conditions of the form (10.4), $$\vert K_{F_1 , F_2} (., . , t) \vert ^{(m)} _E (X, Y) \leq \leqno (10.7)$$ $$ \leq { F(\vert X_E - Y_E \vert , m , \sharp (E), t, h) \ \varepsilon ^{{\gamma \over 5} \delta (E_1 , E_2 ) } \over \sharp (G_+ (F_1 , F_2)) }\ (1 + {\rm diam} (E_1) + {\rm diam} (E_2))^d \sum _{\tau \in (G_+ \cup G_-)(F_1, F_2)} \widetilde U_{\Lambda }^{(\tau)} (X, X, t)$$ where $$ F(\vert X_E - Y_E \vert , m , \sharp (E), t, h) = C_m(t, h, \sharp (E)) \ e^{ - { \vert X_E - Y_E \vert_2 ^2 \over 2th} } e^{Ct \vert X_E - Y_E \vert _1} (1 + \vert X_E - Y_E \vert )^M \leqno (10.8)$$ where $ C_m(t, h, \sharp (E))$ is a function of $(h, t)$, bounded on each compact set of $]0, + \infty [ \times ]0, + \infty [ $, and $M$ is an integer, depending also on $m$. If $E \cap E_j = \emptyset$, we can omit the term ${\rm diam} (E_j)$ in the sum in the RHS of (10.7) ($j=1, 2$). 
} \bigskip \noindent {\it Proof.} We introduce the following functions, for each $\tau \in G_{\pm }(F_1, F_2)$: $$g_{\Lambda }^{\tau}(X, Y, t) =\Big ( \widetilde \psi _{\Lambda }^{\tau } (X, Y, t)- \widetilde \psi _{\Lambda } (X, Y, t)\Big ) \ - \ \Big ( \widetilde \psi _{\Lambda }^{\tau } (X, X, t)- \widetilde \psi _{\Lambda } (X, X, t)\Big ), \leqno (10.9)$$ $$G(X, Y, t) = {\widetilde U_{\Lambda }(X, Y, t) \over \widetilde U_{\Lambda }(X, X, t)}.\leqno (10.10)$$ We can write, by (9.13), $ K_{F_1, F_2}(X, Y, t) = K^{(1)}(X, Y, t) + K^{(2)}(X, Y, t)$, where $$ K^{(1)} (X, Y, t) = { G(X, Y , t) \over 2 \sharp (G_+(F_1, F_2))} \ \sum _{\tau \in (G_+ \cup G_-)(F_1, F_2)} sgn (\tau ) \big ( e^{- g_{\Lambda }^{\tau}(X, Y, t)} - 1 \big ) \ \widetilde U_{\Lambda }^{(\tau )} (X, X, t)\leqno (10.11)$$ $$ K^{(2)}(X, Y, t) = \ G(X, Y, t)\ K _{F_1 , F_2} (X, X, t).\leqno (10.12)$$ We set also $$K ^{\Delta } _{F_1 , F_2} (X, t)= K _{F_1 , F_2} (X, X, t) , \hskip 1cm \widetilde U_{\Lambda , \Delta}^{(\tau)} (X, t) = \widetilde U_{\Lambda }^{(\tau)} (X, X, t)$$ \medskip For the function $K ^{\Delta } _{F_1 , F_2}$, we have the estimation given by Proposition 9.4 (in which we replace $E_1$ and $E_2$ by $F_1$ and $F_2$). We remark that $ \sharp (F_j) \leq ({\rm diam }(E_j) + {4 \over 5 } \delta (E_1 , E_2 ) )^d$ and that $\delta (F_1 , F_2) \geq {1 \over 5} \delta (E_1, E_2)$. Therefore, Proposition 9.4 gives, after changing $\gamma $, for some function $C(t, m, N)$ $$ \vert K^{\Delta } _{F_1,F _2}\vert ^{(m)} _E (X, t) \leq \leqno (10.13)$$ $$\leq C (t, m, \sharp (E)) \ \big ( M(t) \varepsilon ^{\gamma}\big ) ^{ {1 \over 5} \delta (E_1, E_2)} \ {\inf ( {\rm diam } ( E_1), {\rm diam } ( E_2))^d \over \ \sharp (G_+(F_1,F_2)) } \ \ \sum _{\tau \in (G_+\cup G_-)(F_1,F_2)} \widetilde U_{\Lambda }^{\tau } (X,X, t)$$ under conditions similar to (9.28). \medskip By Theorem 1.1, we can write, for some other function $C(t, m, N)$, $$\vert \widetilde U_{\Lambda , \Delta }^{(\tau)} (., t)\vert ^{(m)}_E (X) \leq C(t, m , \sharp (E)) \widetilde U_{\Lambda }^{(\tau)} (X, X, t) \leqno (10.14)$$ \medskip For the function $G(X, Y, t)$ defined in (10.10), we can write, by Theorem 1.1, if $X_{\Lambda \setminus E} = Y_{\Lambda \setminus E}$ $$\vert G (., ., t)\vert ^{(m)}_E (X, Y) \leq F(\vert X_E - Y_E \vert , m , \sharp (E), t, h) \leqno (10.15) $$ where $F$ is a function of the same form as (10.8). \medskip It remains to estimate the factor $ e^{- g_{\Lambda }^{\tau}(X, Y, t)} - 1$ appearing in (10.11). First, we can write, by (10.9) and Theorem 1.1, if $X_{\Lambda \setminus E} = Y_{\Lambda \setminus E}$ $$\vert g_{\Lambda }^{\tau}(X, Y, t) \vert \leq Ct \vert X_E - Y_E \vert . \leqno (10.16)$$ For a more precise study of $g_{\Lambda }^{\tau}$, we need the following Lemma. \bigskip \noindent {\bf Lemma 10.4.} {\it For each integer $m$, there exists a function $C (t, m)>0$ with the following properties. With the notations and hypotheses of Proposition 10.1, and with the choice (10.5) of $F_1$ and $F_2$, for each map $\tau $ in $(G_+ \cup G_-)(F_1, F_2)$, we have, if $X_{\Lambda \setminus E} = Y_{\Lambda \setminus E}$, under conditions of the form (10.4), $$\vert \nabla _E \Big ( \widetilde \psi _{\Lambda }^{\tau }- \widetilde \psi _{\Lambda } \Big ) (., ., t)|^{(m)}_E (X, Y) \leq C( t, m)\ \varepsilon ^{ {\gamma \over 5} \delta (E_1, E_2)}\ (1 + \vert X_E - Y_E\vert )\ (1 + {\rm diam} (E_1)+ {\rm diam} (E_2))^d$$ If $E \cap E_j = \emptyset$, we can omit the term $ {\rm diam} (E_j)$ in the sum in the RHS ($j=1, 2$). 
} \bigskip \noindent {\it Proof.} Let $\lambda ^{(1)}, \ldots , \lambda ^{(k)}$ ($1 \leq k \leq m+1$) be a sequence of points of $E$. Using the operators $T_Q$ of Section 5, associated to all boxes contained in $\Lambda$, we have: $$ \Big ( \widetilde \psi _{\Lambda }^{\tau }-\widetilde \psi _{\Lambda } \Big ) (X, Y, t) = \Big ( \widetilde \psi _{\Lambda }^{\tau }-\widetilde \psi _{\Lambda } \Big ) (0, Y-X, t) + \sum _{Q \subseteq \Lambda } T_Q ( \widetilde \psi _{\Lambda }^{\tau }-\widetilde \psi _{\Lambda } ) (X, Y, t).$$ Suppose that at least one of the $\lambda ^{(j)}$ is in $E \cap E_1$. There are some small changes to do in the other case. For this case, we can write $\Big ( \widetilde \psi _{\Lambda }^{\tau }- \widetilde \psi _{\Lambda } \Big ) (0, Y- X, t)= 0$ if $X_{\Lambda \setminus E_1} = Y_{\Lambda \setminus E_1}$. Therefore, we can write, by Theorem 1.1, if we suppose only that $X_{\Lambda \setminus E} = Y_{\Lambda \setminus E}$, (one of the $\lambda ^{(j)}$ being in $E \cap E_1$): $$ \vert \nabla _{ \lambda ^{(1)} }...\nabla _{ \lambda ^{(k)} } \Big ( \widetilde \psi _{\Lambda }^{\tau }- \widetilde \psi _{\Lambda } \Big ) (0, Y- X, t)| \leq C_{k+1}t \ \varepsilon ^{ \gamma \delta (E_1, E_2)} (1 + \vert X_{E\cap E_2} - Y_{E\cap E_2}\vert ). $$We divide the boxes $Q \subseteq \Lambda $ into three categories (where $F_1$ is defined in (10.5)): $$A = \{ Q \subseteq F_1 \}$$ $$B= \{ Q \subseteq \Lambda , \ \ \ \ Q \not \subseteq F_1, \ \ \ {\rm and }\ \ \ \ \ \ \delta ( E_1 , Q ) \leq {1 \over 5}\delta (E_1, E_2) \} $$ $$C = \{ Q\subseteq \Lambda , \ \ \ \ \ \ \ \delta ( E_1 , Q ) > {1 \over 5} \delta (E_1, E_2) \} $$ \medskip \noindent If $Q\in A$, and $X_{ \Lambda \setminus E_1} = Y_{ \Lambda \setminus E_1}$, then, for all $\tau$ in $(G_+ \cup G_-)(E_1, E_2)$, we have $\Big (T_Q ( \widetilde \psi _{\Lambda }^{\tau }- \widetilde \psi _{\Lambda } ) \Big ) (X, Y, t)=0 $. We have only to remember that $T_Q\widetilde \psi _{\Lambda } $ depends only on $X_Q$, $Y_Q$ and $X-Y$. 
If we assume only that $X_{\Lambda \setminus E} = Y_{\Lambda \setminus E}$, it follows from Proposition 6.1, point iii), if $ht \leq \sigma _0$ and $\varepsilon \leq \varepsilon _m({3+ \gamma \over 4})$, that $$|\nabla _{ \lambda ^{(1)} }...\nabla _{ \lambda ^{(k)} } T_Q \Big ( \widetilde \psi _{\Lambda }^{\tau }- \widetilde \psi _{\Lambda } \Big )(X, Y, t) \vert \leq tC_{k+1}\ |X_{E \cap E_2}- Y_{E \cap E_2}| \varepsilon^{ {3+ \gamma \over 4} \delta (Q, E_2)}$$ We have also, by Proposition 6.1, point i), under similar conditions, $$|\nabla _{ \lambda ^{(1)} }...\nabla _{ \lambda ^{(k)} } T_Q \Big ( \widetilde \psi _{\Lambda }^{\tau }- \widetilde \psi _{\Lambda } \Big )(X, Y, t) \vert \leq tC_{k}\ \varepsilon^{ {3 + \gamma \over 4} {\rm diam}(Q)} \leqno (10.16)$$ Since $ \delta (Q, E_2)\geq {3\over 5} \delta (E_1, E_2)$, it follows that, if $Q \in A$, $$|\nabla _{ \lambda ^{(1)} }...\nabla _{ \lambda ^{(k)} } T_Q \Big ( \widetilde \psi _{\Lambda }^{\tau }- \widetilde \psi _{\Lambda } \Big )(X, Y, t) \vert \leq C'_k t (1 + |X_{E }- Y_{E }|) \varepsilon ^{ {1 + \gamma \over 10} \delta (E_1, E_2) + {1 \over 3} \gamma {\rm diam} (Q)} $$ Since $$ \sum _{Q\in A } \varepsilon ^{ {1 \over 3} \gamma {\rm diam} (Q)} \leq 4^d \sharp (F_1) \leq 4^d \Big ( {\rm diam} (E_1 ) + {4 \over 5} \delta (E_1 , E_2) \Big )^d \hskip 1cm {\rm if } \ \ \varepsilon ^{{1\over 3} \gamma } \leq 4^{-d},$$ it follows that $$|\nabla _{ \lambda ^{(1)} }...\nabla _{ \lambda ^{(k)} } \sum _{Q\in A } T_Q \Big ( \widetilde \psi _{\Lambda }^{\tau }- \widetilde \psi _{\Lambda } \Big )(X, Y, t) \vert \leq C''_k t (1 + |X_{E }- Y_{E }|) \ \varepsilon ^{ {\gamma \over 5} \delta (E_1, E_2)} ({\rm diam}(E_1))^d $$ \medskip \noindent If $Q \in B$, by Proposition 6.1 (point i)), we can use again (10.16) under the same conditions. We remark that, if $Q \in B$, we have ${\rm diam} (Q) \geq {1 \over 5} \delta (E_1, E_2)$ and therefore $$\sum _{Q \in B} \varepsilon^{ { 3 +\gamma \over 4} {\rm diam} (Q )} \leq \ \varepsilon ^{{1 + \gamma \over 10} \delta (E_1, E_2)}\ 4^d \Big ( {\rm diam} (E_1 ) + {4 \over 5} \delta (E_1 , E_2) \Big )^d$$ if $\varepsilon ^{{1 - \gamma \over 4} } \leq 4^{-d} $. It follows that, with another $C_m$, under this additional condition, $$ |\nabla _{ \lambda ^{(1)} }...\nabla _{ \lambda ^{(k)} } \ \sum _{Q \in B } T_Q \Big ( \widetilde \psi _{\Lambda }^{\tau }- \widetilde \psi _{\Lambda } \Big )(X, Y, t) \vert \leq t C _{m} \varepsilon ^{{1 + \gamma \over 10} \delta (E_1 , E_2) }\ \Big ( {\rm diam} (E_1 ) + {4 \over 5} \delta (E_1 , E_2) \Big )^d \leq $$ $$\leq t C'_m \varepsilon ^{ {\gamma \over 5} \delta (E_1 , E_2) }\ (1 + {\rm diam} (E_1 ) )^d $$ \smallskip \noindent If $Q \in C$, it follows, by a combination of the points i) and iii) of Proposition 6.1 that, with another $C_m$, we have, under the previous conditions, $$ | \nabla _{ \lambda ^{(1)} }...\nabla _{ \lambda ^{(k)} } T_Q \Big ( \widetilde \psi _{\Lambda }^{\tau }- \widetilde \psi _{\Lambda } \Big )(X, Y, t) \vert \leq t C _{m} \varepsilon ^{ {1 - \gamma \over 4 } {\rm diam }(Q) + {1+ \gamma \over 2 } \delta (Q, E_1)}.$$ We remark that, by the definition of $C$ $$\sum _{Q \in C} \varepsilon ^{ {1 - \gamma \over 4 } {\rm diam }(Q) + {1 + \gamma \over 2 } \delta (Q, E_1)} \leq \varepsilon ^{ {\gamma \over 5} \delta (E_1, E_2)} \sum _{Q \subset \Z^d } \varepsilon ^{ ({1 - \gamma \over 4 }) ( {\rm diam }(Q) + \delta (Q, E_1)) } \leq C \varepsilon ^{ {\gamma \over 5} \delta (E_1, E_2)} (1 + {\rm diam} (E_1))^d$$ if $\varepsilon ^{1- \gamma } \leq 16^{-d}$. 
By all the preceding inequalities, we can write, under conditions similar to (10.4), $$ | \nabla _{ \lambda ^{(1)} }...\nabla _{ \lambda ^{(k)} } \Big ( \widetilde \psi _{\Lambda }^{\tau }- \widetilde \psi _{\Lambda } \Big )(X, Y, t) \vert \leq C( t, m)\ \varepsilon ^{ {\gamma \over 5} \delta (E_1, E_2)}\ (1 + \vert X_E - Y_E\vert )\ (1 + {\rm diam} (E_1))^d$$ under the hypotheses of the Lemma, if at least one of the $\lambda ^{(j)}$ is in $E\cap E_1$. If one of them is in $E\cap E_2$, which cannot happen if $E\cap E_2$ is empty, we obtain, with some changes, the similar inequality with ${\rm diam} (E_1)$ replaced by ${\rm diam} (E_2)$ in the RHS. The Lemma is proved. \bigskip By the previous Lemma, we can write, if $X_{\Lambda \setminus E} = Y_{\Lambda \setminus E}$, $$\vert e^{- g_{\Lambda }^{\tau}(., ., t)} - 1 |^{(m)}_E (X, Y) \leq C (m, t ) \ e^{C t \vert X_E - Y_E \vert } \varepsilon ^{ {\gamma \over 5} \delta (E_1, E_2)}\ (1 + \vert X_E - Y_E\vert ) (1 + {\rm diam}(E_1) + {\rm diam}(E_2))^d. \leqno (10.17)$$ \medskip Proposition 10.3 and, therefore, Proposition 10.1 follow from (10.11)-(10.15) and (10.17). \bigskip \noindent {\bf 11. Proof of Theorems 1.2, 1.3 and 1.4.} \bigskip Let $\Lambda$ be any box of $\Z^d$ and split it into two disjoint boxes $\Lambda_1$ and $\Lambda_2$. Suppose that the splitting is made orthogonally to the $j$-th axis, and define $\Lambda_\bot$ as the box in $\Z^{d-1}$ obtained by projecting $\Lambda$ along the $j$-th axis. In the following, we shall assume that $j=1$ and that $\Lambda$, $\Lambda _1$ and $\Lambda _2$ can be defined more explicitly by $$\Lambda=\prod_{j=1}^d [\alpha_j, \beta_j] \ \ \ \ \ \ \ \Lambda_1= [\alpha _1 ,m] \times \Lambda _{\bot } \ \ \ \ \ \ \ \Lambda_2= [m+1, \beta _1 ] \times \Lambda _{\bot } \ \ \ \ \ \Lambda _{\bot }= \prod_{j=2}^d [\alpha_j, \beta_j ] \leqno (11.1)$$ where $\alpha _j \leq \beta _j$ and $\alpha _1 < m < \beta _1$. Let $\pi _j $ denote the $j$-th projection from $\Z^d$ into $ \Z$, and let $P_j (m) = \{ \lambda \in \Z^d, \ \ \ \lambda _j=m \}$, ($1 \leq j \leq d$, $m\in \Z$). \bigskip As in Section 7, let $H_{\Lambda } (\varepsilon , \theta)$ be the Hamiltonian defined as in (1.1) with $V$ replaced by $V_{\Lambda , \varepsilon }(\theta)$ defined in (7.1), $U_{\Lambda , \varepsilon} (\theta, x, y, t) $ the corresponding heat kernel, and $\psi _{\Lambda}(x, y, t, h , \varepsilon, \theta)$ the function appearing in its expression (1.3). We set $\varphi _{\theta }(x, t)= \psi _ {\Lambda }(x, x, t, h , \varepsilon, \theta)$. \bigskip \noindent For the proof of Theorem 1.2, we consider a finite subset $Q_0$ of $\Z^d$, and an element $A$ of ${\cal L}({\cal H}_{Q_0})$. If a box $\Lambda $ contains $Q_0$, $A$ can be considered as an element of ${\cal L}({\cal H}_{\Lambda })$, and we shall study the mean value $E_{\Lambda , \varepsilon } (A)$ defined in (1.14). Theorem 1.2 will be a consequence of the following Proposition. \bigskip \noindent {\bf Proposition 11.1.} {\it With the previous notations, for all boxes $\Lambda $, $\Lambda _1$ and $\Lambda _2$ of $\Z^d$ defined by (11.1), with $\alpha _1 < m < \beta _1$, if $Q_0$ is contained in $\Lambda _1$ and if $A \in {\cal L}({\cal H}_{Q_0})$, we have, under conditions similar to (1.7), $$\vert E_{\Lambda , \varepsilon } (A) - E_{\Lambda _1, \varepsilon } (A)\vert \leq K(t, h, {\rm diam}(Q_0))\ (M(t) \varepsilon ^{ \gamma})^{{1\over 5} \delta (Q_0, P_1(m) ) } \Vert A \Vert \leqno (11.2)$$ } \bigskip \noindent {\it Proof. First step. 
} We shall give an expression of the LHS of (11.2). We set, with the notations of the beginning of this section, for each $\theta \in [0, 1]$, $$R(\theta , x, y, t) = {\partial \psi \over \partial \theta } (x_{Q_0}, x_{\Lambda \setminus Q_0} , y_{Q_0}, x_{\Lambda \setminus Q_0}, t) \ - \ {\partial \psi \over \partial \theta }(x, x, t). \leqno (11.3)$$ Let $B_Q(\theta)$ denotes the multiplication operator by the function $\Big (T_Q{\partial \psi\over\partial\theta}\Big ) _{\vert_D}$, $D$ being the diagonal $\{(x,y)\in (\R^p)^{\Lambda }\times (\R^p)^{\Lambda}\ \vert\ x=y\}$. For a given $K(x, y)$, in $ {\cal S} ((\R ^p)^{\Lambda } \times (\R ^p)^{\Lambda })$, let $Op(K)$ be the operator with integral kernel $K(x, y)$. Then we shall prove that $$E_{\Lambda , \varepsilon } (A) - E_{\Lambda _1, \varepsilon } (A) = \int _0^1 { {\rm Tr}(Op(U_{\Lambda , \varepsilon }(\theta) R (\theta ) )\circ A) \over {\rm Tr}\,(e^{-tH (\theta)}) }\ d \theta +{1\over 2} \sum_{Q \subseteq \Lambda }\int _0^1 cov (B_Q(\theta ) ,A)\ d \theta \leqno (11.4)$$ Here we define $cov _{\Lambda , \varepsilon , \theta} (B_Q,A)$ by (1.15), with $H_{\Lambda , \varepsilon }(\theta)$ as Hamiltonian, even if $A$ and $B_Q(\theta)$ do not act on different sets of variables, (in this case, it is no more commutative). In order to prove (11.4), we set $$ F(\theta)= { {\rm Tr}\,\Big ( e^{-tH_{\Lambda , \varepsilon }(\theta)} A \Big ) \over {\rm Tr}\, e^{-tH_{\Lambda , \varepsilon }(\theta)} }. $$ Consequently, $ E_{\Lambda ,\varepsilon } (A) - E_{\Lambda _1, \varepsilon } (A)\ =\ F(1)-F(0), $ and we shall calculate $F'_\theta$. We have: $$ ({\rm Tr} (e^{-tH_{\Lambda , \varepsilon }(\theta)}))'_\theta = {\rm Tr}( e^{-tH_{\Lambda , \varepsilon }(\theta)}B), $$ where $B$ is the multiplication operator by the function ${\partial \psi\over\partial\theta}(x, x, t)$. We see that $$ ({\rm Tr} (e^{-tH_{\Lambda , \varepsilon }(\theta)}A))'_\theta = {\rm Tr}( Op(U_{\Lambda , \varepsilon }(\theta) T)\circ A) $$ with $$T={\partial \psi\over\partial\theta}\vert_{D_0}, \hskip 1cm D_0=\{(x,y)\in (\R^p)^{\Lambda}\times (\R^p)^{\Lambda}\ \vert\ x_{\Lambda \backslash Q_0}=y_{\Lambda \backslash Q_0}\}. $$ Moreover, $T(x,y)={\partial \psi\over\partial\theta}(x, x) + R_{\theta } (x,y)$, where $R_{\theta }$ is defined above. Following Proposition 5.1, $$ {\partial \psi\over\partial\theta}\vert_D= {\partial \psi\over\partial\theta}(0,0)+ \sum_{Q \subseteq \Lambda } \Big ( T_Q{\partial \psi\over\partial\theta} \Big ) _{\vert_D}. $$ In particular, $$ ({\rm Tr} (e^{-tH_{\Lambda , \varepsilon } (\theta) }))'_\theta ={\partial \psi\over\partial\theta}(0,0){\rm Tr}( e^{-tH_{\Lambda , \varepsilon }(\theta )}) + \sum_Q{\rm Tr}( e^{-tH_{\Lambda , \varepsilon }(\theta )}B_Q(\theta)), \leqno(11.6) $$ and $$ ({\rm Tr} (e^{-tH_{\Lambda , \varepsilon }(\theta )}A))'_\theta ={\partial \psi\over\partial\theta}(0,0){\rm Tr}( e^{-tH_{\Lambda , \varepsilon }(\theta )}A) + {\rm Tr}( Op(U_{\Lambda , \varepsilon } (\theta) R (\theta) )\circ A) + \sum_{Q \subseteq \Lambda }{\rm Tr}( e^{-tH_{\Lambda , \varepsilon }(\theta )}B_Q(\theta)A). \leqno(11.7) $$ We write $X=(x',x'')\in (\R^p)^{\Lambda }\times (\R^p)^{\Lambda }$ and denote by $H'(\theta),A',B'_Q$ (resp. $H''(\theta),A'',B''_Q$) the operators $H_\theta,A,B_Q$ seen as operators on $L^2((\R^p)^{\Lambda}\times (\R^p)^{\Lambda })$ as in section 9. 
From (11.6) and (11.7), we derive that $$F'_\theta(\theta)= { {\rm Tr}(Op(U_{\Lambda}(\theta ) R(\theta) )\circ A) \over {\rm Tr}\,(e^{-tH_{\Lambda , \varepsilon }(\theta )}) } + \sum_{Q\subseteq \Lambda } { {\rm Tr}\,(e^{-t(H'(\theta) +H''(\theta))}B'_QA') - {\rm Tr}\,(e^{-t(H'(\theta)+H''(\theta))}B''_QA') \over {\rm Tr}\,(e^{-t(H'(\theta)+H''(\theta))}) }. $$ It follows, by (9.6), (exchanging the role of ' and ''), that $$F'_\theta(\theta)= { {\rm Tr}(Op(U_{\Lambda , \varepsilon } (\theta ) R (\theta))\circ A) \over {\rm Tr}\,(e^{-tH_{\Lambda , \varepsilon } (\theta)}) } +{1\over 2}\sum_{Q \subseteq \Lambda } cov _{\Lambda , \varepsilon , \theta} (B_Q(\theta),A). $$ Therefore (11.4) is proved. \bigskip \noindent {\it Second step.} For each $x _{\Lambda \setminus Q_0}$ in $(\R^p)^{\Lambda \setminus Q_0}$ and $t>0$, let $\Phi (x _{\Lambda \setminus Q_0}, t)$ be the operator in ${\cal H}_{Q_0}$ with integral kernel $(x_{Q_0}, y_{Q_0}) \rightarrow (U_{\Lambda , \varepsilon }R) (\theta, x_{Q_0} , x_{\Lambda \setminus Q_0},y _{Q_0}, x_{\Lambda \setminus Q_0}, t)$. By (10.2), we can write $$ \vert {\rm Tr}\,(Op(U_{\Lambda , \varepsilon}(\theta) R_{\theta } )\circ A) \vert \leq \Vert A \Vert \ \int _{(\R^p)^{\Lambda \setminus Q_0}} \Vert \Phi (x _{\Lambda \setminus Q_0}, t) \Vert _{Q_0}^{(tr)} \ d x _{\Lambda \setminus Q_0}$$ By Lemma 10.2, we have, if $N= 3 p \sharp (Q_0) + 3$, with the notations (9.25), (9.26), with $X=(x', x'')$ replaced by $(x, y)$, $$ \Vert \Phi (x _{\Lambda \setminus Q_0}, t) \Vert _{Q_0}^{(tr)} \leq $$ $$ \leq C_N \ \int _{ (\R^{2p})^{Q_0} } (1 + |x_{Q_0}| + |y_{Q_0}|)^N\ |U_{\Lambda , \varepsilon }(\theta, ., t ) | _{Q_0} ^{(N)} \ |R_{\theta } (., t)| _{Q_0} ^{(N)} (x_{Q_0} , x_{\Lambda \setminus Q_0},y _{Q_0}, x_{\Lambda \setminus Q_0})dx_{Q_0} dy _{Q_O}. $$ By (11.3) and by Proposition 7.1 (inequality (7.5)), we can write $$|R_{\theta } (., t)| _{Q_0} ^{(N)} (x_{Q_0} , x_{\Lambda \setminus Q_0},y _{Q_0}, x_{\Lambda \setminus Q_0})\leq tC_1 |x_{Q_0} - y_{Q_0}| \ \varepsilon ^{ \gamma \delta (Q_0, P_1(m))}.$$ By Theorem 1.1, we can write, if $x_{\Lambda \setminus Q_0} = y_{\Lambda \setminus Q_0}$, $$ |U_{\Lambda , \varepsilon }(\theta, ., t ) | _{Q_0} ^{(N)} (x, y)\leq C(t, h, \sharp (Q_0)) \ (1 + |x_{Q_0} - y_{Q_0}|)^M\ e^{ - {|x_{Q_0} - y_{Q_0}|^2 \over 2th}} \ e^{C t |x_{Q_0} - y_{Q_0}|} \ U_{\Lambda , \varepsilon }(\theta, x, x, t ).$$ Therefore, since $ \int U_{\Lambda , \varepsilon } (\theta , x, x, t) dx = {\rm Tr}\,(e^{-tH_{\Lambda } (\theta) })$, $$ \vert{ {\rm Tr}(Op(U_{\Lambda }(\theta) R_{\theta } )\circ A) \over {\rm Tr}\,(e^{-tH _{\Lambda } (\theta) }) }\vert\leq C(t, h, \sharp (Q_0)) \ \Vert A \Vert\ \varepsilon^{\gamma \delta (Q_0, P_1(m))}. $$ Now, we shall estimate the second term in (11.4). 
If we apply Proposition 7.2, first with inequality (7.7), and then with (7.9), (7.10), we obtain: $$\Vert B_Q\Vert\leq t C_0 \varepsilon^{{1 +\gamma \over 2} {\rm diam }( Q)}, \hskip 1cm \Vert B_Q\Vert\leq t C_0 \varepsilon^{\gamma \delta(Q, P_1(m))}.$$ By Theorem 1.5 about the correlation of two operators, one of them being the multiplication by a bounded function, (which is applicable even if $Q$ and $Q_0$ are not disjoint), we have $$\vert cov (B_Q,A)\vert\leq K (t, h, {\rm diam} (Q_0))\ \big (M(t)\varepsilon ^{{1 + \gamma \over 2} }\big )^{{1 \over 5} \delta (Q_0, Q)} \Vert A \Vert \ \Vert B_Q \Vert , $$ and therefore, with another $K$, $$\vert cov (B_Q,A) \vert \leq K(t, h, {\rm diam} (Q_0)) (M(t) \varepsilon^{\gamma } )^{{1 \over 5} ({\rm diam }(Q)+ \delta (Q, Q_0) + \delta (Q, P_1(m)))} \ \varepsilon ^{ {1- \gamma \over 10} ( {\rm diam }(Q)+ \delta (Q, Q_0))}.$$ We have ${\rm diam }(Q)+ \delta (Q, Q_0) + \delta (Q, P_1(m)) \geq \delta (Q_0, P_1(m))$. By Lemma 11.2 below (point (11.8), applied with $\rho = \varepsilon ^{{1- \gamma \over 10}}$) we have, if $0< \rho \leq 2^{-d}$, $$\sum _{Q \subseteq \Lambda } \rho^{ {\rm diam}(Q)+ \delta (Q, Q_0)} \leq \sum _{\lambda \in Q_0} \sum _{Q \subset \Z^d} \rho ^{ {\rm diam}(Q)+ \delta (Q, \lambda )} \leq \sharp (Q_0) \left [ \sum _{I \subset \Z} \rho ^{{1 \over d} ({\rm diam}(I) + \delta (I, 0))} \right ]^d \leq \sharp (Q_0)\ C({1 \over 2})^d$$ Therefore, if $ \varepsilon ^{ {1- \gamma \over 10}} \leq 2^{-d}$, $\vert F'_\theta(\theta)\vert\leq K(t, h, {\rm diam}(Q_0))\ \Vert A\Vert (M(t) \varepsilon^{\gamma } )^{{1 \over 5} \delta (Q_0, P_1(m))}$. Thus, Proposition 11.1 is proved. \bigskip Theorem 1.2 follows easily, because, if $\Lambda _n$ is defined by (1.5) and if $Q_0$ is fixed, we have $\delta (Q_0, P_j(m)) \geq {m \over 2 }$ $(1 \leq j \leq d)$, if $m$ is large enough, and Proposition 11.1, applied $d$ times, proves that, if $m < n $ and $m$ is large enough, $$\vert E_{\Lambda _m, \varepsilon } (A) - E_{\Lambda _n, \varepsilon } (A)\vert \leq K(t, h, {\rm diam}(Q_0))\ (M(t) \varepsilon ^{ \gamma})^{{m\over 10} } \Vert A \Vert . $$ Therefore, under the conditions (1.7), the sequence $ E_{\Lambda _n,\varepsilon } (A)$ has a limit $\omega _{h, t, \varepsilon } (A)$, and we have, for $n$ large enough, the estimation (1.8). \bigskip \noindent {\it Proof of Theorem 1.3.} It is a straightforward modification of the proof of Theorem 1.2. Let $A$ be the multiplication operator by a function $f$, depending only on the variables $x_\lambda$, $\lambda\in Q_0$, and satisfying (1.9). We prove the equality (11.4) exactly as for Theorem 1.2. Since $A$ is a multiplication, we have: $$ {\rm Tr}(Op(U_{\Lambda ,\varepsilon }(\theta) R_{\theta})\circ A) = \int _{(\R^p)^{\Lambda }} U_{\Lambda } (x, x, t, \theta) R_{\theta }(x, x, t) f(x) dx$$ We estimate $R_{\theta}$ as before, and we apply Proposition 8.2. We obtain, if $\Lambda$ and $\Lambda _1$ are defined by (11.1) and $Q_0 \subset \Lambda _1$, $${\vert {\rm Tr}(Op(U_{\Lambda }(\theta) R_{\theta } )\circ A)\vert \over {\rm Tr}\,(e^{-tH (\theta) }) } \leq C(t, m, h, \sharp (E))\ N_m(f)\ \varepsilon^{\gamma \delta (Q_0, P_1(m))}$$ Instead of Theorem 1.5, we use Proposition 9.1, point b), remarking that $B_Q$ is the multiplication by a bounded function, and we obtain $$\vert cov (B_Q,A)\vert\leq K (t, m, {\rm diam} (Q_0))\ \big (M(t)\varepsilon ^{{1 + \gamma \over 2} }\big )^{ \delta (Q_0, Q)}\ N_m(f)\ \Vert B_Q \Vert . $$ The rest of the proof is unchanged. 
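\medskip The one-dimensional sums over intervals used in the proof above (and estimated in general in the Lemma below) can in fact be computed in closed form. For instance, for $0 < \rho < 1$, splitting the intervals $I \subset \Z$ into those containing $0$, those lying to the right of $0$ and those lying to the left of $0$, we get $$\sum _{I \subset \Z} \rho ^{{\rm diam }(I) + \delta (I, 0)} \ = \ \sum _{R \geq 0} (R+1) \rho ^{R} \ + \ 2 \sum _{a \geq 1} \sum _{R \geq 0} \rho ^{a + R} \ = \ {1 \over (1 - \rho )^2} + {2 \rho \over (1 - \rho )^2} \ = \ {1 + 2 \rho \over (1 - \rho )^2},$$ which is $\leq 8$ when $\rho \leq {1 \over 2}$. This is the inner sum appearing in the display above, with $\rho ^{1/d}$ in place of $\rho$.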
\bigskip We used the following Lemma in the proof of Proposition 11.1, and we shall use it again. \bigskip \noindent {\bf Lemma 11.2.} {\it For each $\rho \in ]0, 1[$, there exists a constant $C(\rho)>0$ such that, for all intervals $I$ and $I'$ of $\Z$, we have: $$\sum _{\lambda \in \Z } \rho ^{\delta (\lambda , I)} \leq C(\rho) + \sharp (I) \leqno (11.8)$$ $$\sum _{\lambda \in \Z } \rho ^{\delta (\lambda , I) +\delta (\lambda , I') } \leq ( C(\rho) + \inf (\sharp (I), \sharp (I')) + \delta (I, I')) \ \rho ^{\delta (I, I')} \leqno (11.9)$$ Moreover, for each finite interval $\{ \alpha , \ldots , \beta \} $ of $\Z$, for each point $m$ of this interval, we have the following inequalities, where the summations are over all the finite subintervals $I$, $I'$ and $I''$ of $\{ \alpha , \ldots , \beta \}$ or of $\Z$. $$\sum _{I \subset \Z } \rho ^{{\rm diam } (I \cup \{ m \} )} \leq C(\rho) \leqno (11.10)$$ $$\sum _{I, I' \subset \Z } \rho ^{{\rm diam } (I) + {\rm diam } (I' \cup \{ m \} ) + \delta (I, I')} \leq C(\rho) \leqno (11.11)$$ $$\sum _{I, I' \subseteq \{ \alpha , \ldots , \beta \} } \rho ^{ {\rm diam } (I) + {\rm diam } (I') + \delta (I , I')} \leq C(\rho ) \sharp ( \{ \alpha ,... , \beta \} ) \leqno (11.12)$$ $$\sum _{I, I' , I'' \subset \Z } \rho ^{ {\rm diam } (I) + {\rm diam } (I') + \delta (I , I') + {\rm diam } (I'' \cup \{ m \} ) + \delta ( I \cup I' , I'') } \leq C(\rho) \leqno (11.13)$$ $$\sum _{I, I' , I'' \subseteq \{ \alpha , \ldots , \beta \} } \rho ^{ {\rm diam } (I) + {\rm diam } (I') + \delta (I , I') + {\rm diam } (I'' ) + \delta ( I \cup I' , I'') } \leq C(\rho) \sharp ( \{ \alpha ,... , \beta \} ) . \leqno (11.14)$$ } \bigskip \noindent {\it Proof.} The proofs of (11.8) and (11.9) are direct. In (11.10), (11.11) and (11.13), we may assume that $m=0$. For (11.10), we remark that, if $I = [a, b]$, then ${\rm diam } (I \cup \{ 0 \})\geq {1\over 2}(\vert a \vert + \vert b \vert)$. For (11.11) and (11.13), we remark that, if $\rho \in ]0, 1[$, there is $C(\rho)>0$ such that, for each interval $J$ of $\Z$, we have $$\sum _{I \subset \Z} \rho ^{ {\rm diam }(I) + \delta (I, J)} \leq C(\rho) (1 + {\rm diam} (J))$$ (the sum is taken over all intervals $I$ of $\Z$). By this inequality, (11.11) follows from (11.10), and (11.13) from (11.11). The point (11.12) (resp. (11.14)) is an easy consequence of (11.11) (resp. (11.13)). For example, in (11.12), we can decompose the LHS as a sum of terms $S_m$ ($\alpha \leq m \leq \beta$), $S_m$ being the sum of all the terms corresponding to pairs $(I, I')$ such that the lower bound of $I' $ is $ m$. \bigskip \noindent {\it Proof of Theorem 1.4. } For each finite subset $\Lambda $ of $\Z^d$, let $\psi_{\Lambda}(x,y,t,h)$ be the function defined by Theorem 1.1, and set $$X_{\Lambda }(t, h, \varepsilon ) = { \partial \over \partial t} {\rm ln } \left [ {\rm Tr} (e^{-tH_{\Lambda , \varepsilon }} ) \right ] = { \int_{\R^{p\vert\Lambda\vert}} e^{-\psi_{\Lambda}(x,x,t,h)} {\partial \psi_{\Lambda}\over \partial t}(x,x,t,h)\, dx \over \int_{\R^{p\vert\Lambda\vert}} e^{-\psi_{\Lambda}(x,x,t,h)} \, dx }. $$ \bigskip Theorem 1.4 will be a consequence of the following Proposition. \bigskip \noindent {\bf Proposition 11.3.} {\it With these notations, there exists a constant $\sigma _1>0$ and a function $t \rightarrow F(t)$, bounded on each compact subset of $]0, + \infty [$, with the following properties. 
We can write, for each box $\Lambda $ of $\Z^d$, split into two boxes $\Lambda _1$ and $\Lambda _2$ as in (11.1), for each $t>0$, $h$ and $\varepsilon $ satisfying conditions of type (1.12), $$ \vert X_\Lambda (t, h, \varepsilon)\ - X_{\Lambda _1}(t, h, \varepsilon)\ -X_{\Lambda_2}(t, h, \varepsilon)\ \vert \leq F(t) \ \sharp (\Lambda_\bot) .\leqno(11.15) $$ } \bigskip \noindent {\it Proof.} With the notations recalled in the beginning of this section, we set $\varphi _{\theta }(x, t)= \psi _ {\Lambda }(x, x, t, h , \varepsilon, \theta)$, and $$F(\theta)= { \int_{\R^{p\vert\Lambda\vert}} e^{-\phi_\theta(x)} {\partial \phi_\theta\over \partial t}(x)\, dx \over \int_{\R^{p\vert\Lambda\vert}} e^{-\phi_\theta(x)} \, dx }.$$ We can write $$ X_{\Lambda}-X_{\Lambda_1}-X_{\Lambda_2} = F(0)-F(1)$$ By computations, similar to those of Theorem 1.2, we find that $F'_\theta$, the derivative of $F$ w.r.t. $\theta$, satisfies $F'_\theta(\theta)=F_1(\theta)+F_2(\theta)$, where $$ F_1(\theta)= { \int_{\R^{p\vert\Lambda\vert}} e^{-\phi_\theta(x)} {\partial^2\phi_\theta(x) \over \partial t \partial \theta } \, dx \over \int_{\R^{p\vert\Lambda\vert}} e^{-\phi_\theta(x)} \, dx }$$ and, (defining $cov (A, B)$ for two operators $A$ and $B$ by (9.6) and identifying a function $f$ and the operator of multiplication by $f$, which allows us to use the notation $cov(f, g)$ for two functions $f$ and $g$) $$F_2(\theta)={1\over 2}cov ( {\partial \phi_\theta\over\partial t} , {\partial \phi_\theta\over\partial \theta}).\leqno (11.16)$$ {\it Estimation of $F_1(\theta)$.} Following (1.19), with $V$ replaced by $V_{\theta }$, and differentiating with respect to $\theta$, we obtain $$ \vert\partial_{t}\partial_\theta \phi_\theta(x, t)\vert \leq {h^2 \over 2} \sum_{\lambda\in\Lambda} \vert \Big ( \Delta _{{x_\lambda}}\partial_\theta \psi_\theta\Big ) (x, x, t)\vert + h^2 \sum_{\lambda\in\Lambda} \vert\nabla _{{x_\lambda}}\partial_\theta \psi_\theta (x, x, t) \vert \ \vert \nabla _{{x_\lambda}}\psi_\theta (x, x, t)\vert +\vert \partial_\theta V_\theta\vert . $$ We see (using Proposition 7.1, inequalities (7.3) and (7.5)) that the first two terms, in the RHS of the inequality above, are bounded, if $ht \leq \sigma _0$ and $\varepsilon$ is small enough, by $$ C h^2 (t + t^2) \ \sum_{\lambda \in \Lambda} \varepsilon ^{{1 \over 2} \vert \lambda_1-m\vert} \leq 4 C h^2(t+ t^2) \ \sharp (\Lambda_\bot ), $$where $C$ is independent of all the parameters. We remark that $$\partial_\theta V_\theta=(V_{\Lambda_1}\oplus V_{\Lambda_2})-V_{\Lambda}= \sum _{\lambda \in \Lambda _1, \mu \in \Lambda _2} \varepsilon ^{|\lambda - \mu |} B_{\lambda - \mu }(x_{\lambda} , x_{\mu })$$ and , if $\Lambda _1$ and $\Lambda _2$ are defined in (11.1), that we have $\lambda _1 \leq m \leq \mu _1$ and that $(\lambda _2, ... , \lambda _d)$ and $(\mu _2, ... , \mu _d)$ are in $\Lambda _{\bot }$ if $\lambda $ is in $\Lambda _1$ and $\mu $ in $\Lambda _2$. Therefore $$\vert \partial_\theta V_\theta(x)\vert \ \leq \ C\sum_{\lambda\in \Lambda_1, \mu\in\Lambda_2} \varepsilon^{\vert\lambda-\mu\vert}\ \leq \ 4^d\ C \sharp ( \Lambda_\bot )$$ if $\varepsilon \leq 2^{-d}$. Hence $\vert{\partial^2\phi_\theta(x) \over \partial t \partial \theta}\vert\leq C(1 + t h^2) \sharp (\Lambda_\bot )$ and therefore $\vert F_1(\theta)\vert\leq C ( 1 + t h^2 ) \ \sharp ( \Lambda_\bot)$, provided that $ht \leq \sigma _0$ and $\varepsilon $ are small enough. 
\smallskip \noindent {\it Estimation of $F_2(\theta )$.} By (1.19), setting $\varphi (x,t \theta) = \psi _{\Lambda } (\theta , x, x, t, h, \varepsilon )$, we have, omitting $h$ and $\varepsilon$, $${\partial \phi (x, t , \theta) \over\partial t} = {h^2\over 2} (\Delta _x \psi_{\Lambda } )(\theta , x, x, t) -{h^2\over 2}\vert (\nabla _x \psi_{\Lambda } (\theta , x, x, t) \vert^2 +V_\theta (x). \leqno (11.17)$$ Using the cluster decomposition for $\psi_\theta$, we get, by Proposition 5.1 ii),$$\psi _{\Lambda } (\theta , x , y , t) = \psi _{\Lambda } (\theta , 0 , y - x , t)+ \sum _{Q \subseteq \Lambda } (T_Q \psi _{\Lambda })(\theta , x , y , t).$$Now, we take the derivatives of this equality with respect to the variables $x_{\lambda }$ ($\lambda \in \Lambda $), we restrict then to the diagonal, and we report into (11.17), we take also the derivative with respect to $\theta$, and we report both in the expression (11.16) of $F_2(\theta)$. We remark that, if a function does not depend on $x$, its correlation with any other one vanishes. In order to write more shortly what we obtain, we introduce the following notations:$$ v_{\lambda } (t) = - ( \nabla _{x_{\lambda }} \psi _{\theta } ) (0, 0 , t)\ \ \ \ \ \ \ f_{ \lambda , Q }(x, t) = (\Delta _{x_{\lambda }} T_Q \psi _{\theta } ) (x, x, t) \ \ \ \ \ \ \ \ g_{ \lambda , Q }(x, t) = (\nabla _{x_{\lambda }} T_Q \psi _{\theta } ) (x, x, t)$$ $$h_Q(x, t) = (T_Q {\partial\over \partial\theta}\psi_\theta ) (x, x, t).$$ Thus, we obtain: $$F_2(\theta ) = {h^2 \over 4} \sum _{ \lambda \in \Lambda \atop Q, Q' \subseteq \Lambda} cov ( f_{ \lambda , Q }, h_{Q'})\ - h^2 \sum _{ \lambda \in \Lambda \atop Q, Q' \subseteq \Lambda} v_{\lambda }(t)\ .\ cov ( g_{\lambda , Q }\ , \ h_{Q'}) - \leqno (11.18)$$ $$- {h^2 \over 2 } \sum _{ \lambda \in \Lambda \atop Q, Q', Q'' \subseteq \Lambda} cov \Big ( g_{\lambda , Q}.g_{\lambda , Q'}\ , \ h_{Q''} \Big ) + \sum _{ Q, Q' \subseteq \Lambda} cov ( T_Q V _{\theta}, h_{Q'}).$$Now, we shall estimate all the terms. By Proposition 5.1 i), $f_{\lambda , Q }$, $g_{\lambda , Q }$ and $h_Q$ depend only on $x_Q$. By a combination of the two points of Proposition 6.1, we have, if $ht \leq \sigma _0$ and $\varepsilon \leq \varepsilon _0( {1 \over 2} )$, $$\vert f_{\lambda , Q} (x, t)\vert + \vert g_{\lambda , Q} (x, t)\vert \leq Ct \varepsilon ^{ {1 \over 4} ({\rm diam }(Q) + \delta (\lambda , Q) )} \leqno (11.19) $$ By Theorem 1.1, we can write $\vert v_{\lambda }(t)\vert \leq Ct$. By Proposition 5.3, we can write, when $\varepsilon ^{3/4} \leq 2^{-d}$, $$\vert T_Q V _{\theta } \vert \leq \left \{ \matrix { C \varepsilon ^{ {1 \over 4} {\rm diam }( Q)} & {\rm if} & \sharp (Q) \geq 2 \cr C (1 + \vert x_Q \vert )\ \ \ & {\rm if} & \sharp (Q) =1 \cr } \right . \leqno (11.20)$$ By a combination of the points (7.7) and (7.10) of Proposition 7.2, we have, if $ht \leq \sigma _0$ and $\varepsilon \leq \varepsilon _0 ({1 \over 2})$, for each box $Q$ of $\Lambda $, $$\vert h_{Q}(x,t)\vert \leq C t \varepsilon ^{ {1 \over 4}\big ( \delta (Q, P_1(m)) + {\rm diam} (Q) \big )}. \leqno (11.21)$$ \medskip We apply Proposition 9.1 to estimate the four correlations in the expression of $F_2(\theta )$. The hypothesis (9.1) is satisfied with $m_1= m_2=0$, excepted for the term $cov ( T_Q V _{\theta }, h_{Q'})$ when $Q$ is reduced to a single point: then (9.1) is satisfied with $m_1= 1$ and $m_2=0$. 
Thus Proposition 9.1 is applicable, with $\gamma = {1 \over 2}$, to all our correlations under the conditions (1.12), if we choose $\varepsilon _0$ and $\sigma _1$ small enough (and we can choose them such that (11.19), (11.20) and (11.21) are valid under the same conditions). Under these conditions, we have, by Proposition 9.1 a), (11.19) and (11.21), $${h^2 \over 4} \vert cov ( f_{\lambda , Q} , h_{Q'})\vert \ \leq \ t h^2 \ \big ( M(t) \varepsilon ^{{1\over 2}} \big ) \ \sharp (Q)\ \Vert f_{\lambda , Q} \Vert \ \Vert h_{Q'}\Vert \ \leq \ C h^2 t \ \big ( M(t) \varepsilon ^{{1 \over 5}} \big )^{H(\lambda , Q , Q')}$$ where $$H(\lambda , Q , Q') = {\rm diam }(Q)+ {\rm diam } (Q') + \delta (Q , Q') + \delta (\lambda , Q) + \delta ( Q' , P_1(m)).$$ By Lemma 11.2, for each $\rho$ in $]0, 1[$, there exists $C(\rho)>0$ (which we can consider as an increasing function of $\rho$) such that, for each box $\Lambda$ with the notation (11.1), and for each integer $m$, we have $$\sum _{Q , Q' \subseteq \Lambda \atop \lambda \in \Lambda } \rho ^{H(\lambda , Q , Q')} \ \leq \ C( \rho) \ \sharp (\Lambda _{\perp}).$$ Applying this inequality to $\rho =M(t) \varepsilon ^{{1 \over 5}}$, we obtain, under the condition (1.12), $${h^2 \over 4} \sum _{ \lambda \in \Lambda \atop Q, Q' \subseteq \Lambda} \vert cov ( f_{ \lambda , Q }, h_{Q'}) \vert \leq Cth^2 \sharp ( \Lambda _{\bot }).$$ The other terms in the expression (11.18) of $F_2(\theta)$ are bounded similarly, except the terms $cov (T_Q V _{\theta}, h_{Q'})$ such that $\sharp (Q)=1$, for which we need Proposition 9.1 with $m_1 = 1$, $m_2=0$, and $\gamma = {1 \over 2}$. The function $K(t, 1, 0, \sharp (Q))$ appearing in (9.4) is independent of $\sharp (Q')$ since $m_2=0$. If $t>0$, and if $h$ and $\varepsilon $ satisfy (1.12), we obtain $$\vert cov ( T_{ \{ \lambda \} } V_{\theta} , h_{Q'})\vert \leq G(t) \ \ (M(t) \varepsilon ^{{1 \over 4}})^{H(\lambda , Q')}, $$ where $G(t)= K(t, 1, 0, 1) C^2$, $C$ being the constant of (11.20) and (11.21), and $ H(\lambda , Q')= \delta (\lambda , Q') + {\rm diam }(Q')+ \delta (Q' , P_1(m))$. By Lemma 11.2, there is an increasing function $\rho \rightarrow C(\rho)$ on $]0, 1[$ such that $$\sum _{\lambda \in \Lambda , Q' \subseteq \Lambda } \rho^{H(\lambda , Q')} \ \leq \ C(\rho )\ \sharp (\Lambda _{\perp}).$$ If we apply this to $\rho = M(t) \varepsilon ^{{1 \over 4}}$, we obtain, under the condition (1.12), $$\sum _{ \lambda \in \Lambda , Q' \subseteq \Lambda} \vert cov ( T_{ \{ \lambda \} } V_{\theta} , h_{Q'})\vert \leq G(t)C({1 \over 2}) \ \sharp ( \Lambda _{\bot }).$$ The Proposition is proved. \bigskip \noindent {\it End of the proof of Theorem 1.4.} Using Proposition 11.1 and the same arguments as Sj\"ostrand [22, Section 8, p. 45-46], we conclude that, if $\Lambda _n = \{ -n, \ldots , n \} ^d$, and if the hypotheses (1.12) are satisfied, ${X_{\Lambda _n}\over \sharp \Lambda _n }$ has a limit $U(t, h, \varepsilon)$ as $n \rightarrow \infty $, and that we can write, with another $F(t)$, $$\vert U(t, h, \varepsilon) - {X_{\Lambda _n}\over \sharp \Lambda _n } \vert \leq { F(t) \over n}.$$ \bigskip \centerline {\bf References.} \bigskip \noindent [1] S. ALBEVERIO, Y. KONDRATIEV, T. PASUREK, M. R\"OCKNER, Euclidean Gibbs states of quantum crystals. {\it Moscow Math. Journal,} {\bf 1}, No 3, (2001), p. 307-313. \smallskip \noindent [2] S. ALBEVERIO, Y. KONDRATIEV, T. PASUREK, M. R\"OCKNER, Gibbs states on loop lattices: existence and a priori estimates. {\it C. R. Acad. Sc. Paris, S\'erie I}, {\bf 333}, (2001), p. 1005-1009. \smallskip \noindent [3] N. ASHCROFT, D.
MERMIN, {\it Solid State Physics.} Saunders College, Fort Worth, 1976. \smallskip \noindent [4] V. BACH, J.S. M\"OLLER, Correlation at low temperature. I. Exponential decay. {\it J. Funct. Anal.}, {\bf 203} (2003), no. 1, 93-148. \smallskip \noindent [5] J. BELLISSARD, R. HOEGH-KROHN, Compactness and the maximal Gibbs state for random Gibbs fields on a lattice. {\it Comm. Math. Phys.}, {\bf 84} (1982), no. 3, 297-327. \smallskip \noindent [6] O. BRATTELI, D.W. ROBINSON, {\it Operator algebras and quantum statistical mechanics. 2. Equilibrium states. Models in quantum statistical mechanics.} Second edition. Texts and Monographs in Physics. Springer-Verlag, Berlin, 1997. \smallskip \noindent [7] B. HELFFER, {\it Semi-classical analysis for Schr\"odinger operators, Laplace integrals and transfer operators in large dimension: an introduction.} Cours, Universit\'e de Paris-Sud, 1995. \smallskip \noindent [8] B. HELFFER, Remarks on the decay of correlations and Witten Laplacians, Brascamp-Lieb inequalities and semi-classical limit, {\it J. Funct. Anal.,} {\bf 155}, (2), (1998), p.571-586. \smallskip \noindent [9] B. HELFFER, Remarks on the decay of correlations and Witten Laplacians, II. Analysis of the dependence of the interaction. {\it Rev. Math. Phys.} {\bf 11} (3), (1999), p.321-336. \smallskip \noindent [10] B. HELFFER, Remarks on the decay of correlations and Witten Laplacians, III. Applications to the logarithmic Sobolev inequalities. {\it Ann. I.H.P. Proba. Stat.,} {\bf 35}, (4), (1999), p.483-508. \smallskip \noindent [11] B. HELFFER, J. SJ\"OSTRAND, Semiclassical expansions of the thermodynamic limit for a Schr\"odinger equation. I. The one well case. {\it M\'ethodes semi-classiques, Volume 2}, Ast\'erisque 210, S.M.F. (Paris), 1992. \smallskip \noindent [12] B. HELFFER, J. SJ\"OSTRAND, On the correlation for Kac like models in the convex case, {\it J. Stat. Phys.}, {\bf 74} (1, 2), (1994), p.349-409. \smallskip \noindent [13] C. KITTEL, {\it Introduction to Solid State Physics.} J. Wiley, New York, 1976. \smallskip \noindent [14] V. A. MALYSHEV, R. A. MINLOS, {\it Gibbs random fields. Cluster expansions.} Mathematics and its Applications (Soviet Series), 44. Kluwer Academic Publishers Group, Dordrecht, 1991. \smallskip \noindent [15] R. A. MINLOS, {\it Introduction to Mathematical Statistical Physics.} University Lecture Series {\bf 19}, American Mathematical Society, Providence, 2000. \smallskip \noindent [16] R. A. MINLOS, E.A. PECHERSKY, V. A. ZAGREBNOV, Analyticity of the Gibbs states for a quantum anharmonic crystal: no order parameter. {\it Ann. Henri Poincar\'e} {\bf 3} (2002), p. 921-938. \smallskip \noindent [17] R. A. MINLOS, A. VERBEURE, V. A. ZAGREBNOV, A quantum crystal model in the light-mass limit: Gibbs states. {\it Rev. Math. Phys.} {\bf 12}, No 7, (2000), p. 981-1032. \smallskip \noindent [18] D. ROBERT, {\it Autour de l'approximation semi-classique.} Progress in Mathematics, 68. Birkh\"auser Boston, Inc., Boston, MA, 1987. \smallskip \noindent [19] Ch. ROYER, Formes quadratiques et calcul pseudodiff\'erentiel en grande dimension. {\it Pr\'e\-publication 00.05.} Reims, 2000. \smallskip \noindent [20] D. RUELLE, {\it Statistical Mechanics: rigorous results.} Addison-Wesley, 1969. \smallskip \noindent [21] B. SIMON, {\it The Statistical Mechanics of Lattice Gases.} Vol. I. Princeton Series in Physics. Princeton, 1993. \smallskip \noindent [22] J. SJ\"OSTRAND, Evolution equations in a large number of variables, {\it Math. Nachr.} {\bf 166} (1994), 17-53.
\smallskip \noindent [23] J. SJ\"OSTRAND, {\it Complete asymptotics for correlations of Laplace integrals in the semiclassical limit.} M\'emoires S.M.F., {\bf 83}, (2000). \bigskip laurent.amour@univ-reims.fr \medskip claudy.cancelier@univ-reims.fr \medskip pierre.levy-bruhl@univ-reims.fr \medskip jean.nourrigat@univ-reims.fr \end ---------------0312121401860--