\documentstyle{amsppt} %\mag=\magstep1 %\hcorrection{-1truecm} \pageheight{23truecm} \pagewidth{17truecm} %\NoRunningHeads \topmatter \title On the Poisson Limit Theorems of Sinai and Major \endtitle \author Nariyuki Minami \endauthor \affil Institute of Mathematics, University of Tsukuba \\ Tsukuba, Ibaraki 305-8571, Japan \\ E-mail: minami\@sakura.cc.tsukuba.ac.jp \endaffil \abstract Let $f(\varphi)$ be a positive continuous function on $0\leq\varphi\leq\Theta$ , where $\Theta\leq2\pi$ , and let $\xi$ be the number of two-dimensional lattice points in the domain $\Pi_R(f)$ between the curves $r=(R+c_1/R)f(\varphi)$ and $r=(R+c_2/R)f(\varphi)$ , where $c_1<c_2$ are fixed constants. Sinai and Major proved limit theorems asserting that, when $R$ or the curve $f$ itself is suitably random, the distribution of $\xi$ is asymptotically Poissonian. We give a unified treatment of these theorems. \endabstract \endtopmatter \document \head {\bf \S1 Introduction} \endhead For a continuous function $f:[0,\Theta]\to(0,\infty)$ , consider the domain $\Pi_R(f)$ defined by \align \Pi_R(f)=\{x\in\bold R^2 &\vert\ 0\leq\varphi(x)\leq\Theta\ ,\tag1.2\\ & (R+c_1/R)f(\varphi(x))<\vert x\vert<(R+c_2/R)f(\varphi(x))\}\ . \endalign Here $c_1<c_2$ are fixed constants and $R>0$ is a parameter which we shall let tend to infinity. Moreover $\vert x\vert=r=\sqrt{x_1^2 +x_2^2}$ is the distance of $x$ from the origin and $\varphi(x)$ is the angle between the vector $x=(x_1,x_2)$ and the positive real axis. If we set $$\lambda(f)=(c_2-c_1)\int_0^{\Theta}f(\varphi)^2d\varphi\ ,\tag 1.3$$ then as $R\to\infty$ , the area of $\Pi_R(f)$ is asymptotically given by $\lambda(f)(1+O(R^{-2}))$ . Thus the domain $\Pi_R(f)$ becomes thinner and thinner, while its area stays asymptotically constant. Now consider the quantity $$\xi=\xi(R;f)=\sharp\{\Pi_R(f)\cap\bold Z^2\}\ ,\tag 1.4$$ which is the number of lattice points in $\Pi_R(f)$ . Then the problem, which is of physical significance, is the following:\par Suppose the parameter $R>0$ is randomly distributed on the interval $I_L=(a_1L,a_2L)$ , $0<a_1<a_2$ , according to the uniform probability distribution $\mu_L$ . What is then the limiting distribution of $\xi$ as $L\to\infty$ ? Following Sinai and Major, we regard the curve $f$ itself as random as well. Namely, given constants $0<b_1<b_2$ , let $\Cal X$ be the space of continuous functions $f:[0,\Theta]\to[b_1,b_2]$ , and let $\bold P$ be a probability measure on $\Cal X$ such that for each $k\geq1$ and $0\leq\varphi_1<\cdots<\varphi_k\leq\Theta$ , the random vector $(f(\varphi_1),\ldots,f(\varphi_k))$ has a density $p_k(y_1,\ldots,y_k\vert\varphi_1,\ldots,\varphi_k)$ . We shall impose on $\bold P$ the following conditions.\par \bigskip \definition{Condition (I)} For each $k\geq1$ , the density $p_k(y_1,\ldots,y_k\vert\varphi_1,\ldots,\varphi_k)$ is of class $C^1$ in all of its variables. \enddefinition \bigskip \definition{Condition (II)} $\bold P$ satisfies at least one of the following two conditions: \roster \item"(II-1)" There exist a $\sigma\in(0,1)$ and constants $B_k$ such that $$p_k(y_1,\ldots,y_k\vert\varphi_1,\ldots,\varphi_k)\ \leq\ B_k\prod_{j=2}^k(\varphi_j-\varphi_{j-1})^{-\sigma}$$ for all $k\geq2$ and $0\leq\varphi_1<\cdots<\varphi_k\leq\Theta$ . \item"(II-2)" There exists a $b_3>0$ such that for $\bold P$-almost all $f\in\Cal X$ , $$Y(f)\equiv\sup_{0\leq\varphi_1<\varphi_2\leq\Theta} \frac{\vert f(\varphi_2)-f(\varphi_1)\vert}{\varphi_2-\varphi_1}\ \leq\ b_3\ . 
\tag1.7$$ Moreover there exists a $\tau\in(1,2)$ such that the inequality $$p_k(y_1,\ldots,y_k\vert\varphi_1,\ldots,\varphi_k)\ \leq\ C_k\prod_{j=2}^k(\varphi_j-\varphi_{j-1})^{-\tau} \tag 1.8$$ holds with some constant $C_k$ for all $k\geq2$ and $0\leq\varphi_1<\cdots<\varphi_k\leq\Theta$ . \endroster \enddefinition \bigskip Under these conditions, we can prove Theorems 1 and 2 below: \bigskip \proclaim{Theorem 1} As $R\to\infty$ , the distribution of $\xi(R;\cdot)$ under the probability measure $\bold P$ converges to a mixture of Poisson distributions, each having the parameter $\lambda(f)$ defined in (1.3). Namely we have $$\lim_{R\to\infty}\bold P(\xi(R;\cdot)=k)=\bold E_{\bold P} \left[e^{-\lambda(f)}\frac{\lambda(f)^k}{k!}\right]\ ,\ k=0,1,\ldots. \tag 1.9$$ \endproclaim \bigskip As an immediate corollary, we obtain Sinai's theorem ([S2]). \bigskip \proclaim{Corollary} As $L\to\infty$ , the distribution of $\xi(\cdot;\cdot)$ under the probability measure $\mu_L\times\bold P$ converges to the same mixture of Poisson distributions, namely $$\lim_{L\to\infty}(\mu_L\times\bold P)(\xi=k)= \bold E_{\bold P}\left[e^{-\lambda(f)}\frac{\lambda(f)^k}{k!}\right]\ , \ k=0,1,\ldots. \tag 1.10$$ \endproclaim \bigskip Sinai proved this assertion under conditions which are close to our conditions (I) and (II-2) with $\tau=1$ . Under the same conditions, he also showed the following theorem, which we shall prove simultaneously with Theorem 3 under the wider conditions (I) and (II).\par \bigskip \proclaim{Theorem 2} There is a sequence $\bar{L}_n\to\infty$ such that for $\bold P$-almost all $f\in\Cal X$ , the distribution of $\xi(\cdot;f)$ under $\mu_{\bar{L}_n}$ converges to the Poisson distribution with parameter $\lambda(f)$ , namely we have $$\lim_{n\to\infty}\mu_{\bar{L}_n}(\xi(\cdot;f)=k)= e^{-\lambda(f)}\frac{\lambda(f)^k}{k!}\ ,\ k=0,1,\ldots, \tag 1.11$$ for $\bold P$-almost all $f\in\Cal X$ . 
\endproclaim \bigskip If we assume, in addition to Conditions (I) and (II), the following technical condition, we can get rid of subsequences.\par \bigskip \definition{Condition (III)} For some constants $A_k>0$ , $\nu_k\in(0,d/2)$ , where $k\geq2$ and $d=1-\sigma$ or $=2-\tau$ according to which of (II-1) and (II-2) $\bold P$ satisfies, we have $$\vert\partial_i p_k(y_1,\ldots,y_k\vert\varphi_1,\ldots,\varphi_k)\vert \ \leq\ A_k\exp(\beta^{-\nu_k}) \tag 1.12$$ whenever $k\geq2$ , $b_1\leq y_j\leq b_2$ , $\vert\varphi_i-\varphi_j\vert\geq\beta>0$ , $i,j=1,\ldots,k$ . Here $\partial_i$ denotes either $\partial/\partial y_i$ or $\partial/\partial\varphi_i$ . \enddefinition \bigskip \proclaim{Theorem 3} If we assume Condition (III) in addition to Conditions (I) and (II), then for $\bold P$-almost all $f\in\Cal X$ , the distribution of $\xi(\cdot;f)$ under $\mu_L$ converges to the Poisson distribution with parameter $\lambda(f)$ as $L\to\infty$ , namely we have $$\lim_{L\to\infty}\mu_L(\xi(\cdot;f)=k)=e^{-\lambda(f)} \frac{\lambda(f)^k}{k!}\ ,\ k=0,1\ldots, \tag 1.13$$ for $\bold P$-almost all $f\in\Cal X$ . \endproclaim \bigskip This was proved by Major under Conditions (I), (II-2) and a slightly more restrictive condition than (III).\par \bigskip Before closing this introduction, we shall give an example of $\bold P$ satisfying our conditions (I), (II-1) and (III). (An example which satisfies (II-2) instead of (II-1) was discussed by Sinai [S2] and Major [M].)\par Let $\{X_t;t\ge0\}$ be the reflecting Brownian motion on the interval $S=[b_1,b_2]$ with fixed $X_0=b$ , and let $\bold P$ be its probability law. Then if we set $f(\varphi)=X_{1+\varphi}$ , the law of $\{f(\varphi); 0\le\varphi\le\Theta\}$ under $\bold P$ satisfies our requirements.\par In fact, the infinitesimal generator of $\{X_t\}$ is the Neumann Laplacian $\frac1{2}\Delta$ on the interval $S$ . 
Let $\cdots<-\lambda_n<\cdots<-\lambda_1 <-\lambda_0=0$ be its eigenvalues and $\psi_n(x)$ , $n\ge0$ be the corresponding normalized eigenfunctions. The transition density $P_t(x,y)$ of $\{X_t\}$ has the eigenfunction expansion: $$P_t(x,y)=\sum_{n\ge0}e^{-\lambda_n t}\psi_n(x)\psi_n(y)\ ,\tag1.16$$ and if $0\le\varphi_1<\cdots<\varphi_k\le\Theta$ , the joint distribution of $(f(\varphi_1),\ldots,f(\varphi_k))$ under $\bold P$ has the density \align &\ p_k(y_1,\ldots,y_k\vert\varphi_1,\ldots,\varphi_k) \tag1.17\\ &=P_{1+\varphi_1}(b,y_1)P_{\varphi_2-\varphi_1}(y_1,y_2)\cdots P_{\varphi_k-\varphi_{k-1}}(y_{k-1},y_k)\ . \endalign Now it is elementary to calculate $\lambda_n$ and $\psi_n(x)$ explicitly: $$\lambda_n=\frac{\pi^2 n^2}{2(b_2-b_1)^2} \quad;\ \psi_0(x)=\frac1{\sqrt{b_2-b_1}}\ ,\ \psi_n(x)=\sqrt{\frac2{b_2-b_1}}\cos\frac{n\pi}{b_2-b_1}(x-b_1)\ ,\ n\ge1\ .\tag1.18$$ From this, it is clear that $P_t(x,y)$ is $C^1$ on $(0,\infty)\times[b_1,b_2]^2$ and hence from (1.17), our $p_k(\cdot\vert\cdot)$ is $C^1$ in all of its variables $y_1,\ldots,y_k$ and $\varphi_1,\ldots,\varphi_k$ as required in Condition (I).\par It is clear from (1.16), (1.17) and (1.18) that there is a constant $B'_k$ such that $$p_k(y_1,\ldots,y_k\vert\varphi_1,\ldots,\varphi_k)\leq B'_k\prod_{j=2}^k\left(\sum_{n\ge0}e^{-\lambda_n(\varphi_j-\varphi_{j-1})} \right)\ .\tag1.19$$ On the other hand, if we let $$N(\lambda)=\sum_{\lambda_n\le\lambda}1=\left[\frac{\sqrt{2\lambda}} {\pi}(b_2-b_1)\right]\ , \tag1.20$$ then $$\sum_{n\ge0}e^{-\lambda_n t}=\int_{0-}^{\infty}e^{-\lambda t}dN(\lambda) \tag1.21$$ and since $$N(\lambda)\sim\frac{\sqrt{2}(b_2-b_1)}{\pi}\sqrt{\lambda} \tag1.22$$ as $\lambda\to+\infty$ , we can use the Abelian theorem to obtain $$\sum_{n\ge0}e^{-\lambda_n t}\sim\text{const.}t^{-1/2} \tag1.23$$ as $t\downarrow0$ , where the constant can be explicitly given. 
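As a numerical illustration of (1.16), (1.18) and (1.23) (a sanity check only, with the interval chosen as $[b_1,b_2]=[0,1]$ for simplicity), one can verify that the truncated expansion of $P_t(x,\cdot)$ integrates to one, and that the trace follows the $t^{-1/2}$ law with the explicit constant $(b_2-b_1)/\sqrt{2\pi}$:

```python
import math

B1, B2 = 0.0, 1.0              # the interval S = [b1, b2], chosen for the illustration
ELL = B2 - B1

def lam(n):
    # eigenvalues (1.18) of the Neumann Laplacian (1/2) d^2/dx^2 on S
    return math.pi ** 2 * n ** 2 / (2 * ELL ** 2)

def psi(n, x):
    # normalized Neumann eigenfunctions; psi_0 is the constant function
    if n == 0:
        return 1.0 / math.sqrt(ELL)
    return math.sqrt(2.0 / ELL) * math.cos(n * math.pi * (x - B1) / ELL)

def P(t, x, y, nmax=400):
    # transition density via the eigenfunction expansion (1.16)
    return sum(math.exp(-lam(n) * t) * psi(n, x) * psi(n, y) for n in range(nmax))

# P_t(x, .) should integrate to 1 over S (midpoint Riemann sum)
t, x, ngrid = 0.01, 0.3, 1000
mass = sum(P(t, x, B1 + (j + 0.5) * ELL / ngrid) for j in range(ngrid)) * ELL / ngrid

# small-t trace asymptotics (1.23): sum_n exp(-lam_n t) ~ ell / sqrt(2 pi t)
s = 1e-6
trace = sum(math.exp(-lam(n) * s) for n in range(5000))
predicted = ELL / math.sqrt(2 * math.pi * s)
print(mass, trace / predicted)
```

The trace ratio tends to $1$ as $t\downarrow0$, in accordance with the Abelian theorem.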
Hence by (1.19), there is another constant $B_k>0$ such that $$p_k(y_1,\ldots,y_k\vert\varphi_1,\ldots,\varphi_k)\leq B_k\prod_{j=2}^k(\varphi_j-\varphi_{j-1})^{-1/2}\ ,\tag1.24$$ so that Condition (II-1) is valid with $\sigma=1/2$ .\par As is obvious from (1.17), in order to verify Condition (III), it suffices to obtain an upper bound of $\vert(\partial/\partial y)P_t(x,y)\vert$ and $\vert(\partial/\partial t)P_t(x,y)\vert$ . Differentiating (1.16) term by term, we obtain $$\left\vert\frac{\partial}{\partial y}P_t(x,y)\right\vert \leq\text{const.}\int_{0-}^{\infty}e^{-\lambda t}dN_1(\lambda) \tag1.25$$ and $$\left\vert\frac{\partial}{\partial t}P_t(x,y)\right\vert \leq\text{const.}\int_{0-}^{\infty}e^{-\lambda t}dN_2(\lambda)\ ,\tag1.26$$ where we have set $$N_1(\lambda)=\sum_{n;\lambda_n\le\lambda}n \quad;\ N_2(\lambda)=\sum_{n;\lambda_n\le\lambda}n^2\ .\tag1.27$$ Since as $\lambda\to\infty$ we have $$N_1(\lambda)\sim\text{const.}\lambda\quad;\ N_2(\lambda)\sim\text{const.}\lambda^{3/2}\ ,\tag1.28$$ we can again use the Abelian theorem to obtain $$\left\vert\frac{\partial}{\partial y}P_t(x,y)\right\vert =O(t^{-1}) \tag1.29$$ and $$\left\vert\frac{\partial}{\partial t}P_t(x,y)\right\vert =O(t^{-3/2}) \tag1.30$$ as $t\downarrow0$ . Hence we can conclude that for some constant $A_k>0$ , $$\vert\partial_i p_k(y_1,\ldots,y_k\vert\varphi_1,\ldots,\varphi_k)\vert \leq A_k\beta^{-3/2} \tag1.31$$ whenever $b_1\le y_j\le b_2$ , $\vert\varphi_i-\varphi_j\vert\ge\beta>0$ , $i,j=1,\ldots,k$ , which is much stronger than (1.12).\par \head {\bf \S2 Proof of Theorem 1} \endhead For $k\geq1$ , define $$\xi_k=\xi_k(R;f)=\pmatrix \xi \\ k \endpmatrix 1_{\{\xi\geq k\}} =\sum_{n=k}^{\infty}\frac{n!}{k!(n-k)!}1_{\{\xi=n\}}\ .\tag 2.1$$ As in [S2], for the proof of Theorem 1, it suffices to prove $$\lim_{R\to\infty}\bold E_{\bold P}[\xi_k(R;\cdot)] =\frac1{k!}\bold E_{\bold P}[\lambda(f)^k] \tag 2.2$$ for each $k\geq1$ . 
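The reduction to (2.2) rests on the standard method of moments: for a Poisson variable $X$ with mean $\lambda$ one has $\bold E\left[\binom{X}{k}\right]=\lambda^k/k!$ for every $k\geq1$, and these factorial moments determine the Poisson law. A quick numerical confirmation of this identity (illustrative only, not part of the proof):

```python
import math

def poisson_binom_moment(lam, k, nmax=400):
    """E[binom(X, k)] for X ~ Poisson(lam), summed from the pmf
    exactly as xi_k is built from the events {xi = n} in (2.1)."""
    total, p = 0.0, math.exp(-lam)      # p = P(X = 0)
    for n in range(nmax):
        if n >= 1:
            p *= lam / n                # P(X = n) from P(X = n-1)
        if n >= k:
            total += math.comb(n, k) * p
    return total

lam, k = 2.5, 3
print(poisson_binom_moment(lam, k), lam ** k / math.factorial(k))
```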
On the other hand, by a simple combinatorial argument, we can write $$\xi_k=\sum_{\sharp A=k}1_{\{A\subset\Pi_R(f)\}}\ ,\tag 2.3$$ where the summation ranges over all $k$-point subsets $A$ of $\bold Z^2$ . \par If $m_1,m_2\in\bold Z^2$ are distinct but $\varphi(m_1)=\varphi(m_2)$ , then $m_2-m_1$ is parallel to $m_1$ , so that we must have $\vert\,\vert m_1\vert-\vert m_2\vert\,\vert\geq1$ . Since the radial width of $\Pi_R(f)$ tends to zero, it follows from the definition of $\Pi_R(f)$ that if $R>0$ is large, then $m_1,m_2\in\Pi_R(f)$ with $m_1\ne m_2$ is possible only if $\varphi(m_1)\ne\varphi(m_2)$ . Noting $b_1\leq f\leq b_2$ , we can therefore rewrite (2.3) as $$\xi_k=\sum_{(m_1,\ldots,m_k)\in\Cal Z_k(R)}1_{\{m_1,\ldots,m_k\in\Pi_R(f)\}} \tag2.4$$ when $R>0$ is large, where we have set \align \Cal Z_k(R)=\{(m_1,\ldots,m_k)\in(\bold Z^2)^k\vert\ & \frac{b_1}2 R\leq\vert m_j\vert\leq2b_2R\ , j=1,\ldots,k \tag 2.5\\ &\text{and}\ 0\leq\varphi(m_1)<\cdots<\varphi(m_k)\leq\Theta\}\ . \endalign Now it is easy to see that $m\in\Pi_R(f)$ is equivalent to $$f(\varphi(m))\in I_m^R\equiv\left(\frac{\vert m\vert}{R+c_2/R}, \frac{\vert m\vert}{R+c_1/R}\right)\ ,\tag2.6$$ and hence $$\bold E_{\bold P}[\xi_k(R;\cdot)]=\sum_{\Cal Z_k(R)}\bold P (f(\varphi(m_j))\in I_{m_j}^R\ ,\ j=1,\ldots,k)\ .\tag 2.7$$ Take a $\beta>0$ and define $$\Cal Z_k(\beta;R)=\{(m_1,\ldots,m_k)\in\Cal Z_k(R)\vert\ \varphi(m_j)-\varphi(m_{j-1})\geq\beta\ ,\ j=2,\ldots,k\}\ ;\tag 2.8$$ and $$\Cal Z'_k(\beta;R)=\Cal Z_k(R)\setminus\Cal Z_k(\beta;R) \tag 2.9$$ so that $$\sum_{\Cal Z_k(R)}=\sum_{\Cal Z_k(\beta;R)}+\sum_{\Cal Z'_k(\beta;R)} \ .\tag 2.10$$ Let us first estimate the sum over $\Cal Z_k(\beta;R)$ . 
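The passage from the defining inequalities of $\Pi_R(f)$ to the membership criterion (2.6) can be checked mechanically. The following sketch (illustrative only; the curve $f$, with values in $[b_1,b_2]=[0.8,1.2]$, and the parameters $\Theta=\pi/2$, $c_1=-1$, $c_2=1$ are arbitrary choices) counts $\xi(R;f)$ both ways and confirms that the two criteria select the same lattice points:

```python
import math

C1, C2, THETA = -1.0, 1.0, math.pi / 2

def f(phi):
    # arbitrary test curve with values in [0.8, 1.2]
    return 1.0 + 0.2 * math.sin(2 * phi)

def lattice_points(R):
    # all m in Z^2 \ {0} with 0 <= phi(m) <= Theta, below a crude outer radius bound
    M = int((R + C2 / R) * 1.2) + 2
    for m1 in range(M + 1):
        for m2 in range(M + 1):
            if (m1, m2) != (0, 0):
                phi = math.atan2(m2, m1)
                if phi <= THETA:
                    yield math.hypot(m1, m2), phi

def xi_direct(R):
    # definition (1.4): (R + c1/R) f < |m| < (R + c2/R) f
    return sum(1 for r, phi in lattice_points(R)
               if (R + C1 / R) * f(phi) < r < (R + C2 / R) * f(phi))

def xi_interval(R):
    # criterion (2.6): f(phi(m)) lies in the interval I_m^R
    return sum(1 for r, phi in lattice_points(R)
               if r / (R + C2 / R) < f(phi) < r / (R + C1 / R))

print(xi_direct(40.0), xi_interval(40.0))
```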
By the definition (2.6) of $I_m^R$ , we have as $R\to\infty$ , $$\text{the center of}\ I_m^R=\frac{\vert m\vert}R(1+O(R^{-2})) \tag2.11$$ and $$\text{the length of}\ I_m^R=\frac{(c_2-c_1)\vert m\vert}{R^3} (1+O(R^{-2}))\ .\tag2.12$$ In particular, if $(m_1,\ldots,m_k)\in\Cal Z_k(R)$ , then the length of $I_{m_j}^R$ is $O(R^{-2})$ for $j=1,\ldots,k$ .\par Define for $\beta>0$ \align \Psi_k(\beta)=&\sup\Sb (y_1,\ldots,y_k)\\ \in[b_1,b_2]^k\endSb \sup\Sb (\varphi_1,\ldots,\varphi_k)\in[0,\Theta]^k\\ \varphi_j-\varphi_{j-1}\ge\beta\\ j=2,\ldots,k\endSb \max_{1\le i\le k} \tag2.13\\ &\max\left\{\vert\frac{\partial}{\partial y_i} p_k(y_1,\ldots,y_k\vert\varphi_1,\ldots,\varphi_k)\vert\ ,\ \vert\frac{\partial}{\partial \varphi_i} p_k(y_1,\ldots,y_k\vert\varphi_1,\ldots,\varphi_k)\vert\right\} \endalign and $$\psi_k(\beta)=\sup\Sb (y_1,\ldots,y_k)\\ \in[b_1,b_2]^k\endSb \sup\Sb (\varphi_1,\ldots,\varphi_k)\in[0,\Theta]^k\\ \varphi_j-\varphi_{j-1}\ge\beta\\ j=2,\ldots,k\endSb p_k(y_1,\ldots,y_k\vert\varphi_1,\ldots,\varphi_k)\ . \tag2.14$$ Then by Condition (I), (2.11), (2.12) and the remark following them, we have \align &\ \bold P(f(\varphi(m_j))\in I_{m_j}^R\ ,\ j=1,\ldots,k) \tag2.15\\ &=\int_{I_{m_1}^R}dy_1\cdots\int_{I_{m_k}^R}dy_k\ p_k(y_1,\ldots,y_k\vert\varphi(m_1),\ldots,\varphi(m_k))\\ &=p_k(\frac{\vert m_1\vert}R,\ldots,\frac{\vert m_k\vert}R \vert\varphi(m_1),\ldots,\varphi(m_k))\prod_{j=1}^k \frac{(c_2-c_1)\vert m_j\vert}{R^3}\\ &\quad +O((\psi_k(\beta)+\Psi_k(\beta))R^{-2k-2})\ . \endalign Hence noting $\sharp\Cal Z_k(R)=O(R^{2k})$ , we get \align &\ \sum_{\Cal Z_k(\beta;R)}\bold P(f(\varphi(m_j))\in I_{m_j}^R\ , \ j=1,\ldots,k) \tag2.16\\ &=(c_2-c_1)^k\sum_{\Cal Z_k(\beta;R)} p_k(\frac{\vert m_1\vert}R,\ldots,\frac{\vert m_k\vert}R \vert\varphi(m_1),\ldots,\varphi(m_k))\left(\prod_{j=1}^k \frac{\vert m_j\vert}{R}\right)\left(\frac1{R^2}\right)^k\\ &\quad +O((\psi_k(\beta)+\Psi_k(\beta))R^{-2})\ . 
\endalign Now let us consider the function on $(\bold R^2)^k$ defined by $$R(x_1,\ldots,x_k)=p_k(\vert x_1\vert,\ldots,\vert x_k\vert\ \vert\ \varphi(x_1),\ldots,\varphi(x_k))\left(\prod_{j=1}^k\vert x_j\vert\right)\ . \tag2.17$$ Then the sum on the right hand side of (2.16) is a Riemann-sum approximation of the integral $$(c_2-c_1)^k\idotsint_{D_k(\beta)}dx_1\cdots dx_k\ R(x_1,\ldots,x_k)\ , \tag 2.18$$ where the domain $D_k(\beta)$ of integration is given by \align D_k(\beta)=\{(x_1,\ldots,x_k)\in(\bold R^2)^k\vert &\frac{b_1}2\le\vert x_j\vert\le2b_2\ ,\ j=1,\ldots,k\ , \tag2.19\\ &0\le\varphi(x_j)\le\Theta\ ,\\ &\varphi(x_j)-\varphi(x_{j-1})\ge\beta\ ,\ j=2,\ldots,k\ \}\ . \endalign It is easy to see that $$\max_{1\le j\le k}\left\Vert\frac{\partial R}{\partial x_j}\right\Vert \leq\text{const.} (\psi_k(\beta)+\Psi_k(\beta)) \tag 2.20$$ holds on $D_k(\beta)$ . This gives the accuracy of the above Riemann-sum approximation, and we obtain, after replacing the domain of integration $D_k(\beta)$ by \align D_k=\{(x_1,\ldots,x_k)\in(\bold R^2)^k\vert &\frac{b_1}2\le\vert x_j\vert\le2b_2\ ,\ j=1,\ldots,k\ , \tag2.21\\ &0\le\varphi(x_{j-1})<\varphi(x_j)\le\Theta\ ,\ j=2,\ldots,k\}\ , \endalign and transforming the variables into polar coordinates, \align &\ \sum_{\Cal Z_k(\beta;R)}\bold P(f(\varphi(m_j))\in I_{m_j}^R\ ,\ j=1,\ldots,k) \tag 2.22\\ &=(c_2-c_1)^k\idotsint_{0\le\varphi_1<\cdots<\varphi_k\le\Theta} d\varphi_1\cdots d\varphi_k\idotsint_{b_1\le y_j\le b_2\ ,\ j=1,\ldots,k} dy_1\cdots dy_k\times\\ &\quad\times\left(\prod_{j=1}^k y_j^2\right) p_k(y_1,\ldots,y_k\vert\varphi_1,\ldots,\varphi_k)\\ &\quad +O((\psi_k(\beta)+\Psi_k(\beta))R^{-1}+\beta)\\ &=\frac1{k!}\bold E_{\bold P}[\lambda(f)^k]+ O((\psi_k(\beta)+\Psi_k(\beta))R^{-1}+\beta)\ . 
\endalign In order to complete the proof of (2.2), it remains to prove $$\lim_{\beta\downarrow0}\varlimsup_{R\uparrow\infty} \sum_{\Cal Z'_k(\beta;R)}\bold P(f(\varphi(m_j))\in I_{m_j}^R\ ,\ j=1,\ldots,k) =0\ .\tag2.23$$ Suppose first that $\bold P$ satisfies (II-1). By the remark which follows (2.12), we see that $$\bold P(f(\varphi(m_j))\in I_{m_j}^R\ ,\ j=1,\ldots,k) \le \text{const.}R^{-2k} \prod_{j=2}^k(\varphi(m_j)-\varphi(m_{j-1}))^{-\sigma}\ .\tag2.24$$ Hence Lemma 1 below gives $$\varlimsup_{R\uparrow\infty} \sum_{\Cal Z'_k(\beta;R)}\bold P(f(\varphi(m_j))\in I_{m_j}^R\ ,\ j=1,\ldots,k) =O(\beta^{1-\sigma})\tag2.25$$ for small $\beta>0$ , and (2.23) follows immediately from this. \bigskip \proclaim{Lemma 1} For large $R>0$ and for small $\beta>0$ , we have the estimate $$\max_{m\in\Cal Z_1(R)}\sum\Sb m'\in\Cal Z_1(R)\ ;\\ 0<\varphi(m')-\varphi(m)<\beta\endSb (\varphi(m')-\varphi(m))^{-\sigma} =O(R^2(R^{\sigma-1}+\beta^{1-\sigma}))\ .$$ \endproclaim \demo{Proof} We may assume $0\le\varphi(m)\le\pi/4$ , because $\bold Z^2$ is invariant under the rotation by $\pi j/2$ , $j=1,2,3$ , and because the lattice obtained by rotating $\bold Z^2$ by $\pi/4+\pi j/2$ , $j=1,2,3$ is contained in the lattice $(1/\sqrt{2})\bold Z^2$ . But in this case, we have $\rho\equiv\tan\varphi(m)\in[0,1]$ .\par Writing $m'=(\mu_1,\mu_2)$ , we get \align &\ \sum\Sb m'\in\Cal Z_1(R)\ ;\\ 0<\varphi(m')-\varphi(m)<\beta\endSb (\varphi(m')-\varphi(m))^{-\sigma} \tag 2.26\\ &\leq\sum_{1\le\mu_1\le2b_2R}\ \sum _{\mu_2;\rho<\frac{\mu_2}{\mu_1}<\tan(\varphi(m) +\beta)}\left\{\tan^{-1}\frac{\mu_2}{\mu_1}-\varphi(m)\right\}^{-\sigma} \ . 
\endalign Since $$\min\{\mu_2\ \vert\ \frac{\mu_2}{\mu_1}>\rho\}=[\rho\mu_1]+1\ ,\tag2.27$$ the inner summation on the right hand side of (2.26) is further estimated as below: \align &\ \sum_{\mu_2;\rho<\frac{\mu_2}{\mu_1}<\tan(\varphi(m) +\beta)}\left\{\tan^{-1}\frac{\mu_2}{\mu_1}-\varphi(m)\right\}^{-\sigma} \tag 2.28\\ &\leq\mu_1\int_{\rho}^{\tan(\varphi(m)+\beta)}(\tan^{-1}y-\varphi(m)) ^{-\sigma}dy\\ &+\quad\frac{\{\rho\mu_1\}}{\mu_1}\left(\tan^{-1} \frac{[\rho\mu_1]+1}{\mu_1}-\tan^{-1}\rho\right)^{-\sigma}\\ &\equiv F(\mu_1)+G(\mu_1)\ . \endalign Here $[a]$ and $\{a\}$ are respectively the integer and the fractional parts of a real number $a$ . Now we have \align \sum_{1\le\mu_1\le2b_2R}F(\mu_1)&= \left(\sum_{1\le\mu_1\le2b_2R}\mu_1\right) \int_{\varphi(m)}^{\varphi(m)+\beta}(\varphi-\varphi(m))^{-\sigma} \frac{d\varphi} {\cos^2\varphi} \tag2.29\\ &=O(R^2\beta^{1-\sigma})\ , \endalign where we have used the assumption that $0\le\varphi(m)\le\pi/4$ and that $\beta>0$ is small so that $\cos^2\varphi$ is bounded away from 0 on the range of integration.\par In order to estimate the sum of $G(\mu_1)$ we set $\rho=p/q$ with $p$ and $q$ mutually prime integers. Since $m\in\Cal Z_1(R)$ , and $0\le\rho\le1$ , we must have $0\le p\le q\le2b_2R$ . Moreover for any integer $\ell$ , \align \left\{\{\rho n\}\ \vert\ n\in\bold N\right\}&= \left\{\{\rho(q\ell+j)\}\ \vert\ j=1,\ldots,q\right\}\tag2.30\\ &=\{0,\frac1{q},\ldots,\frac{q-1}{q}\}\ . 
\endalign Then noting that $$0\le\{\rho\mu_1\}\le1\ ;\ \frac{[\rho\mu_1]+1}{\mu_1} =\rho+\frac{1-\{\rho\mu_1\}}{\mu_1} \tag2.31$$ hold, we can compute \align &\ \sum_{1\le\mu_1\le2b_2R}G(\mu_1) \tag2.32\\ &\leq \sum_{1\le j\le q-1}G(j)+\sum_{1\le\ell\le2b_2R/q} \ \sum_{0\le j\le q-1}G(q\ell+j)\\ &\leq\sum_{1\le j\le q-1}\left\{\tan^{-1}(\rho+\frac{1-j/q}q)-\varphi(m) \right\}^{-\sigma}\\ &\quad+\sum_{1\le\ell\le2b_2R/q}\ \sum_{0\le j\le q-1} \frac1{q\ell}\left\{\tan^{-1}(\rho+\frac{1-j/q}{q(\ell+1)})-\varphi(m) \right\}^{-\sigma}\\ &\leq q\int_0^1\left\{\tan^{-1}(\rho+\frac{1-x}q)-\varphi(m)\right\} ^{-\sigma}dx\\ &\quad+\sum_{1\le\ell\le2b_2R/q}\frac1{\ell}\int_0^1 \left\{\tan^{-1}(\rho+\frac{1-x}{q(\ell+1)})-\varphi(m)\right\}^{-\sigma}dx \\ &=q^2\int_{\varphi(m)}^{\tan^{-1}(\rho+1/q)}(\varphi-\varphi(m))^{-\sigma} \frac{d\varphi}{\cos^2\varphi}\\ &\quad+\sum_{1\le\ell\le2b_2R/q}\frac{q(\ell+1)}{\ell}\int_{\varphi(m)} ^{\tan^{-1}(\rho+1/q(\ell+1))}(\varphi-\varphi(m))^{-\sigma} \frac{d\varphi}{\cos^2\varphi}\\ &=O(q^{\sigma+1}+R^{\sigma})\\ &=O(R^{\sigma+1})\ , \endalign completing the proof of Lemma 1. \enddemo \bigskip Next we consider the case where $\bold P$ satisfies (II-2). By (2.11) and (2.12), there is a constant $C>0$ such that the conditions $$\frac1{2}b_1R\leq\vert m\vert\leq2b_2R$$ and $$f(\varphi(m))\in I_m^R$$ imply the condition $$\left\vert f(\varphi(m))-\frac{\vert m\vert}R\right\vert \le\frac1{2}CR^{-2}\ .\tag2.33$$ By the Lipschitz continuity of $f(\cdot)$ , we have the following implication for large $R>0$ : \align &\ f(\varphi(m_j))\in I_{m_j}^R\ ,\ \frac1{2}b_1R\le\vert m_j\vert\le 2b_2R\ ,\ j=1,\ldots,k \tag2.34\\ &\Longrightarrow\left\vert\frac{\vert m_j\vert}R- \frac{\vert m_{j-1}\vert}R\right\vert\le\frac{C}{R^2}+ b_3\vert\varphi(m_j)-\varphi(m_{j-1})\vert\ ,\ j=2,\ldots,k\ . 
\endalign On the other hand, it is easy to see that for some absolute constant $M>0$ , $$\min_{(m,m')\in\Cal Z_2(R)}(\varphi(m')-\varphi(m))\ge MR^{-2} \tag 2.35$$ for large $R>0$ . Hence if we set $K=b_3+C/M$ , the right hand side of (2.34) can be rewritten as $$\left\vert \vert m_j\vert-\vert m_{j-1}\vert \right\vert \leq KR \vert\varphi(m_j)-\varphi(m_{j-1})\vert\ ,\ j=2,\ldots,k\ . \tag 2.36$$ From this consideration and (1.8), we obtain \align &\ \sum_{\Cal Z'_k(\beta;R)}\bold P(f(\varphi(m_j))\in I_{m_j}^R\ ,\ j=1,\ldots,k) \tag 2.37\\ &\le \text{const.}\sum_{\Cal Z''_k(\beta;R)}R^{-2k}\prod_{j=2}^k (\varphi(m_j)-\varphi(m_{j-1}))^{-\tau}\ , \endalign where $\Cal Z''_k(\beta;R)$ is the totality of $(m_1,\ldots,m_k)\in \Cal Z'_k(\beta;R)$ for which (2.36) holds. Then by Lemma 3 below, we will have $$\varlimsup_{R\to\infty} \sum_{\Cal Z''_k(\beta;R)}R^{-2k}\prod_{j=2}^k (\varphi(m_j)-\varphi(m_{j-1}))^{-\tau}=O(\beta^{2-\tau})\ ,\ \beta\downarrow0\ , \tag 2.38$$ which completes the proof of Theorem 1. \par \bigskip \proclaim{Lemma 2} Let $A>0$ , $S>0$ , $T>0$ and $R\geq1$ , where $T\geq\delta R$ and $S\geq\delta R$ for some $\delta>0$ . Let further $\beta>(AR)^{-1}$ . Then there is a constant $C>0$ such that $$\sum_{m\in\bold Z^2; (AR)^{-1}<\varphi(m)-\varphi_0<\beta} \ 1_{\{\vert\vert m\vert-S\vert\leq T(\varphi(m)-\varphi_0)\}} (\varphi(m)-\varphi_0)^{-\tau}\leq CST\beta^{2-\tau}$$ for any $\varphi_0\in[0,\Theta)$ . \endproclaim \demo {Proof} Divide the interval $(\varphi_0+(AR)^{-1},\varphi_0+\beta]$ into subintervals $\Delta_{\ell}$ of length $(AR)^{-1}$ , where $\ell=1,\ldots,[AR\beta]-1$ , and an interval $\Delta_{[AR\beta]}$ of length $\leq(AR)^{-1}$ . 
Let $N_{\ell}$ be the number of lattice points $m\in\bold Z^2$ such that $\varphi(m)\in\Delta_{\ell}$ and that $$\left\vert \vert m\vert-S \right\vert\leq T(\varphi(m)-\varphi_0)\ .$$ Then $N_{\ell}\le \text{const.}R^{-2}ST\ell$ , and since $\varphi(m)-\varphi_0\geq\ell(AR)^{-1}$ for $\varphi(m)\in\Delta_{\ell}$ , we have (absorbing the powers of $A$ into the constants) \align &\ \sum_{m\in\bold Z^{2};(AR)^{-1}<\varphi(m)-\varphi_0<\beta} 1_{\{\vert\vert m\vert-S\vert\leq T(\varphi(m)-\varphi_0)\}} (\varphi(m)-\varphi_0)^{-\tau} \tag2.39\\ &=\sum_{\ell=1}^{[AR\beta]}\sum_{m;\varphi(m)\in\Delta_{\ell}} 1_{\{\vert\vert m\vert-S\vert\leq T(\varphi(m)-\varphi_0)\}} (\varphi(m)-\varphi_0)^{-\tau} \\ &\leq \sum_{\ell=1}^{[AR\beta]}\sum_{m;\varphi(m)\in\Delta_{\ell}} 1_{\{\vert\vert m\vert-S\vert\leq T(\varphi(m)-\varphi_0)\}} (\ell(AR)^{-1})^{-\tau} \\ &\leq\text{const.}\sum_{\ell=1}^{[AR\beta]}\frac{ST}{R^2} \ell^{1-\tau}R^{\tau} \\ &\leq\text{const.}\frac{ST}{R^{2-\tau}}(R\beta)^{2-\tau} \\ &=CST\beta^{2-\tau}\ , \endalign and this estimate is uniform in $\varphi_0$ . \enddemo As a corollary of Lemma 2, we get the following \proclaim{Lemma 3} For large $R>0$ and small $\beta>0$ , we have $$\max_{m'\in\Cal Z_1(R)}\sum_{m;(m',m)\in\Cal Z''_2(\beta;R)} (\varphi(m)-\varphi(m'))^{-\tau}=O(KR^2\beta^{2-\tau})\ .$$ \endproclaim \demo {Proof} By an elementary geometric consideration, we can show the existence of an absolute constant $M'>0$ such that $$\min_{m\in\Cal Z_1(R)}\min \Sb m';\varphi(m')>\varphi(m),\\ \vert\vert m'\vert-\vert m\vert\vert\le R (\varphi(m')-\varphi(m)) \endSb (\varphi(m')-\varphi(m)) \geq M'R^{-1}\ .\tag 2.40$$ Hence if we take $A^{-1}=M'$ , $S=\vert m'\vert$ , $T=KR$ and $\varphi_0=\varphi(m')$ in Lemma 2, then the condition $(AR)^{-1}<\varphi(m)-\varphi_0$ under the summation symbol can be dropped and we obtain the desired assertion. \enddemo \head {\bf \S3 Proof of Theorems 2 and 3} \endhead In this section, we shall prove simultaneously Theorems 2' and 3', which are equivalent variants of Theorems 2 and 3, admitting a technical lemma which will be proved in \S4. 
After that, we shall sketch how to transform the modified version of our theorems back to the original ones. \par In order to state the modified version of our theorems--which the author believes are in fact closer to the original problem raised by Berry and Tabor [BT]-- let us introduce the planar domain $\tilde{\Pi}_R(f)$ by \align \tilde{\Pi}_R(f)=\{x\in\bold R^2 &\vert\ 0\leq\varphi(x)\leq\Theta\ ,\tag3.1\\ & \sqrt{R+2c_1}f(\varphi(x))<\vert x\vert<\sqrt{R+2c_2}f(\varphi(x))\}\ . \endalign Then the area of $\tilde{\Pi}_R(f)$ is not only asymptotic to, but is exactly equal to $\lambda(f)$ for all $R>0$ . Given positive numbers $0<a_1<a_2$ and $L>0$ , let $\mu_L^{a_1,a_2}(dR)$ be the uniform probability distribution on the interval $I_L=(a_1L,a_2L)$ . Finally we define $$\tilde{\xi}=\tilde{\xi}(R;f)=\sharp\{\tilde{\Pi}_R(f)\cap\bold Z^2\}\ , \tag 3.2$$ to be the number of lattice points in the domain $\tilde{\Pi}_R(f)$ . \par Now we shall prove the following two theorems: \bigskip \proclaim{Theorem 2'} Suppose Conditions (I) and (II) hold. Then there is a sequence $\bar{L}_n\to\infty$ such that for $\bold P$-almost all $f\in\Cal X$ , the distribution of $\tilde{\xi}(\cdot;f)$ under $\mu_{\bar{L}_n}^{a_1,a_2}$ converges to the Poisson distribution with parameter $\lambda(f)$ . \endproclaim \bigskip \proclaim{Theorem 3'} If we assume Condition (III) in addition to Conditions (I) and (II), then for $\bold P$-almost all $f\in\Cal X$ , the distribution of $\tilde{\xi}(\cdot;f)$ under $\mu_L^{a_1,a_2}$ converges to the Poisson distribution with parameter $\lambda(f)$ as $L\to\infty$ . \endproclaim \bigskip Let us turn to the proof of Theorem 3'. 
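Before doing so, we note that the claim about the area of $\tilde{\Pi}_R(f)$ is a one-line computation in polar coordinates, using the definition (3.1) and (1.3): $$\text{the area of}\ \tilde{\Pi}_R(f)=\int_0^{\Theta}\frac12 \left[(R+2c_2)f(\varphi)^2-(R+2c_1)f(\varphi)^2\right]d\varphi =(c_2-c_1)\int_0^{\Theta}f(\varphi)^2d\varphi=\lambda(f)\ ,$$ independently of $R>0$ .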
\par For $k=1,2,\ldots$ , set $$\bold E_{L,k}=\bold E_{L,k}(f)=\int_{I_L}\tilde{\xi}_k(R;f)\mu_L(dR)\ ,\tag3.3$$ where $\mu_L=\mu_L^{a_1,a_2}$ and $$\tilde{\xi}_k=\tilde{\xi}_k(R;f)=\pmatrix \tilde{\xi} \\ k \endpmatrix 1_{\{\tilde{\xi}\geq k\}} =\sum_{n=k}^{\infty}\frac{n!}{k!(n-k)!}1_{\{\tilde{\xi}=n\}}\ .\tag 3.4$$ As was done in [M], Theorem 3' will be proved as soon as we have shown $$\lim_{L\to\infty}\bold E_{L,k}(f)=\frac1{k!}\lambda(f)^k\ ,\ k=1,2,\ldots \tag3.5$$ for $\bold P$-almost all $f\in\Cal X$ . Similarly, Theorem 2' will be proved as soon as we have shown the existence of a sequence $\bar{L}_j\to\infty$ such that (3.5) holds along this sequence for $\bold P$-almost all $f\in\Cal X$ . \par For a lattice point $m\in\bold Z^2$ , let $$D_m=D_m(f)=\{R>0\ \vert\ \tilde{\Pi}_R(f)\ni m\}\ .\tag3.6$$ By the definition (3.1) of $\tilde{\Pi}_R(f)$ , we have $$D_m=\left(\frac{\vert m\vert^2}{f(\varphi(m))^2}-2c_2, \frac{\vert m\vert^2}{f(\varphi(m))^2}-2c_1\right)\ .\tag 3.7$$ In particular, $\vert D_m\vert=2(c_2-c_1)$ . For convenience, we let $$\gamma_m=\gamma_m(f)=\frac{\vert m\vert^2}{f(\varphi(m))^2}\ .\tag 3.8$$ Now set $L_n=2^n$ , $n=1,2,\ldots$ . If $\tilde{\Pi}_R(f)\ni m$ and $R\in I_L$ with $L_n\le L<L_{n+1}$ , then necessarily $\gamma_m\in(a_1L_n+2c_1,\ a_2L_{n+1}+2c_2)$ . For $\epsilon>0$ , consider the planar domain \align \Lambda_{\epsilon}&=\Lambda_{\epsilon}(f)\tag 3.17\\ &=\{x\in\bold R^2\ \vert \ 0\leq\varphi(x)\leq\Theta\ \text{and}\ a_1 f(\varphi(x))^2<\vert x\vert^2<(a_2+\epsilon)f(\varphi(x))^2\}\ . \endalign Since $f(\cdot)$ is a continuous function, the boundary of $\Lambda_{\epsilon}$ has zero Lebesgue measure, and hence the indicator of $\Lambda_{\epsilon}$ is Riemann integrable. 
We have therefore \align \varlimsup_{L\to\infty}\bar{\bold E}_{L,1}(f)&\leq \varlimsup_{L\to\infty}\frac{2(c_2-c_1)}{a_2-a_1} \left(\frac1{\sqrt{L}}\right)^2 \sum_{m\in\bold Z^2}1_{\Lambda_{\epsilon}}\left(\frac{m}{\sqrt{L}}\right) \tag 3.18\\ &=\frac{2(c_2-c_1)}{a_2-a_1}\times\text{the area of}\ \Lambda_{\epsilon} \\ &=\left(1+\frac{\epsilon}{a_2-a_1}\right)\lambda(f)\ . \endalign Letting $\epsilon\searrow0$ , we obtain $$\varlimsup_{L\to\infty}\bar{\bold E}_{L,1}(f)\leq\lambda(f)\ .\tag 3.19$$ By a similar argument, we can also prove $$\varliminf_{L\to\infty}\underline{\bold E}_{L,1}(f)\geq\lambda(f)\ , \tag 3.20$$ so that $$\lim_{L\to\infty}\bold E_{L,1}(f)=\lambda(f) \tag 3.21$$ for every $f\in\Cal X$ . \par Next we consider the case $k\geq2$ . \par If $L_n\leq L<L_{n+1}$ , divide the interval $[a_1L_n,a_2L_{n+1})$ into $q_n$ subintervals $[z_{i-1},z_i)$ , $i=1,\ldots,q_n$ , of equal length, where the integer $q_n>0$ is arbitrary. Obviously $$z_i=a_1L_n+\frac{i}{q_n}(a_2L_{n+1}-a_1L_n)\ ,\ i=1,\ldots,q_n\ .\tag 3.22$$ If we let $$\bar{A}_n(L)=\frac{a_1(L-L_n)+2c_1}{a_2L_{n+1}-a_1L_n}q_n\ ;\ \bar{B}_n(L)=1+\frac{a_2L-a_1L_n+2c_2}{a_2L_{n+1}-a_1L_n}q_n\ ,\tag 3.23$$ and $$\underline{A}_n(L)=1+\frac{a_1(L-L_n)+2c_2}{a_2L_{n+1}-a_1L_n}q_n\ ;\ \underline{B}_n(L)=\frac{a_2L-a_1L_n+2c_1}{a_2L_{n+1}-a_1L_n}q_n\ ,\tag 3.24$$ then we have $$\gamma_m\in[z_{i-1},z_i)\ ,\ D_m\cap I_L\ne\phi\Longrightarrow \bar{A}_n(L)<i<\bar{B}_n(L)\ .$$ \proclaim{Lemma 4} There is a constant $C_T>0$ which depends only on $T>0$ such that for each $n\geq1$ and $\beta>L_n^{-1/2}$ , $$\max_{1\leq i\leq q_n+1}U_{i,n}\leq C_T(\Psi_{2k}(\beta)+\sqrt{L_n}\beta^d) L_n^{3/2}q_n^{-2}\ ,\tag 3.36$$ where $d=1-\sigma$ or $d=2-\tau$ according to which of the conditions (II-1) or (II-2) is satisfied by $\bold P$ . \endproclaim \bigskip Take $\beta=\beta_n=An^{-1/{\nu_k}}$ in the above lemma. 
Then by (3.32) and (3.35), $$\align \bold E[\Gamma_n(\cdot)]&\leq \frac1{\vert I_{L_n}\vert} \sum_{i=1}^{q_n+1}\sqrt{U_{i,n}} \tag 3.37\\ &\leq \text{const.}L_n^{-1}q_n(\sqrt{\Psi_{2k}(\beta_n)}+L_n^{1/4} \beta_n^{d/2}) L_n^{3/4}q_n^{-1} \\ &=\text{const.}(L_n^{-1/4}\sqrt{\Psi_{2k}(\beta_n)}+\beta_n^{d/2}) \\ &\leq \text{const.}(L_n^{-1/4}\exp(\frac12 A^{-\nu_k}n)+n^{-d/{2\nu_k}})\ . \endalign$$ If we take $A>0$ sufficiently large, then the first term on the right hand side decays exponentially fast, while the second term is summable because $d/(2\nu_k)>1$ . Hence we have $$\sum_{n=1}^{\infty}\bold E[\Gamma_n(\cdot)]<\infty\ ,\tag 3.38$$ which completes the proof of Theorem 3'. \par When we do not assume Condition (III), $\Psi_{2k}(\beta)$ can be an arbitrary positive function, monotonically tending to $\infty$ as $\beta\searrow0$ . It is still possible to choose a decreasing sequence $\{\beta_n\}$ tending to $0$ so slowly that $$L_n^{-1/2}\Psi_{2k}(\beta_n)\longrightarrow0 \tag 3.39$$ holds. Then choose a subsequence $\{n'\}$ such that $$\sum_{n'}(L_{n'}^{-1/4}\sqrt{\Psi_{2k}(\beta_{n'})}+\beta_{n'}^{d/2})<\infty \ .\tag 3.40$$ Choose a number $\bar{L}_{n'}$ arbitrarily from the interval $[L_{n'},L_{n'+1})$ for each $n'$ . Then obviously the sequence $\{\bar{L}_{n'}\}$ meets the requirement of Theorem 2'. \par \bigskip Before closing this section, we sketch how to obtain Theorems 2 and 3 from what we have shown up to now. \par For this purpose, we first note that for any $f\in\Cal X$ and any $0<a_1<a_2$ , $$\lim_{L\to\infty}\mu_L^{a_1,a_2}(\{R>0\ \vert\ \tilde{\xi}(R;f)\ne\xi(\sqrt{R};f)\})=0\ . \tag 3.41$$ Indeed, $\tilde{\xi}(R;f)\ne\xi(\sqrt{R};f)$ implies $$\{\tilde{\Pi}_R(f)\Delta\Pi_{\sqrt{R}}(f)\}\cap\bold Z^2\ne\phi\ ,\tag 3.42$$ and since $a_1L\leq R\leq a_2L$ , $b_1\leq f(\cdot)\leq b_2$ , it holds that \align & \mu_L^{a_1,a_2}(\{R>0\vert\tilde{\xi}(R;f)\ne\xi(\sqrt{R};f)\}) \tag 3.43\\ &\leq \mu_L^{a_1,a_2}(\{R>0\vert(\tilde{\Pi}_R(f)\Delta\Pi_{\sqrt{R}}(f))\cap\bold Z^2 \ne\phi\}) \\ &\leq \sum_{m\in\Cal Z_{1,n}}\mu_L^{a_1,a_2}(\{R>0\vert\ m\in\tilde{\Pi}_R(f)\Delta\Pi_{\sqrt{R}}(f)\})\ . 
\endalign If we set for each $m\in\bold Z^2$ with $0\leq\varphi(m)\leq\Theta$ , $$J_m=\{R>0\vert\ m\in\tilde{\Pi}_R(f)\Delta\Pi_{\sqrt{R}}(f)\}\ ,\tag 3.44$$ and $$J_m^{(i)}=\{R>0\vert\ \sqrt{R+2c_i}f(\varphi(m))\leq\vert m\vert\leq (\sqrt{R}+\frac{c_i}{\sqrt{R}})f(\varphi(m))\}\ ,\ i=1,2\ ,\tag 3.45$$ then $J_m\subset J_m^{(1)}\cup J_m^{(2)}$ , and it suffices for our purpose to show $$\lim_{L\to\infty}\sum_{m\in\Cal Z_{1,n}}\mu_L^{a_1,a_2}(J_m^{(i)})=0\ , \ i=1,2\ .\tag 3.46$$ Now it is easy to see that $R\in J_m^{(i)}$ implies $$\frac{\vert m\vert^2}{f(\varphi(m))^2}-2c_i-\frac{c_i^2}{a_1L}\leq R\leq \frac{\vert m\vert^2}{f(\varphi(m))^2}-2c_i\ ,\tag 3.47$$ and hence $$\mu_L^{a_1,a_2}(J_m^{(i)})\leq\frac{c_i^2}{(a_2-a_1)a_1}\frac1{L^2}\ . \tag 3.48$$ Since the number of lattice points in $\Cal Z_{1,n}$ is bounded by a constant times $L$ , we finally arrive at $$\sum_{m\in\Cal Z_{1,n}}\mu_L^{a_1,a_2}(J_m^{(i)})=O(L^{-1}) \tag 3.49$$ as desired. \par Now fix $0<a_1<a_2$ and write $p_k(f)=e^{-\lambda(f)}\lambda(f)^k/k!$ . Combining what we have proved so far with (3.41), we see that for $\bold P$-almost all $f\in\Cal X$ and for Lebesgue almost all $\alpha>a_1^2$ , one has $$\lim_{L\to\infty}\frac1{L^2}\int_{a_1^2L^2}^{\alpha L^2}1_{\{\tilde{\xi} (R;f)=k\}}dR= \lim_{L\to\infty}\frac1{L^2}\int_{a_1^2L^2}^{\alpha L^2} 1_{\{\xi(\sqrt{R};f)=k\}}dR= (\alpha-a_1^2)p_k(f)\ ,\ k\geq1\ .\tag 3.52$$ We now fix $f\in\Cal X$ for which (3.50) and (3.52) are valid, and letting $g_k(R)=1_{\{\xi(R;f)=k\}}$ for brevity, we further compute as follows: \align & \mu_L^{a_1,a_2}(\{R>0\vert\ \xi(R;f)=k\}) \tag 3.53\\ &=\frac1{(a_2-a_1)L}\int_{a_1L}^{a_2L}\ g_k(R)dR \\ &=\frac{1}{2(a_2-a_1)L}\int_{a_1^2L^2}^{a_2^2L^2}\ g_k(\sqrt{s}) \frac{ds}{\sqrt{s}} \\ &=\frac{1}{2(a_2-a_1)a_2}\frac1{L^2} \int_{a_1^2L^2}^{a_2^2L^2}\ g_k(\sqrt{s})ds +\frac{1}{2(a_2-a_1)}\int_{a_1}^{a_2}\frac{d\tau}{\tau^2} \left\{\frac1{L^2}\int_{a_1^2L^2}^{\tau^2L^2}g_k(\sqrt{s})ds\right\}\ . 
\endalign The first term on the right hand side converges to $\frac{a_2^2-a_1^2}{2(a_2-a_1)a_2}p_k(f)$ as $L\to\infty$ because of (3.50). On the other hand, we have $$\left\vert\frac{1}{L^2}\int_{a_1^2L^2}^{\tau^2L^2}g_k(\sqrt{s})ds\right\vert \leq a_2^2-a_1^2 \tag 3.54$$ for all $\tau\in[a_1,a_2]$ and $$\lim_{L\to\infty}\frac1{L^2}\int_{a_1^2L^2}^{\tau^2L^2}g_k(\sqrt{s})ds =(\tau^2-a_1^2)p_k(f) \tag 3.55$$ for Lebesgue almost all $\tau\in[a_1,a_2]$ because of (3.52). Now apply Lebesgue's dominated convergence theorem to the second term on the right hand side, to obtain \align & \lim_{L\to\infty}\mu_L^{a_1,a_2}(\{R>0\vert\ \xi(R;f)=k\}) \tag 3.56\\ &=\frac{a_2^2-a_1^2}{2(a_2-a_1)a_2}p_k(f)+ \frac1{2(a_2-a_1)}\int_{a_1}^{a_2}\frac{d\tau}{\tau^2}(\tau^2-a_1^2)p_k(f) \\ &=p_k(f)\ , \endalign completing the proof of Theorem 3. \par Finally let $\{\bar{L}_n\}$ be the sequence taken according to Theorem 2'. We can repeat the above argument to see that the sequence $\{\sqrt{\bar{L}_n}\}$ meets the requirement of Theorem 2. \head {\bf \S4 Proof of Lemma 4} \endhead Let us fix an $i\in\{1,\ldots,1+q_n\}$ and a $T>1$ , and assume $T^{-1}\leq a_1<a_2\leq T$ . For $\beta\geq L_n^{-1/2}$ , define a subset $\Cal W_n(\beta)$ of $\Cal Z_{k,n}\times\Cal Z_{k,n}$ to be the totality of those $(\bold m;\bold m')\mathbreak=(m_1,\ldots,m_k;m'_1,\ldots,m'_k)$ such that $$\varphi(m_s)-\varphi(m_{s-1})\geq\beta\ ,\ \varphi(m'_s)-\varphi(m'_{s-1})\geq\beta\ ,\ s=2,\ldots,k \tag 4.19$$ and that $$\vert\varphi(m_s)-\varphi(m'_t)\vert\geq\beta\ ,\ s,t=1,\ldots,k\ . \tag 4.20$$ Then $$X_n^i=\left(\sum_{\Cal W_n(\beta)}\ + \ \sum_{(\Cal Z_{k,n}\times\Cal Z_{k,n})\setminus\Cal W_n(\beta)}\right) Q^i(\bold m;\bold m')\ . \tag 4.21$$ Let $X_n^i(\beta)$ be the first sum on the right hand side of (4.21). 
If for given $m_1,m'_1\in\Cal Z_{1,n}$ with $\vert\varphi(m_1)-\varphi(m'_1)\vert\geq\beta$ one defines \align &\ \Cal W_n(\beta;m_1,m'_1) \tag 4.22 \\ &=\{(m_2,\ldots,m_k;m'_2,\ldots,m'_k)\ ;\ (m_1,m_2,\ldots,m_k;m'_1,m'_2,\ldots,m'_k)\in\Cal W_n(\beta)\}\ , \endalign then $$X_n^i(\beta)=\sum\Sb m_1,m'_1\in\Cal Z_{1,n} \\ \vert\varphi(m_1)-\varphi(m'_1)\vert\geq\beta\endSb \sum_{\Cal W_n(\beta;m_1,m'_1)}Q^i(m_1,\ldots,m_k;m'_1,\ldots,m'_k)\ . \tag 4.23$$ Fix $m_1,m'_1\in\Cal Z_{1,n}$ such that $$\vert\varphi(m_1)-\varphi(m'_1)\vert\geq\beta \tag 4.24$$ and $y_1$ , $y'_1$ such that $$\frac{\vert m_1\vert}{\sqrt{z_i}}\leq y_1\leq \frac{\vert m_1\vert}{\sqrt{z_{i-1}}}\ ;\ \frac{\vert m'_1\vert}{\sqrt{z_i}}\leq y'_1\leq \frac{\vert m'_1\vert}{\sqrt{z_{i-1}}}\ . \tag 4.25$$ Take also $\zeta_s$ , $\zeta'_s$ , $s=2,\ldots,k$ from the interval $(-A,A)$ . Then for some positive constant $C_T>0$ , it holds that $$C_T^{-1}L_n^{1/2}\leq\ell_s\ ,\ \ell'_s\leq C_T L_n^{1/2}\ ,\ s=2,\ldots,k \ . \tag 4.26$$ Now we shall evaluate approximately the sum \align & \sum\Sb (m_2,\ldots,m_k;m'_2,\ldots,m'_k) \\ \in\Cal W_n(\beta;m_1,m'_1) \endSb \ \prod_{s=2}^k\left\{\left( \frac12\left\vert\frac{m_s}{\ell_s}\right\vert\right) \left( \frac12\left\vert\frac{m'_s}{\ell'_s}\right\vert\right) \frac1{\ell^2_s}\frac1{\ell^{\prime 2}_s}\right\}\times \tag 4.27\\ &\times p_{2k}(y_1,\left\vert\frac{m_2}{\ell_2}\right\vert, \ldots,\left\vert\frac{m_k}{\ell_k}\right\vert,y'_1, \left\vert\frac{m'_2}{\ell'_2}\right\vert,\ldots, \left\vert\frac{m'_k}{\ell'_k}\right\vert\ \vert\ \varphi(m_1),\ldots,\varphi(m_k),\varphi(m'_1),\ldots,\varphi(m'_k))\ , \endalign the approximation being uniform in $\zeta_2,\ldots,\zeta_k$ , $\zeta'_2,\ldots,\zeta'_k$ . Note that the summand appears as a part of the integrand in (4.18). 
\par For this purpose, we introduce a function \align F&=F_{m_1,m'_1,y_1,y'_1}(x_2,\ldots,x_k;x'_2,\ldots,x'_k) \tag 4.28\\ &\equiv\prod_{s=2}^k(\frac{\vert x_s\vert}{2}\frac{\vert x'_s\vert}{2}) p_{2k}(y_1,\vert x_2\vert,\ldots,\vert x_k\vert,y'_1,\vert x'_2\vert, \ldots,\vert x'_k\vert\ \left\vert\ \right. \\ &\quad \varphi(m_1),\varphi(x_2), \ldots,\varphi(x_k),\varphi(m'_1),\varphi(x'_2), \ldots,\varphi(x'_k)) \endalign which is defined on the set \align & \Cal D_{m_1,m'_1}\equiv\{(x_2,\ldots,x_k;x'_2,\ldots,x'_k) \in(\bold R^2)^{2(k-1)}\ \left\vert\ b_1\leq\vert x_s\vert,\vert x'_s\vert \leq b_2\ ,\right. \tag 4.29 \\ &\qquad \varphi(m_1)<\varphi(x_2)<\cdots< \varphi(x_k)\ ;\ \varphi(m'_1)<\varphi(x'_2)< \cdots<\varphi(x'_k)\}\ . \endalign It is easy to see that for some $C>0$ , $$\left\Vert\frac{\partial F}{\partial x_j}\right\Vert\ ,\ \left\Vert\frac{\partial F}{\partial x'_j}\right\Vert\ \leq\ C(\psi_{2k}(\theta)+\Psi_{2k}(\theta)) \tag 4.30$$ holds on $\Cal D_{m_1,m'_1}$ , where $\theta$ is the minimum of $\varphi(x_2)-\varphi(m_1)$ , $\varphi(x_3)-\varphi(x_2)$ , etc. \par In view of Conditions (II) and (III), we may suppose $\psi_{2k}(\theta)<\Psi_{2k}(\theta)$ , so that (4.30) is actually $$\left\Vert\frac{\partial F}{\partial x_j}\right\Vert\ ,\ \left\Vert\frac{\partial F}{\partial x'_j}\right\Vert\ \leq\ C\Psi_{2k}(\theta)\ . \tag 4.31$$ The sum (4.27) can then be viewed as a Riemann sum approximation for the integral of the function $F$ , and (4.26), (4.31) can be used to estimate its accuracy. 
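\par The mechanism behind this Riemann sum approximation is the elementary bound: if $G$ is continuously differentiable on a square cell $I\subset\bold R^2$ of side $1/\ell$ containing the grid point $m/\ell$ , then $$\left\vert\frac1{\ell^2}G\left(\frac m{\ell}\right)-\int_IG(x)dx\right\vert \ \leq\ \frac{C}{\ell^3}\sup_I\Vert\nabla G\Vert\ ,$$ with an absolute constant $C$ , since the integrand deviates from its value at the grid point by at most $\text{diam}(I)\sup\Vert\nabla G\Vert$ on $I$ . Summing over the $\Cal O(\ell^2)$ cells meeting a bounded planar domain gives an error $\Cal O(\ell^{-1}\sup\Vert\nabla G\Vert)$ ; applying this in each of the variables $m_s/\ell_s$ , $m'_s/\ell'_s$ and using (4.26) and (4.31), with $\ell_s,\ell'_s\asymp L_n^{1/2}$ , yields an error of order $L_n^{-1/2}\Psi_{2k}(\beta)$ , uniformly in the parameters.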
Namely (4.27) is equal to \align & \sum\Sb (m_2,\ldots,m_k;m'_2,\ldots,m'_k) \\ \in\Cal W_n(\beta;m_1,m'_1) \endSb \left(\prod_{s=2}^k \frac1{\ell^2_s}\frac1{\ell^{\prime 2}_s}\right) F_{m_1,m'_1,y_1,y'_1}(\frac{m_2}{\ell_2},\ldots,\frac{m_k}{\ell_k}, \frac{m'_2}{\ell'_2},\ldots,\frac{m'_k}{\ell'_k}) \tag 4.32 \\ &=\int dx_2\cdots\int dx_k\int dx'_2\cdots\int dx'_k F_{m_1,m'_1,y_1,y'_1}(x_2,\ldots,x_k,x'_2,\ldots,x'_k)\\ &\quad+\Cal O_T(L_n^{-1/2}\Psi_{2k}(\beta))\ , \endalign where the integration ranges over those $(x_2,\ldots,x_k,x'_2,\ldots,x'_k) \in\Cal D_{m_1,m'_1}$ for which \align & \varphi(x_2)-\varphi(m_1)\geq\beta\ , \ \varphi(x_s)-\varphi(x_{s-1})\geq\beta\ ,\ s=3,\ldots,k\ ; \tag 4.33 \\ & \varphi(x'_2)-\varphi(m'_1)\geq\beta\ , \ \varphi(x'_t)-\varphi(x'_{t-1})\geq\beta\ ,\ t=3,\ldots,k\ ;\\ & \vert\varphi(x_s)-\varphi(x'_t)\vert\geq\beta\ ,\ s,t=2,\ldots,k\ . \endalign Transforming each of $x_s$ and $x'_s$ into polar coordinates, the above integral becomes \align & \int d\varphi_2\cdots\int d\varphi_k\int d\varphi'_2 \cdots\int d\varphi'_k \int_0^{\infty} dr_2\cdots\int_0^{\infty} dr_k \int_0^{\infty} dr'_2\cdots\int_0^{\infty} dr'_k \times \tag 4.34\\ &\times \prod_{s=2}^k\left(\frac{r_s^2}2\frac{r_s^{\prime 2}}2\right)\times \\ &\times p_{2k}(y_1,r_2,\ldots,r_k,y'_1,r'_2,\ldots,r'_k\ \vert\ \varphi(m_1),\varphi_2,\ldots,\varphi_k,\varphi(m'_1), \varphi'_2,\ldots,\varphi'_k)\ , \endalign where the integration ranges over those $\varphi_s$ , $\varphi'_s$ such that \align & \varphi_2-\varphi(m_1)\geq\beta\ , \ \varphi_s-\varphi_{s-1}\geq\beta\ ,\ s=3,\ldots,k\ ; \tag 4.35 \\ & \varphi'_2-\varphi(m'_1)\geq\beta\ , \ \varphi'_t-\varphi'_{t-1}\geq\beta\ ,\ t=3,\ldots,k\ ; \endalign and that $$\vert\varphi_s-\varphi'_t\vert\geq\beta\ ,\ s,t=1,\ldots,k\ .\tag4.36$$ \par If we integrate out with respect to the variables $r_s$ , $r'_s$ , $s=2,\ldots,k$ , we see without difficulty that the difference made by letting $\beta=0$ in the 
range of integration of (4.34) is less than $$C\beta p_2(y_1,y'_1\ \vert\ \varphi(m_1),\varphi(m'_1))\ .\tag 4.37$$ We have thus approximated the sum (4.27) uniformly in the parameters $\zeta_2,\ldots,\zeta_k$ , $\zeta'_2,\ldots,\zeta'_k$ . In order to finish the estimation of $X^i_n(\beta)$ (recall (4.23) for the definition), we need to get an approximation of the sum of $Q^i(m_1,\ldots,m_k;m'_1,\ldots,m'_k)$ over $\Cal W_n(\beta;m_1,m'_1)$ . This is done by integrating the sum (4.27) over the parameters $y_1$ , $y'_1$ , $\zeta_2,\ldots,\zeta_k$ , $\zeta'_2,\ldots,\zeta'_k$ . By noting the above-mentioned uniformity of the integral approximation in $\zeta_s$ and $\zeta'_s$ , and by using the formula $$\int_{-A}^{A}d\zeta_2\cdots \int_{-A}^{A}d\zeta_k \{A-(0\vee\zeta_2\vee\cdots\vee\zeta_k)+ (0\wedge\zeta_2\wedge\cdots\wedge\zeta_k)\}_+ =A^k\ ,\ k\geq2\ ,\tag 4.38$$ which we shall prove as Lemma 5 at the end of this section, we obtain \align & X_n^i(\beta)=\sum\Sb m_1,m'_1\in\Cal Z_{1,n} \\ \vert\varphi(m_1)-\varphi(m'_1)\vert\geq\beta\endSb A^{2k} \int_{\vert m_1\vert/\sqrt{z_i}}^{\vert m_1\vert/\sqrt{z_{i-1}}}dy_1 \int_{\vert m'_1\vert/\sqrt{z_i}}^{\vert m'_1\vert/\sqrt{z_{i-1}}}dy'_1 \times \tag 4.39\\ &\times\left\{\idotsint_{\varphi(m_1)<\varphi_2<\cdots<\varphi_k} d\varphi_2\cdots d\varphi_k \idotsint_{\varphi(m'_1)<\varphi'_2<\cdots<\varphi'_k} d\varphi'_2\cdots d\varphi'_k\times \right.\\ &\times\int_0^{\infty}dr_2\cdots\int_0^{\infty}dr_k \int_0^{\infty}dr'_2\cdots\int_0^{\infty}dr'_k \prod_{s=2}^k\left(\frac{r_s^2}2\frac{r^{\prime 2}_s}2\right)\times\\ &\times p_{2k}(y_1,r_2,\ldots,r_k,y'_1,r'_2,\ldots,r'_k\ \vert\ \varphi(m_1),\varphi_2,\ldots,\varphi_k,\varphi(m'_1), \varphi'_2,\ldots,\varphi'_k) \\ &\left.\quad+\Cal O_T(L_n^{-1/2}\Psi_{2k}(\beta)+\beta p_2(y_1,y'_1\vert \varphi(m_1),\varphi(m'_1)))\right\} \\ &=4(c_2-c_1)^{2k}\sum\Sb m_1,m'_1\in\Cal Z_{1,n} \\ \vert\varphi(m_1)-\varphi(m'_1)\vert\geq\beta\endSb S^i(m_1,m'_1) +\Cal 
O_T(L_n^{3/2}q_n^{-2}\Psi_{2k}(\beta)) \\ &+\Cal O\left(\sum\Sb m_1,m'_1\in\Cal Z_{1,n} \\ \vert\varphi(m_1)-\varphi(m'_1)\vert\geq\beta\endSb \beta \int_{\vert m_1\vert/\sqrt{z_i}}^{\vert m_1\vert/\sqrt{z_{i-1}}}dy_1 \int_{\vert m'_1\vert/\sqrt{z_i}}^{\vert m'_1\vert/\sqrt{z_{i-1}}}dy'_1 p_2(y_1,y'_1\ \vert\ \varphi(m_1),\varphi(m'_1))\right)\ , \endalign where we have used (4.11) and $\sharp(\Cal Z_{1,n}\times\Cal Z_{1,n})= \Cal O_T(L_n^2)$ to get the bound of the first error term. See also (4.5) for the definition of $S^i(m_1,m'_1)$ . In order to estimate the second error term, let us first consider the case where our probability measure $\bold P$ satisfies Condition (II-1). Then the sum $$\sum\Sb m_1,m'_1\in\Cal Z_{1,n} \\ \vert\varphi(m_1)-\varphi(m'_1)\vert\geq\beta\endSb \beta \int_{\vert m_1\vert/\sqrt{z_i}}^{\vert m_1\vert/\sqrt{z_{i-1}}}dy_1 \int_{\vert m'_1\vert/\sqrt{z_i}}^{\vert m'_1\vert/\sqrt{z_{i-1}}}dy'_1 p_2(y_1,y'_1\ \vert\ \varphi(m_1),\varphi(m'_1)) \tag 4.40$$ is bounded by \align & C\beta q_n^{-2}\sum\Sb m_1,m'_1\in\Cal Z_{1,n} \\ \vert\varphi(m_1)-\varphi(m'_1)\vert\geq\beta\endSb \vert\varphi(m_1)-\varphi(m'_1)\vert^{-\sigma} \tag 4.41 \\ &\leq C\beta\beta^{-\sigma}q_n^{-2}\sum_{m_1,m'_1\in\Cal Z_{1,n}}\ 1 \\ &\leq C_T L_n^2q_n^{-2}\beta^{1-\sigma}\ . \endalign \par Next suppose that $\bold P$ satisfies Condition (II-2). Then we have $$p_2(y_1,y'_1\ \vert\ \varphi(m_1),\varphi(m'_1))\leq C1_{\{\vert y_1-y'_1\vert\leq b_3\vert\varphi(m_1)-\varphi(m'_1)\vert\}} \vert\varphi(m_1)-\varphi(m'_1)\vert^{-\tau}\ . \tag 4.42$$ Then the sum (4.40) is bounded by \align & C\beta\int_{b_1}^{b_2}dy_1\int_{b_1}^{b_2}dy'_1 \sum_{m_1,m'_1\in\Cal Z_{1,n}} 1_{\{y_1\sqrt{z_{i-1}}\leq\vert m_1\vert\leq y_1\sqrt{z_i}\}}\times \tag 4.43\\ &\quad\times1_{\{y'_1\sqrt{z_{i-1}}\leq\vert m'_1\vert\leq y'_1\sqrt{z_i}\}} 1_{\{\vert y_1-y'_1\vert\vee(b_3\beta)\leq \vert\varphi(m_1)-\varphi(m'_1)\vert \}}\times \\ &\quad\times\vert\varphi(m_1)-\varphi(m'_1)\vert^{-\tau}\ . 
\endalign For fixed $y_1,y'_1$ and $m_1$ , we shall estimate $$\sum_{m'_1\in\Cal Z_{1,n}} 1_{\{y'_1\sqrt{z_{i-1}}\leq\vert m'_1\vert\leq y'_1\sqrt{z_i}\}} 1_{\{\vert y_1-y'_1\vert\vee(b_3\beta)\leq \vert\varphi(m_1)-\varphi(m'_1)\vert \}} \vert\varphi(m_1)-\varphi(m'_1)\vert^{-\tau}\ .\tag 4.44$$ It is sufficient to do this assuming $\varphi(m_1)>\varphi(m'_1)$ . To this end, let us divide the interval $[\beta\vee(\vert y_1-y'_1\vert/b_3), \Theta]$ into subintervals $\Delta_j$ of length $L_n^{-1/2}$ . Since the number of those $m'_1\in\Cal Z_{1,n}$ for which $y'_1\sqrt{z_{i-1}}\leq\vert m'_1\vert\leq y'_1\sqrt{z_i}$ and $\varphi(m'_1)\in\Delta_j$ hold is bounded by $C_T L_n^{1/2}q_n^{-1}$ , the sum (4.44) is less than \align & C_T L_n^{1/2}q_n^{-1}\sum_{j=1}^{\infty} \{jL_n^{-1/2}+(\beta\vee\frac{\vert y_1-y'_1\vert}{b_3})\}^{-\tau} \tag 4.45\\ &\leq C_T L_nq_n^{-1}\int_{\beta\vee(\frac{\vert y_1-y'_1\vert}{b_3})}^{\infty} t^{-\tau}dt \\ &=C_T L_nq_n^{-1}(\beta^{1-\tau})\wedge(\vert y_1-y'_1\vert^{1-\tau})\ . \endalign Inserting this estimate into (4.43), we obtain the following new bound of (4.40): \align & C_T\beta\int_{b_1}^{b_2}dy_1\int_{b_1}^{b_2}dy'_1 \left(\sum_{m_1\in\Cal Z_{1,n}} 1_{\{y_1\sqrt{z_{i-1}}\leq\vert m_1\vert\leq y_1\sqrt{z_i}\}} \right) L_nq_n^{-1}\{\beta^{1-\tau}\wedge\vert y_1-y'_1\vert^{1-\tau}\} \tag4.46\\ &\leq C_T\beta(L_nq_n^{-1})^2\int_{b_1}^{b_2}dy_1\int_{b_1}^{b_2}dy'_1 \{\beta^{1-\tau}\wedge\vert y_1-y'_1\vert^{1-\tau}\} \\ &=C_T L_n^2q_n^{-2}\beta^{2-\tau}\ . \endalign Returning to (4.39), we have thus proved $$X_n^i(\beta)=4(c_2-c_1)^{2k}\sum\Sb m_1,m'_1\in\Cal Z_{1,n} \\ \vert\varphi(m_1)-\varphi(m'_1)\vert\geq\beta\endSb S^i(m_1,m'_1)+ \Cal O_T(L_n^{3/2}q_n^{-2}\Psi_{2k}(\beta)+L_n^2q_n^{-2}\beta^d)\ , \tag 4.47$$ where $d=1-\sigma$ or $d=2-\tau$ . (Recall $A=2(c_2-c_1)$ .) 
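\par The last equality in (4.46) follows from an elementary computation: since $1<\tau<2$ , $$\int_{b_1}^{b_2}dy_1\int_{b_1}^{b_2}dy'_1 \{\beta^{1-\tau}\wedge\vert y_1-y'_1\vert^{1-\tau}\} \ \leq\ C\left(\beta\cdot\beta^{1-\tau}+\int_{\beta}^{b_2-b_1}u^{1-\tau}du\right) \ \leq\ C'\ ,$$ the integral being convergent at both endpoints, and $\beta\leq\beta^{2-\tau}$ for $0<\beta\leq1$ ; multiplying by the prefactor $\beta$ thus yields the bound $C_T L_n^2q_n^{-2}\beta^{2-\tau}$ .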
\par To finish the discussion on $X_n^i(\beta)$ , we estimate the error $$\sum\Sb m_1,m'_1\in\Cal Z_{1,n} \\ \vert\varphi(m_1)-\varphi(m'_1)\vert<\beta\endSb S^i(m_1,m'_1) \tag 4.48$$ which is made by dropping the condition $\vert\varphi(m_1)-\varphi(m'_1)\vert\geq\beta$ under the summation symbol in (4.47). For this purpose, we divide the sum (4.48) into two parts, namely we rewrite (4.48) as $$\sum_{0<\vert\varphi(m_1)-\varphi(m'_1)\vert<\beta}S^i(m_1,m'_1)+ \sum_{\varphi(m_1)=\varphi(m'_1)}S^i(m_1,m'_1)\equiv J_1+J_2 \tag 4.49$$ when Condition (II-1) holds, or $$\sum_{L_n^{-1/2}\leq\vert\varphi(m_1)-\varphi(m'_1)\vert<\beta}S^i(m_1,m'_1)+ \sum_{\vert\varphi(m_1)-\varphi(m'_1)\vert<L_n^{-1/2}}S^i(m_1,m'_1)\equiv K_1+K_2 \tag 4.50$$ when Condition (II-2) holds. \par Suppose first that Condition (II-1) holds. Then for $\varphi(m_1)\ne\varphi(m'_1)$ we have $$S^i(m_1,m'_1)\leq Cq_n^{-2}\vert\varphi(m_1)-\varphi(m'_1)\vert^{-\sigma}\ . \tag 4.51$$ Applying Lemma 1 with $R=L_n^{1/2}$ , which is possible because $\beta\geq L_n^{-1/2}$ , we see$$J_1\leq C_T L_n^2q_n^{-2}\beta^{1-\sigma}\ . \tag 4.52$$On the other hand, if $\varphi(m_1)=\varphi(m'_1)$ , one has$$S^i(m_1,m'_1)\leq C\int_{b_1}^{b_2}dy_1 1_{\{y_1\sqrt{z_{i-1}}\leq\vert m_1\vert,\vert m'_1\vert \leq y_1\sqrt{z_i}\}} \ . \tag 4.53$$Since $y_1(\sqrt{z_{i}}-\sqrt{z_{i-1}})=\Cal O_T(L_n^{1/2}q_n^{-1})$ , we have$$J_2\leq C_T L_n^{3/2}q_n^{-2}\ . \tag 4.54$$Next suppose that Condition (II-2) holds. Then if $\varphi(m_1)\ne\varphi(m'_1)$ ,$$\align S^i(m_1,m'_1)&\leq C\int_{b_1}^{b_2}dy_1\int_{b_1}^{b_2}dy'_1 1_{\{y_1\sqrt{z_{i-1}}\leq\vert m_1\vert \leq y_1\sqrt{z_i}\}} 1_{\{y'_1\sqrt{z_{i-1}}\leq\vert m'_1\vert \leq y'_1\sqrt{z_i}\}}\times \tag4.55\\ &\quad\times 1_{\{\vert y_1-y'_1\vert\leq b_3\vert\varphi(m_1)-\varphi(m'_1)\vert\}} \vert\varphi(m_1)-\varphi(m'_1)\vert^{-\tau}\ . 
\endalign$$Hence we have$$\align K_1&\leq C\int_{b_1}^{b_2}dy_1\int_{b_1}^{b_2}dy'_1 1_{\{\vert y_1-y'_1\vert\leq\beta\}} \sum_{m_1\in\Cal Z_{1,n}} 1_{\{y_1\sqrt{z_{i-1}}\leq\vert m_1\vert\leq y_1\sqrt{z_i}\}}\times \tag 4.56\\ &\quad\times\sum_{m'_1\in\Cal Z_{1,n}} 1_{\{y'_1\sqrt{z_{i-1}}\leq\vert m'_1\vert\leq y'_1\sqrt{z_i}\}} 1_{\{\vert y_1-y'_1\vert\vee(b_3 L_n^{-1/2})\leq \vert\varphi(m_1)-\varphi(m'_1)\vert \}} \\ &\qquad\times\vert\varphi(m_1)-\varphi(m'_1)\vert^{-\tau}\ . \endalign$$Let us estimate the summation over $m'_1$ . We can assume $\varphi(m'_1)>\varphi(m_1)$ . As we did in the estimation of the sum (4.44), we divide the interval $[L_n^{-1/2}\vee(\frac1{b_3}\vert y_1-y'_1\vert),\Theta]$ into subintervals of length $L_n^{-1/2}$ , and obtain$$\align & \sum_{m'_1\in\Cal Z_{1,n}} 1_{\{y'_1\sqrt{z_{i-1}}\leq\vert m'_1\vert\leq y'_1\sqrt{z_i}\}} 1_{\{\vert y_1-y'_1\vert\vee(b_3 L_n^{-1/2})\leq \vert\varphi(m_1)-\varphi(m'_1)\vert \}} \vert\varphi(m_1)-\varphi(m'_1)\vert^{-\tau} \tag 4.57 \\ &\leq C_T L_n^{1/2}q_n^{-1}\sum_{j=0}^{\infty} \{(L_n^{-1/2}\vee\frac1{b_3}\vert y_1-y'_1\vert)+jL_n^{-1/2}\}^{-\tau} \\ &\leq C_T L_n q_n^{-1}(L_n^{-1/2}\vee\frac1{b_3}\vert y_1-y'_1\vert)^{1-\tau} \ .\endalign$$Inserting this inequality into (4.56), one easily gets the bound$$K_1\leq C_T L_n^2q_n^{-2}\beta^{2-\tau}\ . \tag 4.58$$We now turn to estimating $K_2$ . By $\sqrt{z_i}\leq C_T L_n^{1/2}$ and $\vert\varphi(m_1)-\varphi(m'_1)\vert\leq L_n^{-1/2}$ , one easily sees that there is a constant $C_T>0$ such that the condition$$f(\varphi(m'_1))\sqrt{z_{i-1}}\leq\vert m'_1\vert \leq f(\varphi(m'_1))\sqrt{z_i} \tag 4.59$$implies$$f(\varphi(m_1))\sqrt{z_{i-1}}-C_T\leq\vert m'_1\vert \leq f(\varphi(m_1))\sqrt{z_i}+C_T\ . \tag 4.60$$For each fixed $m_1\in\Cal Z_{1,n}$ , the number of $m'_1$ which satisfies the condition (4.60) and $\vert\varphi(m_1)-\varphi(m'_1)\vert<L_n^{-1/2}$ is $\Cal O_T(1)$ , and hence $$K_2\leq C_T L_n^{3/2}q_n^{-2}\ . \tag 4.61$$ It remains to estimate the sum of $Q^i(\bold m;\bold m')$ over $(\Cal Z_{k,n}\times\Cal Z_{k,n})\setminus\Cal W_n(\beta)$ . We write this sum as $X'_n(\beta)+X_n^{\prime\prime}$ , where $X_n^{\prime\prime}$ denotes the sum over the set $\Cal W_n^{\prime\prime}$ of those $(\bold m;\bold m')$ for which some of the angles $\varphi(m_s)$ , $\varphi(m'_t)$ coincide when Condition (II-1) holds, or satisfy $\vert\varphi(m_s)-\varphi(m'_t)\vert<L_n^{-1/2}$ for some $s,t$ when Condition (II-2) holds, $\beta>0$ being chosen later. 
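\par Both here and in (4.45), the passage from the sum over the subintervals of length $L_n^{-1/2}$ to an integral rests on the elementary comparison $$\sum_{j=1}^{\infty}(a+jh)^{-\tau}\ \leq\ \frac1h\int_{a}^{\infty}t^{-\tau}dt \ =\ \frac{a^{1-\tau}}{(\tau-1)h}\ ,\qquad a,h>0\ ,\ \tau>1\ ,$$ which follows since $t^{-\tau}\geq(a+jh)^{-\tau}$ on $[a+(j-1)h,a+jh]$ ; the $j=0$ term obeys the same bound because $a\geq h$ . Taking $h=L_n^{-1/2}$ and $a=L_n^{-1/2}\vee(\vert y_1-y'_1\vert/b_3)$ gives the factor $L_n^{1/2}a^{1-\tau}$ appearing in the last line of (4.57), up to constants depending on $b_3$ and $\tau$ .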
\par In either case, we let$$\Cal W'_n(\beta)=(\Cal Z_{k,n}\times\Cal Z_{k,n})\setminus (\Cal W_n(\beta)\cup \Cal W_n^{\prime\prime})\ . \tag 4.68$$Recall that $\Cal W_n(\beta)$ was defined just before (4.21). \par Suppose (II-1) is satisfied. We shall estimate $X'_n(\beta)$ . By the definition of $\Cal W_n^{\prime\prime}$ and (4.68), $X'_n(\beta)$ is the sum of $Q^i(\bold m;\bold m')$ over those $(\bold m;\bold m') \in\Cal Z_{k,n}\times\Cal Z_{k,n}$ such that $\varphi(m_s)$ and $\varphi(m'_t)$ are all different but at least one of the following is less than $\beta$ :$$\align & \varphi(m_s)-\varphi(m_{s-1})\ ,\ \varphi(m'_s)-\varphi(m'_{s-1})\ ,\ s=2,\ldots,k\ ; \tag 4.69\\ & \vert\varphi(m_s)-\varphi(m'_t)\vert\ ,\ s,t=1,2,\ldots,k\ . \endalign$$Now let $\bar{m}_1,\ldots,\bar{m}_{2k}$ be the rearrangement of $m_1,\ldots,m_k$ , $m'_1,\ldots,m'_k$ in the increasing order of $\varphi(\cdot)$ , namely$$\varphi(\bar{m}_1)<\cdots<\varphi(\bar{m}_{2k})\ ,\ \{\bar{m}_1,\ldots,\bar{m}_{2k}\}=\{m_1,\ldots,m_k,m'_1,\ldots,m'_k\}\ . \tag 4.70$$There are $(2k)!/(k!)^2$ ways of dividing a given $\{\bar{m}_1,\ldots,\bar{m}_{2k}\}$ into two classes $\{m_1,\ldots,m_k\}$ and $\{m'_1,\ldots,m'_k\}$ , a typical one, let us call it $\sigma$ , being as follows:$$\align & \varphi(m_1)<\cdots<\varphi(m_r)<\varphi(m'_1)< \varphi(\bar{m}_{r+2})<\cdots<\varphi(\bar{m}_{2k})\ ; \tag 4.71 \\ & \{\bar{m}_{r+2},\ldots,\bar{m}_{2k}\}=\{m_{r+1},\ldots,m_k,m'_2,\ldots, m'_k\}\ . \endalign$$Accordingly $\Cal W'_n(\beta)$ is divided into $(2k)!/(k!)^2$ disjoint classes:$$\Cal W'_n(\beta)=\bigcup_{\sigma}\Cal W'_{n,\sigma}(\beta)\ ,\tag 4.72$$and obviously it is sufficient to estimate the sum $X'_{n,\sigma}(\beta)$ of $Q^i(m_1,\ldots,m_k;m'_1,\ldots,m'_k)$ over one of these classes. 
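\par For instance, when $k=2$ there are $4!/(2!)^2=6$ such classes $\sigma$ , corresponding to the six interleavings $$mmm'm'\ ,\quad mm'mm'\ ,\quad mm'm'm\ ,\quad m'mmm'\ ,\quad m'mm'm\ ,\quad m'm'mm$$ of the two increasing sequences $\varphi(m_1)<\varphi(m_2)$ and $\varphi(m'_1)<\varphi(m'_2)$ . In general one only chooses which $k$ of the $2k$ positions in $(\bar{m}_1,\ldots,\bar{m}_{2k})$ are occupied by the $m_s$ , whence the count $\binom{2k}{k}=(2k)!/(k!)^2$ .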
Recalling the definition (3.10) of $\Cal Z_{k,n}$ and the definition (4.17) of $\ell_s$ , we see that for $(\bold m;\bold m')\in\Cal W'_n(\beta)$ the quantities $\vert m_s\vert/\ell_s$ and $\vert m'_s\vert/\ell'_s$ are $\Cal O_T(1)$ , and hence$$\align & X'_n(\beta) \tag 4.73\\ &\leq C_T L_n^{-2(k-1)}\sum_{\Cal W'_n(\beta)} \int_{\vert m_1\vert/\sqrt{z_i}}^{\vert m_1\vert/\sqrt{z_{i-1}}}dy_1 \int_{\vert m'_1\vert/\sqrt{z_i}}^{\vert m'_1\vert/\sqrt{z_{i-1}}}dy'_1 \int_{-A}^{A}d\zeta_2\cdots\int_{-A}^{A}d\zeta_k \int_{-A}^{A}d\zeta'_2\cdots\int_{-A}^{A}d\zeta'_k\times \\ &\times p_{2k}(y_1,\left\vert\frac{m_2}{\ell_2}\right\vert, \ldots,\left\vert\frac{m_k}{\ell_k}\right\vert,y'_1, \left\vert\frac{m'_2}{\ell'_2}\right\vert,\ldots, \left\vert\frac{m'_k}{\ell'_k}\right\vert\ \vert \varphi(m_1),\ldots,\varphi(m_k),\varphi(m'_1),\ldots,\varphi(m'_k))\ . \endalign$$At this point, we assume that $(m_1,\ldots,m_k;m'_1,\ldots,m'_k)$ is arranged like (4.71) and set$$\bar{y}_1=y_1\ ,\ \bar{y}_j=\frac{\vert m_s\vert}{\ell_s}\ ;\ \bar{\zeta}_j=\zeta_s\quad \text{if}\ \bar{m}_j=m_s\ \text{for some}\ s=2,\ldots,k \tag 4.74$$and$$\bar{y}_{r+1}=y'_1\ ,\ \bar{y}_j=\frac{\vert m'_t\vert}{\ell'_t}\ ;\ \bar{\zeta}_j=\zeta'_t\quad \text{if}\ \bar{m}_j=m'_t\ \text{for some}\ t=2,\ldots,k\ . \tag 4.75$$Then by Condition (II-1), we get$$\align & X'_n(\beta) \tag 4.76\\ &\leq C_T L_n^{-2(k-1)}\sum_{\Cal W'_{n,\sigma}(\beta)} \int_{\vert m_1\vert/\sqrt{z_i}}^{\vert m_1\vert/\sqrt{z_{i-1}}}dy_1 \int_{-A}^{A}d\zeta_2\cdots \int_{-A}^{A}d\zeta_r\times\\ &\times\int_{\vert m'_1\vert/\sqrt{z_i}}^ {\vert m'_1\vert/\sqrt{z_{i-1}}}dy'_1 \int_{-A}^{A}d\bar{\zeta}_{r+2}\cdots \int_{-A}^{A}d\bar{\zeta}_{2k}\times \\ &\times p_{2k}(y_1,\bar{y}_2,\ldots,\bar{y}_r,y'_1, \bar{y}_{r+2},\ldots,\bar{y}_{2k}\ \vert \varphi(m_1),\ldots,\varphi(m_r), \varphi(m'_1),\varphi(\bar{m}_{r+2}), \ldots,\varphi(\bar{m}_{2k})) \\ &\leq C_T L_n^{-2(k-1)}\sum \Sb \bar{m}_1,\ldots,\bar{m}_{2k}\ ,\\ 0<\varphi(\bar{m}_j)-\varphi(\bar{m}_{j-1})<\beta_j \endSb 
q_n^{-2}\prod_{j=2}^{2k}(\varphi(\bar{m}_j)-\varphi(\bar{m}_{j-1})) ^{-\sigma}\ , \endalign$$where $\beta_j=\beta$ or $\beta_j=\Theta$ and at least one of the $\beta_j$ is equal to $\beta$ . We now apply Lemma 1 with $R=L_n^{1/2}$ , successively to the summation with respect to $\bar{m}_j$ , $j=2k,2k-1,\ldots,2$ , to obtain$$X'_n(\beta)\leq C_T q_n^{-2}L_n^2\prod_{j=2}^{2k}\beta_j^{1-\sigma} \leq C_T L_n^2q_n^{-2}\beta^{1-\sigma}\ . \tag 4.77$$Next let us estimate $X_n^{\prime\prime}$ , still assuming Condition (II-1). By definition, we can write$$X_n^{\prime\prime}=\sum_{\ell=1}^k\ \sum_{1\leq s_1<\cdots<s_\ell\leq k}\ \sum_{1\leq t_1<\cdots<t_\ell\leq k} X_n^{\prime\prime}(s_1,\ldots,s_\ell;t_1,\ldots,t_\ell)\ ,$$where $X_n^{\prime\prime}(s_1,\ldots,s_\ell;t_1,\ldots,t_\ell)$ denotes the sum of $Q^i(\bold m;\bold m')$ over those $(\bold m;\bold m')\in\Cal W_n^{\prime\prime}$ for which precisely $\varphi(m_{s_j})=\varphi(m'_{t_j})$ , $j=1,\ldots,\ell$ . If $\vert D_{m_1}\cap\cdots\cap D_{m_k}\vert>0$ , $\vert D_{m'_1}\cap\cdots\cap D_{m'_k}\vert>0$ , then we must have a corresponding restriction on $\vert\gamma_{m_{s_1}}-\gamma_{m_1}\vert$ , and proceeding as in the estimation of $J_2$ , one obtains for $X_n^{\prime\prime}$ a bound of the same order as (4.54). \par Next let us consider the case where Condition (II-2) is satisfied. Take $M'>0$ sufficiently small. Then as can be seen from the proof of Lemma 3, as far as we have $(\bold m;\bold m')\in\Cal Z_{k,n}\times\Cal Z_{k,n}$ , the condition$$\varphi(m_s)-\varphi(m_{s-1})\geq M'L_n^{-1/2}\ ;\ \varphi(m'_s)-\varphi(m'_{s-1})\geq M'L_n^{-1/2}\ ,\ s=2,\ldots,k \tag 4.99$$is automatically satisfied. \par As before, let $\Cal W'_{n,\sigma}(\beta)$ be the subset of $\Cal W'_n(\beta)$ consisting of those $(\bold m;\bold m')$ which can be arranged like (4.71) . 
\par Let us define, for $j=1,\ldots,2k$ ,$$\bar{y}_j=\cases y_1 &\quad \bar{m}_j=m_1 \\ y'_1 &\quad \bar{m}_j=m'_1 \\ \vert m_s\vert/\ell_s &\quad \bar{m}_j=m_s\ \text{for some}\ s=2,\ldots,k\\ \vert m'_t\vert/\ell'_t&\quad \bar{m}_j=m'_t\ \text{for some}\ t=2,\ldots,k \endcases\tag 4.100$$Then if we note that$$C_T^{-1}L_n^{1/2}\leq \ell_s,\ell'_s\leq C_T L_n^{1/2} \tag 4.101$$for some constant $C_T\geq1$ , we see from (4.18) and Condition (II-2) that$$\align & Q^i(\bold m;\bold m') \tag 4.102 \\ &\leq C_T L_n^{-2(k-1)} \int_{\vert m_1\vert/\sqrt{z_i}}^{\vert m_1\vert/\sqrt{z_{i-1}}}dy_1 \int_{-A}^{A}d\zeta_2\cdots \int_{-A}^{A}d\zeta_r \int_{\vert m'_1\vert/\sqrt{z_i}}^{\vert m'_1\vert/\sqrt{z_{i-1}}}dy'_1 \times \\ &\quad\times \int_{-A}^{A}d\bar{\zeta}_{r+2} \cdots\int_{-A}^{A}d\bar{\zeta}_{2k} \prod_{j=2}^{2k} 1_{\{\vert \bar{y}_j-\bar{y}_{j-1}\vert\leq b_3(\varphi(\bar{m}_j)- \varphi(\bar{m}_{j-1}))\}}(\varphi(\bar{m}_j)-\varphi(\bar{m}_{j-1}))^{-\tau}\ , \endalign$$where $\bar{\zeta}_j=\zeta_s$ or $\bar{\zeta}_j=\zeta'_t$ according as $\bar{m}_j=m_s$ or $\bar{m}_j=m'_t$ . \par Let $X'_{n,\sigma}(\beta)$ be the sum of $Q^i(\bold m;\bold m')$ over $\Cal W'_{n,\sigma}(\beta)$ . 
Noting (4.99), we apply Lemma 2 successively to the summation over $\bar{m}_{2k},\ldots,\bar{m}_{r+2}$ , and integrate out over $d\bar{\zeta}_j$ , to obtain$$\align & X'_{n,\sigma}(\beta) \leq C_T L_n^{-r+1}\left(\prod_{j=r+2}^{2k}\bar{\beta}_j^{2-\tau}\right) \times \tag 4.103\\ &\times \sum_{m_1\in\Cal Z_{1,n}} \int_{\vert m_1\vert/\sqrt{z_i}}^{\vert m_1\vert/\sqrt{z_{i-1}}}dy_1 \prod_{s=2}^r \int_{-A}^{A}d\zeta_s \sum_{m_s}1_{\{\vert \bar{y}_s-\bar{y}_{s-1}\vert\leq b_3(\varphi(\bar{m}_s)- \varphi(\bar{m}_{s-1}))\}} (\varphi(\bar{m}_s)-\varphi(\bar{m}_{s-1}))^{-\tau}\times \\ &\times\int_{b_1}^{b_2}dy'_1\sum_{m'_1} 1_{\{y'_1\sqrt{z_{i-1}}\leq\vert m'_1\vert\leq y'_1\sqrt{z_i}\}} 1_{\{\vert \bar{y}_{r+1}-\bar{y}_{r}\vert\leq b_3(\varphi(\bar{m}_{r+1})- \varphi(\bar{m}_{r}))\}} (\varphi(\bar{m}_{r+1})-\varphi(\bar{m}_{r}))^{-\tau}\ , \endalign$$where as before $\bar{\beta}_j=\beta$ or $\bar{\beta}_j=\Theta$ . \par We can apply the same method which we used in obtaining (4.57), namely we see without difficulty that$$\align & \sum_{m'_1}1_{\{y'_1\sqrt{z_{i-1}}\leq\vert m'_1\vert\leq y'_1\sqrt{z_i}\}} 1_{\{\vert \bar{y}_{r+1}-\bar{y}_{r}\vert\leq b_3(\varphi(\bar{m}_{r+1})- \varphi(\bar{m}_{r}))\}}\times (\varphi(\bar{m}_{r+1})-\varphi(\bar{m}_{r}))^{-\tau} \\ &\leq C_T q_n^{-1}L_n(L_n^{-1/2}\vee b_3^{-1} \vert \bar{y}_r-y'_1\vert)^{1-\tau}\ .\tag 4.104 \endalign$$Integrating with respect to $y'_1$ on the region $\{\vert y'_1-\bar{y}_r\vert< \bar{\beta}_{r+1}\}$ , we get$$\align & \int dy'_1 \sum_{m'_1}1_{\{y'_1\sqrt{z_{i-1}}\leq\vert m'_1\vert\leq y'_1\sqrt{z_i}\}} 1_{\{\vert \bar{y}_{r+1}-\bar{y}_{r}\vert\leq b_3(\varphi(\bar{m}_{r+1})- \varphi(\bar{m}_{r}))\}} (\varphi(\bar{m}_{r+1})-\varphi(\bar{m}_{r}))^{-\tau} \tag 4.105\\ &\leq C_T q_n^{-1}L_n(L_n^{\tau/2-1}+\bar{\beta}_{r+1}^{2-\tau}) \\ &\leq C_T q_n^{-1}L_n\bar{\beta}_{r+1}^{2-\tau}\ , \endalign$$because of $\beta\geq L_n^{-1/2}$ . 
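\par Indeed, writing $u=y'_1-\bar{y}_r$ , the $y'_1$-integral in (4.105) is, up to the factor $C_Tq_n^{-1}L_n$ and constants depending on $b_3$ , dominated by $$\int_{\vert u\vert<\bar{\beta}_{r+1}}(L_n^{-1/2}\vee\vert u\vert)^{1-\tau}du \ \leq\ 2L_n^{-1/2}\cdot L_n^{(\tau-1)/2} +2\int_{L_n^{-1/2}}^{\bar{\beta}_{r+1}}u^{1-\tau}du \ \leq\ C(L_n^{\tau/2-1}+\bar{\beta}_{r+1}^{2-\tau})\ ,$$ and $L_n^{\tau/2-1}=(L_n^{-1/2})^{2-\tau}\leq\bar{\beta}_{r+1}^{2-\tau}$ precisely because $\bar{\beta}_{r+1}\geq\beta\geq L_n^{-1/2}$ .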
\par We insert the estimate (4.105), which is uniform in $m_1,\ldots,m_r$ , $y_1$ and in $\zeta_2,\ldots,\zeta_r$ , apply Lemma 2 successively to the summation over $m_r,\ldots,m_2$ , and then integrate out over $\zeta_2,\ldots,\zeta_r$ , to obtain$$\align X'_{n,\sigma}(\beta) &\leq C_T L_n^{-r+1}L_n^{r-1} \left(\prod_{j=r+1}^{2k}\bar{\beta}_j^{2-\tau}\right) q_n^{-1}L_n \sum_{m_1\in\Cal Z_{1,n}} \int_{\vert m_1\vert/\sqrt{z_i}}^{\vert m_1\vert/\sqrt{z_{i-1}}}\ 1\ dy_1 \tag 4.106\\ &\leq C_T L_n^2q_n^{-2}\left(\prod_{j=r+1}^{2k}\bar{\beta}_j^{2-\tau}\right)\\ &\leq C_T L_n^2q_n^{-2}\beta^{2-\tau}\ , \endalign$$because at least one of the $\bar{\beta}_j$ 's is $\beta$ . \par This estimate being similar for all arrangements (4.71), we can now conclude$$X'_n(\beta)\leq C_T L_n^2q_n^{-2}\beta^{2-\tau}\ . \tag 4.107$$Finally, we turn to the estimate of $X_n^{\prime\prime}$ under Condition (II-2), to finish the estimate of $X_n$ . \par Recall that this time, $X_n^{\prime\prime}$ is the sum of $Q^i(\bold m;\bold m')$ over those $(\bold m;\bold m')\in\Cal Z_{k,n}\times\Cal Z_{k,n}$ for which (4.99) holds but $\vert\varphi(m_s)-\varphi(m'_t)\vert<L_n^{-1/2}$ for some pairs $(s,t)$ . Denoting these pairs by $(s_j,t_j)$ , $j=1,\ldots,r$ , with $r>0$ , we assume, without loss of generality, that $s_j=t_j=j$ , $j=1,\ldots,r$ holds. \par This time, $\varphi(m_s)$ , $\varphi(m'_t)$ , $s,t=1,\ldots,k$ are all different, and we have the representation (4.18) for $Q^i(\bold m;\bold m')$ . Now in (4.18), $y'_1$ can vary only within the interval $\{\vert y'_1-y_1\vert<b_3L_n^{-1/2}\}$ , and proceeding as before one obtains an estimate for $X_n^{\prime\prime}$ of the required order, completing the proof of Lemma 4. \par \bigskip \proclaim{Lemma 5} For any $A>0$ and $k\geq2$ ,$$\int_{-A}^Ad\zeta_2\cdots \int_{-A}^Ad\zeta_k [A-\{0\vee\max_{2\leq s\leq k}\zeta_s\}+ \{0\wedge\min_{2\leq s\leq k}\zeta_s\}]_+=A^k\ .$$\endproclaim \demo {Proof} We denote the left hand side of the above formula by $I_k$ . Moreover, if we set $\bar{\tau}_k=0\vee\max_{2\leq s\leq k}\zeta_s$ and $\underline{\tau}_k=0\wedge\min_{2\leq s\leq k}\zeta_s$ for brevity, then we can compute$$\align I_k&=\idotsint_{\{\bar{\tau}_{k-1}-\underline{\tau}_{k-1}