\documentclass[draft]{article} %\usepackage{amsmath,amsfonts,latexsym, amsrefs,amssymb} \usepackage{amsmath,amsfonts,latexsym, amssymb} \usepackage{color} %\newcommand{\sidenote}[1]{} \newcommand{\sidenote}[1]{\marginpar{\color{red}\footnotesize #1}} \oddsidemargin=0in \evensidemargin=0in \textwidth=6.5in %\usepackage[notref,notcite]{showkeys} %\usepackage{showkeys} \newcommand{\const}{\mbox{const}} \newcommand{\La}{\Lambda} \newcommand{\e}{\varepsilon} \newcommand{\eps}{\varepsilon} \newcommand{\pt}{\partial} \newcommand{\rd}{{\rm d}} \newcommand{\bR}{{\mathbb R}} \newcommand{\bbZ}{{\mathbb Z}} \newcommand{\bke}[1]{\left( #1 \right)} \newcommand{\bkt}[1]{\left[ #1 \right]} \newcommand{\bket}[1]{\left\{ #1 \right\}} \newcommand{\norm}[1]{\| #1 \|} \newcommand{\Norm}[1]{\left\Vert #1 \right\Vert} \newcommand{\abs}[1]{\left| #1 \right|} \newcommand{\bka}[1]{\left\langle #1 \right\rangle} \newcommand{\vect}[1]{\begin{bmatrix} #1 \end{bmatrix}} \newcommand{\ba}{{\bf{a}}} \newcommand{\bb}{{\bf{b}}} \newcommand{\bx}{{\bf{x}}} \newcommand{\by}{{\bf{y}}} \newcommand{\bu}{{\bf{u}}} \newcommand{\bv}{{\bf{v}}} \newcommand{\bw}{{\bf{w}}} \newcommand{\bz}{{\bf {z}}} \newcommand{\bc}{{\bf{c}}} \newcommand{\bd}{{\bf{d}}} \newcommand{\bh}{{\bf{h}}} 
\newcommand{\ui}{{\underline i}} \newcommand{\uj}{{\underline j}} \newcommand{\ual}{{\underline \al}} \newcommand{\bX}{{\bf{X}}} \newcommand{\bY}{{\bf{Y}}} \newcommand{\bZ}{{\bf{Z}}} \newcommand{\wG}{{\widehat G}} \newcommand{\al}{\alpha} \newcommand{\de}{\delta} \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\ga}{{\gamma}} \newcommand{\Ga}{{\Gamma}} \newcommand{\la}{\lambda} \newcommand{\Om}{{\Omega}} \newcommand{\om}{{\omega}} \newcommand{\si}{\sigma} \renewcommand{\th}{\theta} \newcommand{\td}{\tilde} \newcommand{\ze}{\zeta} \newcommand{\cL}{{\cal L}} \newcommand{\cE}{{\cal E}} \newcommand{\cN}{{\cal N}} \newcommand{\im}{{\text Im }} \newcommand{\E}{{\mathbb E }} \newcommand{\R}{{\mathbb R }} \newcommand{\N}{{\mathbb N}} \renewcommand{\P}{{\mathbb P}} \newcommand{\bC}{{\mathbb C}} \newcommand{\pd}{{\partial}} \newcommand{\nb}{{\nabla}} \newcommand{\lec}{\lesssim} \newcommand{\ind}{{\,\mathrm{d}}} %\newcommand{\qed}{\hfill\fbox{}\par\vspace{0.3mm}} \newcommand{\ph}{{\varphi}} \renewcommand{\div}{\mathop{\mathrm{div}}} \newcommand{\curl}{\mathop{\mathrm{curl}}} \newcommand{\spt}{\mathop{\mathrm{spt}}} \newcommand{\wkto}{\rightharpoonup} \newenvironment{pf}{{\bf Proof.}} {\hfill\qed} \newcommand{\wt}{\widetilde} \newcommand{\lv}{{\bar v}} \newcommand{\lp}{{\bar p}} \newtheorem{theorem}{Theorem} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} %\theoremstyle{definition} \newtheorem{remark}{Remark} \newtheorem{definition}{Definition} \newcommand{\qed}{\hfill\fbox{}\par\vspace{0.3mm}} \newenvironment{proof}{{\bf Proof.}} {\hfill\qed} % NUMBERING SCHEME \numberwithin{equation}{section} \numberwithin{theorem}{section} \numberwithin{definition}{section} %\numberwithin{corollary}{section} %\numberwithin{lemma}{section} % set the depth for the table of contents (0-2) \setcounter{tocdepth}{1} \title{Semicircle law on short scales and delocalization of eigenvectors 
for Wigner random matrices} \author{L\'aszl\'o Erd\H os${}^1$, Benjamin Schlein${}^1$\thanks{Supported by Sofja-Kovalevskaya Award of the Humboldt Foundation. On leave from Cambridge University, UK}\; and Horng-Tzer Yau${}^2$\thanks{Partially supported by NSF grant DMS-0602038} \\ \\ Institute of Mathematics, University of Munich, \\ Theresienstr. 39, D-80333 Munich, Germany${}^1$ \\ \\ Department of Mathematics, Harvard University\\ Cambridge MA 02138, USA${}^2$ \\ \\ \\} \begin{document} \maketitle \begin{abstract} We consider $N\times N$ Hermitian random matrices with i.i.d. entries. The matrix is normalized so that the average spacing between consecutive eigenvalues is of order $1/N$. We study the connection between eigenvalue statistics on microscopic energy scales $\eta\ll 1$ and (de)localization properties of the eigenvectors. Under suitable assumptions on the distribution of the single matrix elements, we first give an upper bound on the density of states on short energy scales of order $\eta \sim \log N/N$. We then prove that the density of states concentrates around the Wigner semicircle law on energy scales $\eta \gg N^{-2/3}$. We show that most eigenvectors are fully delocalized in the sense that their $\ell^p$-norms are comparable with $N^{\frac{1}{p} -\frac{1}{2}}$ for $p\ge 2$, and we obtain the weaker bound $N^{\frac{2}{3}\big(\frac{1}{p} -\frac{1}{2}\big)}$ for all eigenvectors whose eigenvalues are separated away from the spectral edges. We also prove that, with a probability very close to one, no eigenvector can be localized. Finally, we give an optimal bound on the second moment of the Green function. \end{abstract} {\bf AMS Subject Classification:} 15A52, 82B44 \medskip {\it Running title:} Semicircle law on short scales \medskip {\it Key words:} Semicircle law, Wigner random matrix, random Schr\"odinger operator, density of states, localization, extended states. 
%\received{} %\maketitle \section{Introduction} Denote the $(ij)$-th entry of an $N\times N$ matrix $H$ by $h_{ij}$. We shall assume that the matrix is Hermitian, i.e., $h_{ij} = \overline {h_{ji}}$. These matrices form a {\it Hermitian Wigner ensemble} if \be h_{i j} = N^{-1/2} [ x_{ij} + \sqrt{-1}\; y_{ij}], \quad (i < j), \quad \text{and} \quad h_{i i} = N^{-1/2} x_{ii}, \label{wig} \ee where $x_{ij}, y_{ij}$ ($i < j$) and $x_{ii}$ are independent, identically distributed real random variables. We denote by $\nu$ the common distribution of the off-diagonal variables $x_{ij}, y_{ij}$ and by $\wt\nu$ that of the diagonal variables $x_{ii}$, and we assume that they have densities $\rd\nu(x) = e^{-g(x)}\rd x$ and $\rd\wt\nu(x) = e^{-\wt g(x)}\rd x$. We will use some of the following conditions on these distributions: \begin{itemize} \item[{\bf C1)}] There exists a finite constant $M$ such that \be g'' \leq M \qquad \text{and} \qquad \wt g'' \leq M\, . \label{gM} \ee \item[{\bf C2)}] The measure $\nu$ has subgaussian decay, i.e. there exists $\delta>0$ such that \be \int e^{\delta x^2}\rd \nu(x) <\infty, \label{x2} \ee and the same holds for $\wt\nu$. \item[{\bf C3)}] The measures $\nu, \wt\nu$ satisfy the spectral gap inequality, i.e. there exists a constant $C$ such that for any function $u$ \be \int \Big|u-\int u\; \rd\nu\Big|^2 \rd \nu \leq C \int|\nabla u|^2 \rd \nu, \qquad \label{gap} \ee and the same holds for $\wt \nu$. \item[{\bf C4)}] The measures $\nu, \wt \nu$ satisfy the logarithmic Sobolev inequality, i.e. there exists a constant $C$ such that for any density function $u>0$ with $\int u\, \rd\nu =1$, \be \int u\log u\; \rd \nu \leq C\int |\nabla \sqrt{u}|^2 \rd \nu\, , \label{logsob} \ee and the same holds for $\wt \nu$. \end{itemize} We remark that C4) implies C3) and that all conditions are satisfied if $c_1 \leq g'',\wt g''\leq c_2$ for some positive constants $c_1, c_2$. \bigskip {\it Notation.} We will use the notation $|A|$ both for the Lebesgue measure of a set $A\subset \bR$ and for the cardinality of a discrete set $A\subset \bbZ$. The usual Hermitian scalar product for vectors $\bx,\by\in \bC^N $ will be denoted by $\bx\cdot \by$ or by $( \bx, \by)$. We will use the convention that $C$ denotes generic large constants and $c$ denotes generic small positive constants whose values may change from line to line. Since we are interested in large matrices, we always assume that $N$ is sufficiently large. \section{Upper bound on the density of states} The typical number of eigenvalues in an interval $I$ within the spectrum is expected to be of order $N|I|$. 
The following theorem proves the corresponding upper bound. \begin{theorem}\label{du} Let $H$ be an $N\times N$ Wigner matrix as described in \eqref{wig} and assume condition \eqref{gM}. Let $I \subset \bR$ be an interval with $|I|\ge (\log N)/N$ and denote by $\cN_I$ the number of eigenvalues of $H$ in the interval $I$. Then there exists a constant $c>0$ such that for any $K$ large enough \be \P \big\{ \cN_I \geq KN|I| \big\}\leq e^{-c K N|I|}. \label{cn} \ee \end{theorem} For a fixed spectral parameter $z=E+i\eta$ with $E\in\bR$, $\eta>0$, we denote by $G_z = (H-z)^{-1}$ the Green function. Let $\mu_1\leq \mu_2\leq \ldots \leq \mu_N$ be the eigenvalues of $H$ and let $F(x)$ be the empirical counting function of the eigenvalues \be F(x)= \frac{1}{N}\big| \, \big\{ \al \; : \; \mu_\al \leq x\big\}\big|\; . \label{Fdef} \ee We define the Stieltjes transform of $F$ as \be m= m(z) =\frac{1}{N}\text{Tr} \; G_z = \int_\bR \frac{\rd F(x)}{x-z}\, , \label{Sti} \ee and we let \be \rho=\rho_{\eta}(E) = \frac{ \text{Im} \; m(z)}{\pi}= \frac{1}{N\pi} \text{Im} \; \text{Tr} \; G_z =\frac{1}{N\pi}\sum_{\al=1}^N \frac{\eta}{(\mu_\al-E)^2+\eta^2} \label{rhodef} \ee be the normalized density of states of $H$ around energy $E$, regularized on scale $\eta$. The random variables $m$ and $\varrho$ also depend on $N$; when necessary, we indicate this fact by writing $m_N$ and $\varrho_N$. The counting function $\cN_I$ for intervals of length $|I|=\eta$ and the regularized density of states are closely related. On the one hand, for the interval $I= [E-\frac{\eta}{2}, E+\frac{\eta}{2}]$, we obviously have \be \cN_I\leq CN|I|\varrho_\eta(E)\,. \label{Nrho} \ee On the other hand, Theorem \ref{du} provides the following upper bound for $m(z)$ under an additional assumption. \begin{corollary}\label{ducor} Let $z=E+i\eta$ with $E\in \bR$ and $\eta \ge \log N/N$. We assume conditions \eqref{gM} and \eqref{x2}. 
Then there exists $c>0$ such that for any sufficiently large $K$ \be \P \Big\{ \sup_E |m(E+i\eta)| \leq K\Big\}\ge 1- e^{-cKN\eta}\,. \label{rholde} \ee In particular, there exists a universal constant $C$ such that \be \sup_E \E \, |m(E+i\eta)| \leq C\, . \label{Erho} \ee The same bounds hold for the density since $\varrho_\eta(E)\leq \pi^{-1} |m(E+i\eta)|$. \end{corollary} \begin{proof} It is well known that if the tail of the distribution of the matrix elements decays sufficiently fast, then the eigenvalues of $H$ lie within a compact set with the exception of an exponentially small probability. For completeness we will prove in Lemma \ref{lm2} that there is a constant $c_0>0$, depending only on $\delta$ in \eqref{x2}, such that for any sufficiently large $K_0$ we have \be \P \big\{\max_\al |\mu_\al| \ge K_0\big\}\leq e^{-c_0K_0^2N}\, . \label{tail} \ee Cover the interval $[-K_0,K_0]$ by the union of subintervals $I_n=[ (n-\frac{1}{2})\eta, (n+\frac{1}{2})\eta]$ of length $\eta$, where the integer index $n$ runs from $-[K_0\eta^{-1}]-1$ to $[K_0\eta^{-1}]+1$ (here $[\, \cdot\, ]$ denotes the integer part). Clearly $$ |m(E+i\eta)|\leq \frac{\pi}{N\eta}\; \max_n \cN_{I_n} $$ assuming that $\max_\al |\mu_\al| \le K_0$. Adding up the probabilities of the exceptional sets where $\cN_{I_n}\ge K_0N\eta$ and recalling $\eta\ge\log N/N$, we obtain \eqref{rholde}. The bound \eqref{Erho} then follows from \eqref{rholde} and from the deterministic bound $|m(E+i\eta)|\leq \eta^{-1}$. This completes the proof of Corollary \ref{ducor}. \end{proof} \bigskip In order to prove Theorem \ref{du}, we start with the following lemma: \begin{lemma}\label{lm:BL} Suppose that $x_j$ and $y_j$, $j=1,2,\ldots, N$, are i.i.d. real random variables with mean zero and with a density function $(\const.)e^{-g(x)}$. The expectation w.r.t.\ their joint probability measure $\rd\mu=(\const.)\prod_{j=1}^N e^{-g(x_j)- g(y_j)}\rd x_j\rd y_j$ is denoted by $\E$. 
We assume that $g$ satisfies \be g^{\prime \prime}(x) < M \label{gprime} \ee with some finite constant $M$. We set $z_j =x_j + \sqrt{-1}\; y_j$ and let $\bz = (z_1, \ldots, z_N)\in \bC^N$. Let $P$ be an orthogonal projection of rank $m$ in $\bC^N$. Then for any constant $c>0$ there exists a positive constant $\tilde c$, depending only on $c$ and $M$, such that \[ \E \exp \left [ - c X \right ] \le e^{- \tilde c m}, \qquad X= ( P \bz \, , \, P \bz ) \; . \] \end{lemma} \begin{proof} Let $\mu_t$ be the probability measure on $\bR^{2N}\cong\bC^N$ given by \[ \rd \mu_t : = Z_t^{-1} \exp \left [ - t X \right ] \rd \mu, \qquad Z_t = \int \exp \left [ - t X \right ] \rd \mu \] and denote the expectation w.r.t.\ $\mu_t$ by $\E_t$. In the case $t=0$ we drop the subscript. The covariance of two random vectors $\bY,\bZ\in \bC^N$ w.r.t.\ the measure $\mu_t$ is denoted by $$ \langle \bY; \bZ\rangle_{\mu_t} : = \E_t (\bY, \bZ) - ( \E_t \bY, \E_t \bZ). $$ Simple differentiation gives \[ \partial_t \log \E \exp \left [ - t X \right ] = - \E_t \, X = - \langle \, P \bz\, ; \, P \bz \, \rangle_{\mu_t} - ( \E_t \, P \bz, \; \E_t\, P \bz ) \le - \, \langle \, P \bz\, ; \, P \bz \, \rangle_{\mu_t}. \] Let $\nu_t$ denote the product measure on $\bR^{2N}\cong\bC^N$ under which the density of $z_j=x_j+\sqrt{-1}\; y_j$ is proportional to $e^{ - (M+2t) |z_j|^2/2}$, $j=1,2, \ldots, N$. We can rewrite \[ \rd \mu_t = Z_t^{-1} \exp \left [ - t X \right ] \rd \mu = \frac {\rd \mu_t } {\rd \nu_t} \rd \nu_t\; . \] {F}rom the assumption \eqref{gprime} on $g$ and from $0\leq P \leq I$, we obtain that $\frac {d \mu_t } {d \nu_t}$ is log-convex on $\bR^{2N}$. {F}rom the Brascamp-Lieb inequality (Theorem 5.4 in \cite{BL}) we have \[ \langle \, P \bz\, ; \, P \bz \, \rangle_{\mu_t} \ge \langle \, P \bz\, ; \, P \bz \, \rangle_{\nu_t}. 
\] By computing the Gaussian covariance explicitly, there exists a constant $c'>0$, depending only on $M$ and $c$, such that $$ \langle \, P \bz\, ; \, P \bz \, \rangle_{\nu_t} \ge c' m \qquad \forall \; t\in [0, c]. $$ We have thus obtained that \[ \partial_t \log \E \exp \left [ - t X \right ] \le - c' m \qquad \forall \; t\in [0, c]. \] Integrating this inequality from $t=0$ to $t=c$, we obtain the lemma. \end{proof} \bigskip We will use this result in the following setup. Let $\bv_1, \bv_2, \ldots, \bv_{N-1}$ form an orthonormal basis in $\bC^{N-1}$. Let $$ \xi_\al : = | \bz\cdot \bv_\al|^2, $$ where the components of $\bz= \bx + \sqrt{-1}\; \by\in \bC^{N-1}$ are distributed according to $(\const.)\prod_j e^{-g(x_j)-g(y_j)}\rd x_j \rd y_j$. With this notation, a standard large deviation argument yields the following corollary to Lemma \ref{lm:BL}: \begin{corollary} \label{cor:BL} Under the condition \eqref{gprime}, there exists a positive $c$ such that for any $\delta$ small enough \begin{equation}\label{ld} \P \left( \sum_{\alpha \in A} \xi_\alpha \leq \delta m \right) \le e^{- c m}\; \end{equation} for all $A \subset \{1, \cdots, N-1 \}$ with cardinality $|A|=m$. \end{corollary} \bigskip {\bf Proof of Theorem \ref{du}.} To prove \eqref{cn}, we decompose the Hermitian $N\times N$ matrix $H$ as follows: \be\label{Hd} H = \begin{pmatrix} h & \ba^* \\ \ba & B \end{pmatrix} \ee where $\ba= (h_{12}, \dots, h_{1N})^*$ and $B$ is the $(N-1) \times (N-1)$ matrix obtained by removing the first row and first column from $H$. Recall that $\mu_1\leq \mu_2\leq \ldots \leq \mu_N$ denote the eigenvalues of $H$ and let $\lambda_1\leq \lambda_2 \leq \ldots \leq \lambda_{N-1}$ denote the eigenvalues of $B$. Note that $B$ is an $(N-1)\times (N-1)$ Hermitian Wigner matrix with a normalization off by a factor $(1-\frac{1}{N})^{1/2}$. The following lemma is well known; we include a short proof for completeness. 
\begin{lemma}\label{lm:interlace} (i) With probability one, the eigenvalues of any Hermitian Wigner matrix \eqref{wig} are simple. (ii) The eigenvalues of $H$ and $B$ are interlaced: \be\label{interlace} \mu_1 < \lambda_1 < \mu_2 < \lambda_2 < \mu_3 < \ldots < \mu_{N-1} < \lambda_{N-1} <\mu_N. \ee \end{lemma} \begin{proof} The proof of (i) follows directly from the continuity of the distribution of the matrix elements and is left to the reader. For the proof of (ii), suppose that $\mu$ is one of the eigenvalues of $H$. Let $\bv= (v_1, \dots, v_N)^t$ be a normalized eigenvector associated with $\mu$. {F}rom the continuity of the distribution it also follows that $v_1\neq 0$ almost surely. {F}rom the eigenvalue equation $H \bv = \mu \bv$ and from \eqref{Hd} we find that \be\label{ee} h v_1 + \ba \cdot \bw = \mu v_1, \quad \text{and } \quad \ba v_1 + B \bw = \mu \bw \ee with $\bw= (v_2, \dots ,v_N)^t$. From these equations we obtain \be\label{ee1} \bw = (\mu-B)^{-1} \ba v_1\qquad \text{ and thus } \quad (\mu-h)v_1 = \ba \cdot (\mu-B)^{-1} \ba v_1 = \frac{v_1}{N}\sum_{\al} \frac{\xi_\al}{\mu-\lambda_\al} \ee using the spectral representation of $B$, where we set $$ \xi_{\alpha} = |\sqrt{N} \ba \cdot \bu_{\alpha}|^2, $$ with $\bu_{\alpha}$ being the normalized eigenvector of $B$ associated with the eigenvalue $\lambda_{\alpha}$. Since $v_1\neq0$, we have \be \mu-h = \frac{1}{N}\sum_{\al} \frac{\xi_\al}{\mu-\lambda_\al}, \label{inter} \ee where the $\xi_\al$'s are strictly positive almost surely (notice that $\ba$ and $\bu_\al$ are independent). In particular, this shows that $\mu\neq \lambda_\al$ for any $\al$. On the open interval $(\lambda_{\al-1}, \lambda_\al)$ the function $$ \Phi(\mu): =\frac{1}{N}\sum_{\al} \frac{\xi_\al}{\mu-\lambda_\al} $$ is strictly decreasing from $\infty$ to $-\infty$, therefore there is exactly one solution to the equation $\mu-h = \Phi(\mu)$. 
A similar argument shows that there is also exactly one solution below $\lambda_1$ and one above $\lambda_{N-1}$. This completes the proof. \end{proof} \bigskip We continue the proof of Theorem \ref{du}. Using the decomposition \eqref{Hd}, we obtain the following formula for the Green function $G_z = (H-z)^{-1}$, $z=E+i\eta$ with $E\in\bR$, $\eta>0$: \be G_z(1,1)= \frac{1}{h -z - \ba \cdot (B-z)^{-1}\ba} =\Big[ h-z - \frac{1}{N} \sum_{\al=1}^{N-1}\frac{\xi_\al}{\lambda_\al- z}\Big]^{-1}. \label{G11} \ee In this context, this formula already appeared in \cite{B}. In particular, by considering only the imaginary part, we obtain $$ |G_z(1,1)|\leq \eta^{-1}\Big| 1+ \frac{1}{N}\sum_{\al=1}^{N-1} \frac{\xi_\al}{(\lambda_\al- E)^2+\eta^2}\Big|^{-1}. $$ Similarly, for any $k=1,2,\ldots, N$, we define $B^{(k)}$ to be the $(N-1)\times (N-1)$ minor of $H$ obtained after removing the $k$-th row and $k$-th column. Let $\ba^{(k)} = (h_{k1}, h_{k2}, \ldots, h_{k,k-1}, h_{k,k+1}, \ldots, h_{kN})^*$ be the $k$-th column of $H$ without the $h_{kk}$ element. Let $\lambda_1^{(k)} < \lambda_2^{(k)}< \ldots$ be the eigenvalues and $\bu_1^{(k)}, \bu_2^{(k)}, \ldots$ the corresponding eigenvectors of $B^{(k)}$, and set $\xi_\al^{(k)}:= N|\ba^{(k)}\cdot \bu_\al^{(k)}|^2$. Then we have the estimate \be |G_z(k,k)| \leq \eta^{-1} \Big| 1+ \frac{1}{N} \sum_{\al=1}^{N-1}\frac{\xi_\al^{(k)}}{(\lambda_\al^{(k)}-E)^2 +\eta^2 }\Big|^{-1}. \label{gz} \ee For the interval $I\subset \bR$ given in Theorem \ref{du}, set $E$ to be its midpoint and $\eta=|I|$, i.e. $I= [E-\frac{\eta}{2}, E+\frac{\eta}{2}]$. {F}rom \eqref{rhodef}, \eqref{Nrho} and \eqref{gz} we obtain \be \cN_I\leq C\eta\sum_{k=1}^N |G_z(k,k)| \leq CN\eta^2\sum_{k=1}^N \Big| \sum_{\al: \lambda_\al^{(k)}\in I}\xi_\al^{(k)}\Big|^{-1}, \label{nxi} \ee where we restricted the $\al$ summation in \eqref{gz} to eigenvalues lying in $I$. 
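For the reader's convenience, we record the one-line computation behind \eqref{Nrho} and behind the first inequality in \eqref{nxi}: every eigenvalue $\mu_\al \in I$ satisfies $(\mu_\al - E)^2 \leq \eta^2/4$ and thus contributes at least $\frac{4}{5\eta}$ to the sum in \eqref{rhodef}, so that

```latex
\cN_I \cdot \frac{4}{5\eta}
 \;\leq\; \sum_{\al=1}^N \frac{\eta}{(\mu_\al-E)^2+\eta^2}
 \;=\; N\pi\,\varrho_\eta(E)
 \;=\; \mathrm{Im}\,\mathrm{Tr}\, G_z
 \;=\; \sum_{k=1}^N \mathrm{Im}\, G_z(k,k)
 \;\leq\; \sum_{k=1}^N |G_z(k,k)|\,.
```

In particular, one may take $C=5\pi/4$ in \eqref{Nrho} and $C=5/4$ in the first inequality of \eqref{nxi}.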
For each $k=1,2,\ldots, N$, we define the event $$ \Omega_k :=\Big\{ \sum_{\al: \lambda_\al^{(k)}\in I}\xi_\al^{(k)} \leq \delta (\cN_I-1)\Big\} $$ for some small $\delta>0$. By the interlacing property of the $\mu_\al$ and $\lambda_\al^{(k)}$ eigenvalues, we know that there are at least $\cN_I-1$ eigenvalues of $B^{(k)}$ in $I$. By Corollary \ref{cor:BL}, there exists a positive universal constant $c$ such that $\P(\Omega_k)\leq e^{-c(\cN_I-1)}$. Setting $\wt\Omega = \bigcup_{k=1}^N\Omega_k$, we see that \be \P(\wt\Omega \; \text{and} \; \cN_I\ge KN|I|) \leq N e^{-c(\cN_I-1)}\leq e^{-c'KN|I|} \label{wt} \ee if $K$ is sufficiently large, recalling that $\eta=|I|\ge \log N/N$. On the complement event $\wt\Omega^c$, we have from \eqref{nxi} that $$ \cN_I\leq \frac{CN^2\eta^2}{ \delta(\cN_I-1)}, $$ i.e. $\cN_I\leq (C/\delta)^{1/2} N\eta$. Choosing $K$ sufficiently large, we obtain \eqref{cn} from \eqref{wt}. This proves Theorem \ref{du}. \qed \section{Fluctuations of the density of states}\label{sec:fluc} \begin{theorem}\label{thm:fluc} Let $H$ be an $N\times N$ Wigner matrix as described in \eqref{wig} and assume conditions \eqref{gM} and \eqref{x2}. Fix $E,\eta\in \bR$ with $(\log N) / N\le \eta \le 1$ and set $z=E+i\eta$. \begin{itemize} \item[i)] Suppose that the measures $\nu, \wt\nu$ satisfy the spectral gap condition \eqref{gap}. Then there exists a constant $C$ such that the covariance of the Stieltjes transform of the empirical eigenvalue distribution \eqref{Sti} satisfies \[ \big\langle \; m(z) ; m(z)\; \big\rangle = \E \left| m(z) - \E m(z)\right|^2 \leq \frac{C }{N^2 \eta^3} \, . \] \item[ii)] Suppose that the measures $\nu$ and $\wt\nu$ satisfy the logarithmic Sobolev inequality \eqref{logsob}. Then there exists $c>0$ such that \be \P \left\{ |m(z) - \E m(z)| \geq\e \right\} \leq e^{-cN\eta\e \, \min \{1, N\eta^2\e\}} \label{ldem} \ee holds for any $\e>0$. 
\end{itemize} The same bounds hold if $m(z)$ is replaced with the density of states $\varrho_\eta(E)=\frac{1}{\pi}\, \text{Im}\, m(z)$. \end{theorem} We remark that estimates on the covariance were obtained in \cite{B}, \cite{BMT} down to scale $\eta \gg N^{-1/2}$. Concentration estimates down to the same scale were proven in \cite{GZ}. \bigskip \begin{proof} We start by proving i). Denote by $\mu_{\alpha}, \alpha=1, \dots ,N$, the eigenvalues of $H$. Since, by first order perturbation theory, \begin{equation} \begin{split} \frac{\partial \mu_{\alpha}}{\partial \text{Re} \, h_{ij}} &= \overline\bv_{\alpha} (i) \bv_{\alpha} (j) + \overline\bv_{\alpha} (j) \bv_{\alpha} (i) = 2 \text{Re} \, \overline\bv_{\alpha} (i) \bv_{\alpha} (j) \\ \frac{\partial \mu_{\alpha}}{\partial \text{Im} \, h_{ij}} &= \sqrt{-1} \big[ \overline\bv_{\alpha} (i) \bv_{\alpha} (j) - \overline\bv_{\alpha} (j) \bv_{\alpha} (i) \big] = 2 \text{Im} \, \overline\bv_{\alpha} (j) \bv_{\alpha} (i) \end{split} \end{equation} for all $1\leq i<j\leq N$ (with an analogous formula for the diagonal entries), the gradient of $m(z)$ with respect to the matrix elements is controlled by the resolvent, and part i) follows by applying the spectral gap inequality \eqref{gap}. For part ii) we follow Herbst's argument: differentiating $e^{-\beta}\log \int e^{e^{\beta}|m(z)-\E \, m(z)|}\rd\P$ in $\beta$ and applying the logarithmic Sobolev inequality \eqref{logsob} yields a differential inequality, (\ref{eq:dbeta2}), with $u$ denoting the normalized density $e^{e^{\beta}|m(z)-\E\, m(z)|}/\int e^{e^{\beta}|m(z)-\E\, m(z)|}\rd\P$. Its main term is estimated on the event that every interval of length $\eta$ contains at most $CN\eta$ eigenvalues; by Theorem \ref{du}, the complement of this event has probability at most $e^{-cN\eta}$, where $c>0$ is the constant from Theorem \ref{du}. To bound the last term on the r.h.s. of (\ref{eq:dbeta2}) we use that $|m(z)|\leq \eta^{-1}$ and \eqref{tail}: $$ \frac{C e^{\beta}}{N^3 \eta^4} \|u\|_\infty \P\big\{ \exists \alpha : |\mu_{\alpha}| \geq K_0 \big\} \leq \frac{C e^{\beta}}{N^3 \eta^4} e^{C\eta^{-1}e^\beta} e^{-c_0K_0^2N} \leq \frac{Ce^{\beta}}{N^3 \eta^4} $$ as long as $N\eta \ge C_0e^{\beta}$ with a sufficiently big $C_0$. Putting everything together, we obtain, from (\ref{eq:dbeta2}), \begin{equation} \begin{split} \frac{\rd}{\rd \beta} &e^{-\beta} \log \int e^{e^{\beta} |m(z) - \E \, m(z)|} \rd \P \leq \frac{C e^{\beta}}{N^2 \eta^3} \end{split} \end{equation} for all $\beta$ such that $N\eta \ge C_1e^{\beta}$ with a sufficiently big $C_1$. 
Integrating this inequality from $\beta=-\infty$ to $\beta=\log L$ and exponentiating, we find that \[ \E \; e^{L |m(z) - \E \, m(z)|} \leq \exp{ \big( CL^2N^{-2} \eta^{-3} \big)} \] holds for any $L$ satisfying $0<L\leq C_1^{-1}N\eta$. The estimate \eqref{ldem} then follows from the exponential Chebyshev inequality $\P \{ |m(z) - \E \, m(z)| \geq \e \} \leq e^{-L\e}\, \E \, e^{L|m(z)-\E \, m(z)|}$, valid for any $\e>0$, after optimizing for $L$ under the condition $L \leq C_1^{-1} N\eta$. \end{proof} \section{Semicircle law on short scales} For any $z=E+i\eta$ we let $$ m_{sc}= m_{sc}(z) = \int_\bR \frac{\varrho_{sc}(x)\rd x}{x - z} $$ be the Stieltjes transform of the Wigner semicircle distribution, whose density is given by $$ \varrho_{sc}(x) = \frac{1}{2\pi} \sqrt{4-x^2}\, {\bf 1}(|x|\leq 2) . $$ For $\kappa, \wt\eta>0$ we define the set $$ S_{N,\kappa,\wt\eta}:= \Big\{ z=E+i\eta\in \bC\; : \; |E|\leq 2-\kappa, \; \wt\eta\leq \eta \leq 1\Big\} $$ and for $\wt\eta = N^{-2/3}\log N$ we write $$ S_{N,\kappa}:= \Big\{ z=E+i\eta\in \bC\; : \; |E|\leq 2-\kappa, \; \frac{\log N}{N^{2/3}}\leq \eta \leq 1\Big\}. $$ \begin{theorem}\label{thm:sc} Let $H$ be an $N\times N$ Wigner matrix as described in \eqref{wig} and assume the conditions \eqref{gM}, \eqref{x2} and \eqref{logsob}. Then for any $\kappa>0$, the Stieltjes transform $m_N(z)$ (see \eqref{Sti}) of the empirical eigenvalue distribution satisfies \be \lim_{N\to \infty} \sup_{z\in S_{N,\kappa}} |\E m_N(z) - m_{sc}(z)| =0. \label{eq:sc} \ee \end{theorem} Combining this result with Theorem \ref{thm:fluc}, we obtain \begin{corollary}\label{cor:sc} Let $\kappa>0$ and $\eta \in [N^{-2/3}\log N, 1]$ and assume the conditions of Theorem \ref{thm:sc}. Then we have \be \P \Big\{ \sup_{z\in S_{N,\kappa,\eta}} |m_N(z)- m_{sc}(z)| \ge \e\Big\} \leq e^{-cN\eta \e \min\{ 1, N\eta^2\e\}} \label{mcont} \ee for any $\e>0$. In particular, the density of states $\varrho_\eta(E)$ converges to the Wigner semicircle law in probability, uniformly for all energies away from the spectral edges and for all energy windows of length at least $N^{-2/3}\log N$. 
Let $\eta^*=\eta^*(N)$ be such that $\eta\ll\eta^*\ll 1$ as $N\to \infty$; then we have the convergence of the counting function as well: \be \P \Big\{ \sup_{|E|\leq 2-\kappa} \Big| \frac{\cN_{\eta^*}(E)}{2N\eta^*} - \varrho_{sc}(E)\Big|\ge \e\Big\}\leq e^{-cN\eta \e \min\{ 1, N\eta^2\e\}} \label{ncont} \ee for any $\e>0$, where $\cN_{\eta^*}(E)= |\{ \al\; : \; |\mu_\al - E| \leq \eta^*\}|$ denotes the number of eigenvalues in the interval $[E-\eta^*, E+\eta^*]$. \end{corollary} We remark that Bai et al. \cite{BMT} have investigated the speed of convergence of the empirical eigenvalue distribution to the semicircle law. Their results directly imply \eqref{eq:sc} for $\eta =\text{Im} \, z \gg N^{-1/2}$ and \eqref{ncont} for $\eta \ge N^{-2/5}$. \bigskip {\bf Proof of Corollary \ref{cor:sc}.} For any two points $z, z'\in S_{N, \kappa,\eta}$ we have $$ |m_N(z)-m_N(z')|\leq CN^{4/3}|z-z'| $$ since the gradient of $m_N(z)$ is bounded by $C |\text{Im}\; z|^{-2}\leq CN^{4/3}$ on $S_{N, \kappa,\eta}$. We can choose a set of at most $M= C\e^{-2}N^{4}$ points, $z_1, z_2, \ldots, z_M$, in $S_{N, \kappa,\eta}$ such that for any $z\in S_{N,\kappa,\eta}$ there exists a point $z_j$ with $|z-z_j|\leq \e N^{-2} $. In particular, $|m_N(z)-m_N(z_j)|\le \e/4$ if $N$ is large enough. Then using \eqref{ldem} we obtain $$ \P \Big\{ \sup_{z\in S_{N,\kappa,\eta}} |m_N(z)- \E \, m_N(z)| \ge \e\Big\} \leq \sum_{j=1}^M \P \Big\{ |m_N(z_j)- \E \, m_N(z_j)| \ge \frac{\e}{2}\Big\} \leq e^{-cN\eta \e \min\{ 1, N\eta^2\e\}} $$ under the condition that $\eta\ge N^{-2/3}\log N$, since $\text{Im} \, z_j\ge \eta$. Combining this estimate with \eqref{eq:sc}, we have proved \eqref{mcont}. 
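We note why the union bound over the $M= C\e^{-2}N^4$ grid points is affordable (a sketch, for fixed $0<\e\leq 1$): since $\eta\ge N^{-2/3}\log N$, the exponent in \eqref{ldem} dominates $\log M$, namely

```latex
N\eta\,\e\,\min\{1,\, N\eta^2\e\}
 \;\geq\; \e^2 \min\{N\eta,\; N^2\eta^3\}
 \;\geq\; \e^2 (\log N)^3
 \;\gg\; \log M = \log\big(C\e^{-2}N^4\big)\,,
```

so $M\, e^{-cN\eta (\e/2) \min\{1,\, N\eta^2\e/2\}} \leq e^{-c'N\eta\e\min\{1,\, N\eta^2\e\}}$ for $N$ sufficiently large.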
To prove \eqref{ncont}, we set \[ R(\lambda)= \frac{1}{\pi}\int_{E-M\eta}^{E+M\eta} \frac{\eta}{(\lambda-x)^2+\eta^2} \; \rd x = \frac{1}{\pi}\Bigg[\arctan \Big( \frac{E-\lambda}{\eta}+M\Big) - \arctan \Big( \frac{E-\lambda}{\eta}-M\Big) \Bigg] \] and let $ {\bf 1}_{I^*}(\lambda)$ denote the characteristic function of the interval $I^*= [E-\eta^*, E+\eta^*]$ with $\eta^*=M\eta$. From elementary calculus it follows that $ {\bf 1}_{I^*}-R $ can be decomposed into a sum of three functions, $ {\bf 1}_{I^*}-R= T_1+ T_2+T_3$, with the following properties: $$ |T_1|\leq CM^{-1/2}, \qquad \text{supp}(T_1)\subset I_1= [E-2\eta^*, E+2\eta^*]; $$ $$ |T_2|\leq 1, \qquad \text{supp}(T_2) = J_1\cup J_2 $$ where $J_1$ and $J_2$ are two intervals of length $M^{1/2}\eta$ with midpoints at $E-\eta^*$ and $E+\eta^*$, respectively; and $$ |T_3(\lambda)|\leq \frac{C\eta\eta^*}{(\lambda-E)^2+[\eta^*]^2}, \qquad \text{supp}(T_3)\subset I_1^c \, . $$ We thus have \be \frac{\cN_{\eta^*}(E)}{2N\eta^*} = \frac{1}{2\eta^*}\int {\bf 1}_{I^*} (\lambda) \rd F(\lambda) = \frac{1}{2\eta^*}\int R(\lambda) \rd F(\lambda) + \frac{1}{2\eta^*}\int \big[ T_1(\lambda)+T_2(\lambda)+T_3(\lambda)\big] \rd F(\lambda)\, . \label{R}\ee The last three terms are estimated trivially by \[ \begin{split} \frac{1}{2\eta^*}\int \big| T_1+T_2+T_3\big| \rd F& \leq \| T_1\|_\infty\frac{\cN_{I_1}}{2N\eta^*} + \frac{\cN_{J_1}+\cN_{J_2}}{2N\eta^*} + \frac{C\eta}{\eta^*} \varrho_{\eta^*}(E) \\ &\leq \frac{C}{M^{1/2}} \big[\varrho_{2\eta^*}(E) + \varrho_{M^{1/2}\eta}(E-\eta^*) + \varrho_{M^{1/2}\eta}(E+\eta^*) +\varrho_{\eta^*}(E)\big]\, . \end{split} \] Using the bound \eqref{rholde}, this error term is bounded by $CM^{-1/2}$ uniformly in $E$, apart from an event of exponentially small probability. In particular, this term is smaller than $\e/3$ if $M=\eta^*/\eta$ is sufficiently large. 
The main term in \eqref{R} is computed as $$ \frac{1}{2\eta^*}\int R(\lambda) \rd F(\lambda) = \frac{1}{2\eta^*}\int_{E-\eta^*}^{E+\eta^*} \varrho_\eta(x) \, \rd x = \frac{1}{2\eta^*}\int_{E-\eta^*}^{E+\eta^*} \varrho_{sc}(x) \, \rd x + \frac{1}{2\eta^*}\int_{E-\eta^*}^{E+\eta^*} \big[ \varrho_\eta(x)- \varrho_{sc}(x)\big] \, \rd x $$ and the first term converges to $\varrho_{sc}(E)$ as long as $\eta^*\to 0$. Using \eqref{mcont}, the second term is smaller than $\e/3$ apart from a set of probability $\exp{(-cN\eta \e \min\{ 1, N\eta^2\e\})}$. Putting these estimates together, we arrive at \eqref{ncont}. \qed \bigskip {\bf Proof of Theorem \ref{thm:sc}.} Recall from the proof of Theorem \ref{du} that $B^{(k)}$ denotes the $(N-1)\times (N-1)$ minor of $H$ after removing the $k$-th row and $k$-th column. Similarly to the definition of $m(z)$ in \eqref{Sti}, we also define the Stieltjes transform of the density of states of $B^{(k)}$, $$ m^{(k)}= m^{(k)}(z) = \frac{1}{N-1}\, \text{Tr}\, \frac{1}{B^{(k)}-z} =\int_\bR \frac{\rd F^{(k)}(x)}{x - z}, $$ with the empirical counting function $$ F^{(k)}(x) = \frac{1}{N-1} \big| \, \big\{ \al \; : \; \lambda_{\al}^{(k)}\leq x \big\}\big|, $$ where $\lambda_{\al}^{(k)}$ are the eigenvalues of $B^{(k)}$. The spectral parameter $z$ is fixed throughout the proof and we will often omit it from the arguments of the Stieltjes transforms. {F}rom a formula analogous to \eqref{G11}, but applied to the $k$-th minor, we get \be m = \frac{1}{N}\sum_{k=1}^N G_z(k,k) =\frac{1}{N}\sum_{k=1}^N \frac{1}{ h_{kk} - z - \ba^{(k)} \cdot\frac{1}{B^{(k)}-z} \ba^{(k)}}, \label{mm} \ee where we recall that $\ba^{(k)}$ is the $k$-th column of $H$ without the diagonal element. Let $\E_k$ denote the expectation value w.r.t.\ the random vector $\ba^{(k)}$. 
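Averaging the quadratic form in \eqref{mm} over the vector $\ba^{(k)}$ alone reduces it to a trace; we record the short computation here. Expanding in the eigenbasis $\{\bu_\al^{(k)}\}$ of $B^{(k)}$ (which is independent of $\ba^{(k)}$) and assuming the standard normalization $\E\, |h_{ki}|^2 = \frac{1}{N}$, so that $\E_k\, |\ba^{(k)}\cdot \bu_\al^{(k)}|^2 = \frac{1}{N}$ for each fixed unit vector $\bu_\al^{(k)}$, we obtain

```latex
\E_k\; \ba^{(k)}\cdot \frac{1}{B^{(k)}-z}\,\ba^{(k)}
 \;=\; \sum_{\al=1}^{N-1} \frac{\E_k\,\big|\ba^{(k)}\cdot \bu_\al^{(k)}\big|^2}{\lambda_\al^{(k)}-z}
 \;=\; \frac{1}{N}\sum_{\al=1}^{N-1} \frac{1}{\lambda_\al^{(k)}-z}
 \;=\; \frac{1}{N}\,\mathrm{Tr}\,\frac{1}{B^{(k)}-z}\,.
```

Here the cross terms vanish since the entries of $\ba^{(k)}$ are independent with mean zero.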
Define the random variable \be X_k: =\ba^{(k)}\cdot \frac{1}{B^{(k)}-z} \ba^{(k)} - \E_k\; \ba^{(k)} \cdot \frac{1}{B^{(k)}-z} \ba^{(k)} \label{def:X} \ee and note that $$ \E_k\; \ba^{(k)} \cdot \frac{1}{B^{(k)}-z} \ba^{(k)} = \frac{1}{N} \sum_\al\frac{1}{\lambda_{\al}^{(k)}-z} = \Big( 1-\frac{1}{N}\Big)m^{(k)}. $$ With this notation, it follows from \eqref{mm} that \be \E \, m =-\E \; \Bigg[ \frac{1}{X_1 +(1-\frac{1}{N})\big[ m^{(1)} - \E m^{(1)}\big] + \big[ (1-\frac{1}{N}) \E m^{(1)} -\E m\big] +\big[ \E m + z\big] -h_{11} -\frac{1}{N} \E m^{(1)} } \Bigg] \; , \label{m-m} \ee where we used that the distribution of $X_k$ and $m^{(k)}$ is independent of $k$. Fix $\e >0$. The first term in the denominator of \eqref{m-m} is estimated in the following lemma, whose proof is given at the end of the section. \begin{lemma}\label{lm:x4} For the random variable $X_1$ from \eqref{def:X} we have \be \E |X_1|^4 \leq \frac{C(\log N)^2}{N^2\eta^2}, \label{x4}\ee in particular $$ \P\big\{ |X_1|\ge \e\big\}\leq \frac{C(\log N)^2}{N^2\eta^2\e^4}. $$ \end{lemma} For the second term in the denominator in \eqref{m-m} we apply the large deviation estimate from Theorem \ref{thm:fluc} to the Stieltjes transform of $B^{(1)}$: $$ \P \Big\{ \big| m^{(1)} - \E m^{(1)}\big|\ge \e\Big\} \leq e^{-cN\eta\e\, \min\{ 1, N\eta^2\e\}}\; . $$ For the third term, we use that $$ \Big| m - \Big(1-\frac{1}{N}\Big)m^{(1)}\Big| =\Big| \int \frac{\rd F(x)}{x-z} - \Big(1-\frac{1}{N}\Big)\int \frac{\rd F^{(1)}(x)}{x-z}\Big| = \frac{1}{N}\Big| \int \frac{NF(x)-(N-1)F^{(1)}(x)}{(x-z)^2} \rd x\Big|. $$ By the interlacing property between the eigenvalues of $H$ and $B^{(1)}$, we have $\max_x|NF(x)-(N-1)F^{(1)}(x)|\leq 1$, thus $$ \Big| m - \Big(1-\frac{1}{N}\Big)m^{(1)}\Big| \leq \frac{1}{N} \int \frac{\rd x}{|x-z|^2} \leq \frac{C}{N\eta}\, , $$ and therefore $|\E m - (1-N^{-1})\E m^{(1)}|\leq C(N\eta)^{-1}$. Finally, from $\E \, x_{11}^2 <\infty$ we have $$ \P \big\{ |h_{11}|\ge \e \big\} \le \frac{C}{N\e^2}. 
$$ We define the event $$ \Omega: = \big\{ \big| X_1\big|\ge \e \big\} \cup \big\{ \big| m^{(1)} - \E m^{(1)}\big|\ge \e \big\} \cup \big\{ \big| h_{11}\big|\ge \e \big\}\,; $$ then $$ \P (\Omega)\leq e^{-cN\eta\e\, \min\{ 1, N\eta^2\e\}} +\frac{C}{N\e^2}+ \frac{C(\log N)^2}{N^2\eta^2\e^4}. $$ Let $$ Y = X_1 + (1-N^{-1})\big[ m^{(1)} - \E m^{(1)}\big] + \big[ (1-N^{-1}) \E m^{(1)} -\E m\big] -h_{11}; $$ then, similarly to \eqref{gz}, we have $$ \Big| Y + \E \, m + z\Big| \ge \text{Im}\, \Big[ z + \ba^{(1)} \cdot \frac{1}{B^{(1)}-z}\, \ba^{(1)}\Big] \ge \eta \; . $$ We also have $|\E \; m + z|\ge \eta$ since $\text{Im} \, m \ge 0$. Set $\wt Y: = Y\cdot {\bf 1}_{\Omega^c}$; then $|\wt Y|\leq 4\e$ for $N$ large enough, since the third term in the definition of $Y$ is bounded by $C(N\eta)^{-1}\le \e$. Moreover, from \eqref{m-m} we have \be \E \, m + \frac{1}{ \E \, m + z } = \E \; {\bf 1}_{\Omega^c} \Big[ \frac{1}{ \E \, m + z } -\frac{1}{ \E \, m + z + \wt Y}\Big] + \E \; {\bf 1}_\Omega\Big[ \frac{1}{ \E \, m + z } -\frac{1}{ \E \, m + z + Y} \Big]\, . \label{EE} \ee The second term is bounded by $$ \Bigg| \E \; {\bf 1}_\Omega\Big[ \frac{1}{ \E \, m + z } -\frac{1}{ \E \, m + z + Y} \Big]\Bigg| \leq 2\eta^{-1} \P(\Omega) \leq \frac{C}{\e^4 \log N} \leq C\e $$ uniformly for $z\in S_{N,\kappa}$ if $N\ge N(\e)$. In the first term we use the stronger bounds $$ |\E \, m + z|\ge \text{Im} \, \E\, m(z) +\eta \qquad |\E \, m + z + \wt Y|\ge \text{Im} \, \E\, m(z) +\eta - 4\e $$ on the denominators. Thus, from \eqref{EE} we obtain \be \Big| \E \, m + \frac{1}{ \E \, m + z } \Big| \leq \frac{C\e}{ \big[ \text{Im} \, \E\, m(z)+\eta \big] \big[ \text{Im} \, \E\, m(z)+\eta -4\e\big]} + C\e\; \label{cont1} \ee uniformly for $z\in S_{N,\kappa}$. We note that the equation \be M+ \frac{1}{M+z} =0 \label{stab1} \ee has a unique solution with $\text{Im} \, M>0$ for any $z\in S_{N,\kappa}$, namely $M= m_{sc}(z)$, the Stieltjes transform of the semicircle law. Note that there exists $c(\kappa)>0$ such that $\text{Im} \, m_{sc}(E+i\eta) \ge c(\kappa)$ for any $|E|\leq 2-\kappa$, uniformly in $\eta$.
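For the reader's convenience we record the explicit formula for this solution: solving the quadratic equation $M^2+zM+1=0$, which is equivalent to \eqref{stab1}, gives
$$ m_{sc}(z)=\frac{-z+\sqrt{z^2-4}}{2}\,, $$
with the branch of the square root chosen so that $\text{Im}\, m_{sc}(z)>0$ for $\text{Im}\, z>0$. In particular, on the real axis $\text{Im}\, m_{sc}(E+i0)=\frac{1}{2}\sqrt{4-E^2}=\pi\varrho_{sc}(E)$, which is bounded below by a positive constant for $|E|\leq 2-\kappa$.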
The equation \eqref{stab1} is stable in the following sense. For any small $\delta$, let $M=M(z,\delta)$ be a solution to \be M + \frac{1}{M+z} = \delta \label{stab2} \ee with $\text{Im}\, M>0$. Subtracting \eqref{stab1} with $M=m_{sc}$ from \eqref{stab2}, we have $$ (M-m_{sc})\Big[m_{sc}+z - \frac{1}{M+z}\Big] = \delta(m_{sc}+z) $$ and $$ \text{Im}\Big[m_{sc}+z - \frac{1}{M+z}\Big] \ge \text{Im}\, m_{sc} \ge c(\kappa)\,. $$ Since the function $m_{sc}+z$ is bounded on the compact set $S_{N,\kappa}$, we get that \be | M-m_{sc}| \leq C_\kappa \delta \, \label{cont3} \ee for some constant $C_\kappa$ depending only on $\kappa$. Now we perform a continuity argument in $\eta$ to prove that \be |\E \, m(E+i\eta) - m_{sc}(E+i\eta)|\leq C^*\e \label{cont}\ee uniformly in $z\in S_{N,\kappa}$ with a sufficiently large constant $C^*$. Fix $E$ with $|E|\leq 2-\kappa$. For $\eta\in[\frac{1}{2}, 1]$, \eqref{cont} follows from \eqref{cont1} together with the stability bound \eqref{cont3}, provided $\e$ is small enough, since the right hand side of \eqref{cont1} is bounded by $C\e$. Suppose now that \eqref{cont} has been proven for some $\eta\in [2N^{-2/3}\log N,\, 1]$ and we want to prove it for $\eta/2$. By integrating the inequality $$ \frac{\eta/2}{(x-E)^2 + (\eta/2)^2} \ge \frac{1}{2} \frac{\eta}{(x-E)^2+\eta^2} $$ with respect to $\rd F(x)$ we obtain that $$ \text{Im}\, \E\, m\big(E+i\frac{\eta}{2}\big) \ge \frac{1}{2} \text{Im}\, \E\, m(E+i\eta) \ge \frac{1}{2}c(\kappa)- C^*\e > \frac{c(\kappa)}{4}\, $$ for sufficiently small $\e$. Thus the r.h.s.\ of \eqref{cont1} for $z=E+i\frac{\eta}{2}$ is bounded by $C\e$, the constant depending only on $\kappa$. Applying the stability bound \eqref{cont3}, we get \eqref{cont} for $\eta$ replaced with $\eta/2$. This completes the proof of Theorem \ref{thm:sc}. \qed \bigskip {\bf Proof of Lemma \ref{lm:x4}.} Recall that $\lambda_\al^{(1)}$ denote the eigenvalues and $\bu_\al^{(1)}$ denote the eigenvectors of $B^{(1)}$ for $\al=1,2, \ldots , N-1$.
We also defined $\xi_\al^{(1)} = |\bb^{(1)}\cdot \bu_\al^{(1)}|^2$ with the vector $\bb^{(1)} =(b_1, \ldots, b_{N-1})= \sqrt{N}\ba^{(1)} = \sqrt{N}(h_{12}, h_{13}, \ldots, h_{1N})^*$ whose components are i.i.d. random variables with real and imaginary parts distributed according to $\nu$. Dropping the sub- and superscripts, we have $$ X = \frac{1}{N} \sum_{\al =1}^{N-1} \frac{\xi_\al-1}{\lambda_\al-z} = \frac{1}{N} \sum_{\al} \frac{ \sum_{i,j} b_i\bar b_j \bu_\al(i)\bar \bu_\al (j) -1}{\lambda_\al-z} \, , $$ where all summations run from 1 to $N-1$. Since the distribution $\nu$ satisfies the spectral gap inequality \eqref{gap}, we have \be \E |X|^2 \leq C \E \sum_k \Bigg[ \Big| \frac{\pt X}{\pt b_k} \Big|^2 + \Big| \frac{\pt X}{\pt \bar b_k} \Big|^2 \Bigg]\;, \label{spg} \ee where $\pt/\pt b = \frac{1}{2}\big[\pt/\pt(\text{Re} \, b) -i \pt/\pt ( \text{Im}\, b)\big]$ and $\pt/\pt \bar b = \frac{1}{2}\big[\pt/\pt(\text{Re} \, b) +i \pt/\pt ( \text{Im}\, b)\big]$. We compute \be \begin{split} \sum_k \Bigg[ \Big| \frac{\pt X}{\pt b_k} \Big|^2 + \Big| \frac{\pt X}{\pt \bar b_k} \Big|^2 \Bigg]&= \sum_k \Bigg[ \Bigg| \frac{1}{N} \sum_{\al,j} \frac{ \bar b_j \bu_\al(k)\bar \bu_\al (j)}{\lambda_\al-z}\Bigg|^2 + \Bigg| \frac{1}{N}\sum_{\al,i} \frac{ b_i \bu_\al(i)\bar \bu_\al (k)}{\lambda_\al-z}\Bigg|^2 \Bigg]\\ &=\frac{1}{N^2} \sum_k \sum_{\al,\beta,i,j} \Bigg[ \frac{ \bar b_j b_i \bu_\al (k)\bar \bu_\beta(k) \bar \bu_\al (j)\bu_\beta(i)} {(\lambda_\al-z)(\lambda_\beta-\bar z)} + \frac{ b_i\bar b_j \bu_\al(i)\bar \bu_\beta(j) \bar\bu_\al (k) \bu_\beta(k) } {(\lambda_\al-z)(\lambda_\beta-\bar z)}\Bigg]\\ &= \frac{2}{N^2}\sum_{\al,i,j} \frac{ \bar b_j b_i \bar \bu_\al (j)\bu_\al(i)} {|\lambda_\al-z|^2}\;. \end{split} \label{pt2} \ee Here we used the orthonormality of the eigenvectors, $\sum_k \bu_\al (k)\bar\bu_\beta(k) = \delta_{\al,\beta}$.
We insert this into \eqref{spg} and take the expectation with respect to the $\bb$ variables, using $\E\, \bar b_j b_i=\delta_{ij}$ and the fact that the components of $\bb$ are independent of the $\lambda_\al$'s and $\bu_\al$'s. \[ \begin{split} \E |X|^2 & \leq \frac{C}{N^2} \E \sum_{\al,i,j} \frac{ \bar b_j b_i \bar \bu_\al (j)\bu_\al(i)} {|\lambda_\al-z|^2} \\ &= \frac{C}{N^2} \E \sum_\al \frac{ 1}{|\lambda_\al-z|^2} \leq \frac{C}{N\eta} \E\; \frac{1}{N} \sum_\al \frac{1}{|\lambda_\al -z|}\;. \end{split} \label{lon} \] To estimate the last term, we have \be \E \; \frac{1}{N} \sum_\al \frac{1}{|\lambda_\al -z|} \leq \int_{|\lambda|\le K_0} \frac{\E \, \varrho_\eta(\lambda)}{|\lambda-z|} \; \rd \lambda + \frac{1}{\eta} \P \{\max |\lambda_\al |\ge K_0\} \leq C\log N \, . \label{1mom} \ee In the last step, by choosing $K_0$ sufficiently large, we used the uniform estimate \eqref{Erho} on $\E \, \varrho_\eta(\lambda)$ and the bound \eqref{tail} for the eigenvalues of the $(N-1)\times (N-1)$ Wigner matrix $B^{(1)}$. Thus we have shown that \be \E \, |X|^2 \leq \frac{C\log N}{N\eta} \; . \label{xsec} \ee To estimate the fourth moment, we have $$ \E \, |X|^4 = \big[ \E\, |X|^2\big]^2 + \E \big[ |X|^2- \E\, |X|^2\big]^2 \leq \frac{(C\log N)^2}{(N\eta)^2} + C \E \sum_k \Bigg[ \Bigg| \frac{\pt |X|^2}{\pt b_k} \Bigg|^2 + \Bigg| \frac{\pt |X|^2}{\pt \bar b_k} \Bigg|^2 \Bigg] \, . $$ We will compute only the first term in the summation; the second one is treated identically.
We have $$ C\E \sum_k \Bigg| \frac{\pt |X|^2}{\pt b_k} \Bigg|^2 \leq 2C\E \Bigg[ \; |X|^2 \sum_k \Bigg(\Big| \frac{\pt X}{\pt b_k}\Big|^2 + \Big| \frac{\pt X}{\pt \bar b_k}\Big|^2\Bigg)\Bigg] \leq \frac{1}{4} \E \, |X|^4 + C \E \Bigg[ \sum_k \Bigg(\Big| \frac{\pt X}{\pt b_k}\Big|^2+ \Big| \frac{\pt X}{\pt \bar b_k}\Big|^2\Bigg) \Bigg]^2\, , $$ therefore \be \frac{1}{2} \E |X|^4 \leq \frac{(C\log N)^2}{(N\eta)^2} + C \E \Bigg[ \sum_k \Bigg(\Big| \frac{\pt X}{\pt b_k}\Big|^2+ \Big| \frac{\pt X}{\pt \bar b_k}\Big|^2\Bigg) \Bigg]^2 \, . \label{tre} \ee For the last term, we use \eqref{pt2}: \be \begin{split} \E \Bigg[ \sum_k \Bigg(\Big| \frac{\pt X}{\pt b_k}\Big|^2+ \Big| \frac{\pt X}{\pt \bar b_k}\Big|^2\Bigg) \Bigg]^2 &= \frac{1}{N^4}\E \Bigg[\sum_{\al,i,j} \frac{ \bar b_j b_i \bar \bu_\al (j)\bu_\al(i)} {|\lambda_\al-z|^2}\Bigg]^2\\ &= \frac{1}{N^4}\E \sum_{\al,\beta}\sum_{i,j,\ell,m} \frac{ \E \big[\bar b_j b_i \bar b_\ell b_m\big] \bar \bu_\al (j)\bu_\al(i) \bar \bu_\beta(\ell)\bu_\beta(m)} {|\lambda_\al-z|^2|\lambda_\beta-z|^2}\\ &= \frac{1}{N^4}\E \sum_{\al,\beta}\sum_{i\neq\ell} \frac{ |\bu_\al(i)|^2|\bu_\beta(\ell)|^2} {|\lambda_\al-z|^2|\lambda_\beta-z|^2} + \frac{1}{N^4}\E \sum_{\al,\beta}\sum_{i\neq j} \frac{ \bar \bu_\al(j)\bu_\al(i)\bar \bu_\beta(i)\bu_\beta(j)} {|\lambda_\al-z|^2|\lambda_\beta-z|^2}\\ & \hskip.5cm +\frac{c_4}{N^4} \, \E \sum_{\al,\beta}\sum_{i} \frac{ |\bu_\al(i)|^2|\bu_\beta(i)|^2} {|\lambda_\al-z|^2|\lambda_\beta-z|^2}\\ &\leq \frac{C}{N^4}\E \sum_{\al,\beta}\sum_{i,\ell} \frac{ |\bu_\al(i)|^2|\bu_\beta(\ell)|^2} {|\lambda_\al-z|^2|\lambda_\beta-z|^2}\\ &\leq \frac{C}{(N\eta)^2} \E \Bigg[ \frac{1}{N}\sum_\al \frac{1}{|\lambda_\al-z|}\Bigg]^2 .
\end{split} \label{lo} \ee In the second line we used that $$ \E \big[\bar b_j b_i \bar b_\ell b_m\big] =\delta_{ij}\delta_{\ell m} (1-\delta_{i\ell}) + \delta_{i\ell}\delta_{jm}(1-\delta_{im}) + c_4\delta_{ij}\delta_{j\ell}\delta_{\ell m}, $$ where $c_4 = \E |b|^4 =\int (x^2+y^2)^2 \rd\nu(x)\rd\nu(y)$. Finally, the last expectation value is estimated as $$ \E \Bigg( \frac{1}{N} \sum_\al \frac{1}{|\lambda_\al-z|} \Bigg)^2 \leq \E \Bigg( \int_{|\lambda|\leq K_0} \frac{\varrho_\eta(\lambda)} {|\lambda-z|}\rd\lambda \Bigg)^2 + \eta^{-2} \; \P\big\{ \max |\lambda_\al|\ge K_0\big\} \; . $$ The second term is exponentially small by \eqref{tail}. In the first term we use \eqref{rholde} to conclude that $\varrho_\eta(\lambda) \leq K$ uniformly in $\lambda$ apart from an event of exponentially small probability. Inserting this bound into \eqref{lo} and \eqref{tre}, we obtain the desired bound $\E |X|^4\leq C(\log N)^2/(N\eta)^2$ of Lemma \ref{lm:x4}. \qed \section{Extended states} Recall that the eigenvalues of $H$ are denoted by $\mu_1 < \mu_2 < \cdots < \mu_N$ and the corresponding normalized eigenvectors by $\bv_1, \bv_2,\ldots, \bv_N$. \begin{theorem}\label{thm:loc} Let $H$ be an $N\times N$ Wigner matrix as described in \eqref{wig} and satisfying the conditions (\ref{gM}) and (\ref{x2}). Then there exist positive constants $C_1$, $C_2$ and $c$, depending only on the constants $M$ in (\ref{gM}) and $\delta$ in (\ref{x2}), such that for any $q>0$ \be\label{eq:thmloc} \P \Big\{ \frac{1}{N}\Big| \big\{ \beta\; :\; \max_j |\bv_\beta(j)|^2 \ge \frac{C_1 q^2(\log N)^2}{N}\big\}\Big| \ge \frac{C_2}{q}\Big\} \leq e^{-c(\log N)^2}. \ee \end{theorem} {\it Remark.} Suppose that $\|\bv \|_\infty^2 \leq 1/L$ holds for an $\ell^2$-normalized vector $\bv=(v_1, v_2, \ldots)$. Then the support of $\bv$ contains at least $L$ elements. Thus the quantity $\| \bv\|_\infty^{-2}$ can be interpreted as the localization length of $\bv$.
With this interpretation, Theorem \ref{thm:loc} states that the density of eigenstates with a localization length $L\le Nq^{-2}$ (with logarithmic corrections) is bounded from above by $C/q$. It also follows from Theorem \ref{thm:loc} that, for every $p \geq 2$, \begin{equation}\label{eq:lp>2} \P \Big\{ \frac{1}{N}\Big| \big\{ \beta\; :\; \| \bv_\beta \|_{\ell^p} \geq C_1 N^{\frac{1}{p}-\frac{1}{2}} (\log N)^{2-\frac{4}{p}} \big\} \Big| \ge \frac{C_2}{\log N}\Big\} \leq e^{-c (\log N)^2}. \end{equation} In other words, with high probability, all the $N$ eigenvectors, apart from a fraction converging to zero as $N \to \infty$, have the expected delocalization properties up to logarithmic corrections. Note that, by duality, (\ref{eq:lp>2}) immediately implies that \begin{equation}\label{eq:lp<2} \P \Big\{ \frac{1}{N}\Big| \big\{ \beta\; :\; \| \bv_\beta \|_{\ell^p} \leq C_1^{-1} N^{\frac{1}{p}-\frac{1}{2}} (\log N)^{2-\frac{4}{p}} \big\} \Big| \ge \frac{C_2}{\log N}\Big\} \leq e^{-c (\log N)^2} \end{equation} for all $1 \leq p \leq 2$. In Section \ref{sec:Meta}, we will improve (\ref{eq:lp<2}) by showing, in Corollary \ref{cor:lp}, that, up to an event with exponentially small probability, every eigenvector $\bv$ of $H$ satisfies $\| \bv \|_p \geq c N^{\frac{1}{p} - \frac{1}{2}}$ for all $1 \leq p \leq 2$. \bigskip \begin{proof} For brevity, we introduce the notation \[ \theta = [\log N]^2, \] where $[\, \cdot \,]$ denotes the integer part. For $q>0$, let $O_q$ denote the set of eigenvalue indices $\al$ such that the distance between the eigenvalues $\mu_{\alpha+\theta}$ and $\mu_{\alpha-\theta}$ is at most $q\theta/N$: \be\label{Od} O_q = \Big\{\alpha: |\mu_{\alpha-\theta} - \mu_{\alpha+\theta}| \le \frac{q\theta}{N} \Big\}. \ee Here we used the notation $\mu_{\alpha} = \mu_1$ if $\alpha <1$ and $\mu_{\alpha} = \mu_N$ if $\alpha >N$.
Given $K_0>0$, we define $\Omega$ to be the event that all eigenvalues of $H$ lie in the interval $[-K_0, K_0]$, that is \[ \Omega = \{ \omega : \sigma (H) \subset [-K_0, K_0] \}\, . \] By (\ref{tail}), we have \[ \P ( \Omega) \geq 1 - e^{-cN} \] if $K_0$ is sufficiently large. We have \begin{equation}\begin{split} \P \Big\{ \frac{1}{N}\Big| &\big\{ \beta\; :\; \max_j |\bv_\beta(j)|^2 \ge \frac{C_1 q^2 \theta}{N}\big\} \Big| \ge \frac{C_2}{q}\Big\} \\ \leq \; &\P \Big\{ \frac{1}{N}\Big| \big\{ \beta\; :\; \max_j |\bv_\beta(j)|^2 \ge \frac{C_1 q^2 \theta}{N}\big\} \Big| \ge \frac{C_2}{q} \text{ and } \Omega \Big\} + e^{-cN} \\ \leq \; &\P \Big\{ \frac{1}{N}\Big| \big\{ \beta\; :\; \max_j |\bv_\beta(j)|^2 \ge \frac{C_1 q^2\theta}{N}\big\} \cap O_q \Big| \ge \frac{C_2}{2q} \text{ and } \Omega \Big\} + \P \Big\{ \Big|O^c_q \Big| \ge \frac{C_2 N}{2q} \text{ and } \Omega \Big\} + e^{-cN}\,. \end{split} \end{equation} A simple counting shows that on $\Omega$ the cardinality of the complement of $O_q$ is bounded by $$ |O_q^c|\leq \frac{CN}{q}\,: $$ indices in $O_q^c$ that are $2\theta$ apart give rise to disjoint intervals $(\mu_{\al-\theta},\mu_{\al+\theta})$, each of length larger than $q\theta/N$, inside $[-K_0,K_0]$, so that $|O_q^c|\le 2\theta\cdot 2K_0N/(q\theta)\le CN/q$. Therefore, by choosing $C_2$ sufficiently large, we have \begin{equation*} \begin{split} \P \Big\{ \frac{1}{N}\Big| \big\{ \beta\; :\; \max_j |\bv_\beta(j)|^2 \ge \frac{C_1 q^2\theta}{N}\big\} \Big| \ge \frac{C_2}{q}\Big\} \leq \; &\P \Big\{ \frac{1}{N}\Big| \big\{ \beta \in O_q\; :\; \max_j |\bv_\beta(j)|^2 \ge \frac{C_1 q^2\theta}{N}\big\}\Big| \ge \frac{C_2}{2q}\Big\} + e^{-cN} \\ \leq \; &\P \Big\{ \exists \beta \in O_q \; :\; \max_j |\bv_\beta(j)|^2 \ge \frac{C_1 q^2\theta}{N} \Big\} + e^{-cN} \\ \leq \; & N \sup_{\beta} \, \P \Big\{ \beta \in O_q \text{ and } \max_j |\bv_\beta(j)|^2 \ge \frac{C_1 q^2\theta}{N} \Big\} + e^{-cN}, \end{split} \end{equation*} where we used that $q \ll N$ (if $q \geq N^{1/2}$, (\ref{eq:thmloc}) is trivial). The theorem now follows from Lemma \ref{lm:ext} below.
\end{proof} \bigskip \begin{lemma}\label{lm:ext} Under the assumptions of Theorem \ref{thm:loc}, there exists a constant $c>0$ such that for any sufficiently large $C$ and for any $q>0$ we have \begin{equation}\label{ll} \sup_{\beta} \P \Big\{ \beta \in O_q \text{ and } \max_j |\bv_\beta(j)|^{2 } \ge \frac {C\theta q^2} N \Big\} \le e^{- c \theta}. \end{equation} \end{lemma} \begin{proof} It is enough to prove that, for arbitrary $\beta \in \{ 1, \dots, N \}$, \[ \P \Big\{ \beta \in O_q \text{ and } \max_j |\bv_\beta(j)|^{2 } \ge \frac {C\theta q^2} N \Big\} \le e^{- c \theta} . \] Therefore we fix $\beta$ and work on the event $\beta \in O_q$; we consider first the $j=1$ component $v_1=\bv_{\beta} (1)$ of $\bv_{\beta}$, and for brevity we drop the index $\beta$ from the notation $\mu_\beta$ and $\bv_\beta$. Set $\kappa:=q\theta/N$. Recall that $\lambda_\alpha$ denote the eigenvalues of $B$ in the decomposition \eqref{Hd}. Denote by $A$ the set $$ A:= \{\al\; :\; |\mu-\lambda_\al|\leq \kappa\}\; . $$ {F}rom the interlacing property of the eigenvalues, on the event $\beta\in O_q$ we have $|A|\ge \theta$ (if $\theta \leq \beta \leq N-\theta$, then actually $|A|=2\theta$). Recall the equations \eqref{ee} and \eqref{ee1} obtained from the eigenvalue equation $H\bv = \mu\bv$ and from the decomposition \eqref{Hd}. In particular, from \eqref{ee1} we find \be\label{w} \|\bw\|^2= \bw\cdot \bw = |v_1|^2 \ba^* (\mu -B)^{-2} \ba \, . \ee Since $\|\bw\|^2 = 1 - |v_1|^2$, we obtain \be\label{v2} |v_1|^2 = \frac{1}{1+ \ba^* (\mu -B)^{-2} \ba} = \frac{1}{1 + \frac{1}{N} \sum_{\alpha} \frac{\xi_{\alpha}}{(\mu - \lambda_{\alpha})^2}} \, , \ee recalling the notation $\xi_\al=N|\ba\cdot \bu_\al|^2$ where $\bu_\al$ is the normalized eigenvector of $B$ associated with the eigenvalue $\lambda_\al$. Thus we have \be\label{v3} |v_1|^2 \le \frac 1 { 1 + N^{-1} \kappa^{-2} \sum_{\alpha \in A} \xi_\alpha } = \frac {N \kappa^{2}} { N \kappa^{2} + \sum_{\alpha \in A} \xi_\alpha }. \ee Fix a small $\delta>0$.
Let $Q$ be the event \[ Q= \Big\{ \sum_{\alpha \in A} \xi_\alpha > \theta \delta \Big\}. \] On $Q$ we can bound $|v_1|^2$ by \[ {\bf 1}_Q|v_1|^2\leq {\bf 1}_Q \frac {N \kappa^{2}} { N \kappa^{2} + \sum_{\alpha\in A} \xi_\alpha } \le \delta^{-1} N \kappa^{2} \theta^{-1} = \frac {\theta q^2} {N\delta} \; , \] and for $\delta$ small enough, we have \[ \P(Q^c) \le e^{- c \theta} \] by Corollary \ref{cor:BL}. So far we have considered the $j=1$ component of $\bv$; the argument can be repeated for each $j=1,2,\ldots, N$. The event $Q$ above should thus carry a subscript $1$, and we define $Q_j$ accordingly. Clearly, $\P\{(\cap_j Q_j)^c\} \le N e^{- c \theta}\le e^{-c'\theta}$. On the other hand, on the set $\cap_j Q_j$ we have \[ \max_j |\bv_\beta(j)|^2 \le \frac {\theta q^2} {N \delta}\qquad \mbox{for any}\quad\beta\in O_q\, . \] \end{proof} \bigskip Theorem \ref{thm:loc} implies that all eigenvectors of $H$, apart from a fraction vanishing in the limit $N \to \infty$, are completely extended, in the sense that, up to logarithmic corrections, $\| \bv \|_{\infty} \leq \const / N^{1/2}$. The reason we cannot prove this bound for all eigenvectors of $H$ is the lack of information about the microscopic distribution of the eigenvalues of $H$ (and of its minors) on scales of order $O(1/N)$. {F}rom Corollary \ref{cor:sc}, which gives precise information on the eigenvalue distribution up to scales of order $O(N^{-2/3}\log N)$, we can nevertheless get a non-optimal bound on $\| \bv \|_{\infty}$ for all eigenvectors of $H$. \begin{proposition}\label{cor:linfty} Let $H$ be an $N\times N$ Wigner matrix as described in \eqref{wig} and satisfying the conditions (\ref{gM}), (\ref{x2}) and (\ref{logsob}). Fix $\kappa>0$, and assume that $C$ is large enough. Then there exists $c>0$ such that \[ \P \Bigg\{\exists \text{ $\bv$ with $H\bv=\mu\bv$, $\| \bv \|=1$, $\mu \in [-2+\kappa, 2-\kappa]$ and } \| \bv \|_\infty \ge \frac{C(\log N)}{N^{1/3}} \Bigg\} \leq e^{-c(\log N)^2}\;.
\] \end{proposition} \bigskip {\it Remark.} The bound $\|\bv\|_\infty\leq CN^{-1/3}\log N$ obtained in this proposition trivially implies the upper bound $\|\bv \|_p\leq C(\log N)^{1-2/p} N^{\frac{2}{3}\big(\frac{1}{p}-\frac{1}{2}\big)}$ for $2\leq p < \infty$ as well. \bigskip \begin{proof} Let $\eta = N^{-2/3} (\log N)$ and define \[ I_n = [ -2 + \kappa + (n-1) \eta , \, -2 + \kappa + n \eta ] \qquad \text{for } n = 1,\dots, n_{\text{max}} = [ (4-2\kappa) \eta^{-1} ] + 1, \] where $[x]$ denotes the integer part of $x\in\bR$. Then \[ \bigcup_{n=1}^{n_{\text{max}}} I_n \supset [-2 + \kappa, 2 - \kappa] \, \text{ and } |I_n| = \eta = N^{-2/3} (\log N) \qquad \text{for all } n=1,\dots, n_{\text{max}}. \] As before, let $\cN_{I}=|\{ \beta\; : \; \mu_\beta\in I\}|$ for any $I\subset \bR$. By choosing $\eta^*= \eta\log N$ in \eqref{ncont} in Corollary \ref{cor:sc}, and comparing $\cN_{I_n}$ with $\cN_{I_n^*}$, where $I_n^*$ is the interval of length $2\eta^*$ with the same center as $I_n$, we have \[ \P \left\{\min_n \cN_{I_n} \leq \frac{N\eta}{\log N} \right\} \leq e^{-c (\log N)^2}. \] Suppose that $\mu \in I_n$, and that $H\bv = \mu \bv$. {F}rom (\ref{v2}), we obtain \[ |v_1|^2 = \frac{1}{1+\frac{1}{N} \sum_{\alpha} \frac{\xi_{\alpha}}{(\lambda_{\alpha}-\mu)^2}} \leq \frac{1}{1+\frac{1}{4N\eta^2} \sum_{\lambda_\alpha \in I_n} \xi_{\alpha}} \leq \frac{4 N \eta^2}{\sum_{\lambda_\alpha \in I_n} \xi_{\alpha}}\,, \] and from the interlacing property, there exist at least $\cN_{I_n}-1$ eigenvalues $\lambda_\al$ in $I_n$.
Therefore \begin{equation} \begin{split} \P \Big( \exists &\text{ $\bv$ with $H\bv=\mu\bv$, $\| \bv \|=1$, $\mu \in [-2+\kappa, 2-\kappa]$ and } \| \bv \|_\infty \ge \frac{C(\log N)}{N^{1/3}} \Big) \\ &\leq \sum_{n=1}^{n_{\text{max}}} \sum_{j=1}^N \, \P \Big( \exists \text{ $\bv$ with $H\bv=\mu\bv$, $\| \bv \|=1$, $\mu \in I_n$ and } |v_j|^2 \ge \frac{C(\log N)^2}{N^{2/3}} \Big) \\ &\leq N n_{\text{max}} \sup_n \P \Big( \exists \text{ $\bv$ with $H\bv=\mu\bv$, $\| \bv \|=1$, $\mu \in I_n$ and } |v_1|^2 \ge \frac{C(\log N)^2}{N^{2/3}} \Big)\\ &\leq \const \, N \eta^{-1} \sup_n \P \left( \sum_{\lambda_\alpha \in I_n} \xi_{\alpha} \leq \frac{4 N^{1/3}}{C} \right) \\ &\leq \const \, N \eta^{-1} \sup_n \P \left( \sum_{\lambda_\alpha \in I_n} \xi_{\alpha} \leq \frac{4 N^{1/3}}{C} \text{ and } \cN_{I_n} \geq N^{1/3} \right) + \const \, N \eta^{-1} \sup_n \, \P \left(\cN_{I_n} \leq \frac{N\eta}{\log N} \right) \\&\leq \const \, N^{5/3} e^{-\delta N^{1/3}} + \const \, N^{5/3} e^{-c (\log N)^2} \leq e^{-c' (\log N)^2}\,, \end{split} \end{equation} where we used Corollary \ref{cor:BL} and chose $C$ sufficiently large. \end{proof} \section{Second moment of the Green function} In this section, we use the result of Theorem \ref{thm:loc} to obtain bounds on the second moment of the diagonal elements of the Green function of $H$. Recall the notation $\theta =[\log N]^2$. \begin{theorem}\label{thm:green} Let $H$ be an $N\times N$ Wigner matrix as described in \eqref{wig} and satisfying the conditions (\ref{gM}) and (\ref{x2}). Let $z=E+i\eta$ be the spectral parameter of the Green function $G_{E,\eta}=G_z=(H-z)^{-1}$. Then there exist $c,C>0$ such that for any $\eta$, \be \P\Bigg( \Big| \Big\{ E \, :\, \frac{1}{N}\sum_{j=1}^N |G_{E,\eta}(j,j)|^2 \ge C(\log N)^{12}\Big\}\Big|\ge \frac{C}{\log N} \Bigg) \leq e^{-c(\log N)^2}.
\ee \end{theorem} {\it Remark.} This theorem states that, with the exception of a very small probability, the second moment of the Green function, averaged over all sites, remains bounded (modulo logarithmic corrections) for all but a negligible set of energies in the sense of Lebesgue measure. \bigskip \begin{proof} For any $k\in \bbZ$, we define the random sets $$ M_k: = \Big\{ \al\, :\, \frac{2^k}{N} < |\mu_{\al-\theta} -\mu_{\al + \theta}|\leq \frac{2^{k+1}}{N}\Big\}\,, $$ where we used again the convention that $\mu_{\alpha} = \mu_1$ for all $\alpha \leq 1$ and $\mu_{\alpha} = \mu_N$ for all $\alpha \geq N$. For given $\kappa, K_0 >0$, let $$ \Omega_1: = \Big\{\bigcup_{k=0}^{\kappa \log N} M_k=\{1,2, \ldots N\} \Big\} \cap \Big\{ \sigma (H) \subset [ - K_0 , K_0] \Big\} . $$ From \eqref{tail} we know that $$ \P\Big\{ \sigma(H)\subset [-K_0 , K_0]\Big\}\ge 1- e^{-cN} $$ for a sufficiently large $K_0$, so we obtain that $M_k=\emptyset$ for all $k\ge \kappa \log N$, if $\kappa$ is large enough, apart from an exponentially small event. {F}rom Theorem \ref{du} we obtain that $$ \P \Big\{ M_k =\emptyset \; \mbox{for all}\; k <0 \Big\}\ge 1 - e^{-c\theta} $$ and therefore, if $K_0$ and $\kappa$ are large enough, $$ \P (\Omega_1)\ge 1- e^{-c\theta}. $$ In the sequel we will work on the set $\Omega_1$, i.e. we can assume that the index $k$ runs from $0$ to $(\const)\log N$ and that all eigenvalues lie in $[-K_0, K_0]$. By a simple counting, the cardinality of $M_k$ is bounded by \be |M_k|\leq (\const)2^{-k}N\theta\, \label{Mk} \ee on the event $\Omega_1$. For any $\alpha \in M_k$, denote \be \Omega_k(\alpha): = \Big\{\max_j |\bv_\al(j)|^2 \le C \frac{2^{2k}}{4N\theta}\Big\}, \label{omal} \ee where $\bv_\al$ is the normalized eigenvector associated with the eigenvalue $\mu_\al$. {F}rom Lemma \ref{lm:ext}, we obtain, for every $k=0,\dots,\kappa \log N$, $$ \P \Big\{ \bigcup_{\alpha \in M_k} \Omega^c_k(\alpha) \Big\} \le e^{-c\theta} $$ for some $c>0$.
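For the reader's convenience, we spell out the counting behind \eqref{Mk}. If $\al, \al'\in M_k$ with $\al'\ge \al+2\theta$, then $\mu_{\al'-\theta}\ge \mu_{\al+\theta}$, so the intervals $(\mu_{\al-\theta},\mu_{\al+\theta})$ and $(\mu_{\al'-\theta},\mu_{\al'+\theta})$ are disjoint, and by the definition of $M_k$ each of them has length bigger than $2^k/N$. On $\Omega_1$ all these intervals lie in $[-K_0,K_0]$, hence
$$ \Big\lfloor \frac{|M_k|}{2\theta}\Big\rfloor\cdot \frac{2^k}{N}\leq 2K_0\,, \qquad \mbox{i.e.}\qquad |M_k|\leq 2\theta\Big( \frac{2K_0 N}{2^k}+1\Big) \leq (\const)2^{-k}N\theta\,, $$
where in the last step we used that $2^k\leq 2K_0 N$ on $\Omega_1$.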
Let $$ \Omega: = \Omega_1\cap \bigcap_k\bigcap_{\alpha\in M_k} \Omega_k(\alpha), $$ then $$ \P (\Omega)\ge 1- e^{-c\theta}, $$ for some $c>0$. In the sequel we will work on the event $\Omega$. Define the following random set of energies: $$ \cE :=\bR\setminus\bigcup_k \bigcup_{\al\in M_k} \Big\{ E\; :\; |\mu_\al -E|\leq \frac{2^k}{N\theta^2}\Big\}. $$ The Lebesgue measure of the complement of $\cE$ is bounded by $$ |\cE^c|\leq \sum_k \sum_{\al\in M_k} \frac{2^{k+1}}{N\theta^2} \leq \frac{C}{\log N}. $$ Let $E\in \cE$ and $\omega\in \Omega$. We compute \be \begin{split} \frac{1}{N}\sum_{j=1}^N |G_{E,\eta}(j,j)|^2 \leq& \frac{1}{N}\sum_{j=1}^N\sum_{k,\ell=0}^{\kappa \log N} \sum_{\al\in M_k}\sum_{\beta\in M_\ell} \frac{|\bv_\al(j)|^2}{|\mu_\al-E|}\frac{|\bv_\beta(j)|^2}{|\mu_\beta-E|} \\ \leq & \frac{2}{N}\sum_{j=1}^N\sum_{k\le\ell}^{\kappa \log N} \sum_{\al\in M_k}\sum_{\beta\in M_\ell} \frac{|\bv_\al(j)|^2}{|\mu_\al-E|}\frac{|\bv_\beta(j)|^2}{|\mu_\beta-E|} \\ \leq & \frac{2}{N}\sum_{k\le\ell}^{\kappa \log N} \frac{2^{2k}C}{4N\theta} \sum_{\al\in M_k}\sum_{\beta\in M_\ell} \frac{1}{|\mu_\al-E|}\frac{1}{|\mu_\beta-E|} \end{split} \label{albeta} \ee In the second line we used the symmetry between $\al$ and $\beta$; in the third line we used the estimate on $|\bv_\al(j)|^2$ from \eqref{omal} and that $\sum_j |\bv_\beta(j)|^2=1$. We now perform the $\al\in M_k$ summation; the $\beta\in M_\ell$ summation will be identical. Let $I$ be an arbitrary interval of length $|I|= 2^{k}/N$. We claim that the number of eigenvalues $\mu_\al \in I$ with $\al\in M_k$ is at most $2\theta$. We label the elements of $M_k$ in increasing order: $\al_1<\al_2< \ldots <\al_{|M_k|}$. Let $\mu_{\alpha_i}$ be the smallest eigenvalue in the interval $I$ with index in $M_k$. If $i > |M_k| - 2\theta$, then there cannot be more than $2\theta$ eigenvalues with indices in $M_k$ in $I$.
Otherwise, if $i \leq |M_k| - 2\theta$, we have $$ \mu_{\al_{i+2\theta}} -\mu_{\al_i}\ge \mu_{\al_{i+\theta}+\theta} - \mu_{\al_{i+\theta}-\theta} >\frac{2^k}{N} $$ and therefore, since $|I| = 2^k/N$, $\mu_{\al_{i+2\theta}}$ cannot be in $I$. We now define the intervals $$ I_m: = \Big [E+\frac{2^k(m-\frac{1}{2})}{N}, E+\frac{2^k(m+\frac{1}{2})}{N}\Big] $$ for each $m\in \bbZ$, $|m|\leq CN\cdot 2^{-k}$. Clearly, each $I_m$ contains at most $2\theta$ eigenvalues $\mu_\al$ with index $\al\in M_k$. Notice that for any $\mu\in I_m$, $m\neq 0$, we have $|\mu-E|\ge 2^{k-1}|m|/N$. For $\mu_\al\in I_0$, with $\al\in M_k$, by the choice of $E\in \cE$, we have $|\mu_\al-E|\ge 2^k/(N\theta^2)$. Therefore \be \begin{split} \sum_{\al\in M_k} \frac{1}{|\mu_\al-E|} & \leq 2\theta \sum_{|m|\leq CN\cdot 2^{-k}} \max\Big\{ \frac{1}{|\mu_\al-E|} \; : \; \al\in M_k, \; \mu_\al\in I_m\Big\}\\ &\leq 2\theta\Big[ \max\Big\{ \frac{1}{|\mu_\al-E|} \; : \; \al\in M_k\Big\} + 2\sum_{m=1}^{ CN\cdot 2^{-k}} \frac{N}{2^{k-1}m}\Big] \\ & \leq \frac{ C\theta^3 N}{2^k}. \label{alsum} \end{split} \ee Using \eqref{alsum} both for the $\al$ and $\beta$ summations in \eqref{albeta}, we obtain $$ \frac{1}{N}\sum_{j=1}^N |G_{E,\eta}(j,j)|^2 \leq \frac{2}{N}\sum_{k=0}^{\kappa \log N} \sum_{\ell=k}^{\kappa \log N} \frac{2^{2k}}{4N\theta} \frac{ C\theta^3 N}{2^k}\frac{ C\theta^3 N}{2^\ell} \leq C\theta^6 $$ for any $E\in \cE$ and $\om\in \Omega$. This completes the proof of Theorem \ref{thm:green}. \end{proof} \section{Absence of localized eigenvectors}\label{sec:Meta} In this section we show that eigenvectors of Wigner random matrices, up to events with exponentially small probability, cannot be localized in a strong sense given by the following definition. \begin{definition} Let $L\ge 1$ be an integer and $\eta>0$.
We say that an $\ell^2$-normalized vector $\bv =(v_1, \ldots, v_N)\in \bC^N$ exhibits $(L,\eta)$-localization if there exists a set $A \subset \{ 1, 2, \dots ,N\}$ such that $|A|=L$ and $\sum_{j \in A^c} |v_j|^2 \leq \eta$. \end{definition} \begin{theorem}\label{thm:Meta} Let $H$ be an $N\times N$ Hermitian random matrix from the Wigner ensemble defined in \eqref{wig}, satisfying also the conditions (\ref{gM}) and (\ref{x2}). Suppose that $\eta$ and $\nu=L/N$ are sufficiently small. Then, with a constant $c>0$ that depends only on $M$ and $\delta$ from \eqref{gM} and \eqref{x2}, we have \[ \P \big\{ \exists \text{ a normalized eigenvector $\bv$ of $H$ exhibiting $(L,\eta)$-localization} \big\} \leq e^{-c N} . \] \end{theorem} \begin{proof} Since, by (\ref{tail}), \[ \P \Big\{ \exists \text{ eigenvalue $\mu$ of $H$ with } |\mu| \geq K_0 \Big\} \leq e^{-cN} \] if $K_0$ is large enough, it is sufficient to prove that \begin{equation}\label{eq:Met1} \sup_{\beta \in \{ 1, \dots ,N \}} \, \P \Big\{ \text{$\bv_{\beta}$ exhibits $(L,\eta)$-localization and }|\mu_{\beta}|\leq K_0 \Big\} \leq e^{-cN} \end{equation} where $\mu_1 \leq \mu_2 \leq \dots \leq \mu_N$ denote the eigenvalues of $H$, and $\bv_1, \bv_2,\dots, \bv_N$ the corresponding normalized eigenvectors. To prove (\ref{eq:Met1}), we fix $\beta$ and consider the eigenvector $\bv_{\beta}$ associated with the eigenvalue $\mu_{\beta}$; for brevity, we drop the index $\beta$ from $\mu_{\beta}$ and $\bv_{\beta}$. By the definition of $(L,\eta)$-localization and by the permutation symmetry \begin{equation}\label{eq:Meta1} \begin{split} \P \big\{\text{$\bv$ is $(L,\eta)$-localized and $|\mu| \leq K_0$}\big\} &=\P\Big\{\exists \; A \subset \{1, \dots ,N\} : |A|= L \text{ and } \sum_{j\in A^c} |v_j|^2 \leq \eta \text{ and $|\mu| \leq K_0$} \Big\} \\ &\leq {N \choose L} \, \P \Big\{ \sum_{j=L+1}^N |v_j|^2 \leq \eta \text{ and $|\mu| \leq K_0$} \Big\}\,.
\end{split} \end{equation} We introduce the notation $\bu= (v_1,\dots, v_L)^t$, $\bw=(v_{L+1}, \dots , v_N)^t$, and, for $j=L+1,\dots,N$, \[ \bc_j = \frac{1}{\sqrt{N}} \, (h_{j1}, h_{j2}, \dots, h_{jL})^* \in \bC^L \quad \text{and} \quad \bd_j = \frac{1}{\sqrt{N}} \, (h_{j,L+1}, \dots , h_{jN})^* \in \bC^{N-L}. \] {F}rom the eigenvalue equation $H\bv = \mu \bv$, we obtain, for all $j \geq L+1$, \[ \mu v_j = \bc_j \cdot \bu + \bd_j \cdot \bw \] and thus \[ \sum_{j=L+1}^N |\bc_j \cdot \bu|^2 = \sum_{j=L+1}^N |\mu v_j - \bd_j \cdot \bw|^2 \leq 2 \mu^2 \| \bw \|^2 + 2 \sum_{j=L+1}^N |\bd_j \cdot \bw|^2. \] Denoting by $D_1$ the $(N-L) \times L$ matrix with rows given by $\bc_{L+1}^*, \dots, \bc_N^*$ and by $D_2$ the $(N-L) \times (N-L)$ matrix with rows given by $\bd_{L+1}^*, \dots , \bd_N^*$, the last equation implies \[ ( \bu, D_1^* D_1 \bu ) \leq 2 \mu^2 \| \bw \|^2 + 2 ( \bw , D_2^* D_2 \bw ) \leq 2 \| \bw \|^2 \left( \mu^2 + \lambda_{\text{max}} (D_2^* D_2) \right) .
\] Thus, from (\ref{eq:Meta1}), we conclude that \begin{equation}\begin{split}\label{eq:lbd} \P \big\{\text{$\bv$ is $(L,\eta)$-localized} &\text{ and $|\mu| \leq K_0$} \big\} \\ &\leq {N \choose L} \, \P \{ \| \bw \|^2 \leq \eta \text{ and $|\mu| \leq K_0$} \} \\ &\leq {N \choose L} \, \P \left\{( \bu, D_1^* D_1 \bu) \leq 2 \eta \left( \mu^2 + \lambda_{\text{max}} (D_2^* D_2) \right) \text{ and $|\mu| \leq K_0$} \right\} \\ &\leq {N \choose L} \, \P \left\{ (1-\eta) \lambda_{\text{min}} (D_1^* D_1) \leq 2 \eta \left( K_0^2 + \lambda_{\text{max}} (D_2^* D_2) \right) \right\} \\ &\leq {N \choose L} \, \P \left\{ (1-\eta) \frac{N-L}{N} \lambda_{\text{min}} (X_1^* X_1) \leq 2 \eta \left( K_0^2 + \frac{N-L}{N} \lambda_{\text{max}} (X_2^* X_2) \right) \right\} \\ &\leq {N \choose L} \, \Big[ \P \big\{\lambda_{\text{min}} (X_1^* X_1) \leq c \big\} + \P \big\{\lambda_{\text{max}} (X_2^* X_2) \geq C\big\} \Big] \end{split} \end{equation} for any positive constants $c$ and $C$, provided $\eta$ and $\nu=L/N$ are small enough that $(1-\eta)(1-\nu) c \geq 2 \eta (K_0^2 + (1-\nu) C)$. Here $\lambda_{\text{min}} (F)$ and $\lambda_{\text{max}} (F)$ denote the minimal and, respectively, the maximal eigenvalue of the Hermitian matrix $F$, and $X_1 = \sqrt{N/(N-L)} \, D_1$, $X_2 = \sqrt{N/(N-L)} \, D_2$. {F}rom Lemma \ref{lm1} and Lemma~\ref{lm2} below, we know that, for any sufficiently small $\nu=L/N$, for sufficiently large $C$, and for $c<1/2$, there exists $\alpha >0$ such that \begin{equation} \begin{split} \P\big\{\lambda_{\text{min}} (X_1^* X_1) \leq c\big\} &\leq e^{-\alpha (N-L)} \qquad \text{and} \\ \P\big\{\lambda_{\text{max}} (X_2^* X_2) \geq C\big\} &\leq e^{-\alpha (N-L)}.
\end{split} \end{equation}
Thus, from (\ref{eq:lbd}), we obtain that, for $\eta>0$ and $\nu = L/N$ small enough,
\begin{equation} \begin{split} \P \big\{\text{$\bv$ is $(L,\eta)$-localized and $|\mu| \leq K_0$}\big\} \leq 2 {N \choose L} \, e^{-\alpha (N-L)} \leq \left( \frac{e}{\nu}\right)^{\nu N} e^{-\alpha N (1- \nu)} \leq e^{-\alpha N/4}. \end{split} \end{equation}
Since the constant $\alpha$ is independent of the eigenvalue $\mu$, (\ref{eq:Met1}) follows. \end{proof}

\bigskip

\begin{corollary}\label{cor:lp} Suppose that the random matrix $H$ satisfies the same assumptions as in Theorem \ref{thm:Meta}. Then, for every $\kappa >0$ sufficiently small, there exists a constant $c>0$ such that
\[ \P \left\{ \exists \text{ normalized } \bv \in \bC^N \text{ such that } H \bv = \mu \bv \text{ and } \| \bv \|_p \leq \kappa N^{\frac{1}{p} - \frac{1}{2}} \right\} \leq e^{-cN} \]
for any $1\leq p \leq 2$. \end{corollary}

{\it Remark.} If the eigenvector $\bv$ is uniformly extended, i.e. $|v_j|^2=\frac{1}{N}$ for all $j$, then $\| \bv \|_p = N^{1/p-1/2}$. This Corollary indicates that the behavior of all eigenvectors is consistent with the extended states hypothesis as far as the low $\ell^p$-norms ($1\leq p\leq 2$) are concerned.
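{\it Remark.} The final estimate in the proof of Theorem \ref{thm:Meta} above combines two elementary bounds that are used without comment; we spell them out here for the reader's convenience (this elaboration, including the particular choice of constants, is ours and is only meant to make the exponent count explicit).

```latex
% Elementary bounds behind the last chain of inequalities in the proof above.
% First, with L = \nu N, Stirling's bound L! \ge (L/e)^L gives
%   \binom{N}{L} \le \frac{N^L}{L!} \le \Big(\frac{eN}{L}\Big)^{L}
%               = \Big(\frac{e}{\nu}\Big)^{\nu N}
% (the factor 2 in front is absorbed for N large). Second, writing both
% remaining factors as a single exponential,
\begin{equation*}
\Big( \frac{e}{\nu} \Big)^{\nu N} e^{-\alpha N (1-\nu)}
= \exp \Big( - N \big[ \alpha (1-\nu) - \nu \log (e/\nu) \big] \Big)
\leq e^{-\alpha N/4} ,
\end{equation*}
% as soon as \nu is small enough that \alpha(1-\nu) \ge \alpha/2 and
% \nu \log (e/\nu) \le \alpha/4; note that \nu \log(e/\nu) \to 0 as \nu \to 0.
```

Here $\alpha$ is the constant provided by Lemma \ref{lm1} and Lemma \ref{lm2}, so the smallness threshold for $\nu$ depends only on $\alpha$.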
\bigskip

\begin{proof} {F}rom \eqref{tail} with a sufficiently large $K_0$ we have
\begin{equation}\label{eq:lp-Meta} \begin{split} \P \Big( \exists &\text{ normalized } \bv \in \bC^N \text{ such that } H \bv = \mu \bv \text{ and } \| \bv \|_p \leq \kappa N^{\frac{1}{p} - \frac{1}{2}} \Big) \\
\leq \; &e^{-cK_0N} + \P \Big( \exists \text{ normalized } \bv \in \bC^N \text{ such that } H \bv = \mu \bv, |\mu| \leq K_0 \text{ and } \| \bv \|_p \leq \kappa N^{\frac{1}{p} - \frac{1}{2}} \Big)\,. \end{split} \end{equation}
Now, if $\bv$ is a normalized eigenvector of $H$, associated with an eigenvalue $|\mu| \leq K_0$, we can apply Theorem~\ref{thm:Meta}. To this end, we fix $\nu$ and $\eta$ small enough, and let $L = \nu N$. After relabeling, we can assume that $|v_1| \geq |v_2| \geq \dots \geq |v_L| \geq |v_{L+1}| \geq \dots \ge |v_N|$. Then, by Theorem \ref{thm:Meta},
\[ \P \Big\{ \sum_{j\leq L} |v_j|^2 \geq \eta\Big\} \leq e^{- c N} . \]
Thus, with the exception of an event with exponentially small probability,
\[ L |v_L|^2 \leq \sum_{j=1}^L |v_j|^2 \leq \eta. \]
This implies that $|v_L| \leq \sqrt{\eta/L}$. Therefore
\[ 1-\eta \leq \sum_{j \geq L+1} |v_j|^2 \leq |v_L|^{2-p} \sum_{j\geq L+1} |v_j|^p \leq (\eta/L)^{1-p/2} \sum_{j=1}^N |v_j|^p \]
and hence
\[ \P \left( \sum_{j=1}^N |v_j|^p \leq L^{1-p/2} \frac{1-\eta}{\eta^{1-p/2}} = \kappa^p N^{1-p/2} \right) \leq e^{-c N} \]
which, together with (\ref{eq:lp-Meta}), completes the proof. \end{proof}

\bigskip

In the next two lemmas, we prove effective large deviation estimates on the largest and the smallest eigenvalues of certain covariance matrices.
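{\it Remark.} Before turning to the lemmas, we expand the interpolation step in the proof of Corollary \ref{cor:lp} above, which bounds the $\ell^2$-mass of the small coordinates by the $\ell^p$-norm (our elaboration).

```latex
% Since the coordinates are ordered, |v_j| \le |v_L| for every j \ge L+1,
% and 2-p \ge 0 for 1 \le p \le 2. Hence
\begin{equation*}
\sum_{j \geq L+1} |v_j|^2
= \sum_{j \geq L+1} |v_j|^{2-p} \, |v_j|^p
\leq |v_L|^{2-p} \sum_{j \geq L+1} |v_j|^p
\leq \Big( \frac{\eta}{L} \Big)^{1-p/2} \sum_{j=1}^N |v_j|^p ,
\end{equation*}
% where the last step used |v_L| \le \sqrt{\eta/L}.
```

Combined with $\sum_{j \geq L+1} |v_j|^2 \geq 1-\eta$, this yields the displayed lower bound on $\| \bv \|_p^p$.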
\begin{lemma}\label{lm1} Let $X=(X_{ij})$ be a complex $N\times L$ matrix, with $N >L$, such that, for all $i=1,\dots,N$ and $j=1,\dots,L$, $\text{Re}\, X_{ij}, \text{Im} \, X_{ij}$ are i.i.d. random variables with
\[ \E \, X_{ij} = 0, \qquad \E \, |X_{ij}|^2 = \frac{1}{2N} \qquad \text{and } \quad \E \, e^{\delta N|X_{ij}|^2} \leq K_\delta <\infty \]
for some $\delta>0$ and with $K_\delta$ independent of $N$.
\begin{itemize}
\item[i)] For $C>0$ large enough
\[ \P (\lambda_{\text{max}} (X^* X) \geq C) \leq e^{-c_0 C N} \]
for a constant $c_0$ depending only on $\delta$.
\item[ii)] For $\nu= L/N$ sufficiently small and for all $0 < c < 1/2$, there exists $\alpha_0 > 0$ such that
\[ \P (\lambda_{\text{min}} (X^* X) \leq c) \leq e^{-\alpha N} \]
for all $\alpha < \alpha_0$.
\end{itemize}
\end{lemma}

{\it Remark.} The precise large deviation rate function for $\lambda_{\min}$ and $\lambda_{\max}$ was determined recently in \cite{FHK} in the limit $N\to\infty$ under the additional condition that $L=o(N/\log\log N)$. Our proof is somewhat different and it also applies to the case $L\leq \nu N$, with $\nu$ small enough, but the decay rate we obtain is not precise. The history and earlier results in this direction were reviewed in \cite{FHK} and we shall not repeat them here.

\bigskip

\begin{proof} We begin by proving i). First, fix $\bz \in \bC^L$, with $\| \bz \| =1$. We claim that
\begin{equation}\label{Meta1b} \P\big\{ ( \bz,X^* X \bz) \geq C\big\} \leq e^{-c_1 CN} \end{equation}
for a constant $c_1$ depending only on $\delta$. In fact, for arbitrary $\kappa >0$,
\begin{equation} \begin{split} \P \big\{ (\bz, X^* X \bz) \geq C\big\} &\leq e^{-\kappa C N} \E \, e^{\kappa N (\bz, X^* X \bz)} \\ &=e^{-\kappa C N} \E \, e^{\kappa N \sum_{j=1}^N |\bX_j \cdot \bz|^2} \end{split} \end{equation}
where, for $j=1, \dots ,N$, $\bX_j = (X_{j1}, \dots ,X_{jL})^*$ denotes the adjoint of the $j$-th row of $X$.
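{\it Remark.} The first equality in \eqref{Meta3} below rests on a Gaussian (Hubbard--Stratonovich) linearization of the exponent; for the reader's convenience we state it for a single complex number $w$, with the normalization written out explicitly (this elaboration is ours).

```latex
% Gaussian linearization: for w \in \bC and \kappa > 0,
\begin{equation*}
e^{\kappa |w|^2}
= \frac{1}{4\pi} \int_{\bR \times \bR} \rd q \, \rd p \;
e^{-\frac{q^2+p^2}{4}} \,
e^{\sqrt{\kappa} \left( q \, \text{Re}\, w + p \, \text{Im}\, w \right)} ,
\end{equation*}
% which follows by completing the square,
%   -q^2/4 + \sqrt{\kappa}\, q \, \text{Re}\, w
%     = -\big( q/2 - \sqrt{\kappa}\, \text{Re}\, w \big)^2 + \kappa (\text{Re}\, w)^2,
% together with the analogous identity in p.
```

Applying this with $w = \bY \cdot \bz$ and exchanging $\E$ with the $(q,p)$-integration (Fubini) produces the product structure in \eqref{Meta3}.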
Since different rows of $X$ are independent and identically distributed, we find
\begin{equation}\label{Meta2} \begin{split} \P \big\{ (\bz, X^* X \bz) \geq C\big\} &\leq e^{-\kappa C N} \prod_{j=1}^N \E\, e^{\kappa N |\bX_j \cdot \bz|^2} = e^{-\kappa C N} \left( \E \, e^{\kappa N |\bX_1 \cdot \bz|^2} \right)^N. \end{split} \end{equation}
Consider now the random vector $\bY= \sqrt{N} \bX_1 = (y_1, \dots , y_L)^*$ with i.i.d. components. We have
\begin{equation}\label{Meta3} \begin{split} \E \, e^{\kappa |\bY \cdot \bz|^2} &= \const \int_{\bR\times\bR} \rd q \rd p \, e^{-\frac{q^2 + p^2}{4}} \, \E\, e^{\sqrt{\kappa} \left(q \text{Re} \, (\bY\cdot \bz) + p \, \text{Im} \, (\bY \cdot \bz) \right) } \\ &= \const \int_{\bR\times\bR} \rd q \rd p \, e^{-\frac{q^2 + p^2}{4}} \, \prod_{i=1}^L \E \, e^{\sqrt{\kappa} \left( q \, \text{Re} (z_i y_i) + p \, \text{Im} (z_i y_i) \right)}\\ &= \const \int_{\bR\times\bR} \rd q \rd p \, e^{-\frac{q^2 + p^2}{4}} \, \prod_{i=1}^L \E \, e^{\sqrt{\kappa} \left( q \, \text{Re} z_i + p\, \text{Im} z_i \right) \text{Re} y_i} \, \E \, e^{\sqrt{\kappa} \left(- q \, \text{Im} z_i + p \, \text{Re} z_i \right) \text{Im} y_i} \end{split} \end{equation}
with an appropriate normalization constant. Since $\E \, \text{Re} y_i = 0$ we find, for arbitrary $r \in \bR$,
\begin{equation} \E \, e^{ r \text{Re} \, y_i} = \sum_{n \geq 0} \frac{r^n}{n!} \E (\text{Re} \,y_i)^n = 1 + \sum_{n \geq 1}\frac{r^{2n}}{(2n)!} \E (\text{Re} \, y_i)^{2n} + \sum_{n \geq 1} \frac{r^{2n+1}}{(2n+1)!} \E (\text{Re} \, y_i)^{2n+1}. \end{equation}
Using that, for all $n \geq 1$,
\[ \frac{r^{2n+1}}{(2n+1)!} \E (\text{Re} \, y_i)^{2n+1} \leq \frac{r^{2n}}{(2n)!} \E (\text{Re}\, y_i)^{2n} + \frac{r^{2n+2}}{(2n+2)!} \E (\text{Re}\, y_i)^{2n+2}, \]
we obtain that
\begin{equation} \begin{split} \E \, e^{ r \text{Re} \, y_i} &= \sum_{n \geq 0} \frac{r^n}{n!} \E (\text{Re} \,y_i)^n \leq 1 + 3 \sum_{n \geq 1}\frac{r^{2n}}{(2n)!} \E (\text{Re} \, y_i)^{2n} \leq 1 + \sum_{n \geq 1}\frac{n!
\, (3r)^{2n}}{\delta^{2n} (2n)!} \E e^{\delta (\text{Re} \, y_i)^2} \\ & \leq 1 + \sum_{n \geq 1} \frac{(3 r)^{2n} K_{\delta}^n}{n! \delta^{2n}} \leq e^{9 K_{\delta} r^2/ \delta^2}, \end{split} \end{equation}
where we chose $\delta >0$ small enough, and we used that $K_{\delta} = \E e^{\delta y^2}= \int e^{\delta y^2}e^{ - g(y)}\rd y < \infty$. Since $\|\bz\|=1$, from (\ref{Meta3}) we obtain
\begin{equation}\label{Meta4} \E \, e^{\kappa |\bY \cdot \bz|^2} \leq \const \int_{\bR\times\bR} \rd q \rd p \, e^{-(q^2+p^2) \left(\frac{1}{4} - 36 \kappa (K_{\delta}/\delta^2)\right)} \leq \const \end{equation}
by choosing $\kappa >0$ small enough. Inserting this bound into (\ref{Meta2}), and choosing $C$ large enough, we find (\ref{Meta1b}).

\medskip

Now, for fixed $0< \e <1/4$, we choose a family $\{ \bz_j \}_{j\in I}$ with $\bz_j \in \bC^L$, $\| \bz_j\| \leq 1$ for all $j \in I$, such that $|I| \leq (2/\e)^{2L}$, and such that, for all $\bz \in \bC^{L}$ with $\| \bz \|=1$ there exists $j \in I$ with $\| \bz - \bz_j \| \leq \e$. For a suitable $j \in I$, we have
\begin{equation}\begin{split} \| X^* X \| &= \sup_{\bz \in \bC^L, \, \| \bz \| = 1} (\bz, X^* X \bz) = (\bz_{\text{max}}, X^* X \bz_{\text{max}}) \\ &\leq 2 \|\bz_{\text{max}} - \bz_j \| \| X^* X \| + (\bz_j, X^* X \bz_j) \leq 2 \e \| X^* X \| + (\bz_j, X^* X \bz_j) \end{split}\end{equation}
and thus, if $\lambda_{\text{max}} (X^* X) \geq C$, there must be at least one $j\in I$ such that
\[ (\bz_j, X^* X \bz_j) \geq (1-2\e) C. \]
Therefore, since $|I| \leq (2/\e)^{2L}$, we can apply (\ref{Meta1b}) to obtain
\begin{equation}\begin{split} \P \big\{ \lambda_{\text{max}} (X^* X) \geq C\big\} &\leq \P \big\{\exists j \in I : (\bz_j, X^* X \bz_j) \geq (1-2\e) C\big\} \\ & \leq (2/\e)^{2L} \sup_{j} \, \P\big\{(\bz_j, X^* X \bz_j) \geq (1-2\e) C\big\} \\ &\leq (2/\e)^{2L} e^{-c_1 (1-2\e) C N} , \end{split} \end{equation}
and thus, for $C$ large enough (and since $L \leq N$ and $\e < 1/4$),
\[ \P \big\{\lambda_{\text{max}} (X^* X) \geq C\big\} \leq e^{-(c_1/2) C N} .
\]
Next, we prove ii). Again, we first fix $\bz \in \bC^L$, with $\| \bz \| = 1$, and prove that, for $0 < c < 1/2$,
\begin{equation}\label{eq:ii1} \P\big\{ (\bz, X^* X \bz) \leq c \big\} \leq e^{-\al N} \end{equation}
for a sufficiently small $\al>0$, depending on $c$. In fact, for arbitrary $\beta>0$
\begin{equation} \begin{split} \P\big\{ (\bz, X^*X \bz) \leq c\big\} &\leq e^{\beta N c} \, \E \,e^{-\beta N (\bz, X^*X \bz)} = \left( e^{\beta c} \, \E \, e^{-\beta | \bY \cdot \bz|^2} \right)^N, \end{split}\label{eq:end} \end{equation}
where we defined, as before, $\bY=\sqrt{N} \bX_1 = (y_1, \dots , y_L)^*$. Since $e^{-\beta r} \leq 1 - \beta r + \beta^2 r^2/2$ for all $r \geq 0$, we obtain
\[ \E \, e^{-\beta | \bY \cdot \bz|^2} \leq 1 - \beta \E \, |\bY \cdot\bz|^2 + \frac{\beta^2}{2} \E |\bY \cdot \bz|^4 = 1 - \frac{\beta}{2} + O(\beta^2) \leq e^{-\beta/2 + O(\beta^2)} \]
if $\beta>0$ is sufficiently small, depending only on $\E \, |y_1|^4$; here we used that $\E \, |\bY \cdot \bz|^2 = \sum_{i=1}^L |z_i|^2 \, \E \, |y_i|^2 = 1/2$. Therefore, we find
\[ e^{\beta c}\, \E \, e^{-\beta | \bY \cdot \bz|^2} \leq e^{-\beta \big(\frac{1}{2}-c\big)+ O(\beta^2)} \]
which proves (\ref{eq:ii1}) from \eqref{eq:end} with a sufficiently small $\al$, depending on $c$. To conclude the proof of ii), we fix $\e >0$ and a family $\{ \bz_j \}_{j\in I}$ with $\bz_j \in \bC^L, \| \bz_j\| \leq 1$ for all $j \in I$, such that, for all $\bz \in \bC^{L}$ with $\| \bz \|=1$ there exists $j \in I$ with $\| \bz - \bz_j \| \leq \e$ and $|I| \leq (2/\e)^{2L}$. Then, for a suitable $j \in I$,
\begin{equation}\begin{split} \lambda_{\text{min}} (X^*X) &= \inf_{\| \bz \| = 1} (\bz, X^* X \bz) = (\bz_{\text{min}}, X^* X \bz_{\text{min}}) \\ &\geq (\bz_j, X^* X \bz_j) - 2 \| \bz_{\text{min}} - \bz_j \| \lambda_{\text{max}} (X^*X) \\ &\geq (\bz_j, X^* X \bz_j) - 2 \e \lambda_{\text{max}} (X^*X).
\end{split}\end{equation}
Therefore, we find
\begin{equation} \begin{split} \P\big\{ \lambda_{\text{min}} (X^* X) \leq c\big\} \leq \; &\P \big\{\lambda_{\text{min}} (X^*X) \leq c \; \text{and} \; \lambda_{\text{max}} (X^*X) \leq C\big\} + \P \big\{\lambda_{\text{max}} (X^*X) \geq C\big\} \\ \leq \; &\P \big\{\exists j : (\bz_j, X^* X \bz_j) \leq c + 2 \e C\big\} + \P \big\{\lambda_{\text{max}} (X^*X) \geq C\big\} \\ \leq \; & \left(\frac{2}{\e}\right)^{2L} \, \P \big\{(\bz_1, X^* X \bz_1) \leq c + 2 \e C\big\} + \P \big\{\lambda_{\text{max}} (X^*X) \geq C\big\}. \end{split} \end{equation}
Part ii) now follows using the result of part i) with a sufficiently large $C$, choosing $\e >0$ sufficiently small (so that $c + 2\e C < 1/2$) and using that $L/N =\nu$ is small enough. \end{proof}

\begin{lemma}\label{lm2} Let $X$ be an $N\times N$ Hermitian random matrix as described in \eqref{wig}, and assume condition \eqref{x2}. Then, for $K_0>0$ large enough,
\[ \P \big\{ \lambda_{\text{max}} (X) \geq K_0\big\} \leq e^{-c_0 K_0^2N} \]
with a constant $c_0$ depending only on $\delta$ in \eqref{x2}. \end{lemma}

\begin{proof} Fix $\bz \in \bC^N$ with $\| \bz\| =1$.
Then, with the notation $\bX_j = (X_{j1}, \dots , X_{jN})^*$ for $j=1, \dots, N$,
\begin{equation}\label{1} \begin{split} \P \big\{ (\bz,X^* X \bz) \geq C\big\} &\leq e^{-\kappa C N} \E \, e^{\kappa N \sum_j |\bX_j \cdot \bz|^2} \\ & \leq e^{-\kappa C N} \E \, e^{2 \kappa N \sum_j |\sum_{l\leq j} X_{jl} z_l|^2} e^{2 \kappa N\sum_j |\sum_{l > j} X_{jl} z_l|^2} \\ & \leq e^{-\kappa C N} \left(\E \, e^{4 \kappa N \sum_j |\sum_{l\leq j} X_{jl} z_l|^2}\right)^{1/2} \left( \E \, e^{4 \kappa N \sum_j |\sum_{l > j} X_{jl} z_l|^2} \right)^{1/2}. \end{split} \end{equation}
Next, choosing $\kappa>0$ sufficiently small, we can show that, similarly to (\ref{Meta4}),
\begin{equation} \E \, e^{4 \kappa N \sum_j |\sum_{l\leq j} X_{jl} z_l|^2} = \prod_{j=1}^N \E \, e^{4\kappa N |\sum_{l \leq j} X_{jl} z_l|^2} \leq \const^N \end{equation}
and
\begin{equation} \E \, e^{4 \kappa N \sum_j |\sum_{l > j} X_{jl} z_l|^2} = \prod_{j=1}^N \E \, e^{4\kappa N |\sum_{l > j} X_{jl} z_l|^2} \leq \const^N \end{equation}
because $\sum_{l\leq j} |z_l|^2 \leq 1$ and $\sum_{l >j} |z_l|^2 \leq 1$. Thus, from (\ref{1}), we have, for $C$ large enough,
\[ \P \big\{ (\bz,X^* X \bz) \geq C\big\} \leq e^{-c_1 C N} \]
for a constant $c_1$ only depending on $\delta$. {F}rom the last equation, the lemma follows with $C=K_0^2$ by the same argument that was used at the end of the proof of part i) of Lemma \ref{lm1}. \end{proof}

\thebibliography{hhhh}

\bibitem{AGZ} Anderson, G. W., Guionnet, A., Zeitouni, O.: Lecture notes on random matrices. Book in preparation.

\bibitem{B} Bai, Z.: Convergence rate of expected spectral distributions of large random matrices. Part I. Wigner matrices. {\it Ann. Probab.} {\bf 21} (1993), no. 2, 625--648.

\bibitem{BMT} Bai, Z. D., Miao, B., Tsay, J.: Convergence rates of the spectral distributions of large Wigner matrices. {\it Int. Math. J.} {\bf 1} (2002), no. 1, 65--90.
\bibitem{BL} Brascamp, H.J., Lieb, E.H.: On extensions of the Brunn-Minkowski and Pr\'ekopa-Leindler theorems, including inequalities for log-concave functions, and with an application to the diffusion equation. {\it J. Funct. Anal.} {\bf 22} (1976), 366--398.

\bibitem{D} Deift, P.: Orthogonal polynomials and random matrices: a Riemann-Hilbert approach. {\it Courant Lecture Notes in Mathematics} {\bf 3}, American Mathematical Society, Providence, RI, 1999.

\bibitem{FHK} den Boer, A.F., van der Hofstad, R., Klok, M.J.: Large deviations for eigenvalues of sample covariance matrices. Preprint (2007).

\bibitem{GZ} Guionnet, A., Zeitouni, O.: Concentration of the spectral measure for large matrices. {\it Electronic Comm. in Probability} {\bf 5} (2000), Paper 14.

\bibitem{J} Johansson, K.: Universality of the local spacing distribution in certain ensembles of Hermitian Wigner matrices. {\it Comm. Math. Phys.} {\bf 215} (2001), no. 3, 683--705.

\end{document}