% Keywords: Gibbs measures, specifications, non-quasilocality,
% non-Gibbsian measures, renormalization transformations, spin-flip
% dynamics, disordered models
\documentclass[11pt,a4paper]{article}
\usepackage{amsfonts,amssymb,amsmath}
\usepackage[english]{babel}
%\usepackage{showkeys}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% change the catcode of @ (allows names containing @ after \begin{document})
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\makeatletter
%
% Equations numbered within sections
%
\@addtoreset{equation}{section}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Redeclaration of \makeatletter; no @-expressions may be used from now on
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\makeatother
%%%%%%%%%%% NUMINSEC.STY %%%%%%%%%%%%%%%%%%% BEGINNING
%%%%%% Equations, Theorems and such use same counter %%%%%%%%%%%
\newtheorem{theorem}[equation]{Theorem}
\newtheorem{lemma}[equation]{Lemma}
\newtheorem{corollary}[equation]{Corollary}
\newtheorem{proposition}[equation]{Proposition}
\newtheorem{definition}[equation]{Definition}
\newtheorem{remark}[equation]{Remark}
\newtheorem{Remarks}[equation]{Remarks}
\newenvironment{remarks}{\begin{Remarks}\rm}{\end{Remarks}}
\newtheorem{example}[equation]{Example}
\newtheorem{exercise}[equation]{Exercise}
%%%%%% Counters restarted with each new section %%%%%%%%%%%%%
\renewcommand\theequation{\thesection.\arabic{equation}}
\renewcommand\thefigure{\thesection.\arabic{figure}}
\renewcommand\thetable{\thesection.\arabic{table}}
%\renewcommand\thefigure{\thesection.\@arabic\c@figure}
%\renewcommand\thetable{\thesection.\@arabic\c@table}
%%%%%% Page format %%%%%%%%%%%
%\renewcommand{\baselinestretch}{1.5}
\oddsidemargin -7mm % Remember this is 1 inch less than actual
\evensidemargin 7mm
\textwidth 17cm
\topmargin -9mm % Remember this is 1 inch less than actual
\headsep 20pt % Between head and body of text
\textheight 22cm
%%%%%%%%%%% NUMINSEC.STY %%%%%%%%%%%%%%%%%%% END
%%%%%%%%%% MACROS %%%%%%%%%%
\def\reff#1{(\ref{#1})}
\def\sobre#1#2{\lower 1ex \hbox{ $#1 \atop #2 $ } }
\def\one{{\bf 1}\hskip-.5mm}
\def\proof{\noindent{\bf Proof. }}
\def\proofof#1{\noindent{\bf Proof of #1. }}
\def\bydef{\mathop{:=}}
\def\defby{\mathop{=:}}
\def\eee{{\rm e}}
\def\square{\ifmmode\sqr\else{$\sqr$}\fi}
\def\sqr{\vcenter{
\hrule height.1mm \hbox{\vrule width.1mm
height2.2mm\kern2.18mm\vrule width.1mm} \hrule height.1mm}} %
%This is a slimmer sqr.
\def\qed{ \square}
\def\dist{{\rm dist}}
\def\diam{{\rm diam}\,}
\def\comp#1{{#1}^{\rm c}}
\def\1{\rlap{\mbox{\small\rm 1}}\kern.15em 1}
\def\ind#1{\1_{#1}}
\def\build#1_#2^#3{\mathrel{\mathop{\kern 0pt#1}\limits_{#2}^{#3}}}
\def\tend#1#2#3{\build\hbox to 12mm{\rightarrowfill}_{#1\rightarrow
#2}^{#3}}
\def\cor#1{\build\longleftrightarrow_{}^{#1}}
\def\tendn{\tend{n}{\infty}{}}
\def\converge#1#2#3{\build\hbox to
15mm{\rightarrowfill}_{\hbox{\scriptsize #3}}^{#1\rightarrow #2}}
\def\converg#1#2#3{\build\hbox to
15mm{\rightarrowfill}_{\hbox{\scriptsize #3}}^{#1\uparrow #2}}
\def\embf#1{\emph{\bf #1}}
\def\rest#1{|_{#1}}
%%%%%%%% ABBREVIATIONS -- PROBABILISTIC MACROS %%%%%%%%%%%
\newcommand{\lat}{\mathbb{L}}
\newcommand{\sing}{S}
\newcommand{\card}[1]{\left|#1\right|}
\newcommand{\cardd}[1]{{\rm card}\left(#1\right)}
\newcommand{\norm}[1]{\left\|#1\right\|}
\newcommand{\tribu}{{\mathcal F}}
\newcommand{\topo}{{\mathcal T}}
\newcommand{\bonds}{{\mathcal B}}
\newcommand{\buno}{{\mathcal B}_1}
\def\buildd#1#2{\mathrel{\mathop{\kern 0pt#1}\limits_{#2}}}
\newcommand{\subneq}{\buildd{\subset}{\neq}}
\newcommand{\supneq}{\buildd{\supset}{\neq}}
\newcommand{\spec}{\widehat\omega'}
\begin{document}
\bibliographystyle{plain}
\title{Gibbsianness and non-Gibbsianness in lattice random fields}
\author{Roberto Fern\'andez%\footnotemark[1]}%\ \footnotemark[2]}
\\
\small{ Laboratoire de Math\'ematiques Rapha{\"e}l Salem}\\ \small{UMR 6085 CNRS-Universit\'e de Rouen}\\
\small{Avenue de l'Universit\'e, BP 12}\\
\small{F-76801 Saint \'Etienne du Rouvray, France}\\
\small{\bf{\tt roberto.fernandez@univ-rouen.fr}}}
%\date{October, 2004}
\maketitle
%\renewcommand{\thefootnote}{\fnsymbol{footnote}}
%\footnotetext[1]{Laboratoire de
%Math\'ematiques Rapha{\"e}l Salem, UMR 6085 CNRS-Universit\'e de
%Rouen, Avenue de l'Universit\'e, B.P.12, F76801 St Etienne du
%Rouvray, France;
%{\tt roberto.fernandez@univ-rouen.fr}}
%%\begin{abstract}
%%I review Gibbsianness and non-Gibbsianness.
%%\end{abstract}
\tableofcontents
\section{Historical remarks and purpose of the course}
The notion of Gibbs measure, or Gibbs random field, is the founding
stone of mathematical statistical mechanics. Its formalization in the
late sixties, due to Dobrushin, Lanford and Ruelle
\cite{dob68b,lanrue69}, marked the beginning of two decades of intense
activity that produced a rather complete theory \cite{pre76,geo88}
which has been exploited in many areas of mathematical physics,
probability and stochastic processes, as well as for example in
dynamical systems.
Despite its diverse applicability, the Gibbsian description was
developed specifically to describe \emph{equilibrium} statistical
mechanics. Limitations were bound to show up when this framework was
transgressed. In fact, the wake-up call came from work within the
original equilibrium setting. Indeed, around 1980 Griffiths and
Pearce \cite{gripea78,gripea79,gri81} pointed out that some measures
obtained as a result of renormalization transformations ---a technique
developed to study critical points \cite{fis83,gol92}--- showed
``pathologies'' that contradicted Gibbsian intuition. Israel
\cite{isr79} quickly pointed out the cause. These measures lacked the
\emph{quasilocality} property which, as we discuss below, is one of
the (two) key properties of Gibbsianness. The measures were, thus,
\emph{non-Gibbsian}.
After these early examples, almost a decade had to pass before the
topic really took off. At this time there appeared a second wave of
examples showing that non-Gibbsianness was rather ubiquitous; it was
present in spin ``contractions'' \cite{lebmae87,dorvan89}, lattice
projections \cite{sch89} and stationary measures of stochastic
evolutions \cite{lebsch88}. These examples motivated us to write a
mini-treatise \cite{vEFS_JSP} where we tried to explain the relevant
notions and systematize existing examples based on different
non-Gibbsianness symptoms ---discontinuities or zeroes of the
conditional distributions, large deviations that are too large or too
small. Next to our article, in the same issue of Journal of
Statistical Physics, a much shorter paper by Martinelli and
Olivieri~\cite{maroli93} initiated, in fact, the next stage of the
non-Gibbsianness saga, namely the efforts in the direction of
\emph{Gibbsian restoration}.
These efforts have progressed in two complementary directions. On the
one hand, criteria have been put forward to determine how severely
non-Gibbsian a measure is. Measures have been classified according to
(i) the effect of further
decimation~\cite{maroli93,maroli94,lorvel94,ent00}, (ii) the size of
the set of points of discontinuity of the conditional probabilities
\cite{lorwin92,ferpfi96,entshl98} and (iii) the size of the set of
configurations for which a Boltzmann description is
possible~\cite{dob95,maevel97,dobshl97,brikuplef98,dobshl98,brikuplef01,kul01}.
The reader is directed to \cite{entetal00} for a concise comparison of
these classification schemes, which, however, does not include a more recently
introduced fourth category of non-Gibbsianness~\cite{entver04}. On
the other hand, some features of Gibbsian measures have been proven to
hold also for different classes of non-Gibbsian fields. They include
parts of the thermodynamic formalism~\cite{lebsch88,pfi02} and the
variational approach~%
\cite{lef99,ferlenred03,ferlenred03b,kullenred04,entver04}.
At present, after almost 25 years of non-Gibbsian studies, the state
of affairs is the following. On the positive side, we have a rather
extensive catalogue of instances of the phenomenon. The more recent,
and surprising, manifestations include the non-Gibbsianness of joint
measures for disordered systems~\cite{kul99,entetal00b,kul03}
---contradicting a well-known assumption in physics literature--- and
the appearance and disappearance of non-Gibbsianness during dynamical
evolutions of the type used in Monte Carlo
simulations~\cite{entetal02}. We also have a pretty good knowledge of
mechanisms leading to non-Gibbsianness. By this I mean both the
physical mechanisms (hidden variables, phase transitions of restricted
systems) and the mathematical tools to provide rigorous proofs.
Finally, the work on the ``thermodynamic'' properties of non-Gibbsian
measures has brought further insight in the limitations
of such an approach and the different components of the usual Gibbsian
variational approach.
On the negative side, we still owe concrete answers to practitioners.
It is still unclear to what extent, if any, the lack of Gibbsianness
of renormalized measures compromises widely accepted calculations of
critical exponents. Likewise, nothing is known on possible observable
consequences of non-Gibbsianness of simulation or sampling schemes.
This situation was to be expected. Non-Gibbsianness is an elusive
phenomenon, involving extremely unlikely events and very special
(perverse!) boundary conditions. There is no question that we deal
with a phenomenon that is widespread. It shows up, for example, in
intermittent dynamical systems~\cite{maeetal00} and in problems of
technological relevance~\cite{entver04}. Nevertheless, we seem to be
still at the stage of mostly mathematical finesse.
But this finesse has been very beneficial. In at least two instances
it has helped clarify important paradoxical situations. First,
through the ``second fundamental theorem'' in \cite{vEFS_JSP} which
dispelled, to a certain extent, the threat of discontinuities in the
renormalization-group paradigm. Second, it was instrumental in
reconciling contradictory hypotheses with successful predictions in
Morita's approach to the study of disordered systems~\cite{kul04}.
Furthermore, non-Gibbsianness has forced some healthy reconsideration
of known results, especially those related to the thermodynamical and
variational characterization of measures. The discontinuities, often
associated only with a measure-zero set of bad configurations, rendered the
traditional treatment invalid. Putting it dramatically, proofs were
destroyed by a few very unlikely events. It is natural to
enquire whether this is due to a limitation of the techniques
of proof, or whether continuity is really essential. The meticulous
work on Gibbsian reconstruction is teaching us how to isolate and bring
into light the different ingredients of each Gibbsian result, and to
appreciate the subtle balance between topology and probability theory
which supports mathematical statistical mechanics.
\bigskip
This course can be roughly divided into two parts. The first part is an
introduction of the main concepts and notions. To make it reasonably
self-contained, I will start with a rather detailed exposition of the
definition and benchmark properties of Gibbsianness. In particular, I
will include a hopefully pedagogical proof of Kozlov's theorem, which
has been our main tool to detect non-Gibbsianness. This will lead me,
quite naturally, to an early presentation of the different
non-Gibbsianness classification schemes. The second part reviews
examples of non-Gibbsianness. These examples show up through
violations of either the non-nullness or the quasi-locality of some
conditional probability. I will try to convey at least an intuitive
understanding of some of the mechanisms behind these two types of
violations for the case of renormalization transformations, as well as for the case of spin-flip evolutions~\cite{entetal02} and the case of
disordered systems~\cite{kul99}.
Most of the exposition is in the style of an overview. I will try, on
the one hand, to clarify the main conceptual issues and, on the other
hand, to transmit the ideas and intuitions that helped develop my own
understanding of the subject. In particular, the non-quasilocality
instances will be organized around three ``surprising''
manifestations: renormalization-group pathologies, non-Gibbsianness in
Glauber evolutions and non-Gibbsianness of the joint measures of
disordered systems. I hope these situations are surprising enough to
convince the audience that the phenomenon is important indeed. Due to
time constraints, I am leaving aside the variational and
thermodynamical treatments of non-Gibbsian measures. A pedagogical
self-contained exposition of these issues requires a course of its
own. I refer the reader respectively to \cite{kullenred04} and
\cite{pfi02} for state-of-the-art presentations of these topics.
\section{Setup, notation and basic notions}
\paragraph{General definitions and notation.}
We consider a countable set $\lat$, called the \emph{lattice}, formed
by \emph{sites} $x$ and whose subsets will be called \emph{regions}.
At each site of $\lat$ sits a copy of a
\emph{single-spin space} $\sing$. For our pedagogical purposes it is enough
to consider \emph{finite} spins, that is, $2\le \cardd{\sing}<\infty$. Most of the
examples below correspond to the cases $\sing=\{0,1\}$ (lattice-gas models),
$\sing=\{-1,1\}$ (Ising spins) or $\sing=\{1,2,\ldots,q\}$ (Potts spins with $q$ colors).
Gibbs measures are defined on the
\emph{configuration space} $\Omega=\sing^{\lat}$ which represents a
large array of microscopic systems, each described by $\sing$. Thus,
each configuration $\omega\in\Omega$ is a collection of values
$(\omega_x)_{x\in\lat}$ where, for concreteness, each
$\omega_x\in\sing$ will be called the value of the \emph{spin at} $x$.
To fix ideas, we can take the canonical case $\lat=\mathbb{Z}^d$, but
the following presentation is written so as to make a certain generality
apparent. In fact, for the purely statistical-mechanical theory we will,
most of the time, consider sets $\lat$ endowed with a distance $\dist$ such that
the parallelepipeds
\begin{equation}
\label{eq:1.1}
\Lambda_n(x) \;\bydef\;\left\{ y\in \lat : \dist(x,y) \le n\right\}
\end{equation}
have an (external) boundary whose cardinality grows more slowly than the cardinality
of $\Lambda_n$.
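For concreteness, here is a small computational sketch (an illustration of mine, not part of the formalism): for $\lat=\mathbb{Z}^d$ with the distance $\dist(x,y)=\card{x-y}$, the parallelepipeds \reff{eq:1.1} have cardinality $(2n+1)^d$, while their $1$-external boundary has cardinality $(2n+3)^d-(2n+1)^d$, so the boundary-to-volume ratio vanishes as $n\to\infty$.

```python
from itertools import product

def ball(n, d):
    """Sites of Lambda_n(0) = {y in Z^d : |y| <= n} for the sup-distance."""
    return set(product(range(-n, n + 1), repeat=d))

def external_boundary(region, d):
    """Sites outside `region` at sup-distance 1 from it (r = 1)."""
    nbrs = set()
    for x in region:
        for dx in product((-1, 0, 1), repeat=d):
            y = tuple(a + b for a, b in zip(x, dx))
            if y not in region:
                nbrs.add(y)
    return nbrs

d, n = 2, 10
vol = len(ball(n, d))                          # (2n+1)^d = 441
bdry = len(external_boundary(ball(n, d), d))   # (2n+3)^d - (2n+1)^d = 88
print(vol, bdry, bdry / vol)                   # ratio of order d/n -> 0
```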
In addition, for the thermodynamical treatment and to study ergodicity
and large-deviation properties (topics that are omitted here), an
\emph{action} of $\mathbb{Z}^d$ on $\lat$ by homeomorphisms is needed.
This means that there must exist a family of bijections indexed by
$\mathbb{Z}^d$ ---the \emph{translations}--- $T_x:\lat\to\lat$,
$x\in\mathbb{Z}^d$, such that (i) $T_x^{-1}=T_{-x}$, (ii) they are continuous
and measurable (with respect to the topology and $\sigma$-algebra
introduced below) and (iii) they leave invariant the distance in
the sense that $\dist (T_xy,T_xz)=\dist (y,z)$. The largest $d$ for which such
an action exists is the \emph{dimension} of the lattice $\lat$.
Another, almost gratuitous, generalisation of the setting is to
consider site-dependent single-spin spaces $\Omega_x$. For practical
purposes, the only consequence of such a more general framework is to
make the notation heavier, so I will not adopt it here.
Let us fix some notational conventions. The symbol ``$\card{\ }$''
will be used in several senses: cardinality of a set, absolute
value of a (complex) number and, if $x$ is a site in $\mathbb{Z}^d$,
$\card{x}=\max_{1\le i\le d}\card{x_i}$. In particular, we shall use
the distance $\dist (x,y)=\card{x-y}$. We shall denote configurations by
lower case Greek letters, $\omega,\sigma,\eta \in \Omega$ and finite
subsets of the lattice by uppercase letters, which will be Greek when
associated to lattice regions and Latin when they are to be thought of as
bonds (see below). The finiteness property will be emphasized by the
symbol ``$\Subset$'': $\Lambda,\Gamma\Subset\lat$. Finite-region
configurations will show the region as a subscript:
$\omega_\Lambda\in\Omega_\Lambda\bydef \sing^\Lambda$. Configurations
defined by regions will be denoted in a factorized form; an omitted
subscript indicating completion to the rest of the lattice:
$\omega_\Lambda\eta_{\comp{\Lambda}}=\omega_\Lambda\eta$.
A configuration $\sigma$ will be said to be \emph{asymptotically equal} to another
configuration $\eta$ if it is of the form $\sigma=\omega_\Lambda\eta$
for some $\Lambda\Subset\lat$ and $\omega\in\Omega$. Alternatively,
$\sigma$ will be called a \emph{finite-region modification} of $\eta$.
The \emph{$r$-external boundary} of a region $\Lambda\subset\lat$
($0<r<\infty$) is the set of sites outside $\Lambda$ within distance $r$
of it: $\partial_r\Lambda \bydef \bigl\{x\in\comp{\Lambda} :
\dist(x,\Lambda)\le r\bigr\}$.

A function $f$ on $\Omega$ is \emph{continuous at} $\omega\in\Omega$ if
for each $\epsilon>0$ there exists $n\in\mathbb{N}$ such that
\begin{equation}
\label{eq:1.9}
\sup_{\sigma\in\Omega}\,\Bigl|f(\omega_{\Lambda_n}\sigma)-f(\omega)
\Bigr| < \epsilon \;.
\end{equation}
The compactness of $\Omega$ implies that functions continuous everywhere
are uniformly continuous. Hence, the continuity of a function $f$ on
the whole of $\Omega$ is equivalent to any of the following
properties:
\begin{itemize}
\item[(C1)] For each $\epsilon>0$ there exists $n\in\mathbb{N}$ such
that
%for all $n$ larger than $n_{0}$
\begin{equation}
\label{eq:1.10}
\sup_{\omega\in\Omega}\,\sup_{\sigma\in\Omega}
\,\Bigl|f(\omega_{\Lambda_n}\sigma)-f(\omega)\Bigr| <
\epsilon \;.
\end{equation}
\item[(C2)] $f$ can be uniformly approximated by local functions:
For each $\epsilon>0$ there exists a local function $f_\epsilon$ such
that
\begin{equation}
\label{eq:1.11}
\norm{f_\epsilon-f}_\infty < \epsilon \;.
\end{equation}
\end{itemize}
We immediately see that all local functions are continuous, while all
non-constant asymptotic observables are discontinuous.
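The following toy computation (my own sketch, taking $\sing=\{0,1\}$ and $\lat=\mathbb{Z}$) illustrates (C2): the non-local function $f(\omega)=\sum_{x}2^{-\card{x}}\omega_x$ is quasilocal, since its local truncations $f_n(\omega)=\sum_{\card{x}\le n}2^{-\card{x}}\omega_x$ approximate it uniformly with error $\sum_{\card{x}>n}2^{-\card{x}}=2^{1-n}$.

```python
# Omega = {0,1}^Z ; f(omega) = sum_x 2^{-|x|} omega_x is quasilocal but not
# local.  Its truncation f_n depends only on the spins in Lambda_n, and
# ||f - f_n||_inf = sum_{|x|>n} 2^{-|x|} = 2^{1-n}, which is property (C2).

def f_local(omega, n):
    """Local truncation: sum of 2^{-|x|} omega_x over |x| <= n."""
    return sum(2.0 ** -abs(x) * omega.get(x, 0) for x in range(-n, n + 1))

def tail(n):
    """Exact sup-norm error of the truncation at range n."""
    return 2.0 ** (1 - n)

omega = {x: 1 for x in range(-50, 51)}   # the all-ones configuration (truncated)
for n in (2, 5, 10):
    err = abs(f_local(omega, 50) - f_local(omega, n))
    assert err <= tail(n) + 1e-12        # the (C2) bound holds
    print(n, err, tail(n))
```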
In more general settings, where $\Omega$ is not compact, a function
satisfying (C1) or (C2) is termed \emph{quasilocal}. In our case,
then, continuity [that is, the validity of \reff{eq:1.9} for all
$\omega$] is equivalent to quasilocality. We shall use both terms
almost interchangeably, with a slight preference for the latter. This
is in part for historical reasons, but also to emphasize the fact that
in more general settings, quasilocality, rather than continuity, is
the key property. In particular, I may refer to property
\reff{eq:1.9} as \emph{quasilocality at} $\omega$.
A weaker notion of quasilocality will be relevant below.
\begin{definition}\label{def:1.1}
Let $\omega,\theta\in\Omega$.
A function $f$ on $\Omega$ is \embf{quasilocal at $\omega$ in the
direction} $\theta$ if
\begin{equation}
\label{eq:1.11.1}
\Bigl|f(\omega_{\Lambda_n}\theta)-f(\omega)\Bigr|\; \tendn{}\; 0\;.
\end{equation}
$f$ is \embf{quasilocal in the direction} $\theta$ if it satisfies
\reff{eq:1.11.1} for all $\omega\in\Omega$.
\end{definition}
Due to the ``sup'' in \reff{eq:1.9},
a function can be quasilocal at $\omega$ in \emph{every} direction
without being continuous at $\omega$ (see the example after Remark 2.5
in \cite{ferlenred03b}).
\paragraph{Interplay between topology and measure theory.}
The notion of weak convergence (=weak*-convergence in functional
analysis), is perhaps the most elementary concept needing a combined
topological and measure-theoretical framework. A sequence of measures
$\mu_n$ on $(\Omega,\tribu)$ \emph{converges weakly} to a measure $\mu$
if the expectations of continuous functions converge, that is, if
\begin{equation}
\label{eq:1.12}
\mu_n(f) \ \tendn\ \mu(f) \quad \mbox{for every \emph{continuous}
function } f\;.
\end{equation}
(In more general situations, the convergence is required for functions
that are continuous \emph{and bounded}. This last condition is
automatic if $\Omega$ is compact, as is the case here.) Due to the density
result (C2) above, weak convergence is equivalent to either of the
following conditions:
\begin{itemize}
\item[(W1)] $\mu_n(f) \;\tendn\; \mu(f)$ for every \emph{local}
function $f$.
\item[(W2)] $\mu_n\bigl(C_{\sigma_\Lambda}\bigr) \;\tendn\;
\mu\bigl(C_{\sigma_\Lambda}\bigr )$
for every cylinder $C_{\sigma_\Lambda}$.
\end{itemize}
In words, weak convergence means convergence of expectations of microscopic
observables. It gives no information whatsoever as to the convergence
of the means of the discontinuous macroscopic (asymptotic)
observables. In our examples below such a convergence will fail:
Infinite-region measures $\mu_n$ are typically singular with respect to each other, as well
as with respect to their weak limits $\mu$, precisely because the respective supports are
disjoint asymptotic events.
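A toy example that I add for illustration of (W2): let $\mu_n$ be the product Bernoulli measure on $\{0,1\}^{\mathbb{Z}}$ with parameter $p_n=\frac12+\frac1{2n}$. Every cylinder probability converges to that of the product Bernoulli($\frac12$) measure $\mu$, so $\mu_n\to\mu$ weakly; yet, by the strong law of large numbers, each $\mu_n$ is singular with respect to $\mu$, exactly as described above.

```python
import math

# mu_n = product Bernoulli(p_n), p_n = 1/2 + 1/(2n) -> 1/2.  Criterion (W2):
# probabilities of every cylinder converge, hence mu_n -> mu weakly, even
# though mu_n and mu concentrate on disjoint sets of spin densities.

def cylinder_prob(p, sigma):
    """Probability, under product Bernoulli(p), of the cylinder fixing
    the finite word `sigma` on some finite region."""
    return math.prod(p if s == 1 else 1 - p for s in sigma)

sigma = (1, 0, 1, 1, 0)
limit = cylinder_prob(0.5, sigma)        # = 0.5**5
for n in (10, 100, 1000):
    p_n = 0.5 + 1 / (2 * n)
    print(n, abs(cylinder_prob(p_n, sigma) - limit))   # decreases to 0
```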
Weak convergence is, indeed, an extremely weak notion
of convergence (strictly weaker than other popular modes of
convergence, like convergence in probability, almost-surely, in $L^p$
sense, in total variation \ldots). There is both a physical and a
mathematical justification for its use. From the physical point of
view, it corresponds to the idea of \emph{infinite-volume} limit, that
is, on the construction of (infinite-volume) ``states'' by working on
finite, but progressively larger volumes. Mathematically, it is the
type of convergence involved in basic limit theorems (like
central-limit theorems) and, moreover, it leads to a \emph{compact}
space of probability measures (Banach-Alaoglu theorem). This is an
invaluable property that, for instance, reduces to a marginal comment
the potentially difficult problem of existence of Gibbs measures for a
given interaction.
Another notion needed later on is the following.
\begin{definition}
A probability measure $\mu$ on a Borel measurable space is \embf{
non-null} if $\mu(O)>0$ for every open set $O$.
\end{definition}
Two more instances of topology-measure theory interplay will be found
later on. First, the reference to regular conditional probabilities in
Polish spaces. Second, the very notion of Gibbs measure!
\section{Probability kernels, conditional probabilities and statistical mechanics}
\subsection{Probability kernels}
We turn now to more specific notions that are not always learnt in
elementary probability courses. I start with the definition of a
\emph{probability kernel} which, informally, is an object with two
``slots'', being a probability measure with respect to one of them and
a measurable function with respect to the other one. It represents a
family of probability measures which depend, in a measurable fashion,
on a random parameter. Two applications are of interest here: (i)
conditional probabilities ---measurable functions of the conditioning
configuration--- and (ii) stochastic transformations ---measurable
with respect to the initial configuration. Kernels for the second
application are usually denoted in an ``operator'' fashion, while the
``conditioning'' notation is reserved for the first case. I'll adopt
this last ``bar'' notation for both, because I always tend to think of
these kernels as conveying conditioning information.
\begin{definition} A \embf{probability kernel} $\Psi$ from a probability space
$(\mathcal{A}, \Sigma)$ to another probability space $(\mathcal{A}',
\Sigma')$ is a function
\begin{equation}
\label{eq:2.1}
\Psi(\,\cdot\mid\cdot\, ) : \Sigma' \times\mathcal{A}\longrightarrow [0,1]
\end{equation}
such that
\begin{itemize}
\item[(i)] $\Psi(\,\cdot\,|\omega)$ is a probability measure on
$(\mathcal{A}', \Sigma')$ for each $\omega\in\mathcal{A}$;
\item[(ii)] $\Psi(A'|\,\cdot\,)$ is $\Sigma$-measurable for each
$A'\in\Sigma'$.
\end{itemize}
\end{definition}
A, perhaps familiar, illustration of this concept is given by the
transition probabilities defining a (discrete-time, homogeneous)
stochastic process. In this case, $\mathcal{A}=S^{-\mathbb{N}}$,
$\mathcal{A}'=S^{-\mathbb{N}\cup\{0\}}$, $\Sigma$ and $\Sigma'$ the
respective product $\sigma$-algebras, and $\Psi(A'|\omega)$ is the
probability that the event $A'$ happens at the next instant given a
history $\omega$. The specifications discussed below constitute a
multi-dimensional generalization of this example.
Probability kernels can be combined by a ``convolution'' in the following natural
way. Suppose $\Psi$ is a kernel from $(\mathcal{A}, \Sigma)$ to
$(\mathcal{A}', \Sigma')$ and $\Psi'$ is a kernel from $(\mathcal{A}',
\Sigma')$ to $(\mathcal{A}'', \Sigma'')$. Then $\Psi\Psi'$ is the
kernel from $(\mathcal{A}, \Sigma)$ to $(\mathcal{A}'', \Sigma'')$
defined by
\begin{equation}
\label{eq:2.2}
\bigl(\Psi\Psi'\bigr)(A''|\omega) \;=\;
\Psi\Bigl(\Psi'(A''|\,\cdot\,)\Bigm| \omega\Bigr)\;,
\end{equation}
for $A''\in\Sigma''$ and $\omega\in\mathcal{A}$. In more detail,
\begin{equation}
\label{eq:2.3}
\bigl(\Psi\Psi'\bigr)(A''|\omega) \;=\;
\int_{\mathcal{A}'}\Psi(d\omega'|\omega)
\,\Psi'(A''|\omega')\;.
\end{equation}
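On finite spaces all of this becomes linear algebra. The following sketch of mine represents kernels as row-stochastic matrices, $\Psi[\omega,\omega']=\Psi(\{\omega'\}\mid\omega)$, so that the convolution \reff{eq:2.2}/\reff{eq:2.3} is ordinary matrix multiplication.

```python
import numpy as np

# A probability kernel between finite spaces is a row-stochastic matrix:
# each row Psi[w, :] is a probability measure.  Convolution = matrix product.

rng = np.random.default_rng(0)

def random_kernel(n_from, n_to):
    m = rng.random((n_from, n_to))
    return m / m.sum(axis=1, keepdims=True)   # normalize rows

Psi  = random_kernel(3, 4)   # kernel from A (3 points) to A' (4 points)
Psi2 = random_kernel(4, 2)   # kernel from A' to A'' (2 points)

conv = Psi @ Psi2            # the convolution: a kernel from A to A''
assert np.allclose(conv.sum(axis=1), 1.0)     # rows remain probabilities
print(conv)
```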
This convolution leads to a map between probability measures, which are
particularly simple examples of probability kernels. Indeed, a kernel
$\Psi$ from $(\mathcal{A}, \Sigma)$ to $(\mathcal{A}', \Sigma')$ defines
the map
\begin{equation}
\label{eq:2.4}
\begin{array}{ccc}
\mathcal{P}(\mathcal{A}, \Sigma) &\longrightarrow&
\mathcal{P}(\mathcal{A}', \Sigma')\\
\mu & \longmapsto & \mu'=\mu\Psi
\end{array}
\end{equation}
that is,
\begin{equation}
\label{eq:2.5}
\mu'(A') \;=\; \int_{\mathcal{A}}\mu(d\omega) \,\Psi(A'|\omega)\;,
\end{equation}
for all $A'\in\Sigma'$. This is how renormalization transformations,
discussed below, are defined. In particular, \emph{deterministic}
transformations correspond to kernels concentrated on single
points:
\begin{equation}
\label{eq:2.5.1}
\Psi(A'\mid\omega) \;=\; \delta_{\psi(\omega)}(A')
\end{equation}
for a certain function $\psi:\mathcal{A}\to\mathcal{A}'$.
Then, $\mu'(A')=(\mu\Psi)(A')=\mu\bigl[\psi^{-1}(A')\bigr]$ and
\begin{equation}
\label{eq:2.5.2}
\mu'(f') \;=\; \int_{\mathcal{A}}f'\bigl(\psi(\omega)\bigr)\,\mu(d\omega) \;,
\end{equation}
for $f'\in\Sigma'$.
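A toy illustration of \reff{eq:2.5} and \reff{eq:2.5.1} (my own sketch, with a deliberately naive ``decimation'' that keeps the first of two spins): the image measure $\mu'=\mu\Psi=\mu\circ\psi^{-1}$ is the row vector--matrix product with a $0/1$ kernel concentrated on $\psi(\omega)$.

```python
import numpy as np

# mu' = mu Psi as in (2.5); a deterministic transformation (2.5.1) is a
# kernel whose rows are Dirac measures delta_{psi(omega)}.
# Toy "decimation": a configuration in {0,1}^2 is mapped to its first spin.

states = [(0, 0), (0, 1), (1, 0), (1, 1)]
psi = lambda w: w[0]

mu = np.array([0.1, 0.2, 0.3, 0.4])   # a measure on the 4 configurations
Psi = np.zeros((4, 2))
for i, w in enumerate(states):
    Psi[i, psi(w)] = 1.0              # row i = delta_{psi(omega_i)}

mu_prime = mu @ Psi                   # image measure mu o psi^{-1}
print(mu_prime)                       # [0.3 0.7]
```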
\subsection{Conditional probabilities}\label{ssec:2.2}
Marc Kac said that probability theory is measure theory with a soul.
This soul ---which makes probability into a full field of its own and
not just a mere chapter of finite-measure theory--- is the notion of
conditional expectation. It is not a simple concept, though, due to
the need to condition with respect to events of \emph{zero}
probability. I intend to review here definitions and properties of
this object, so as to explain the full mathematical meaning of the
crucial notion of \emph{specification} to be introduced shortly.
Readers who are impatient with, or averse to, abstract considerations may
prefer to jump to the next subsection and accept Definition \ref{def:2.5}
through the more ``physical'' arguments given below. Most of these
subtleties would be avoidable if we were dealing only with Gibbsian
measures, but they are unavoidable for a proper understanding of
non-Gibbsianness.
A very popular exercise in elementary probability courses consists in
showing that two events are independent if, and only if, all the
events of the $\sigma$-algebras generated by them are. This
observation generalizes to the fact that the information related to
conditional expectations is best encoded through \emph{functions} that
correspond to conditioning with respect to whole $\sigma$-algebras.
Kolmogorov taught us the right axiomatic way to define this concept.
\begin{definition} \label{def:2.2}
Let $(\mathcal{A},\Sigma,\mu)$ be a probability space, $\tau$ a
$\sigma$-algebra with $\tau\subset\Sigma$ and $f$ a $\mu$-integrable
$\Sigma$-measurable function. A \embf{conditional expectation
function} of $f$ given $\tau$ is a function
\begin{equation}
\label{eq:2.6}
E_\mu(f\mid \tau)(\,\cdot\,): \mathcal{A} \longrightarrow \mathbb{R}
\end{equation}
such that
\begin{itemize}
\item[(i)] $E_\mu(f\mid \tau)$ is $\tau$-measurable.
\item[(ii)] For any $\tau$-measurable bounded function $g$,
\begin{equation}
\label{eq:2.7}
\int d\mu \, g\,E_\mu(f\mid \tau) \;=\; \int d\mu \, g\,f\;.
\end{equation}
\end{itemize}
\end{definition}
Such a function $E_\mu(f\mid \tau)$ is interpreted as the expected
value of $f$ if we have access only to the information contained in
$\tau$, that is, if we can only perform an experiment determining
occurrence of events in $\tau$ rather than the more detailed events in
$\Sigma$. It is the ``best predictor'' of $f$, in square-integrable
sense, among the $\tau$-measurable functions. (The reader can, for
instance, have a look to Chapter 9 of the book by
Williams~\cite{wil91} for a short but clear motivation of the previous
definition and its interpretation.) Identity \reff{eq:2.7} is the
ultimate version of the quintessential probabilistic technique of
decomposing an expectation into a sum of conditioned averages weighted
by the probabilities of the conditioning events (``divide-and-conquer''
technique).
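On a finite probability space, Definition \ref{def:2.2} can be checked by hand: when $\tau$ is generated by a partition into atoms, $E_\mu(f\mid\tau)$ is constant on each atom, equal to the $\mu$-weighted average of $f$ there. The following sketch of mine verifies the defining identity \reff{eq:2.7}.

```python
import numpy as np

# Finite probability space {0,...,4}; tau generated by the partition into
# atoms [0,1] and [2,3,4].  E_mu(f|tau) averages f over each atom; we then
# check (2.7): int g E(f|tau) dmu = int g f dmu for tau-measurable g.

mu = np.array([0.1, 0.2, 0.3, 0.25, 0.15])
f  = np.array([1.0, 4.0, 2.0, 0.0, 3.0])
atoms = [[0, 1], [2, 3, 4]]

cond = np.empty_like(f)
for atom in atoms:
    cond[atom] = (mu[atom] * f[atom]).sum() / mu[atom].sum()

g = np.empty_like(f)                  # tau-measurable: constant on atoms
g[atoms[0]], g[atoms[1]] = 5.0, -1.0

lhs = (mu * g * cond).sum()           # int g E(f|tau) dmu
rhs = (mu * g * f).sum()              # int g f dmu
assert np.isclose(lhs, rhs)
print(cond, lhs, rhs)
```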
Several remarks are in order. First, the existence of such
conditional expectations is assured by the Radon-Nikod\'ym theorem.
Second, as condition (ii) involves a $\mu$-integral, $E_\mu(f\mid
\tau)$ can be modified on a set of $\mu$-measure zero while still
satisfying the definition. Thus, Definition \ref{def:2.2} does not
define a unique function. Measure-zero modifications, however, are
the \emph{only} ones possible. That is, $E_\mu(f\mid \tau)$ \emph{is
defined $\mu$-almost surely}. Often, more appropriately, the symbol
$E_\mu(f\mid \tau)$ is reserved for the whole class of functions
determined by the previous definition. Here it is being used, by abuse of
notation, for any particular choice ---\emph{realization}--- within
this class. In this way we gain concreteness but we have to remember
to include a ``$\mu$-almost surely'' clause in each expression relating
conditional expectations. Third, conditional expectations enjoy a
number of important properties, most of which are very easy to prove
(nicely summarized in Section 9.7 and the inner back cover of
\cite{wil91}). We highlight two of them for immediate use. First,
for each bounded $g\in\tau$,
\begin{equation}
\label{eq:2.8}
E_\mu(g\,f\mid \tau)\;=\; g\, E_\mu(f\mid \tau)\quad
\mu\mbox{-almost surely}\;.
\end{equation}
Second, if $\widetilde\tau$ is an even smaller $\sigma$-algebra, that
is $\widetilde\tau\subset\tau\subset\Sigma$, then
\begin{equation}
\label{eq:2.9}
E_\mu\Bigl(E_\mu(f\mid \tau)\Bigm|\widetilde\tau\Bigr)
\;=\; E_\mu(f\mid \widetilde\tau)\quad \mu\mbox{-almost surely}\;.
\end{equation}
This is the well known ``tower property'' of conditioning.
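The tower property \reff{eq:2.9} can likewise be checked on a finite space, conditioning first on a finer partition and then on a coarser one (again a sketch of mine, with $\widetilde\tau\subset\tau$ realized as a coarsening of the partition).

```python
import numpy as np

# Tower property (2.9): conditioning on the finer sigma-algebra tau and then
# on the coarser tilde_tau equals conditioning on tilde_tau directly.

mu = np.array([0.1, 0.2, 0.3, 0.25, 0.15])
f  = np.array([1.0, 4.0, 2.0, 0.0, 3.0])

def cond_exp(h, atoms):
    """mu-weighted average of h on each atom of the partition `atoms`."""
    out = np.empty_like(h)
    for atom in atoms:
        out[atom] = (mu[atom] * h[atom]).sum() / mu[atom].sum()
    return out

tau       = [[0, 1], [2, 3], [4]]   # finer partition
tilde_tau = [[0, 1], [2, 3, 4]]     # coarser: each atom is a union of tau-atoms

lhs = cond_exp(cond_exp(f, tau), tilde_tau)   # E( E(f|tau) | tilde_tau )
rhs = cond_exp(f, tilde_tau)                  # E( f | tilde_tau )
assert np.allclose(lhs, rhs)
print(lhs, rhs)
```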
A highly non-trivial, somewhat hidden, aspect of the previous
presentation, is the fact that the functions $E_\mu(f\mid \tau)$ are
constructed on an ``$f$-to-$f$ basis''. The conditional expectation
for each $f$ is constructed without any regard for the conditional
expectations of other functions. The full-measure sets granting
properties like \reff{eq:2.8} and \reff{eq:2.9} are $f$-dependent.
The question arises whether a coordinated choice of conditional
expectations is possible such that there is a full-measure set where all
properties work simultaneously for \emph{all} measurable and
integrable functions. This amounts to constructing a $\mu$-full set
of $\omega\in\mathcal{A}$ for which the $f$-dependence $f\mapsto
E_\mu(f\mid \tau)(\omega)$ corresponds to a \emph{measure} that
``explains'' these conditional expectations. Of course, this is not
always possible (in fact, Kolmogorov's seminal contribution consisted
in showing that it is largely irrelevant; most of probability theory
can be developed using only conditional expectation functions ---which
always exist--- whether or not they come from conditional probability
measures). The next definition covers the case when it happens to be
possible.
\begin{definition}\label{def:2.3}
Let $(\mathcal{A},\Sigma,\mu)$ be a probability space and
$\tau$ a $\sigma$-algebra with $\tau\subset\Sigma$. A \embf{regular conditional
probability} of $\mu$ given $\tau$ is a probability kernel
$\mu_{|\tau}(\,\cdot\,\mid\,\cdot\,)$ from $(\mathcal{A},\Sigma)$ to
$(\mathcal{A},\tau)$ such that for each $\mu$-integrable $f\in\Sigma$
\begin{equation}
\label{eq:2.10}
\mu_{|\tau}(f\mid\,\cdot\,) \;=\;E_\mu(f\mid \tau)(\,\cdot\,)
\quad \mu\mbox{-almost surely}\;.
\end{equation}
\end{definition}
We can, of course, state a more direct definition by transcribing
property \reff{eq:2.7} at the level of kernels. However, it is more
convenient for our purposes to decompose such a property with the aid of
identity \reff{eq:2.8}. In this way the following proposition is
obtained.
\begin{proposition}\label{pro:2.1}
Let $(\mathcal{A},\Sigma,\mu)$ be a probability space and $\tau$ a
$\sigma$-algebra with $\tau\subset\Sigma$. A regular conditional
probability of $\mu$ given $\tau$ is a probability kernel
$\mu_{|\tau}(\,\cdot\,\mid\,\cdot\,)$ from $(\mathcal{A},\Sigma)$ to
itself such that
\begin{itemize}
\item[(i)] $\mu_{|\tau}(f\mid\,\cdot\,)\in\tau$ for each
$\mu$-integrable $f\in\Sigma$.
\item[(ii)] $\mu$-almost surely, $\mu_{|\tau}(g\,f\mid\,\cdot\,) =
g\,\mu_{|\tau}(f\mid\,\cdot\,)$ for each bounded $g\in\tau$ and each
$\mu$-integrable $f\in\Sigma$.
\item[(iii)] $\mu \,\mu_{|\tau} = \mu$.
\end{itemize}
\end{proposition}
[The last identity uses the compact notation introduced in
\reff{eq:2.4}/\reff{eq:2.5} for the composition of a kernel with a
measure.]
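As an aside not in the original text, the content of Proposition \ref{pro:2.1} can be checked by hand on a finite probability space. The following minimal Python sketch (all weights are arbitrary test values of my choosing) builds a regular conditional probability on $\{0,1\}^2$, conditioning on the $\sigma$-algebra generated by the first coordinate, and verifies property (iii), $\mu\,\mu_{|\tau}=\mu$.

```python
# Hypothetical finite example: a regular conditional probability on
# A = {0,1}^2, conditioning on the sigma-algebra tau generated by the
# first coordinate.  mu_tau(.|omega) depends on omega only through
# omega[0], and integrating it against mu recovers mu (property (iii)).

from itertools import product

space = list(product([0, 1], repeat=2))
mu = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

def mu_tau(a, omega):
    """Kernel mu_{|tau}(a | omega): conditional weight of the point a
    given the first coordinate of omega."""
    if a[0] != omega[0]:
        return 0.0
    z = sum(mu[b] for b in space if b[0] == omega[0])
    return mu[a] / z

# (iii)  mu mu_tau = mu : integrate the kernel against mu.
for a in space:
    lhs = sum(mu[omega] * mu_tau(a, omega) for omega in space)
    assert abs(lhs - mu[a]) < 1e-12
```

Properness (ii) holds by construction: multiplying $f$ by a function of the first coordinate factors out of the kernel.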
In our case (and in most cases encountered in day-to-day
probability studies) we are saved by a remarkable theorem stating that
\emph{every measure on a Polish space has regular conditional
probabilities}. As this regularity holds for every choice of
conditioning $\sigma$-algebra $\tau$, the tower property \reff{eq:2.9}
can be transcribed in terms of kernels. To make the connection with
the notion of specification, let me formalize the kernel version of
the tower property for families of $\sigma$-algebras.
\begin{definition}\label{def:2.4}
Let $(\mathcal{A},\Sigma,\mu)$ be a probability space and
$\{\tau_i:i\in I\}$ a family of $\sigma$-algebras with
$\tau_i\subset\Sigma$, $i\in I$. A \embf{system of regular
conditional probabilities} of $\mu$ given the family $\{\tau_i\}$
is a family of probability kernels
$\mu_{|\tau_i}(\,\cdot\,\mid\,\cdot\,)$, $i\in I$, from
$(\mathcal{A},\Sigma)$ to itself such that
\begin{itemize}
\item[(i)] For each $i\in I$, $\mu_{|\tau_i}$ is a regular conditional
probability of $\mu$ given $\tau_i$.
\item[(ii)] If $i,j\in I$ are such that $\tau_i\subset\tau_j$,
\begin{equation}
\label{eq:2.11}
\mu_{|\tau_i}\,\mu_{|\tau_j} \;=\; \mu_{|\tau_i} \quad
\mu\mbox{-almost surely}\;.
\end{equation}
\end{itemize}
\end{definition}
[The last identity looks so admirably brief thanks to the convolution
notation \reff{eq:2.2}/\reff{eq:2.3}.]
This definition embodies a rather central problem in probability
theory: given a measure and a family of $\sigma$-algebras, find the
corresponding system of regular conditional probabilities. Such a
system gives complete knowledge of the measure in relation to the
experiments in question. As we discuss next, the central problem in
statistical mechanics goes precisely in the \emph{opposite}
direction.
\subsection{Specifications. Consistency}\label{ssec:2.3}
In physical terms, statistical mechanics deals with the following
problem: Given the finite-volume (microscopic) behavior of a system in
equilibrium, determine the possible infinite-volume equilibrium states
to which such behavior leads. The mathematical formalization of this
question (in the classical = non-quantum case) rests on the following
tenets:
\begin{itemize}
\item[(SM1)] Equilibrium state = probability measure
\item[(SM2)] Finite regions = finite parts of an infinite system
\end{itemize}
The description of a system in a finite region $\Lambda\Subset\lat$ is
given, thus, by a probability kernel
$\pi_\Lambda(\,\cdot\,\mid\,\cdot\,)$, where
$\pi_\Lambda(f\mid\omega)$ represents the equilibrium value of $f$
when the configuration outside $\Lambda$ is $\omega$. To emphasize
this last fact, and for further mathematical convenience,
$\pi_\Lambda(\,\cdot\,\mid\omega)$ should be considered a probability
measure on the whole of $\Omega$ acting as
$\delta_{\omega_{\comp{\Lambda}}}$ outside $\Lambda$. These kernels
must obey certain constraints if they are to describe
\emph{equilibrium}. At the very least they have to be consistent with
the following principle:
\begin{itemize}
\item[(SM3)] A system is in equilibrium in $\Lambda$ if it is in
equilibrium in every box $\Lambda'\subset\Lambda$.
\end{itemize}
This means that the equilibrium value of any $f$ in $\Lambda$ can also
be found through expectations in $\Lambda'$ with configurations
in $\Lambda\setminus\Lambda'$ distributed according to the
$\Lambda$-equilibrium. That is,
\begin{equation}
\label{eq:2.12}
\pi_\Lambda(f\mid\omega) \;=\; \pi_\Lambda\Bigl(
\pi_{\Lambda'}(f\mid\,\cdot\,)\Bigm|\omega\Bigr)
\qquad (\Lambda'\subset\Lambda\Subset\lat)
\end{equation}
Putting all this together, we arrive at the notion of
specification, first introduced by Preston.
\begin{definition}\label{def:2.5}
A \embf{specification} on $(\Omega,\tribu)$ is a family
$\Pi=\{\pi_\Lambda: \Lambda\Subset\lat\}$ of probability
kernels from $(\Omega,\tribu)$ to itself such that
\begin{itemize}
\item[(i)] $\pi_\Lambda(f\mid\omega)\in\tribu_{\comp{\Lambda}}$ for each
$\Lambda\Subset\lat$ and bounded measurable $f$.
\item[(ii)] Each $\pi_\Lambda$ is \emph{proper}:
\begin{equation}
\label{eq:2.13}
\pi_\Lambda(g\,f\mid\omega) \;=\; g(\omega) \,\pi_\Lambda(f\mid\omega)
\end{equation}
for all $\omega\in\Omega$, $g\in\tribu_{\comp{\Lambda}}$ and bounded
measurable $f$.
\item[(iii)] The family $\Pi$ is \emph{consistent}:
\begin{equation}
\label{eq:2.14}
\pi_\Lambda\,\pi_{\Lambda'} \;=\; \pi_\Lambda
\end{equation}
if $\Lambda'\subset\Lambda$.
\end{itemize}
\end{definition}
[Recall the convolution notation \reff{eq:2.2}/\reff{eq:2.3}.]
A specification is a physical model, a complete description of how a
system in equilibrium behaves at the microscopic level, the
information that will be given to you, for instance, by your
experimental physicist friend. Your task, as statistical mechanics
specialist, is to come up with the resulting infinite-volume (i.e.\
macroscopic) states. According to the previous tenets, these are
measures satisfying the consistency property \reff{eq:2.13} when
$\Lambda$ becomes $\lat$.
\begin{definition}\label{def:2.6}
A measure $\mu\in\mathcal{P}(\Omega,\tribu)$ is \embf{consistent} with
a specification $\Pi=\{\pi_\Lambda: \Lambda\Subset\lat\}$ if
\begin{equation}
\label{eq:2.15}
\mu \,\pi_\Lambda \;=\; \mu
\end{equation}
for each $\Lambda\Subset\lat$. Let $\mathcal{G}(\Pi)$ denote the set
of probability measures consistent with $\Pi$.
\end{definition}
[Recall the convolution notation \reff{eq:2.4}/\reff{eq:2.5}.]
The concept of specification is very general. Systems at non-zero
temperature are described by the Gibbsian specifications discussed in
the next section, but models with exclusions and systems at zero
temperature require more singular specifications. Conditions
\reff{eq:2.15} are often called \emph{DLR equations} in reference to
Dobrushin, Lanford and Ruelle who first set them up for Gibbsian
models. The set $\mathcal{G}(\Pi)$ can be empty \cite[Example
(4.16)]{geo88}; otherwise it is a simplex. Its extremal points have
physically appealing properties (trivial tail field, short-range
correlations) associated with macroscopic behavior. The existence of
several consistent measures corresponds to the existence of ``multiple
phases'', and it indeed signals the presence of a first-order phase transition.
A comparison with the preceding subsection shows that the definition
of specification collects all the properties of a system of regular
kernels that do not refer to the initial measure $\mu$. Thus, a
specification can be interpreted as \emph{a system of regular
conditional probabilities} defined \emph{without reference to an
underlying measure}. In fact, the goal is precisely to find
measures having each $\pi_\Lambda$ as their
$\tribu_{\comp{\Lambda}}$-conditional probability. This observation
is made precise by the following proposition whose proof should be
immediate.
\begin{proposition}\label{pro:2.2}
Let $\Pi=\{\pi_\Lambda: \Lambda\Subset\lat\}$ be a specification and $\mu$
a probability measure on $(\Omega,\tribu)$. The following properties are
equivalent:
\begin{itemize}
\item[(i)] $\mu$ is consistent with $\Pi$.
\item[(ii)] $\{\pi_\Lambda: \Lambda\Subset\lat\}$ is a system of regular
conditional probabilities of $\mu$ given the family
of $\sigma$-algebras $\{\tribu_{\comp{\Lambda}}:\Lambda\Subset\lat\}$;
i.e.\ $\mu_{|\tribu_{\comp{\Lambda}}}(\,\cdot\,\mid\omega)
=\pi_\Lambda(\,\cdot\,\mid\omega)$ for $\mu$-almost
all $\omega\in\Omega$.
\item[(iii)] $\pi_\Lambda(f\mid\,\cdot\,) =
E_\mu(f\mid\tribu_{\comp{\Lambda}})(\,\cdot\,)$ $\mu$-almost surely
for each $\Lambda\Subset\lat$ and each $\mu$-integrable function
$f$.
\end{itemize}
\end{proposition}
Thus, while in probability one usually starts with a measure and
searches for its conditional probabilities, in statistical mechanics
one starts with the conditional probabilities and searches for the
measure. The existence of first-order phase transitions shows that
finite-volume conditional expectations, unlike finite marginal
distributions, \emph{do not uniquely determine a measure}. This
explains, in part, the richness of the resulting theory.
There is, nevertheless, an important difference between specifications
and systems of regular conditional probabilities, brought about by the
absence of ``$\mu$-almost surely'' clauses in the former. Indeed, in
the case of specifications there is no initial privileged measure and,
moreover, consistency will in general lead to infinitely many relevant
measures. In such a situation there is no clear way to give meaning
to almost sure statements. Hence, while (ii) of Proposition
\ref{pro:2.1} and the tower property \reff{eq:2.11} hold $\mu$-almost
surely, the analogous conditions of being proper and consistent
---(ii) and (iii) [=\reff{eq:2.13} and \reff{eq:2.14}] of Definition \ref{def:2.5}--- hold
for all $\omega\in\Omega$. Thus, not every system of regular
conditional probabilities forms a specification and it is natural to
wonder whether each measure admits a specification or, almost
equivalently, whether a regular system can always be modified so as to
obtain a specification. The answer, somewhat surprisingly, is a rather
general ``yes'' \cite{pre76,sok81}. A more subtle question is whether
such a modification can be done so as to acquire some additional
properties, like continuity with respect to the external condition.
This turns out to be a deep issue that is at the heart of the
non-Gibbsianness phenomenon to be studied later. \medskip
In our finite-spin setting, each proper kernel $\pi_\Lambda$ is
absolutely continuous with respect to the product of the counting
measure in $\Omega_\Lambda$ and a delta measure on
$\Omega_{\comp\Lambda}$.
\begin{definition}
The \embf{specification densities} of a specification
$\Pi=\{\pi_\Lambda:\Lambda\Subset\lat\}$ are the functions
$\gamma_\Lambda(\,\cdot\mid\cdot\,):
\Omega_\Lambda\times\Omega_{\comp\Lambda}\to [0,1]$
defined by
\begin{equation}
\label{eq:2.16}
\gamma_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda}) \;\bydef\;
\pi_\Lambda(C_{\sigma_\Lambda}\mid\omega)\;,
\end{equation}
that is, the functions such that
\begin{equation}
\label{eq:2.17}
\pi_\Lambda(f\mid\omega) \;=\; \sum_{\sigma_\Lambda\in\Omega_\Lambda}
f(\sigma_\Lambda\omega) \,
\gamma_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda})
\end{equation}
for every bounded measurable $f$.
\end{definition}
These densities will be the main characters of the presentation below.
They enjoy a number of useful properties.
The consistency relation
\reff{eq:2.14} applied to $f=\ind{C_{\sigma_{\Lambda}}}$ yields
\begin{equation}
\label{eq:2.18}
\gamma_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda}) \;=\;
\sum_{\eta_{\Lambda'}\in\Omega_{\Lambda'}}
\gamma_{\Lambda'}(\sigma_{\Lambda'}\mid\sigma_{\Lambda\setminus\Lambda'}
\,\omega_{\comp\Lambda})\;
\gamma_{\Lambda}(\eta_{\Lambda'}\,\sigma_{\Lambda\setminus\Lambda'}\mid
\omega_{\comp\Lambda})\;.
\end{equation}
From this we readily obtain a key \emph{bar-displacement property}
that will be intensively
exploited in the proof of Kozlov's theorem:
\begin{proposition}\label{pro:2.3}
Let $\{\gamma_\Lambda:\Lambda\Subset\lat\}$ be a family of densities
of a specification on $(\Omega,\tribu)$. Consider regions
$\Lambda'\subset\Lambda\Subset\lat$ and configurations $\alpha$,
$\sigma$ and $\omega$ such that
$\gamma_{\Lambda'}(\alpha_{\Lambda'}\mid\sigma_{\Lambda\setminus\Lambda'}
\,\omega_{\comp\Lambda}) > 0$. Then,
\begin{equation}
\label{eq:2.19}
\frac{\gamma_{\Lambda}(\beta_{\Lambda'}\,\sigma_{\Lambda\setminus\Lambda'}
\mid\omega_{\comp\Lambda})}
{\gamma_{\Lambda}(\alpha_{\Lambda'}\,\sigma_{\Lambda\setminus\Lambda'}
\mid\omega_{\comp\Lambda})} \;=\;
\frac{\gamma_{\Lambda'}(\beta_{\Lambda'}\mid\sigma_{\Lambda\setminus\Lambda'}
\,\omega_{\comp\Lambda})}
{\gamma_{\Lambda'}(\alpha_{\Lambda'}\mid\sigma_{\Lambda\setminus\Lambda'}
\,\omega_{\comp\Lambda})}
\end{equation}
for every configuration $\beta$.
\end{proposition}
In words: the conditioning bar can be freely moved, as long as the
external configurations of numerator and denominator remain identical.
In fact, this condition amounts to an alternative way to define specifications
in our finite-spin setting (this way is particularly popular within
the Russian school).
\begin{exercise}\label{ex:rus}
Show that a family of strictly positive density functions
$\gamma_\Lambda$ defines a specification if, and only if,
\begin{itemize}
\item[(i)] they are normalized: $\sum_{\sigma_\Lambda\in\Omega_\Lambda}
\gamma_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda}) = 1$
for every configuration $\omega$, and
\item[(ii)] they satisfy relation \reff{eq:2.19} for all configurations
$\alpha$, $\beta$, $\sigma$ and $\omega$.
\end{itemize}
\end{exercise}
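As an illustrative aside (a numerical sketch, not part of the original text), the bar-displacement identity \reff{eq:2.19} can be verified directly for the Boltzmann densities of a one-dimensional nearest-neighbour Ising chain; the coupling $J$ and the boundary spins below are arbitrary test values.

```python
# Numerical check of the bar-displacement identity (2.19) for a 1-d
# nearest-neighbour Ising chain.  Lambda = {0,1}, Lambda' = {0}; the
# frozen boundary spins at sites -1 and 2 play the role of omega.

from itertools import product
from math import exp

J = 0.7          # coupling (arbitrary test value)

def gamma(box_spins, left, right):
    """Boltzmann density of the spins inside the box, the neighbouring
    spins (left, right) being frozen."""
    def energy(spins):
        chain = [left] + list(spins) + [right]
        return -J * sum(chain[i] * chain[i + 1] for i in range(len(chain) - 1))
    z = sum(exp(-energy(s)) for s in product([-1, 1], repeat=len(box_spins)))
    return exp(-energy(box_spins)) / z

w_m1, w_2 = 1, -1      # external condition omega
sigma_1 = 1            # spin frozen at site 1, i.e. sigma_{Lambda \ Lambda'}
for alpha0, beta0 in product([-1, 1], repeat=2):
    lhs = gamma((beta0, sigma_1), w_m1, w_2) / gamma((alpha0, sigma_1), w_m1, w_2)
    # on the right-hand side the bar has moved: site 1 now conditions
    rhs = gamma((beta0,), w_m1, sigma_1) / gamma((alpha0,), w_m1, sigma_1)
    assert abs(lhs - rhs) < 1e-12
```

Note how the spin at site 2, which appears only in the numerator-and-denominator pair on the left, cancels in the ratio; this cancellation is exactly what the identity asserts.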
A double application of the key relation \reff{eq:2.19} yields the telescoping formula
\begin{equation}
\label{eq:2.19.1}
\frac{\gamma_{\Lambda}(\beta_{\Lambda}\mid\omega_{\comp\Lambda})}
{\gamma_{\Lambda}(\alpha_{\Lambda}\mid\omega_{\comp\Lambda})} \;=\;
\frac{\gamma_{\{x\}}(\beta_{x}\mid\beta_{\Lambda\setminus\{x\}}\,
\omega_{\comp\Lambda})}
{\gamma_{\{x\}}(\alpha_{x}\mid\beta_{\Lambda\setminus\{x\}}\,
\omega_{\comp\Lambda})}\,
\frac{\gamma_{\Lambda\setminus\{x\}}(\beta_{\Lambda\setminus\{x\}}
\mid\alpha_x \,\omega_{\comp\Lambda})}
{\gamma_{\Lambda\setminus\{x\}}(\alpha_{\Lambda\setminus\{x\}}
\mid\alpha_x \,\omega_{\comp\Lambda})}\;,
\end{equation}
which implies that the single-site densities \emph{characterize} the specification.
That is, we are led to the following proposition, which is a particular case of
\cite[Theorem (1.33)]{geo88}.
\begin{proposition}\label{pro:2.4}
A specification with strictly positive
densities can be reconstructed, in a unique way,
from its single-site densities through
\reff{eq:2.19.1}. As a consequence, two specifications with strictly positive
densities are equal
if, and only if, their single-site densities coincide.
\end{proposition}
To benefit from this result we need a family of single-site densities
that is known to come from a specification. A more involved question is the \emph{construction}
or \emph{extension} issue, namely under which conditions a family
of single-site densities can be extended to a full specification. To see
that some conditions are needed let us apply \reff{eq:2.19.1}
for $\Lambda=\{x,y\}$:
\begin{equation}
\label{eq:2.19.2}
\frac{\gamma_{\{x,y\}}(\beta_{\{x,y\}}\mid\omega_{\comp{\{x,y\}}})}
{\gamma_{\{x,y\}}(\alpha_{\{x,y\}}\mid\omega_{\comp{\{x,y\}}})}
\;=\;
\frac{\gamma_{\{x\}}(\beta_{x}\mid\beta_{y}\,
\omega_{\comp{\{x,y\}}})}
{\gamma_{\{x\}}(\alpha_{x}\mid\beta_{y}\,
\omega_{\comp{\{x,y\}}})}\,
\frac{\gamma_{\{y\}}(\beta_{y}
\mid\alpha_x \,\omega_{\comp{\{x,y\}}})}
{\gamma_{\{y\}}(\alpha_{y}
\mid\alpha_x \,\omega_{\comp{\{x,y\}}})}\;.
\end{equation}
The normalization $\sum_{\beta_{\{x,y\}}}
\gamma_{\{x,y\}}(\beta_{\{x,y\}}\mid\omega_{\comp{\{x,y\}}})=1$
yields
\begin{equation}
\label{eq:2.19.3}
\gamma_{\{x,y\}}(\alpha_{\{x,y\}}\mid\omega_{\comp{\{x,y\}}})\;=\;
\frac{\gamma_{\{y\}}(\alpha_{y}
\mid\alpha_x \,\omega_{\comp{\{x,y\}}})}
{\displaystyle \sum_{\beta_{\{x,y\}}}
\frac{\gamma_{\{x\}}(\beta_{x}\mid\beta_{y}\,
\omega_{\comp{\{x,y\}}})}
{\gamma_{\{x\}}(\alpha_{x}\mid\beta_{y}\,
\omega_{\comp{\{x,y\}}})}\,
\gamma_{\{y\}}(\beta_{y}
\mid\alpha_x \,\omega_{\comp{\{x,y\}}})
}
\end{equation}
This expression is, indeed, an algorithm to construct a two-site
density starting from single-site functions. A similar formula holds,
of course, interchanging $x$ with $y$. For the algorithm to
be consistent, \reff{eq:2.19.3} and its $x\leftrightarrow y$
permutation must agree. It is not hard to check that this equality is,
in fact, a necessary and sufficient condition for single-site kernels to yield
unique consistent two-site kernels. In fact, as we point out in
\cite[Appendix]{fermai04}, this is just the condition needed to construct
consistent kernels for \emph{all} finite regions.
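The construction algorithm \reff{eq:2.19.3} is easy to test numerically. The following sketch (not in the original text; $J$ and the boundary spins are arbitrary test values) rebuilds the two-site density of a one-dimensional Ising chain from its single-site densities and compares it with the direct Boltzmann computation.

```python
# Sketch of the construction algorithm (2.19.3): rebuild the two-site
# density of a 1-d Ising chain from its single-site densities.

from itertools import product
from math import exp

J = 0.4
w_left, w_right = 1, 1      # omega at the two sites neighbouring {x, y}
S = [-1, 1]

def g_x(b, by):
    """Single-site density at x; neighbours: w_left and the spin at y."""
    field = J * (w_left + by)
    return exp(b * field) / (exp(field) + exp(-field))

def g_y(b, ax):
    """Single-site density at y; neighbours: the spin at x and w_right."""
    field = J * (ax + w_right)
    return exp(b * field) / (exp(field) + exp(-field))

def g_xy_direct(a0, a1):
    """Two-site Boltzmann density computed directly."""
    def w(s0, s1):
        return exp(J * (w_left * s0 + s0 * s1 + s1 * w_right))
    return w(a0, a1) / sum(w(s0, s1) for s0, s1 in product(S, S))

def g_xy_built(a0, a1):
    """Two-site density rebuilt via formula (2.19.3)."""
    denom = sum((g_x(b0, b1) / g_x(a0, b1)) * g_y(b1, a0)
                for b0, b1 in product(S, S))
    return g_y(a1, a0) / denom

for a0, a1 in product(S, S):
    assert abs(g_xy_direct(a0, a1) - g_xy_built(a0, a1)) < 1e-12
```

For these single-site densities the order-consistency condition holds (they come from a genuine specification), so the rebuilt and direct densities coincide exactly.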
\begin{proposition}\label{pro:2.5}
Let $\{\gamma_{\{x\}}: x\in\lat\}$ be a family of strictly positive functions
$\gamma_{\{x\}}(\,\cdot\mid\cdot\,):\Omega_x\times \Omega
\longrightarrow (0,1]$ satisfying
\begin{itemize}
\item[(i)] the normalization condition
\begin{equation}
\label{eq:2.19.4}
\sum_{\sigma_x}\gamma_{\{x\}}(\sigma_{x}\mid
\omega_{\comp{\{x\}}})\;=\;1
\end{equation}
for all $\omega\in\Omega$, and
\item[(ii)] the order-consistency condition
\begin{eqnarray}
\label{eq:2.19.5}
\frac{\gamma_{\{y\}}(\alpha_{y}
\mid\alpha_x \,\omega_{\comp{\{x,y\}}})}
{\displaystyle \sum_{\beta_{\{x,y\}}}
\frac{\gamma_{\{x\}}(\beta_{x}\mid\beta_{y}\,
\omega_{\comp{\{x,y\}}})}
{\gamma_{\{x\}}(\alpha_{x}\mid\beta_{y}\,
\omega_{\comp{\{x,y\}}})}\,
\gamma_{\{y\}}(\beta_{y}
\mid\alpha_x \,\omega_{\comp{\{x,y\}}})
} \;=\; \nonumber\\
\frac{\gamma_{\{x\}}(\alpha_{x}
\mid\alpha_y \,\omega_{\comp{\{x,y\}}})}
{\displaystyle \sum_{\beta_{\{x,y\}}}
\frac{\gamma_{\{y\}}(\beta_{y}\mid\beta_{x}\,
\omega_{\comp{\{x,y\}}})}
{\gamma_{\{y\}}(\alpha_{y}\mid\beta_{x}\,
\omega_{\comp{\{x,y\}}})}\,
\gamma_{\{x\}}(\beta_{x}
\mid\alpha_y \,\omega_{\comp{\{x,y\}}})
}
\end{eqnarray}
for all $\alpha_x,\alpha_y\in \sing$ and $\omega\in\Omega$.
\end{itemize}
Then, there exists a unique specification with strictly positive
densities having the $\gamma_{\{x\}}$ as its single-site
densities. Furthermore, a probability measure $\mu$ is consistent with
this specification if and only if it is consistent with the single-site
kernels defined by the densities $\gamma_{\{x\}}$.
\end{proposition}
The proof of this proposition does not use the product structure
of $\Omega$, hence it also works for non-strictly positive specifications
whose zeros come from local exclusion rules (one must just declare
$\Omega$ to be the set of allowed configurations). Extension conditions,
and construction algorithms when the kernels have
zeros determined by asymptotic events, are
discussed in \cite{dacnah01,dacnah04,fermai04,fermai05}.
\section{What it takes to be Gibbsian}
\subsection{Boltzmann prescription. Gibbs measures}
Spin systems at equilibrium at non-zero temperature are described
through Gibbsian specifications. They are defined through the
Boltzmann prescription $\gamma_\Lambda \sim \eee^{-\beta H_\Lambda}$,
where $H_\Lambda$ ---the Hamiltonian--- is a function in units of
energy and $\beta$ a constant in units of inverse energy. It is the
inverse of the product of the temperature and the Boltzmann
constant, but it is briefly called the \emph{inverse temperature},
which is literally correct if the temperature is measured in units of
energy (e.g.\ electron-volts). Of course, every non-null $\gamma_\Lambda$ can be
written as the exponential of something, but not everything has the
right to be called a Hamiltonian in statistical mechanics. To model
microphysics, the Hamiltonian must be a sum of local terms
representing interaction energies among finite (microscopic) groups of
spins. The set of these interaction energies is, thus, the basic
object of the prescription.
\begin{definition}\label{def:3.1}
An \embf{interaction} or \embf{interaction potential} or
\embf{potential} is a family $\Phi=\{\phi_A:A\Subset\lat\}$ of
functions $\phi_A:\Omega\to\mathbb{R}$ such that $\phi_A\in\tribu_A$
(that is, $\phi_A$ depends only on the spins in the finite set $A$),
for each $A\Subset\lat$. Furthermore:
\begin{itemize}
\item The \embf{bonds} of $\Phi$ are those finite sets $A$ for which
$\phi_A\neq 0$. Let us denote by $\bonds_\Phi$ the set of bonds.
\item $\Phi$ is of \embf{finite range} if the diameter of the bonds of
$\Phi$ does not exceed a certain $r<\infty$ (the \embf{range}).
\end{itemize}
\end{definition}
Alternatively, interactions are specified writing the formal sum
$H=\sum_{A\in\bonds} \phi_A$. Such an expression must be interpreted
merely as a bookkeeping device.
The pair $(\Omega,\Phi)$ constitutes a Gibbsian \emph{model}.
The \emph{Ising model} is, perhaps, the most popular one.
It is defined by $\lat=\mathbb{Z}^d$, $\sing=\{-1,1\}$ and
\begin{equation}
\phi_A(\omega) \;=\;\left\{\begin{array}{cl}
-J_{\{x,y\}}\,\omega_x\omega_y & \mbox{if } A=\{x,y\}
\mbox{ with }\card{x-y}=1\\
-h_x\,\omega_x & \mbox{if } A=\{x\}\\
0 & \mbox{otherwise}
\end{array}\right.
\end{equation}
or, alternatively, $H =-\sum_{\langle x,y \rangle}
J_{\{x,y\}}\,\omega_x\omega_y - \sum_x h_x\,\omega_x$. The constants
$J_{\{x,y\}}$ are the nearest-neighbor \emph{couplings}, and $h_x$ is
the \emph{magnetic field} at $x$ (these parameters are constant in the
translation-invariant case). The notation ``$\langle x,y \rangle$''
is a standard way to indicate pairs of nearest-neighbor sites $x,y$.
The minus signs are a concession to physics, which demands that
energy be lowered by alignment with the field and by alignment,
resp.\ anti-alignment, of neighboring spins in the ferromagnetic
($J_{\{x,y\}}\ge 0$), resp.\ anti-ferromagnetic ($J_{\{x,y\}}\le 0$),
case. The change of variables $\xi_x=(\omega_x+1)/2$ produces the
\emph{lattice gas} model. Another well-studied model is the
\emph{Potts model} with $q$ colors: $\lat=\mathbb{Z}^d$,
$\sing=\{1,2,\ldots,q\}$ and $H=-\sum_{\langle x,y \rangle}
J_{\{x,y\}}\,\ind{\{\omega_x=\omega_y\}}$.
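To make the bookkeeping concrete, here is an illustrative sketch (not from the text; $J$, $h$, the window and the configuration are test values of my choosing) that stores the translation-invariant Ising interaction as a map $A\mapsto\phi_A$ and evaluates the Hamiltonian on a finite window.

```python
# Illustrative sketch: the 1-d Ising interaction as a family of local
# terms phi_A, and the Hamiltonian evaluated on a test configuration.

J, h = 1.0, 0.5
sites = range(5)
omega = {0: 1, 1: 1, 2: -1, 3: 1, 4: -1}

def phi(A, conf):
    """Interaction terms of the (translation-invariant) Ising model."""
    A = tuple(sorted(A))
    if len(A) == 2 and A[1] - A[0] == 1:   # nearest-neighbour bond
        return -J * conf[A[0]] * conf[A[1]]
    if len(A) == 1:                        # magnetic-field term
        return -h * conf[A[0]]
    return 0.0                             # phi_A = 0 otherwise

# bonds of Phi meeting the window: singletons and nearest-neighbour pairs
bonds = [(x,) for x in sites] + [(x, x + 1) for x in sites if x + 1 in sites]
H = sum(phi(A, omega) for A in bonds)
# agrees with  -J * sum over nn pairs  -  h * sum over sites
```

The interaction has range 1: $\phi_A$ vanishes whenever $A$ has diameter larger than 1.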
The definitions of Hamiltonian and Boltzmann weights require the
specification of conditions assuring the existence of the relevant
series. The following definition refers to the weakest of such
conditions.
\begin{definition}\label{def:4.sum}
Let $\Phi$ be an interaction.
\begin{itemize}
\item The {\bf Hamiltonian} for a region $\Lambda\Subset\lat$ with frozen external
condition $\omega$ is the real-valued function defined by
\begin{equation}
\label{eq:3.2}
H^\Phi_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda})
\;=\; \sum_{A\Subset\lat : A\cap\Lambda\neq\emptyset}
\phi_A(\sigma_\Lambda\omega)
\end{equation}
for $\sigma,\omega\in\Omega$ such that the sum exists.
\item $\Phi$ is \embf{summable at} $\omega\in\Omega$ if
$H^\Phi_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda})$
exists for all $\Lambda\Subset\lat$ and all
$\sigma_\Lambda\in\Omega_\Lambda$. Let us denote
$\Omega^\Phi_{ \rm sum}$ the set of configurations at which
the interaction is summable.
\end{itemize}
\end{definition}
[Let me recall that $\sum_{A\ni x} \phi_A(\omega)$ exists iff the
sequence $S_n(\omega)= \sum_{A: x \in A\subset \Lambda_n} \phi_A(\omega)$ is
Cauchy.]
\begin{definition}
The {\bf Boltzmann weights} for an interaction $\Phi$
are the functions defined for all $\Lambda\Subset\lat$ and all
$\omega\in\Omega^\Phi_{ \rm sum}$ by
\begin{equation}
\label{eq:3.3}
\gamma^\Phi_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda})
\;=\; \frac{\eee^{-H^\Phi_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda})}}
{Z^\Phi_\Lambda(\omega)}\;,
\end{equation}
where $Z^\Phi_\Lambda(\omega)$ is the \embf{partition function}
\begin{equation}
\label{eq:3.4}
Z^\Phi_\Lambda(\omega) \;=\; \sum_{\sigma_\Lambda\in\Omega_\Lambda}
\eee^{-H^\Phi_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda})}\;.
\end{equation}
\end{definition}
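For the smallest non-trivial case, the following hedged sketch (not in the text; $J$ and the boundary spins are test values) computes the Boltzmann weights \reff{eq:3.3}/\reff{eq:3.4} for a single Ising spin with frozen neighbours and checks that they define a probability.

```python
# Minimal sketch of the Boltzmann weights (3.3)/(3.4): Lambda = {0} in
# a 1-d Ising chain, neighbours frozen at w_left and w_right.

from math import exp

J = 1.0
w_left, w_right = 1, 1

def H(s):
    """Hamiltonian (3.2): only the two bonds meeting Lambda contribute."""
    return -J * (w_left * s + s * w_right)

Z = sum(exp(-H(s)) for s in (-1, 1))            # partition function (3.4)
gamma = {s: exp(-H(s)) / Z for s in (-1, 1)}    # Boltzmann weights (3.3)

assert abs(sum(gamma.values()) - 1.0) < 1e-12
```

With both neighbours up, the weight of the aligned spin dominates, as expected for a ferromagnetic coupling.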
Notice that the $\beta$ factor has been absorbed into $H_\Lambda$,
which amounts to a redefinition of the interaction. This stresses the
fact that this factor plays no role in the discussion of general
properties of Gibbs measures. It is, however, essential for the study
of phase transitions. Keeping to tradition, I reserve the right to
include it explicitly or absorb it according to needs.
Gibbsianness demands summability in a very strong sense.
\begin{definition}
An interaction $\Phi$ on $(\Omega,\tribu)$ is \embf{uniformly absolutely
summable} if
\begin{equation}
\label{eq:3.5}
\sum_{A\ni x} \norm{\phi_A}_\infty \;<\; \infty
\quad \mbox{for each } x\in\lat\;.
\end{equation}
The set of such uniformly absolutely summable interactions
will be denoted $\buno$.
\end{definition}
This is much more than just demanding $\Omega^\Phi_{ \rm sum}=\Omega$.
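As a hedged numerical illustration of condition \reff{eq:3.5} (my own example, with a hypothetical polynomially decaying pair interaction $\phi_{\{x,y\}}(\omega)=-\omega_x\omega_y/|x-y|^\alpha$ in one dimension): the sup-norm of each term is $|x-y|^{-\alpha}$, so the sum over bonds containing a given site is $2\sum_{d\ge1}d^{-\alpha}$, finite precisely when $\alpha>1$.

```python
# Partial sums of sum_{A contains x} ||phi_A||_inf for the 1-d pair
# interaction phi_{x,y}(w) = -w_x w_y / |x-y|**alpha.

def partial_norm_sum(alpha, n):
    """Sum of ||phi_A||_inf over bonds {x, y} with 0 < |x - y| <= n."""
    return 2 * sum(d ** (-alpha) for d in range(1, n + 1))

# alpha = 2: the sums converge (to 2 * zeta(2)), so Phi is in B_1;
# alpha = 1: they grow like 2 * log(n), so condition (3.5) fails.
```

This kind of check separates interactions in $\buno$ from those that are merely summable at particular configurations.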
\begin{definition}On $(\Omega,\tribu)$:
\begin{itemize}
\item The \embf{Gibbsian specification defined by an interaction}
$\Phi\in\buno$ is the specification $\Pi^\Phi$ having the
$\Phi$-Boltzmann weights as densities, that is, defined by
\reff{eq:2.17} for the weights $\gamma^\Phi_\Lambda$.
\item The \embf{Gibbs measures} for an interaction $\Phi\in\buno$ are
the measures consistent with $\Pi^\Phi$.
\item $\Pi$ is a \embf{Gibbsian specification} if there exists an
interaction $\Phi\in\buno$ such that $\Pi=\Pi^\Phi$.
\item $\mu$ is a \embf{Gibbsian measure} (or Gibbsian random field) if
there exists a $\Phi\in\buno$ such that
$\mu\in\mathcal{G}(\Pi^\Phi)$.
\end{itemize}
\end{definition}
\begin{exercise}
Prove that $\Pi^\Phi$ is a specification if $\Omega^\Phi_{ \rm sum}=\Omega$.
\end{exercise}
\begin{exercise}\label{exo:3.1}
Summability conditions weaker than \reff{eq:3.5} are also in the
market. An interaction is
\begin{itemize}
\item \embf{absolutely summable} if $\sum_{A\ni x}
\bigl|\phi_A(\omega)\bigr|$ converges for each $\omega\in\Omega$ and each
$x\in\lat$;
\item \embf{uniformly summable} if
$\sum_{A\ni x}\phi_A(\omega)$ converges uniformly in
$\omega\in\Omega$ for each $x\in\lat$.
\end{itemize}
Find:
\begin{itemize}
\item[(i)] An interaction that is uniformly but not absolutely
summable. [\emph{Hint:} Consider $\phi_A=(-1)^n c_n$ if $A=\Lambda_n$
and zero otherwise, for suitable constants $c_n$.]
\item[(ii)] An interaction that is absolutely but not uniformly summable.
\end{itemize}
\end{exercise}
\subsection{Properties of Gibbsian (and some other) specifications}
The Gibbsian formalism ---random fields consistent with specifications
defined by Boltzmann weights--- leads to an extremely successful
description of physical reality. It provided a unified explanation of
many experimental facts and phenomenological recipes, and it has
been an infallible tool to study new phenomena. It explains
thermodynamics, that is, the emergence of state functions like entropy
and free energy, related by Legendre transforms, which contain the
information needed to determine the thermal properties of matter
systems. Furthermore, it provides a detailed description of phase
transitions, and leads to the prediction of universal critical
exponents generalizing the law of corresponding states.
Here we are interested in the mathematical properties of Gibbsian
objects. Let me start with the observation that the map $\Phi\to
\Pi^\Phi$ is far from one-to-one. Interactions can be redefined, by
combining local terms, in infinitely many ways without changing the
corresponding Boltzmann weights. All such interactions should be
identified.
\begin{definition}
Two interactions $\Phi$ and $\widetilde\Phi$
on the same space $(\Omega,\tribu)$ are \embf{physically equivalent}
if $\pi^\Phi_\Lambda = \pi^{\widetilde\Phi}_\Lambda$ for each
$\Lambda\Subset\lat$. In our finite-spin setting, this is equivalent to
$\gamma^{\Phi}_\Lambda = \gamma^{\widetilde\Phi}_\Lambda$ for each
$\Lambda\Subset\lat$.
\end{definition}
While interactions are the right way to encode the physical
information ---and an economical way to parametrize families of
measures---, specifications are the determining mathematical objects.
Traditionally interactions have taken the center of the stage, but a
specification-based approach has the advantage of avoiding the
multi-valuedness problem associated with physical equivalence, which can
lead to rather confusing situations \cite{vanfer89}. Such an approach
is, in fact, essential for a comparative study of Gibbsian and
non-Gibbsian fields. The very beginning of this ``interaction-free''
program is the detection of the key features of Gibbsian
specifications that single them out from the rest. This is the object
of the rest of the section.
We start by determining important properties of specifications that
follow from basic attributes of an underlying interaction. Given our
focus on the finite-spin situation, we write them in terms of the density functions.
Foreseeing our non-Gibbsian needs, we shall distinguish among
configurational, directional and uniform versions of each
property. First, we
notice that Boltzmann densities are never zero; furthermore, if
$\Phi\in\buno$, this non-nullness is uniform.
\begin{definition}\label{def:3.5}
A specification $\Pi$ on $(\Omega,\tribu)$ with densities
$\{\gamma_\Lambda:\Lambda\Subset\lat\}$ is:
\begin{itemize}
\item \embf{Non-null at} $\omega\in\Omega$ if
\begin{equation}\label{eq:3.6.-3}
\gamma_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda}) \;>\;0
\end{equation}
for each $\Lambda\Subset\lat$ and $\sigma_\Lambda\in\Omega_\Lambda$.
Due to \reff{eq:2.18}, this property is equivalent
to \embf{non-nullness in direction} $\omega$, that is,
non-nullness at all configurations asymptotically equal to $\omega$.
\item \embf{Non-null} if it is non-null at all $\omega\in\Omega$.
\item \embf{Uniformly non-null} if for each $\Lambda\Subset\lat$
\begin{equation}\label{eq:3.7}
\inf_{\sigma_\Lambda\in\Omega_\Lambda,
\omega_{\comp\Lambda}\in\Omega_{\comp\Lambda}}
\gamma_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda})
\;\defby\; c_\Lambda>0\;.
\end{equation}
\end{itemize}
\end{definition}
The most immediate consequence of uniform non-nullness is the following.
\begin{proposition}
A measure consistent with a uniformly non-null specification is
non-null.
\end{proposition}
We next observe that a range-$r$ interaction produces weights ---or
kernels--- that are insensitive to spins beyond the $r$-boundary of the
region. This motivates the following definition.
\begin{definition}\label{def:3.6}
A specification $\Pi$ on $(\Omega,\tribu)$ with densities
$\{\gamma_\Lambda:\Lambda\Subset\lat\}$ is:
\begin{itemize}
\item $r$-\embf{Markovian in direction} $\theta\in\Omega$ if
\begin{equation}\label{eq:3.8}
\gamma_\Lambda(\sigma_\Lambda\mid
\omega_{\partial_r \Lambda}\eta)
- \gamma_\Lambda(\sigma_\Lambda\mid
\omega_{\partial_r \Lambda}\widetilde\eta)
\;=\;0
\end{equation}
for all $\Lambda\Subset\lat$ and all $\sigma,\omega,\eta,\widetilde\eta\in\Omega$
such that $\eta$ and $\widetilde\eta$ are asymptotically equal to $\theta$.
\item $r$-\embf{Markovian} if \reff{eq:3.8} holds for all
$\omega$, $\eta$ and $\widetilde\eta$ in $\Omega$, or, equivalently, if
$\pi_\Lambda(A\mid\cdot\,)\in\tribu_{\partial_r\Lambda}$ for all
$\Lambda\Subset\lat$ and all $A\in\tribu_\Lambda$.
\item \embf{Markovian} (resp.\ \embf{Markovian in direction}
$\theta\in\Omega$) if it is
$r$-Markovian (resp.\ $r$-Markovian in direction
$\theta\in\Omega$) for some $r\ge 0$.
\end{itemize}
\end{definition}
For general, possibly infinite-range, interactions in $\mathcal{B}_1$,
a simple calculation shows that strict Markovianness becomes ``almost
Markovianness'' in the sense that the difference \reff{eq:3.8} becomes
zero only in the limit $r\to\infty$. In our setting, this corresponds
to continuity with respect to the external condition [recall the
discussion around and following display \reff{eq:1.9}]. The
corresponding definitions are as follows.
\begin{definition} \label{def:3.4}
A specification $\Pi$ on $(\Omega,\tribu)$ with densities
$\{\gamma_\Lambda:\Lambda\Subset\lat\}$ is:
\begin{itemize}
\item \embf{Quasilocal at $\omega$ in the direction $\theta$} iff
\begin{equation}\label{eq:3.9}
\Bigl| \gamma_\Lambda(\sigma_\Lambda\mid
\omega_{\Lambda_n}\theta) -
\gamma_\Lambda(\sigma_\Lambda\mid \omega)
\Bigr|\;\tendn{}\; 0
\end{equation}
for each $\Lambda\Subset\lat$ and each
$\sigma_\Lambda\in\Omega_\Lambda$.
\item \embf{Quasilocal at $\omega$} iff it is quasilocal at $\omega$ in all
directions, that is, iff
\begin{equation}\label{eq:3.10}
\sup_{\eta,\widetilde\eta\in\Omega} \Bigl| \gamma_\Lambda(\sigma_\Lambda\mid
\omega_{\Lambda_n}\eta) -
\gamma_\Lambda(\sigma_\Lambda\mid \omega_{\Lambda_n}\widetilde\eta)
\Bigr|\;\tendn{}\; 0
\end{equation}
for each $\Lambda\Subset\lat$ and each
$\sigma_\Lambda\in\Omega_\Lambda$.
\item\embf{Quasilocal} iff
\begin{equation}\label{eq:3.11}
\sup_{\omega,\eta,\widetilde\eta\in\Omega}
\Bigl| \gamma_\Lambda(\sigma_\Lambda\mid
\omega_{\Lambda_n}\eta) -
\gamma_\Lambda(\sigma_\Lambda\mid \omega_{\Lambda_n}\widetilde\eta)
\Bigr|\;\tendn{}\; 0
\end{equation}
for each $\Lambda\Subset\lat$ and each
$\sigma_\Lambda\in\Omega_\Lambda$.
\end{itemize}
\end{definition}
In more general settings continuity is not equivalent to uniform
continuity. In such situations we therefore
obtain weaker definitions by replacing
``quasilocal'' by ``continuous'' and removing the ``sup'' in
\reff{eq:3.10} and \reff{eq:3.11}. A continuous specification is also
called \emph{Feller}. For our finite-spin models, the Feller property
and quasilocality are synonymous. Let me also observe that, given
the compactness of our configuration space, for a
quasilocal specification non-nullness is equivalent to uniform
non-nullness (the minimum is achieved).
With these definitions we can now state the easy part of the Kozlov theorem.
\begin{proposition}[Necessary conditions for Gibbsianness]
\label{pro:3.1}
If a specification is Gibbsian, then it is uniformly
non-null and quasilocal.
\end{proposition}
\begin{corollary}\label{cor:3.5}
Every Gibbsian measure is non-null.
\end{corollary}
\begin{exercise} Prove Proposition \ref{pro:3.1}. Start by
proving that for $\Phi\in \buno$ the functions $\omega\to
H_\Lambda^\Phi(\sigma_\Lambda\mid \omega_{\comp\Lambda})$ are
continuous.
\end{exercise}
Thanks to Proposition \ref{pro:2.4}, all the preceding properties are inherited
from single-site kernels.
\begin{proposition}\label{pro:2.7}
Let $\Pi$ be a specification in $\Omega$
with densities $\{\gamma_\Lambda:\Lambda\Subset\lat\}$
and $\omega,\theta\in\Omega$.
\begin{itemize}
\item[(a)] $\Pi$ is non-null at $\omega$, respectively non-null,
uniformly non-null, iff the corresponding property in Definition \ref{def:3.5}
is satisfied for all single-site densities $\gamma_{\{x\}}$.
\item[(b)] If $\Pi$ is non-null at $\theta$, then it is
$r$-Markovian in direction $\theta$ iff the corresponding property in Definition \ref{def:3.6}
is satisfied for all single-site densities $\gamma_{\{x\}}$.
\item[(c)] If $\Pi$ is non-null at $\theta$, then it is
quasilocal at $\omega$
in the direction $\theta$, respectively quasilocal at $\omega$
iff \reff{eq:3.9}, resp.\ \reff{eq:3.10},
is satisfied for all single-site densities $\gamma_{\{x\}}$
and all finite-region modifications of $\omega$.
\item[(d)] If $\Pi$ is uniformly non-null, then it is quasilocal
iff \reff{eq:3.11}
is satisfied for all single-site densities $\gamma_{\{x\}}$.
\end{itemize}
\end{proposition}
\begin{exercise}
Given a specification on $\{-1,1\}^\lat$, consider the
\emph{spin-flip relative energies} $h_x$ defined by the identity
\begin{equation}\label{eq:3.11.1}
\frac{\gamma_{\{x\}}(\sigma_x\mid\omega)}
{\gamma_{\{x\}}(-\sigma_x\mid\omega)}
\;=\; \exp\Bigl\{-h_x(\sigma_x\mid\omega)\Bigr\}\;.
\end{equation}
\begin{itemize}
\item[(i)] Rewrite the previous proposition in terms of
properties of $h_x$.
\item[(ii)] Write an analogous result for arbitrary spins,
replacing the spin-flip by a permutation of $\sing$.
\end{itemize}
(The use of $h_x$ is favored
by the Flemish school.)
\end{exercise}
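For the nearest-neighbour Ising chain the identity \reff{eq:3.11.1} can be made completely explicit: the single-site kernel is a logistic function of the local field, and $h_x$ comes out linear in that field. A small Python check (the couplings are illustrative):

```python
import math

beta, J, h = 1.0, 0.5, 0.2          # illustrative Ising-chain parameters

def gamma(sigma_x, neighbours):
    """Single-site kernel gamma_{x}(sigma_x | omega) of the Ising chain."""
    f = J * sum(neighbours) + h
    return math.exp(beta * sigma_x * f) / (2.0 * math.cosh(beta * f))

def h_x(sigma_x, neighbours):
    """Spin-flip relative energy defined through identity (3.11.1)."""
    return -math.log(gamma(sigma_x, neighbours) / gamma(-sigma_x, neighbours))

# For this kernel h_x(sigma_x | omega) = -2*beta*sigma_x*(J*sum(nbrs) + h)
for nb in ((1, 1), (1, -1), (-1, -1)):
    for s in (1, -1):
        assert abs(h_x(s, nb) + 2.0 * beta * s * (J * sum(nb) + h)) < 1e-12
```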
We finish this subsection with an illustration of how topology and
measure theory combine to match physics.
\begin{theorem}\label{theo:4.1}
A non-null probability measure on $(\Omega,\tribu)$ is consistent with
at most one quasi-local specification.
\end{theorem}
In particular this means that a Gibbs measure can be Gibbsian for only
\emph{one} quasilocal specification, only one interaction modulo
physical equivalence, only one temperature, \ldots A very rewarding
result.
\proof Let $\mu$ be a measure consistent with two quasilocal
specifications $\Pi$, $\widetilde\Pi$ of respective kernels and
densities $\pi_\Lambda$, $\widetilde\pi_\Lambda$, $\gamma_\Lambda$ and
$\widetilde\gamma_\Lambda$, $\Lambda\Subset\lat$. For each such
$\Lambda$ and each $\sigma_\Lambda\in\Omega_\Lambda$, let
\begin{equation}
A_n \;=\; \Bigl\{\omega\in\Omega :
\gamma_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda}) -
\widetilde\gamma_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda})
> \frac{1}{n}\Bigr\}\;.
\end{equation}
We have
\begin{eqnarray}
0 &=& \mu\Bigl(\pi_\Lambda(\ind{A_n}C_{\sigma_\Lambda}\mid\,\cdot\,)
-\widetilde\pi_\Lambda(\ind{A_n}C_{\sigma_\Lambda}\mid\,\cdot\,)\Bigr)
\nonumber\\
&=&
\mu\Bigl(\ind{A_n} \bigl[\gamma_\Lambda(\sigma_\Lambda\mid\,\cdot\,)
- \widetilde\gamma_\Lambda(\sigma_\Lambda\mid\,\cdot\,)\bigr]\Bigr)
\nonumber\\
&\ge& \frac{1}{n}\, \mu(A_n)\;.
\end{eqnarray}
Hence $\mu(A_n)=0$ and, by the $\sigma$-additivity of $\mu$,
\begin{equation}\label{eq:3.12}
\gamma_\Lambda(\sigma_\Lambda\mid\,\cdot\,)\;\le\;
\widetilde\gamma_\Lambda(\sigma_\Lambda\mid\,\cdot\,)
\qquad \mu\mbox{-almost surely.}
\end{equation}
But, as $\mu$ is non-null, the set of points where
\reff{eq:3.12} holds must be dense, and the continuity of both
$\gamma_\Lambda(\sigma_\Lambda\mid\,\cdot\,)$ and
$\widetilde\gamma_\Lambda(\sigma_\Lambda\mid\,\cdot\,)$ implies that,
in fact,
\begin{equation}\label{eq:3.13}
\gamma_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda}) \;\le\;
\widetilde\gamma_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda})
\qquad\mbox{for all }\sigma_\Lambda\in \Omega_\Lambda
\mbox{ and }
\omega_{\comp\Lambda}\in\Omega_{\comp\Lambda}\;.
\end{equation}
This argument also proves the opposite inequality through the
interchange
$\gamma_\Lambda \leftrightarrow\widetilde\gamma_\Lambda$. \qed
\subsection{The Gibbsianness question}
We turn now to the converse of Proposition \ref{pro:3.1}, namely the
determination of sufficient conditions for Gibbsianness. This is a
key step towards the development of a specification-based theory not
relying on explicit choices of potentials. The issue is: \emph{Which
conditions guarantee that for a specification $\Pi$ there exists some
$\Phi\in\buno$ such that $\Pi=\Pi^\Phi$?}
Historically, this question was first addressed ---and solved--- for
Markovian fields. The simplest and most informative solution was
proposed by Grimmett \cite{gri73} who gave an explicit form of the
potential using M\"obius transform. Kozlov \cite{koz74} proved the
general version by generalizing this argument. An alternative proof
was given simultaneously by Sullivan \cite{sul73}, but using a
slightly different space of interactions. In the sequel I try to
present a pedagogical exposition of Kozlov's proof and its consequences
for the non-Gibbsianness typology.
Kozlov answered the Gibbsianness question by actually reconstructing
a potential out of the given specification. From all the physically equivalent
interactions he chose those with the \emph{vacuum} property.
\begin{definition}
An interaction $\Phi$ in $\Omega$ has \embf{vacuum} $\theta\in\Omega$
if
\begin{equation}\label{eq:3.14}
\phi_A(\omega) \;=\; 0 \qquad \mbox{if } \omega_i=\theta_i
\mbox{ for some } i\in A
\end{equation}
for all $A\Subset\lat$.
\end{definition}
The detailed proof of Kozlov's theorem involves a number of stages.
\subsubsection{Construction of the vacuum potential}
As a first step, let us obtain the formulas proposed by Kozlov
(and Grimmett before him). This is actually not hard. We are
presented with an initial specification with kernels $\pi_\Lambda$
and densities $\gamma_\Lambda$, we choose a
vacuum configuration $\theta$ and we search for a potential $\Phi$
satisfying the vacuum condition \reff{eq:3.14} and such that the
Boltzmann prescription \reff{eq:3.3}--\reff{eq:3.4} leads to the
initial densities. We follow the natural strategy: We pretend that
such a potential exists and see what we get by analyzing first
one-site regions, then two-site regions, and so on. In this way
we obtain its only possible expression. This expression involves
ratios of densities, thus some degree of non-nullness is required.
The first observation is that
\begin{equation}\label{eq:3.15.0}
H_\Lambda(\theta_\Lambda\mid\theta_{\comp\Lambda})\;=\; 0
\end{equation}
due to the vacuum condition \reff{eq:3.14}, hence
\begin{equation}\label{eq:3.15}
\gamma_\Lambda(\theta_\Lambda\mid\theta_{\comp\Lambda})\;=\;
\frac{1}{Z_\Lambda(\theta)}
\end{equation}
for all $\Lambda\Subset\lat$. For one-site regions the vacuum condition
implies that
\begin{equation}\label{eq:3.16.0}
H_{\{x\}}(\sigma_x\mid\theta_{\comp{\{x\}}}) \;=\; \phi_{\{x\}}(\sigma_x)\;.
\end{equation}
Thus, the Boltzmann prescription and \reff{eq:3.15} imply
\begin{equation}\label{eq:3.16}
\eee^{-\phi_{\{x\}}(\sigma_x)} \;=\;
\frac{\gamma_{\{x\}}(\sigma_x\mid\theta_{\comp{\{x\}}})}
{\gamma_{\{x\}}(\theta_x\mid\theta_{\comp{\{x\}}})}
\end{equation}
for all $x\in\lat$. Two-site regions come next. By the
vacuum condition,
\begin{equation}\label{eq:3.17.0}
H_{\{x,y\}}(\sigma_{\{x,y\}}\mid\theta_{\comp{\{x,y\}}}) \;=\;
\phi_{\{x,y\}}(\sigma_{\{x,y\}}) + \phi_{\{x\}}(\sigma_x)
+ \phi_{\{y\}}(\sigma_y)\;.
\end{equation}
Therefore, the Boltzmann prescription plus the preceding one-site
calculations lead us to
\begin{eqnarray}\label{eq:3.17}
\eee^{-\phi_{\{x,y\}}(\sigma_{\{x,y\}})} &=&
\frac{\gamma_{\{x,y\}}(\sigma_{\{x,y\}}\mid\theta_{\comp{\{x,y\}}})}
{\gamma_{\{x,y\}}(\theta_{\{x,y\}}\mid\theta_{\comp{\{x,y\}}})}
\;\times\, \eee^{\phi_{\{x\}}(\sigma_x)} \;\times\;
\eee^{\phi_{\{y\}}(\sigma_y)}\nonumber\\
&=& \left[
\frac{\gamma_{\{x,y\}}(\sigma_{\{x,y\}}\mid\theta_{\comp{\{x,y\}}})}
{\gamma_{\{x,y\}}(\theta_{\{x,y\}}\mid\theta_{\comp{\{x,y\}}})}
\right]
\,\left[\frac{\gamma_{\{x\}}(\sigma_x\mid\theta_{\comp{\{x\}}})}
{\gamma_{\{x\}}(\theta_x\mid\theta_{\comp{\{x\}}})}
\right]^{-1}
\left[\frac{\gamma_{\{y\}}(\sigma_y\mid\theta_{\comp{\{y\}}})}
{\gamma_{\{y\}}(\theta_y\mid\theta_{\comp{\{y\}}})}
\right]^{-1}
\end{eqnarray}
We begin to see alternating $+1$ and $-1$ exponents. To confirm this
feature, let's work out the term corresponding to a three-site region
$A=\{x_1,x_2,x_3\}$. As the Hamiltonian with $\theta$
external conditions is the sum of the three-site interaction plus all the
two-site and one-site terms, we obtain
\begin{equation}\label{eq:3.18}
\eee^{-\phi_A(\sigma_A)} \;=\;
\frac{\gamma_A(\sigma_A\mid\theta_{\comp{A}})}
{\gamma_A(\theta_A\mid\theta_{\comp{A}})}
\;\times\, \prod_{B\subset A\atop \card{B}=2}
\eee^{\phi_B(\sigma_B)} \;\times\;
\prod_{x\in A} \eee^{\phi_{\{x\}}(\sigma_x)}
\end{equation}
which, by \reff{eq:3.16} and \reff{eq:3.17}, implies
\begin{equation}\label{eq:3.19}
\eee^{-\phi_A(\sigma_A)} \;=\;
\biggl[\frac{\gamma_A(\sigma_A\mid\theta_{\comp{A}})}
{\gamma_A(\theta_A\mid\theta_{\comp{A}})}\biggr]
\,\biggl[ \prod_{B\subset A\atop \card{B}=2}
\frac{\gamma_B(\sigma_B\mid\theta_{\comp{B}})}
{\gamma_B(\theta_B\mid\theta_{\comp{B}})}\biggr]^{-1}
\prod_{x\in A}
\biggl[\frac{\gamma_{\{x\}}(\sigma_x\mid\theta_{\comp{\{x\}}})}
{\gamma_{\{x\}}(\theta_x\mid\theta_{\comp{\{x\}}})}
\biggr]\;.
\end{equation}
We are ready to propose an inductive formula: If $A\Subset\lat$,
\begin{equation}\label{eq:3.20}
\eee^{-\phi_A(\sigma_A)} \;=\;
\prod_{B\subset A\atop B\neq\emptyset}
\biggl[
\frac{\gamma_B(\sigma_B\mid\theta_{\comp{B}})}
{\gamma_B(\theta_B\mid\theta_{\comp{B}})}
\biggr]^{(-1)^{\card{A\setminus B}}}
\;.
\end{equation}
Taking its logarithm leads us to the following definition.
\begin{definition}\label{def:4.5}
Let $\theta\in\Omega$ and let $\Pi$ be a specification with densities
$\{\gamma_\Lambda:\Lambda\Subset\lat\}$ that is non-null at
$\theta$. The $\theta$-\embf{vacuum potential} for $\Pi$ is the interaction
defined by
\begin{equation}\label{eq:3.21}
\phi^{\gamma,\theta}_A(\sigma_A) \;=\; -
\sum_{B\subset A\atop B\neq\emptyset} (-1)^{\card{A\setminus B}}
\log\biggl[\frac{\gamma_B(\sigma_B\mid\theta_{\comp{B}})}
{\gamma_B(\theta_B\mid\theta_{\comp{B}})}
\biggr]
\end{equation}
for each $A\Subset\lat$ and each $\sigma\in\Omega$.
\end{definition}
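Formula \reff{eq:3.21} can be tried out on a small example. The following Python sketch computes the $\theta$-vacuum potential from the exact finite-volume densities of a 4-site Ising chain (which stands in for the lattice; the couplings are arbitrary), and checks the vacuum property \reff{eq:3.14}, the finite range predicted by Theorem \ref{theo:3.5} below, and the consistency identity \reff{eq:3.22} proved below:

```python
import itertools
import math

SITES = (0, 1, 2, 3)            # a 4-site chain standing in for the lattice
J, h = 0.7, 0.3                 # illustrative nearest-neighbour couplings
THETA = (1, 1, 1, 1)            # the chosen vacuum configuration

def H(w):
    """Full finite-volume Hamiltonian of the chain."""
    return -J * sum(w[i] * w[i + 1] for i in range(3)) - h * sum(w)

def gamma(B, sigma, outside=THETA):
    """Density gamma_B(sigma_B | outside restricted to the complement of B)."""
    def merged(vals):
        w = list(outside)
        for x, v in zip(B, vals):
            w[x] = v
        return tuple(w)
    num = math.exp(-H(merged([sigma[x] for x in B])))
    den = sum(math.exp(-H(merged(vals)))
              for vals in itertools.product((1, -1), repeat=len(B)))
    return num / den

def phi(A, sigma):
    """theta-vacuum potential phi_A(sigma_A), eq. (3.21)."""
    total = 0.0
    for k in range(1, len(A) + 1):
        for B in itertools.combinations(A, k):
            ratio = gamma(B, sigma) / gamma(B, THETA)
            total -= (-1) ** (len(A) - len(B)) * math.log(ratio)
    return total

sigma = (-1, -1, -1, -1)        # a configuration with no spin at the vacuum

# Vacuum property (3.14): phi_A vanishes if some spin of A sits at theta
assert abs(phi((1, 2), (-1, 1, -1, -1))) < 1e-10

# Finite range (Theorem 3.5): only singletons and adjacent pairs survive
assert abs(phi((0, 2), sigma)) < 1e-10
assert abs(phi((0, 1, 2), sigma)) < 1e-10

# Consistency (3.22) for Lambda equal to the whole chain
lhs = sum(phi(B, sigma) for k in range(1, 5)
          for B in itertools.combinations(SITES, k))
rhs = -math.log(gamma(SITES, sigma) / gamma(SITES, THETA))
assert abs(lhs - rhs) < 1e-10
```

With $\theta$ the all-$+1$ configuration, only singletons and nearest-neighbour pairs carry a non-zero potential, as Markovianness demands.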
The proof that, indeed, such a potential gives us back the original
densities turns out to be a simple application of M\"obius
transforms.
\begin{theorem}\label{theo:4.2}
If a specification $\Pi$ is non-null at $\theta\in\Omega$, then
the vacuum potential \reff{eq:3.21} verifies
\begin{equation}\label{eq:3.22}
\sum_{B\subset \Lambda\atop B\neq\emptyset}
\phi^{\gamma,\theta}_B(\sigma_B) \;=\; -
\log\biggl[\frac{\gamma_\Lambda(\sigma_\Lambda\mid\theta_{\comp{\Lambda}})}
{\gamma_\Lambda(\theta_\Lambda\mid\theta_{\comp{\Lambda}})}\biggr]
\end{equation}
and, thus, its densities with external condition asymptotically equal to $\theta$
can be written as Boltzmann weights for $\Phi^{\gamma,\theta}$:
\begin{equation}\label{eq:3.21.1}
\gamma_\Lambda(\sigma_\Lambda\mid\omega_{\Gamma\setminus\Lambda}
\,\theta_{\comp{\Gamma}})\;=\;
\gamma_\Lambda^{\Phi^{\gamma,\theta}}(\sigma_\Lambda\mid
\omega_{\Gamma\setminus\Lambda}
\,\theta_{\comp{\Gamma}})
\end{equation}
for all $\Lambda\subset\Gamma\Subset\lat$ and all $\sigma,\omega\in\Omega$.
\end{theorem}
\proof
Due to the bar-displacement property \reff{eq:2.19},
it is enough to prove \reff{eq:3.21.1} for $\omega=\theta$
(recall that non-nullness at $\theta$
implies non-nullness at configurations asymptotically equal to $\theta$). In
this case it is clear that \reff{eq:3.22} implies \reff{eq:3.21.1}, because
the normalization of the densities then yields \reff{eq:3.15}.
But the equivalence between \reff{eq:3.21} and \reff{eq:3.22},
supplemented with the conventions $\gamma_\emptyset=1$ and
$\phi^{\gamma,\theta}_\emptyset=0$, is
a particular case of the following well-known result. \qed
\begin{theorem}[M\"obius transform]\label{theo:4.3}
Let $\mathcal{E}$ be a finite set, $\mathcal{F}$ a commutative group
and $F$ and $G$ functions from the subsets of $\mathcal{E}$
to $\mathcal{F}$. We write
$F=(F_A)_{A\subset\mathcal{E}}$,
$G=(G_A)_{A\subset\mathcal{E}}$.
Then,
\begin{equation}\label{eq:3.23}
\biggl[ \forall\, A\subset\mathcal{E}\;,\;
F_A=\sum_{B\subset A} (-1)^{\card{A\setminus B}} G_B
\biggr]
\quad\Longleftrightarrow\quad
\biggl[\forall \, A\subset\mathcal{E}\;,\;
G_A=\sum_{B\subset A} F_B\biggr]\;.
\end{equation}
\end{theorem}
Let us discuss its (elementary) proof. The argument will be useful to
extract other properties of the vacuum potential. It all follows from the
following, equally elementary, lemma:
\begin{lemma}\label{lem:3.1}
Let $E$ be any non-empty finite set. Then
\begin{equation}\label{eq:3.24}
\sum_{D\subset E} (-1)^{\card{D}} \;=\; 0\;.
\end{equation}
\end{lemma}
\proof Let us choose some $x\in E$ and decompose
\begin{equation}\label{eq:3.25}
\sum_{D\subset E} (-1)^{\card{D}} \;=\;
\sum_{D\subset E\atop x\in D} (-1)^{\card{D}} +
\sum_{C\subset E\atop x\not\in C} (-1)^{\card{C}}\;.
\end{equation}
The substitution $D=\{x\}\cup C$ shows that both terms cancel out. \qed
\medskip
\proofof{Theorem \protect\ref{theo:4.3}}
Necessity:
\begin{equation}\label{eq:3.26}
\sum_{B\subset A} F_B \;=\;
\sum_{B\subset A} \sum_{C\subset B} (-1)^{\card{B\setminus C}} G_C
\;=\; \sum_{C\subset A} G_C \sum_{D\subset A\setminus C} (-1)^{\card{D}}
\;=\; G_A\;.
\end{equation}
Sufficiency:
\begin{equation}\label{eq:3.27}
\sum_{B\subset A} (-1)^{\card{A\setminus B}} G_B\;=\;
\sum_{B\subset A} (-1)^{\card{A\setminus B}} \sum_{C\subset B} F_C
\;=\; \sum_{C\subset A} F_C \sum_{D\subset A\setminus C} (-1)^{\card{D}}
\;= F_A\;.
\end{equation}
In both lines, the second equality follows from an interchange of the
sums and the substitution $D=B\setminus C$ (necessity), respectively
$D=A\setminus B$ (sufficiency); the last one follows from the previous
lemma. \qed
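Both directions of \reff{eq:3.23}, together with Lemma \ref{lem:3.1}, can be verified exhaustively on a small set; a short Python check (the base set and the integer-valued family $G$ are arbitrary):

```python
import itertools
import random

random.seed(0)
E = (0, 1, 2, 3)
subsets = [frozenset(c) for k in range(len(E) + 1)
           for c in itertools.combinations(E, k)]

# A random integer-valued family G indexed by the subsets of E
G = {A: random.randint(-5, 5) for A in subsets}

# F_A = sum over B subset A of (-1)^{|A \ B|} G_B  (Moebius transform)
F = {A: sum((-1) ** (len(A) - len(B)) * G[B]
            for B in subsets if B <= A)
     for A in subsets}

# Inverse direction of (3.23): G_A = sum over B subset A of F_B
for A in subsets:
    assert G[A] == sum(F[B] for B in subsets if B <= A)

# Lemma 3.1: the alternating sum over subsets of a non-empty set vanishes
for A in subsets:
    s = sum((-1) ** len(B) for B in subsets if B <= A)
    assert s == (1 if len(A) == 0 else 0)
```

Integer arithmetic makes the check exact, with no floating-point tolerance needed.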
\bigskip
\subsubsection{Summability of the vacuum potential}
In order to pass to the limit $\Gamma\to\lat$ in \reff{eq:3.22} we need to verify that
the vacuum potential is summable in some sense. Of course, that requires
suitable properties of the specification. As a warm-up, let us verify
that Markovianness implies finite range.
\begin{theorem}\label{theo:3.5}
Let $\Pi$ be a specification that is non-null and $r$-Markovian in direction
$\theta\in\Omega$. Then the range of the $\theta$-vacuum potential does
not exceed $r$.
\end{theorem}
\proof To simplify the writing we adopt the conventions $\gamma_\emptyset=1$,
$\phi^{\gamma,\theta}_\emptyset=0$.
Let $A\Subset\lat$ be a set of sites with diameter exceeding $r$, and
let $x,y\in A$ be such that $\card{x-y}>r$. We decompose the sum defining
the vacuum potential in \reff{eq:3.21} according to the location of $x$ and $y$:
\begin{equation}\label{eq:3.28}
\phi^{\gamma,\theta}_A(\sigma_A) \;=\; -
\biggl[\sum_{B\subset A\atop B\ni x,y} +
\sum_{B\subset A\atop B\ni x\,, B\not\ni y}
+ \sum_{B\subset A\atop B\ni y\,,\, B\not\ni x}
+ \sum_{B\subset A\atop B\not\ni x,y}
\biggr] (-1)^{\card{A\setminus B}}
\log\biggl[\frac{\gamma_B(\sigma_B\mid\theta_{\comp{B}})}
{\gamma_B(\theta_B\mid\theta_{\comp{B}})}
\biggr]\;.
\end{equation}
In the first three sums, let us respectively substitute $C=B\setminus\{x,y\}$,
$C=B\setminus\{x\}$ and $C=B\setminus\{y\}$. Alternating signs appear,
which leads to
\begin{eqnarray}\label{eq:3.29}
\phi^{\gamma,\theta}_A(\sigma_A) &=& -
\sum_{C\subset A\setminus\{x,y\}}
(-1)^{\card{A\setminus C}}\;
\log\biggl[\frac{\gamma_{C\cup\{x,y\}}(\sigma_C\,\sigma_x\,\sigma_y\mid\theta)}
{\gamma_{C\cup\{x,y\}}(\theta_C\,\theta_x\theta_y\mid\theta)}\,\nonumber\\
&&\qquad\qquad
\times \frac{\gamma_{C\cup\{x\}}(\theta_C\,\theta_x\mid\theta)}
{\gamma_{C\cup\{x\}}(\sigma_C\,\sigma_x\mid\theta)}\,
\frac{\gamma_{C\cup\{y\}}(\theta_C\,\theta_y\mid\theta)}
{\gamma_{C\cup\{y\}}(\sigma_C\,\sigma_y\mid\theta)}\,
\frac{\gamma_C(\sigma_C\mid\theta)}{\gamma_C(\theta_C\mid\theta)}
\biggr]
\end{eqnarray}
We displace the bar in the last three ratios, thanks to \reff{eq:2.19},
so as to incorporate the whole
set $C\cup\{x,y\}$ inside the conditioning. All the terms
$\gamma_{C\cup\{x,y\}}(\theta_C\,\theta_x\theta_y\mid\theta)$ cancel out
and we obtain
\begin{eqnarray}\label{eq:3.30}
\phi^{\gamma,\theta}_A(\sigma_A) &=& -
\sum_{C\subset A\setminus\{x,y\}}
(-1)^{\card{A\setminus C}}\;
\log\biggl[\frac{\gamma_{C\cup\{x,y\}}(\sigma_C\,\sigma_x\,\sigma_y\mid\theta)}
{\gamma_{C\cup\{x,y\}}(\sigma_C\,\sigma_x\,\theta_y\mid\theta)}\,
\frac{\gamma_{C\cup\{x,y\}}(\sigma_C\,\theta_x\,\theta_y\mid\theta)}
{\gamma_{C\cup\{x,y\}}(\sigma_C\,\theta_x\,\sigma_y\mid\theta)}\biggr]\nonumber\\
&=& - \sum_{C\subset A\setminus\{x,y\}}
(-1)^{\card{A\setminus C}}\;
\log\biggl[\frac{\gamma_{\{y\}}(\sigma_y\mid\sigma_C\,\sigma_x\,\theta)}
{\gamma_{\{y\}}(\theta_y\mid\sigma_C\,\sigma_x\,\theta)}\,
\frac{\gamma_{\{y\}}(\theta_y\mid\sigma_C\,\theta_x\,\theta)}
{\gamma_{\{y\}}(\sigma_y\mid\sigma_C\,\theta_x\,\theta)}\biggr]\;,
\end{eqnarray}
where we have used \reff{eq:2.19} again in each ratio.
But the $r$-Markovianness hypothesis implies that
$\gamma_{\{y\}}(\,\cdot \mid\sigma_C\,\sigma_x\,\theta)$
equals $\gamma_{\{y\}}(\,\cdot \mid\sigma_C\,\theta_x\,\theta)$,
thus the argument of the logarithm is equal to one. This implies
$\phi^{\gamma,\theta}_A=0$.\qed
\medskip
We see that in the proof, Markovianness is used only at the level of
single-site densities. This is, of course, not a surprise in view of
Proposition \ref{pro:2.7}.
As mentioned above, this theorem (in its ``directionless'' version) is
associated with a number of well-known probabilists --- Averintsev, Spitzer,
Hammersley and Clifford, Preston, and Grimmett. Historical
notes can be found in the introduction to the last author's contribution \cite{gri73},
which is also the genesis for the preceding proof. The strategy of this proof can be
used to prove the first of the following overdue observations.
\begin{exercise}\label{exe:3.5}
\
\begin{itemize}
\item[(i)] Prove that $\Phi^{\gamma,\theta}$ is indeed a vacuum potential, that is, prove that
it satisfies property \reff{eq:3.14}.
\item[(ii)] Formalize the obvious fact that a $\theta$-vacuum potential is unique.
\end{itemize}
\end{exercise}
It is even easier to prove a similar theorem with \emph{Markovian} replaced
by \emph{quasilocal}. We only need the following identity. If
$\Lambda\subset\widetilde\Lambda\Subset\lat$ and
$\sigma,\omega\in\Omega$,
\begin{eqnarray}\label{eq:3.31}
\log\biggl[\frac{\gamma_{\Lambda}(\omega_\Lambda\mid
\omega_{\widetilde\Lambda\setminus\Lambda}\,\theta)}
{\gamma_{\Lambda}(\theta_\Lambda\mid
\omega_{\widetilde\Lambda\setminus\Lambda}\,\theta)}\biggr]&=&
\log\biggl[\frac{\gamma_{\widetilde\Lambda}
(\omega_{\widetilde\Lambda}\mid\theta)}
{\gamma_{\widetilde\Lambda}(\theta_{\Lambda}\,
\omega_{\widetilde\Lambda\setminus\Lambda}\mid\theta)}\biggr]
\nonumber\\
&=&
\log\biggl[\frac{\gamma_{\widetilde\Lambda}
(\omega_{\widetilde\Lambda}\mid\theta)}
{\gamma_{\widetilde\Lambda}(\theta_{\widetilde\Lambda}\mid\theta)}\biggr]
- \log\biggl[\frac{\gamma_{\widetilde\Lambda\setminus\Lambda}
(\omega_{\widetilde\Lambda\setminus\Lambda}\mid\theta)}
{\gamma_{\widetilde\Lambda\setminus\Lambda}
(\theta_{\widetilde\Lambda\setminus\Lambda}\mid\theta)}\biggr]\\
&=& -\sum_{B\cap\Lambda\neq\emptyset \atop B\subset\widetilde\Lambda}
\phi_B^{\gamma,\theta}(\omega)\;,\nonumber
\end{eqnarray}
where the first two equalities follow from the bar-displacement
property \reff{eq:2.19} and the last one from \reff{eq:3.22}.
This immediately implies the following theorem.
\begin{theorem}\label{theo:4.4}
Let $\Pi$ be a specification that is non-null at $\omega$ and $\theta\in\Omega$ and
quasilocal at $\omega$ in direction $\theta$. Then its $\theta$-vacuum
potential is summable at $\omega$. In fact,
\begin{equation}\label{eq:3.32}
H^{\Phi^{\gamma,\theta}}_\Lambda(\sigma_\Lambda\mid\omega_{\comp\Lambda})\;=\;
- \lim_{n\to\infty}
\log\biggl[\frac{\gamma_{\Lambda}(\sigma_\Lambda\mid
\omega_{\Lambda_n\setminus\Lambda}\,\theta)}
{\gamma_{\Lambda}(\theta_\Lambda\mid
\omega_{\Lambda_n\setminus\Lambda}\,\theta)}\biggr]
\end{equation}
for every $\Lambda\Subset\lat$ and $\sigma_\Lambda\in\Omega_\Lambda$,
and, thus, the densities of $\Pi$ with external condition $\omega$
can be written as Boltzmann weights:
\begin{equation}\label{eq:3.33}
\gamma_\Lambda(\,\cdot\mid\omega_{\comp{\Lambda}})\;=\;
\gamma_\Lambda^{\Phi^{\gamma,\theta}}(\,\cdot\mid
\omega_{\comp{\Lambda}})
\end{equation}
for all $\Lambda\Subset\lat$.
\end{theorem}
\subsubsection{Kozlov theorem}
Gibbsianness requires uniform and absolute summability of the
interaction. Absolute summability seems, in principle, not to be much
of a problem. Indeed, due to our freedom to pass to physically equivalent
interactions, we can use partial sums to define an equivalent,
absolutely convergent interaction. There is, however, a rather subtle
obstacle to this strategy (I owe this observation to Frank Redig):
If we do not insist on
uniformity, the resummation procedure becomes $\omega$-dependent,
and it is not clear whether the resulting potential would remain
\emph{measurable}. Therefore, we shall combine, from the outset,
absoluteness with uniformity, that is, we shall place absolute value and
``sup'' signs all over the place.
Our hypotheses will be accordingly strengthened: We shall now assume
quasilocality (that is, uniform continuity) and uniform non-nullness.
Uniform non-nullness is equivalent to the strict positivity of the numbers
\begin{equation}\label{eq:3.37}
m_x\;=\; \inf_{\omega} \gamma_{\{x\}}(\omega_x\mid\omega)
\end{equation}
for all $x\in\lat$. Quasilocality says that for each $x\in\lat$ the function
\begin{equation}\label{eq:3.36}
g_x(r) \;=\; \sup_{\omega} \Bigl|
\gamma_{\{x\}}(\omega_x\mid\omega) -
\gamma_{\{x\}}(\omega_x\mid\omega_{\Lambda_r}\,\theta)
\Bigr|
\end{equation}
converges to zero as $r\to\infty$ (in the presence of non-nullness such a
condition is equivalent to quasilocality).
To understand the basic algorithm to pass from a vacuum potential to
an absolute and uniformly summable one, let us first discuss how to
gain summability for bonds containing the origin. We resort to
the inequality
\begin{equation}\label{eq:3.34}
\card{\ln a - \ln b} \;\le\; \frac{\card{a-b}}{\min(a,b)}
\end{equation}
valid for $a, b>0$ (the proof is immediate from the integral definition
of the logarithm), to obtain, from \reff{eq:3.32}, the bound
\begin{equation}\label{eq:3.35}
\sup_{\omega}\biggl|\sum_{B\ni 0\atop B\not\subset \Lambda_r}
\phi_B^{\gamma,\theta}(\omega)\biggr| \;\le\;
\frac{g_0(r)}{m_0}\;.
\end{equation}
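The elementary bound \reff{eq:3.34} is just the mean value theorem for the logarithm; a quick numerical spot-check in Python:

```python
import math
import random

random.seed(1)

# |ln a - ln b| <= |a - b| / min(a, b) for a, b > 0: by the mean value
# theorem, ln a - ln b = (a - b)/c for some c between a and b, and
# 1/c <= 1/min(a, b).  The small slack absorbs floating-point rounding.
for _ in range(1000):
    a = random.uniform(1e-3, 10.0)
    b = random.uniform(1e-3, 10.0)
    assert abs(math.log(a) - math.log(b)) <= abs(a - b) / min(a, b) + 1e-12
```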
As $g_0(r)\to 0$, we can choose a sequence of integers $r_i$, $i=1,2,\ldots$
diverging with $i$ and such that
\begin{equation}\label{eq:3.38}
\sum_i g_0(r_i) \;<\; \infty\;.
\end{equation}
The idea is now to group the bonds within the regions
\begin{equation}\label{eq:3.39}
L_i^0 = \Lambda_{r_i}\;,\;
i=1,2,\ldots
\end{equation}
that is, within the families
\begin{equation}\label{eq:3.40}
S_1^0=\bigl\{ B\subset L_1^0 : 0\in B\bigr\}\;,\;\ldots\;,\;
S_i^0 =\bigl\{B\subset L_i^0: 0\in B\bigr\}\setminus
\bigl\{B\subset L_{i-1}^0: 0\in B\bigr\}\;,\;\ldots
\end{equation}
The interaction
\begin{equation}\label{eq:3.41}
\varphi_A \;=\;\left\{\begin{array}{ll}
0 & \mbox{unless } A=L^0_i \mbox{ for some } i\ge 1\\[8pt]
\displaystyle\sum_{B\in S_i^0} \phi_B^{\gamma,\theta}
&\mbox{if } A=L_i^0\;,
\end{array}\right.
\end{equation}
is physically equivalent to the $\theta$-vacuum potential
$\Phi^{\gamma,\theta}$ and by \reff{eq:3.35}
\begin{eqnarray}\label{eq:3.43}
\sup_\omega\Bigl|\varphi_{L^0_i}(\omega)\Bigr| &=&
\sup_\omega \Bigl| \sum_{B\subset \Lambda_{r_i}\atop B\ni 0}
\phi_B^{\gamma,\theta}(\omega) -
\sum_{B\subset \Lambda_{r_{i-1}}\atop B\ni 0}
\phi_B^{\gamma,\theta}(\omega) \Bigr|\nonumber\\
&\le& \frac{g_0(r_i)+g_0(r_{i-1})}{m_0}
\end{eqnarray}
[$g_0(r_0)\equiv 0$]. Therefore, by \reff{eq:3.38},
\begin{equation}\label{eq:3.44}
\sum_{A\ni 0} \norm{\varphi_A}_\infty \;\le\; \frac{2}{m_0}
\sum_{i\ge 1} g_0(r_i) \;<\; \infty\;.
\end{equation}
To obtain an analogous summability around every
site $x$, the preceding strategy has to be pursued
so as to visit, in some fixed order, the different sites of the lattice
while grouping the relevant bonds as above, taking care of
counting each bond only once. This constitutes the proof of
the following crucial theorem.
\begin{theorem}[Kozlov \protect\cite{koz74}]\label{theo:4.5}
A specification is Gibbsian if, and only if, it is uniformly non-null
and quasilocal.
\end{theorem}
\proof We only need to prove sufficiency, as necessity has been proven in
Proposition \ref{pro:3.1}. Let us choose a vacuum $\theta\in\Omega$ (any
choice will do), and consider the corresponding $\theta$-vacuum
potential. We fix an order for the sites of the
lattice, $\lat=\{x_1,x_2,\ldots\}$ and choose sequences
$r_i^\ell$, $i,\ell=1,2,\ldots$ such that
\begin{equation}\label{eq:3.45}
\sum_i g_{x_\ell}(r_i^\ell) \;<\; \infty
\end{equation}
for each $x_\ell\in\lat$ [the functions $g_x$ have been defined in
\reff{eq:3.36}]. We then choose ``rectangles'' around each
of the sites $x_\ell$: For $i,\ell=1,2,\ldots$,
\begin{equation}\label{eq:3.46}
L_i^1 = \{x_j : 1\le j \le \widetilde r_i^1\}\;,\;
L_i^2 = \{x_j : 2\le j \le \widetilde r_i^2\}\;,\ldots\;,
L_i^\ell = \{x_j : \ell\le j \le \widetilde r_i^\ell\}\;,\;\ldots
\end{equation}
where the $\widetilde r_i^\ell$ are chosen so that
\begin{equation}\label{eq:3.47}
r_i^\ell \;=\; \diam L^\ell_i\;,
\end{equation}
and we assign each bond, in a unique way, to one of such rectangles
by defining
\begin{equation}\label{eq:3.48}
S_i^\ell =\bigl\{B\subset L_i^\ell: x_\ell\in B\bigr\}\setminus
\bigl\{B\subset L_{i-1}^\ell: x_\ell\in B\bigr\}
\end{equation}
for $i,\ell=1,2,\ldots$, with the convention $L_0^\ell\equiv\emptyset$. We observe that:
\begin{itemize}
\item[(F1)] the families $S_i^\ell$ are disjoint, and
\item[(F2)] if $B\ni x_\ell$, then $B\in\bigcup_{j=1}^\ell\bigcup_{i\ge 1} S_i^j$.
\end{itemize}
Finally we define
\begin{equation}\label{eq:3.49}
\varphi_A \;=\;\left\{\begin{array}{ll}
0 & \mbox{unless } A=L^\ell_i \mbox{ for some } i,\ell\ge 1\\[8pt]
\displaystyle\sum_{B\in S_i^\ell} \phi_B^{\gamma,\theta}
&\mbox{if } A=L_i^\ell\;.
\end{array}\right.
\end{equation}
As for \reff{eq:3.43},
\begin{equation}\label{eq:3.50}
\sup_\omega\Bigl|\varphi_{L^\ell_i}(\omega)\Bigr|
\;\le\; \frac{g_{x_\ell}(r_i^\ell)+g_{x_\ell}(r_{i-1}^\ell)}{m_{x_\ell}}\;,
\end{equation}
hence, by observation (F2) above,
\begin{eqnarray}\label{eq:3.51}
\sum_{A\ni x_\ell} \norm{\varphi_A}_\infty
&\le& \sum_{j=1}^\ell\sum_{i\ge 1} \norm{\varphi_{L_i^j}}_\infty
\nonumber\\
&\le & \sum_{j=1}^\ell \frac{2}{m_{x_j}}
\sum_{i\ge 1} g_{x_j}(r_i^j)
\end{eqnarray}
which is finite by \reff{eq:3.45}. \qed
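The bookkeeping behind observations (F1) and (F2) can be checked by brute force on a small lattice. The Python sketch below uses an illustrative choice of the rectangles $L_i^\ell$ (any increasing exhaustion works) and assigns each bond to the first rectangle, in the fixed order of sites, that contains it:

```python
import itertools

N = 5                           # sites x_1 .. x_5, listed in the fixed order
SITES = range(1, N + 1)

# Illustrative "rectangles": L_i^l = {x_j : l <= j <= min(l + i, N)}
def L(i, l):
    return frozenset(range(l, min(l + i, N) + 1))

def bonds_in(region, site):
    return {frozenset(B)
            for k in range(1, len(region) + 1)
            for B in itertools.combinations(sorted(region), k)
            if site in B}

# S_i^l: bonds containing x_l that fit in L_i^l but in no earlier rectangle
def S(i, l):
    new = bonds_in(L(i, l), l)
    old = bonds_in(L(i - 1, l), l) if i > 1 else set()
    return new - old

families = {(i, l): S(i, l) for l in SITES for i in range(1, N + 1)}

# (F1): the families are pairwise disjoint
all_assigned = [B for fam in families.values() for B in fam]
assert len(all_assigned) == len(set(all_assigned))

# (F2): every bond containing x_l appears in some S_i^j with j <= l
for l in SITES:
    covered = set().union(*(families[(i, j)]
                            for j in range(1, l + 1)
                            for i in range(1, N + 1)))
    assert bonds_in(SITES, l) <= covered
```

A bond $B$ is assigned under $\ell=\min B$ (the only $\ell\in B$ with $B\subset L_i^\ell$) and under the smallest rectangle containing it, which is what makes the assignment unique.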
The interaction $\varphi$ constructed in the preceding proof
is no longer a vacuum potential and, furthermore, its summability
bound worsens with the order $\ell$ of the site. So there is no hope
of proving site-uniform summability, that is, with a supremum
over $x$ in \reff{eq:3.5}. Another particularly annoying feature of
the proof is that a translation-invariant specification does not lead
to a translation-invariant interaction. The algorithm can be modified,
by fixing the radii $r_i^\ell$ in an $\ell$-independent fashion, so as
to produce a translation-invariant potential. But summability
is recovered only if the continuity-rate function $g_0$ decreases
at sufficient speed. It is not known whether this extra condition is
only technical (the suspicion is that it is not). There is an alternative
Gibbsianness theorem by Sullivan \cite{sul73} which has the advantage
of yielding translation invariance without additional hypotheses. But
this theorem refers to a space of interactions different from
$\buno$, and is thus slightly less adapted to current Gibbsian theory.
%(in particular in reference to large deviations).
\subsection{Less Gibbsian measures}
Kozlov theorem leaves us with a rather simple symptomatology of
non-Gibbsianness, based on only two properties. While
non-nullness is not a property to be ignored, it is not usually the main problem.
Furthermore, already models with exclusions and grammars have
given us some familiarity with the effects of its absence. The absence
of quasilocality, on the other hand, leads to
more subtle, or at least less familiar, phenomena. In physical terms,
the conditions of Definition \ref{def:3.4} correspond to
situations in which the intermediate configuration $\omega$
effectively shields the interior of the region $\Lambda$ from
the influence of far away regions. The failure of this type of
property would place us in an extremely unphysical situation,
as it would correspond to the uncontrollability of local experiments.
Mathematically, non-quasilocality causes the breakdown of
proofs of a number of important properties that are behind
our understanding of phase diagrams and properties
of the extremal phases.
For these reasons there has been a systematic effort to determine
a \emph{taxonomy} of non-quasilocal measures, with the hope
of restoring, within each category, a different set of Gibbsian
properties. While this hope has been only partially realized,
the classification scheme is well established by now. To present it
we need some notation.
\begin{definition}\label{def:3.15}
For a specification $\Pi$ on $(\Omega,\tribu)$
and $\theta\in\Omega$, let us denote
\begin{eqnarray}\label{eq:3.52}
\Omega_{\rm q}^\theta(\Pi) &=& \Bigl\{ \omega\in\Omega: \Pi
\mbox{ is quasilocal at } \omega \mbox{ in the direction }
\theta\Bigr\}\\
\label{eq:3.53}
\Omega_{\rm q}(\Pi) &=& \Bigl\{ \omega\in\Omega: \Pi
\mbox{ is quasilocal at } \omega \Bigr\}
\end{eqnarray}
and, for an interaction $\Phi$,
let us recall the set $\Omega_{\rm sum}^\Phi$ given in
Definition \ref{def:4.sum}.
Then, a probability measure $\mu$ on $(\Omega, \tribu)$ is
\begin{itemize}
\item \embf{quasilocal} if it is consistent with a quasilocal specification,
\item \embf{almost quasilocal} or \embf{almost Gibbs}
if it is consistent with a specification
$\Pi$ such that
\begin{equation}\label{eq:3.54}
\mu\bigl[\Omega_{\rm q}(\Pi)\bigr] \;=\;1\;,
\end{equation}
\item \embf{intuitively weakly Gibbs}
if it is consistent with a specification
$\Pi$ for which there exists a set $\Omega_{\rm reg}(\Pi)$ such that
\begin{equation}\label{eq:3.55}
\mu\bigl[\Omega_{\rm reg}(\Pi)\bigr]\;=\;1 \quad \mbox{and} \quad
\omega\in \Omega_{\rm q}^\theta(\Pi)\;,\;\forall\,\omega,\theta\in
\Omega_{\rm reg}(\Pi)\;,
\end{equation}
\item \embf{weakly Gibbs} if it is consistent with a specification
$\Pi$, with density functions
$\{\gamma_\Lambda:\Lambda\Subset\lat\}$, for which
there exists an interaction $\Phi$ such that
\begin{equation}\label{eq:3.56}
\mu\bigl[\Omega_{\rm sum}^\Phi\bigr]\;=\;1
\quad\mbox{and}\quad
\gamma_\Lambda(\,\cdot\mid\omega) \;=\;
\gamma^\Phi_\Lambda(\,\cdot\mid\omega)
\;,\; \forall\,\omega\in \Omega_{\rm sum}^\Phi\;.
\end{equation}
\end{itemize}
\end{definition}
[In our setting, (almost) quasilocality = (almost) Feller.]
Weakly Gibbs measures arose from an effort to extend an
interaction-based description of non-Gibbsian measures. In contrast,
almost quasilocality ignores the Boltzmann prescription and
focuses on specification properties. Nevertheless, due to
Theorem \ref{theo:4.4} both almost quasilocal and intuitively
weakly Gibbs measures are weakly Gibbs as well. The configurations
in $\Omega_{\rm reg}$ are the \emph{regular points} of the
corresponding interaction. We refer the reader to
\cite{entetal00,entver04} for a comparison among the different
notions. I content myself with the following remarks summarizing known facts.
\medskip
\begin{proposition}\label{pro:4.15}
Let $\mu\in\mathcal{P}(\Omega,\tribu)$.
\begin{itemize}
\item[(i)] If $\mu$ is consistent with a specification $\Pi$ and
there exists a $\theta\in\Omega$ such that
\begin{equation}\label{eq:3.57}
\mu\bigl[\Omega_{\rm q}^\theta(\Pi)\bigr]\;=\;1
\end{equation}
then $\mu$ is weakly Gibbs.
\item[(ii)] If $\mu$ is intuitively weakly Gibbs, then it is
consistent with a specification $\Pi$ such that
\begin{equation}\label{eq:3.58}
\mu\Bigl\{\theta\in\Omega: \mu\bigl[\Omega_{\rm q}^\theta(\Pi)\bigr] =1
\Bigr\} \;=\; 1\;.
\end{equation}
\end{itemize}
\end{proposition}
The first item follows from Theorem \ref{theo:4.4}, the second one is
immediate from the definition of IWG measures. The converse
implications in both items are probably false.
\medskip
In \cite{entver04} the following inclusions
have been pointed out:
\begin{equation}\label{eq:3.59}
{\rm G} \subneq {\rm AQL} \subneq {\rm IWG}
\subset {\rm WG}\subneq \mathcal{P}(\Omega,\tribu)\;,
\end{equation}
where the acronyms represent the obvious families of measures.
Examples of measures that are almost quasilocal but not Gibbsian
include the random-cluster model when there is an almost
surely unique infinite cluster~\cite{pfivan95,gri95},
the modified ``avalanche'' model of \cite{redmaemof98},
the sign fields of the SOS model~\cite{entshl98} and the GriSing
random field~\cite{entetal00} below the critical value of site percolation.
Measures that are intuitively weakly Gibbs but not almost quasilocal
are constructed in~\cite{entver04}. They include
measures absolutely continuous
with respect to a product of Bernoulli measures on the positive integers
and the invariant measure for the Manneville-Pomeau map
whose non-Gibbsianness was determined in~\cite{maeetal00}.
In this last example discontinuities appear together
with lack of non-nullness. The only known example of a probability
measure that is not even weakly Gibbs is the avalanche model
worked out in \cite{redmaemof98}. On the other hand, the inclusions
${\rm AQL} \subset {\rm WG}$ and
${\rm AQL} \subset \mathcal{P}(\Omega,\tribu)$ are rather strict. Indeed,
convex combinations of Gibbs measures for different potentials
are quasilocal at no
configuration \cite{vEFS_JSP}, and measures associated to dependent (Fortuin-Kasteleyn)
percolation on
trees have discontinuities at a set of
full measure~\cite{hag96}.
It is not known whether these
measures are weakly Gibbsian.
The combinations of Bernoulli measures with different densities, studied
in \cite{redmaemof98}, are such that there exists no specification $\Pi$ and no
configuration $\theta$ for which \reff{eq:3.57} is true. But this falls short
of showing lack of weak Gibbsianness.
There are, on the other hand, examples of measures
associated to disordered systems (see Section \ref{sec:dis})
that are weakly Gibbsian but almost surely \emph{not}
quasilocal~\cite{kul99}.
\medskip
The proof that a measure is weakly Gibbs
involves sophisticated techniques, usually coarse-graining arguments
combined with cluster expansions. Nevertheless, practically all
known examples of non-Gibbsian measures have been proven
to be weakly Gibbsian~\cite%
{dob95,maevel97,dobshl97, dobshl98,brikuplef98,brikuplef01,kul01}.
In fact, if you allow me to play with words, this proven weak Gibbsianness
often turns out to be rather
strong, in that it is associated with \emph{absolutely} summable interactions that,
moreover, decay at a (configuration-dependent) exponential rate.
Nevertheless, the existence of these strong weak potentials seems to be too
weak a condition to restore useful Gibbsian properties. In particular,
only very limited results hold \cite{lef99, kullenred04} regarding the extension
of the variational approach to these measures.
\medskip
In contrast, much more of the variational approach can
be restored for almost quasilocal measures. This has been done,
through relatively simple proofs \cite{ferlenred03,ferlenred03b}
---no coarse graining, no expansion---,
in cases where FKG monotonicity can be invoked. The argument
shows at the same time that some of the weak-Gibbsian measures
cited above are in fact almost quasilocal. The discussion
in \cite{entver04} strongly indicates that these good variational-approach
results may extend to the larger class of intuitively weakly
Gibbs measures.
\medskip
The best description of the differences between
the classes introduced above is contained in a
remark in \cite{entver04}:
\begin{itemize}
\item For a quasilocal measure, \emph{every} configuration shields a finite region
from \emph{every} far away influence.
\item For an almost quasilocal measure, \emph{almost every} configuration shields
a finite region from \emph{every} far away influence.
\item For an intuitively weakly Gibbs measure, \emph{almost every} configuration shields
a finite region from \emph{almost every} far away influence.
\end{itemize}
In practical terms, the difference between \emph{every} and \emph{almost every}
seems impossible to detect as it refers to events that will never be measured
or appear in a simulation. Nevertheless, these differences show up through
distinctive mathematical properties. This contrast explains the challenge
posed by the study of non-Gibbsian measures.
\section{What it takes to be non-Gibbsian}
\subsection{Linear transformations of measures}\label{ssec:ren}
Most of the instances of non-Gibbsianness discussed in the literature
refer to measures obtained as transformations of Gibbs
measures through probability kernels as defined in
\reff{eq:2.4}/\reff{eq:2.5}. The only exceptions are the
joint measures for disordered systems
briefly presented in Section \ref{sec:dis}.
The setting is, then, defined by a probability
kernel $\tau$ from one configuration space
$(\Omega=\sing^\lat,\tribu)$
to another, possibly different, space
$(\Omega'={\sing'}^{\lat'},\tribu')$.
Non-Gibbsian studies focus on three
types of measures obtained from a measure
$\mu\in\mathcal{P}(\Omega,\tribu)$:
\begin{itemize}
\item[(NG1)] The measure obtained after a single transformation,
$\mu'=\mu\tau$.
\item[(NG2)] Measures obtained after a number (sufficiently small
or sufficiently large) of iterations of the transformation,
$\mu^{(n)}=\mu\tau^n=(\mu\tau^{n-1})\tau $.
\item[(NG3)] Measures obtained through an infinite iteration of
the transformation or invariant measures:
$\mu^\infty=\lim_n \mu\tau^n$ or $\mu$ such that $\mu=\mu\tau$.
\end{itemize}
[For the last two types, $(\Omega',\tribu')=(\Omega,\tribu)$.]
The kernels of interest here all have a product structure
\begin{equation}\label{eq:3.60}
\tau(d\omega'\mid\omega) \;=\; \prod_{x'\in\lat'}
\tau_{x'}(d\omega'_{x'}\mid\omega)\;,
\end{equation}
where each $\tau_{x'}(\,\cdot\mid\omega)$ is a measure on $\sing'$,
and hence defined by a density
\begin{equation}\label{eq:3.61}
T_{x'}(\omega'_{x'}\mid\omega) \;=\; \tau_{x'}(\{\omega'_{x'}\}\mid\omega)\;.
\end{equation}
Hence, the transformed measure of $\mu\in \mathcal{P}(\Omega,\tribu)$
assigns to each cylinder the weight
\begin{equation}\label{eq:3.62}
\mu'(C_{\omega'_{\Lambda'}}) \;=\;
\int_{\Omega}\prod_{x'\in\Lambda'} T_{x'}(\omega'_{x'}\mid\omega)
\,\mu(d\omega)\;,
\end{equation}
for any $\Lambda'\Subset\lat'$ and
$\omega'_{\Lambda'}\in\Omega'_{\Lambda'}$.
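When both alphabets are finite, \reff{eq:3.62} is a finite sum and can be evaluated by brute force on small volumes. A minimal sketch (my own toy illustration, not from the text: a chain of four object spins, an i.i.d.\ measure $\mu$, and a kernel $\tau$ that flips each spin independently with probability $q$; all parameter values are arbitrary):

```python
import itertools
import numpy as np

# A chain of four object spins; mu is i.i.d. with weight p for +1, and the
# kernel tau flips each image spin independently with probability q.
p, q = 0.7, 0.1
sites = range(4)

def mu(omega):                     # product measure on {-1,+1}^4
    return float(np.prod([p if s == 1 else 1 - p for s in omega]))

def T(sp, s):                      # single-site density T_x'(omega'_x' | omega)
    return 1 - q if sp == s else q

def mu_prime(omega_prime):         # transformed cylinder weight, eq. (3.62)
    return sum(mu(om) * np.prod([T(omega_prime[x], om[x]) for x in sites])
               for om in itertools.product([-1, 1], repeat=4))

# The image weights of all maximal cylinders sum to one:
total = sum(mu_prime(om) for om in itertools.product([-1, 1], repeat=4))
assert abs(total - 1.0) < 1e-12
```

Since each $T_{x'}$ is strictly positive here, every image cylinder receives positive weight.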
In particular, a deterministic transformation is defined by \emph{functions}
$T_{x'}:\Omega\to\sing'$ such that
\begin{equation}\label{eq:3.63}
T_{x'}(\omega'_{x'}\mid\omega) \;=\; \left\{
\begin{array}{ll} 1 & \mbox{if } \omega'_{x'}=T_{x'}(\omega)\;,\\
0 & \mbox{otherwise}\;.
\end{array}\right.
\end{equation}
The transformations used in physics and probability can be classified
into various categories:
\begin{itemize}
\item A \embf{block transformation} is such that
for each $x'\in\lat'$ there exists
a \emph{block} $B_{x'}\Subset\lat$ such that
$T_{x'}(\omega'_{x'}\mid\cdot\,)\in\tribu_{B_{x'}}$ for all
$\omega'_{x'}\in\sing'$. Hence $T_{x'}(\omega'_{x'}\mid\cdot\,)$ equals
a $\tribu_{B_{x'}}$-measurable function which I will denote by the same symbol,
namely $T_{x'}(\omega'_{x'}\mid\omega_{B_{x'}})$.
\item In general terms, \embf{renormalization transformations} are
characterized by at least one of the following properties:
\begin{itemize}
\item The blocks $B_{x'}, x'\in\lat'$, form a partition of $\lat$ (that is, they are
disjoint and their union is the whole of $\lat$).
\item The functions $T_{x'}(\omega'_{x'}\mid\cdot\,)$ are continuous for all
$\omega'_{x'}\in\sing'$.
\end{itemize}
\item Transformations with overlapping blocks are typical of
\embf{stochastic evolutions}. These include \embf{cellular automata}
(discrete-time) and
\embf{spin-flip} (continuous-time) dynamics.
\end{itemize}
Here are a few examples of renormalization transformations
that have played a benchmark role in non-Gibbsian studies and
Gibbs-restoration projects. If necessary, the reader can suppose that
$\lat=\mathbb{Z}^d$ and $\lat'=\mathbb{Z}^{d'}$ but, of course,
lattices with a notion of $\mathbb{Z}^d$-translations (=action of
$\mathbb{Z}^d$ by isomorphisms) do equally well.
\paragraph{Deterministic block renormalization transformations:}
\begin{description}
\item{\emph{$b^d$-Decimation:}} $\lat'=\lat$, $\sing'=\sing$,
$B_{x'}=\Lambda_{b-1}+bx'$,
$T_{x'}(\omega_{B_{x'}})=\omega_{bx'}$.
\item{\emph{Spin contractions: }} $\lat'=\lat$, $\sing'\subneq\sing$,
$B_{x'}=\{x'\}$; two species:
\begin{itemize}
\item {\emph{Sign fields:}} $\sing\subset\mathbb{R}$ symmetric,
$T_{x'}(\omega_{x'})=\mbox{sign}(\omega_{x'})$.
\item {\emph{``Fuzzy'' spins:}} $\sing=\cup_{i\in I} S_i$ (partition),
$\sing'=I$, $T_{x'}(\omega_{x'})=i$ if $\omega_{x'}\in S_i$.
\end{itemize}
\item{\emph{Block average:}} $\lat'=\lat$, $\sing'\supneq\sing$,
$T_{x'}(\omega_{B_{x'}})=\card{B_{x'}}^{-1} \sum_{y\in B_{x'}}
\omega_y$.
\item{\emph{Majority rule (odd blocks):}} $\lat'=\lat$, $\sing'=\sing=\{-1,1\}$,
($\card{B_{x'}}$ odd),
$T_{x'}(\omega_{B_{x'}})=\mbox{sign}\bigl[ \sum_{y\in B_{x'}}
\omega_y\bigr]$.
\end{description}
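These deterministic block maps are straightforward to implement on a finite patch of $\mathbb{Z}^2$. The following sketch (my own illustration; the patch size and block sizes are arbitrary choices) realizes $2^2$-decimation and the majority rule with disjoint $3\times 3$ blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy object configuration: +/-1 spins on a finite 6x6 patch of Z^2
omega = rng.choice([-1, 1], size=(6, 6))

def decimate(omega, b=2):
    """b^d-decimation: the image spin at x' is the object spin at b*x'."""
    return omega[::b, ::b]

def majority(omega, b=3):
    """Majority rule on disjoint odd b x b blocks (odd block sums never vanish)."""
    n = omega.shape[0] // b
    block_sums = omega.reshape(n, b, n, b).sum(axis=(1, 3))
    return np.sign(block_sums).astype(int)

omega_dec = decimate(omega)   # 3x3 image configuration
omega_maj = majority(omega)   # 2x2 image configuration
```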
\paragraph{Stochastic block renormalization transformations:}
\begin{description}
\item{\emph{Majority rule (even blocks):}} $\lat'=\lat$, $\sing'=\sing=\{-1,1\}$,
($\card{B_{x'}}$ even),
$\omega'_{x'}=
\mbox{sign}\bigl[ \sum_{y\in B_{x'}} \omega_y\bigr]$
if this last sum is non-zero, and $+1$ or $ -1$ with probability 1/2
otherwise. [\emph{Exercise:} write the kernel densities
$T_{x'}(\omega'_{x'}\mid\omega_{B_{x'}})$.]
\item{\emph{$p$-Kadanoff transformation:}} $\lat'=\lat$, $\sing'=\sing$,
\begin{equation}\label{eq:3.64}
T_{x'}(\omega'_{x'}\mid\omega_{B_{x'}})\;=\; \frac{\exp\Bigl[
p\,\omega'_{x'}\,\sum_{y\in B_{x'}} \omega_y\Bigr]}
{\rm Norm.}\;.
\end{equation}
\end{description}
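For the stochastic kernels, the single-site densities can be written down explicitly. A sketch for Ising spins $\pm 1$ (my own illustration) of the normalized Kadanoff density \reff{eq:3.64} and of the even-block majority rule with its tie-breaking coin flip:

```python
import numpy as np

def kadanoff_density(omega_block, p):
    """Normalized p-Kadanoff single-site density (eq. 3.64): returns the
    probabilities of the image spin being +1 and -1 given the block spins."""
    s = np.sum(omega_block)
    w_plus, w_minus = np.exp(p * s), np.exp(-p * s)
    norm = w_plus + w_minus
    return w_plus / norm, w_minus / norm

def majority_even(omega_block, rng):
    """Majority rule on an even block: sign of the block sum, with a fair
    coin flip when the sum vanishes."""
    s = int(np.sum(omega_block))
    if s != 0:
        return 1 if s > 0 else -1
    return int(rng.choice([-1, 1]))
```

As $p\to\infty$ the Kadanoff kernel concentrates on the majority value, recovering the deterministic rule on odd blocks.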
\paragraph{Non-block renormalization transformations (deterministic):}
\begin{description}
\item{\emph{Projections:}} $\lat'\subneq\lat$, $\sing'=\sing$,
$T(\omega)=\omega_{\lat'}$. This is a generalization of
decimation. The most important case is \emph{Schonmann's example}:
$\sing=\{-1,1\}$, $\lat=\mathbb{Z}^d$, $\lat'=\mathbb{Z}^{d-1}\times \{0\}$.
\item{\emph{Momentum transformations:}} $\lat'=\lat$, $\sing'\supneq\sing$,
$T_{x'}(\omega)=\sum_y F(x'-y)\,\omega_y$ for
$F$ summable, defined through a Fourier transform with an
appropriate smooth cut-off.
\end{description}
The following exercise applies to all the preceding examples.
\begin{exercise}\label{ex:4.10}
Let $\tau$ be either a renormalization transformation with strictly positive
densities $T_{x'}(\,\cdot\mid\cdot\,)$ or a deterministic renormalization
transformation such that $T^{-1}_{x'}(\omega'_{x'})\neq\emptyset$
for all $\omega'_{x'}\in\sing'$.
\begin{itemize}
\item[(i)] Prove that if $\mu$ is non-null, then $\mu\tau$ gives positive
measure to any cylinder $C_{\omega'_{\Lambda'}}$.
\item[(ii)] Conclude that $\tau$
maps non-null measures into non-null measures.
\end{itemize}
\end{exercise}
The situation is dramatically different if ``non-null'' is replaced by
``Markovian'': a Markovian measure, subjected to a ``Markovian''
(= block-renormalization) transformation may, in fact, not even
be quasilocal.
\subsection{Absence of uniform non-nullness}
Kozlov's theorem singles out two main causes of non-Gibbsianness:
lack of non-nullness and lack of quasilocality. The former manifests
itself through the negation of the following \emph{alignment-suppression
property}.
\begin{proposition}\label{pro:4.6}
If a measure $\mu$ on $(\Omega,\tribu)$ is consistent with a
uniformly non-null translation-invariant specification,
then there exists $\delta>0$ such that
\begin{equation}\label{eq:3.65}
\sup_{\omega_\Lambda\in\Omega_\Lambda}
\mu\bigl(C_{\omega_\Lambda}\bigr)\;\le\;
\eee^{-\delta\card{\Lambda}}
\end{equation}
for all $\Lambda\Subset\lat$.
\end{proposition}
\proof Let $\gamma_{\{x\}}$ be the single-site specification
densities and
$\epsilon=\inf_{\sigma} \gamma_{\{x\}}(\sigma_x\mid\sigma)>0$.
Then, by consistency,
\begin{eqnarray}\label{eq:3.66}
\mu\bigl(C_{\omega_\Lambda}\bigr)&=&
\int \gamma_{\{x\}}(\omega_x\mid\sigma)
\,\one_{C_{\omega_{\Lambda\setminus\{x\}}}}(\sigma)
\,\mu(d\sigma)\nonumber\\
&\le& (1-\epsilon)\,\mu\bigl(C_{\omega_{\Lambda\setminus\{x\}}}\bigr)\;.
\end{eqnarray}
Induction finishes the proof.\qed
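For an i.i.d.\ measure the bound \reff{eq:3.65} can be checked by hand: with single-site weights $(p,1-p)$ the specification densities are bounded below by $\epsilon=\min(p,1-p)$, and the induction in the proof gives $\delta=-\log(1-\epsilon)$. A numerical sanity check (my own illustration; the value of $p$ is arbitrary):

```python
import numpy as np

# Single-site weights (p, 1-p) of an i.i.d. measure; the specification
# densities are bounded below by eps, and the induction in the proof
# gives mu(C_{omega_Lambda}) <= (1-eps)^{|Lambda|} = exp(-delta*|Lambda|).
p = 0.7
eps = min(p, 1 - p)
delta = -np.log(1 - eps)

def cylinder_weight(omega_Lambda):
    """Weight of the cylinder fixing the spins omega_Lambda (list of +/-1)."""
    return float(np.prod([p if s == 1 else 1 - p for s in omega_Lambda]))

# The worst case (all spins at the majority value) saturates the bound:
for n in range(1, 12):
    worst = cylinder_weight([1] * n)      # sup over omega_Lambda for p > 1/2
    assert worst <= np.exp(-delta * n) + 1e-12
```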
\medskip
The failure of this property means that the
(exponential) cost of inserting
a ``defect'' $\omega_\Lambda$ is sub-volumetric.
This is what happens, for instance, for some
sign-field measures \cite{lebmae87,dorvan89,entshl98,lor98},
where there are defects that can be placed
by paying only a surface-area cost. The non-Gibbsianness
of the invariant measures of some cellular automata also
shows up in this way. Alignment needs to appear only
in some lower-dimensional manifold ---a surface for
the voter model \cite{lebsch88} or the
non-local dynamics of \cite{marsco91}, a ``spider'' for
the non-reversible automata of \cite{fertoo03}--- and
the dynamics propagates it to a whole volume.
In fact, as proposed in \cite{maevel94}, the
detection of this alignment propagation can be a numerical
test for non-Gibbsianness. Such a test has indeed been
applied \cite{mak97,mak99} to the invariant measure
of the Toom model, with inconclusive results.
As seen in Exercise \ref{ex:4.10}, a measure that is
non-Gibbsian due to lack of non-nullness
cannot be the image of any non-null measure ---Gibbsian
or not--- through strictly positive renormalization transformations.
For those that know a bit about the variational principle,
I comment that \reff{eq:3.65} means that
\begin{equation}\label{eq:3.67}
\liminf_n \frac{-1}{\card{\Lambda_n}}\,
\log\mu\bigl(C_{\omega_{\Lambda_n}}\bigr)
\;\ge\;\delta\;>\;0\;.
\end{equation}
When both $\mu$ and $\omega$ are translation-invariant, violation of \reff{eq:3.67}
implies that the entropy density of $\delta_\omega$
relative to $\mu$ is zero. This is how the violation of
\reff{eq:3.65} is usually presented, linking
non-Gibbsianness to a failure of the variational principle.
Furthermore, as the relative entropy is a large-deviation
rate function, this failure indicates the presence of large
deviations that are ``too large'' (their probability is penalized by
less than the volume exponential typical of Gibbsianness).
Nevertheless, the argument based on \reff{eq:3.65} is
more general, as it requires neither translation
invariance nor the existence of
relative entropy densities.
\subsection{Absence of quasilocality}
Let me explain, in some detail,
the subtleties involved in proving that a measure $\mu$
is not quasilocal. To make the notation slightly lighter
(and to acquaint the reader with yet another usual notation),
let us denote by $\mu_\Lambda$ the kernel $\mu_{|\tau_{\comp\Lambda}}$
of Definition \ref{def:2.3}, that is
\begin{equation}\label{eq:3.68.0}
\mu_\Lambda(f\mid\omega) \;=\;
E_{\mu}(f\mid\tribu_{\comp\Lambda})(\omega)
\end{equation}
is a realization of the corresponding conditional expectation
for bounded $f\in\tribu$, $\Lambda\Subset\lat$ and $\omega\in\Omega$.
[From our long discussion of Sections \ref{ssec:2.2} and \ref{ssec:2.3},
the reader should retain at least this fact: conditional expectations
admit an infinite number of versions (=realizations) all differing
on measure-zero sets.] Let me reserve the right to
denote sometimes this object as
$\mu_\Lambda(f\mid\omega_{\comp\Lambda})$
to emphasize its $\tribu_{\comp\Lambda}$-measurability.
The measure $\mu$ is not quasilocal if it
is consistent with \emph{no} quasilocal specification.
To prove this (recall that every measure is consistent
with \emph{some} specification),
it is enough to find a \emph{single}, nonremovable, point
of discontinuity for a \emph{single} $\mu_\Lambda$
for a \emph{single} quasilocal $f$. [By Proposition
\ref{pro:3.1} this will already happen for $\Lambda=\{x\}$,
$f=\one_{\sigma_x}$ for some $x\in\lat$, $\sigma_x\in\sing$.]
Let us make this fact precise.
\begin{definition}\label{def:4.21}
A measure $\mu\in\mathcal{P}(\Omega,\tribu)$ is
\embf{not quasilocal at } $\omega\in\Omega$ if
there exist $\Lambda\Subset\lat$
and a (quasi)local $f$ such that no realization of
$\mu_\Lambda(f\mid\cdot\,)$
is quasilocal at $\omega$.
\end{definition}
In other words, any realization of $\mu_\Lambda(f\mid\cdot\,)$ must
exhibit an \emph{essential discontinuity} at $\omega$; one
that survives zero-measure modifications. Let us
understand what this means, for a general measurable
function $g$. As we shall assume $\mu$ non-null
(otherwise it would already be non-Gibbsian),
``essential'' can be associated to ``supported
on open sets''. Thus, we are led to consider the following
twin notions.
%
\begin{definition}\label{def:4.19}
Let $g$ be a measurable function and $\mu$
a probability measure on $(\Omega,\tribu)$. Let
$\omega\in\Omega$.
\begin{itemize}
\item[(a)] $g$ is $\mu$-\embf{essentially discontinuous}
at $\omega$ if every function continuous at $\omega$ differs
from $g$ in a set of non-zero $\mu$-measure.
\item[(b)] $g$ is \embf{strongly discontinuous}
at $\omega$ if every function continuous at $\omega$ differs
from $g$ in a set having non-empty interior.
\end{itemize}
[That is: if $f$ is continuous at $\omega$, then
the set $\{\sigma: g(\sigma)\neq f(\sigma)\}$
has non-zero $\mu$-measure in (a) and contains an
open set in (b).]
\end{definition}
\begin{remark} If $\mu$ is non-null, every strong discontinuity
is essential.
\end{remark}
Conditional expectations are bounded, hence they
can only have jump discontinuities, caused
by the presence of different limits coming from
different directions. In order for such a discontinuity
to be essential or strong, the set of directions from which
each of the different limits is achieved should
be sufficiently thick. This yields the
following basic criteria.
\begin{proposition}\label{prop:3.16}
Let $\mu\in\mathcal{P}(\Omega,\tribu)$, $g$ a
bounded measurable function and $\omega\in\Omega$.
Then $g$ is $\mu$-essentially discontinuous [resp.\
strongly discontinuous]
at $\omega$ iff there exists a $\delta>0$ such that
for every neighborhood $\mathcal{N}$
of $\omega$ there exist two sets $\mathcal{N}^+$
and $\mathcal{N}^-$, with
$\omega\in\mathcal{N}^\pm\subset\mathcal{N}$,
such that $\mu(\mathcal{N}^\pm)>0$ [resp.\
$\mathcal{N}^\pm$ open] and
\begin{equation}\label{eq:3.68}
\bigl|g(\sigma^+)-g(\sigma^-)\bigr|>\delta
\end{equation}
for every $\sigma^\pm\in \mathcal{N}^\pm$.
\end{proposition}
As the cylinders are a basis of the topology of $\Omega$
(every open set is a union of such), open neighborhoods of
$\omega$ are (unions of) cylinders of the form
$C_{\omega_\Gamma}$ for $\Gamma\Subset\lat$.
Thus, condition \reff{eq:3.68} is equivalent to
\begin{equation}\label{eq:3.69}
\bigl|g(\omega_{\Lambda_N}\,\sigma^+)-
g(\omega_{\Lambda_N}\,\sigma^-)\bigr|>\delta
\end{equation}
for $N$ large enough,
for $\sigma^\pm\in \mathcal{N}^\pm_{\Lambda^{\rm c}_N}
\in\tribu_{\Lambda^{\rm c}_N}$
of non-zero measure or open, according to the case.
After a little thought we see that we can rewrite
Proposition \ref{prop:3.16} in the following equivalent
form.
\begin{proposition}\label{prop:3.17}
Let $\mu\in\mathcal{P}(\Omega,\tribu)$, $g$ a
bounded measurable function and $\omega\in\Omega$.
\begin{itemize}
\item[(a)] $g$ is $\mu$-essentially discontinuous
at $\omega$ iff there exist a diverging sequence $(N_i)_{i\ge 1}$
of natural numbers and real numbers $\delta^+$ and
$\delta^-$ with $\delta^+-\delta^->0$ such that
for each $i\ge 1$ there exist sets
$\mathcal{N}^+_i, \mathcal{N}^-_i
\in\tribu_{\Lambda^{\rm c}_{N_i}}$ with
\begin{equation}\label{eq:3.70}
\limsup_{i\to\infty}\mu\Bigl( g\,\one_{C_{\omega_{ \Lambda_{N_i}}}}
\,\one_{\mathcal{N}^+_i}\Bigr) \;>\; \delta^+
\quad \mbox{and} \quad
\liminf_{i\to\infty}\mu\Bigl( g\,\one_{C_{\omega_{ \Lambda_{N_i}}}}
\,\one_{\mathcal{N}^-_i}\Bigr) \;<\; \delta^-
\end{equation}
%
\item[(b)] $g$ is strongly discontinuous
at $\omega$ iff there exist a $\delta>0$ and a diverging sequence $(N_i)_{i\ge 1}$
of natural numbers such that for each $i\ge 1$ there exist
a natural number $R_i>N_i$
and two configurations $\eta^+,\eta^-$ such that
\begin{equation}\label{eq:3.71}
\limsup_{i\to\infty}\,
\Bigl| g( \omega_{\Lambda_{N_i}}\,\eta^+_{\Lambda_{R_i}\setminus\Lambda_{N_i}}\sigma^+)
- g( \omega_{\Lambda_{N_i}}\,\eta^-_{\Lambda_{R_i}\setminus\Lambda_{N_i}}\sigma^-)
\Bigr| \;\ge\;\delta
\end{equation}
for every $\sigma^\pm\in \Omega$.
\end{itemize}
\end{proposition}
To settle our non-quasilocality issue we now apply these considerations
to functions of the form $g(\,\cdot\,)=\mu_\Lambda(f\mid\cdot\,)$.
From Definition \ref{def:4.21} and the previous proposition
we obtain:
\begin{proposition}\label{prop:3.18}
Let $\mu\in\mathcal{P}(\Omega,\tribu)$. Then:
\begin{itemize}
\item[(a)] $\mu$ is not quasilocal at $\omega$ iff there exist a diverging sequence $(N_i)_{i\ge 1}$
of natural numbers and real numbers $\delta^+$ and
$\delta^-$ with $\delta^+-\delta^->0$ such that
for each $i\ge 1$ there exist sets
$\mathcal{N}^+_i, \mathcal{N}^-_i
\in\tribu_{\Lambda^{\rm c}_{N_i}}$ with
\begin{equation}\label{eq:3.72}
\limsup_{i\to\infty}\mu\Bigl( f\,\one_{C_{\omega_{ \Lambda_{N_i}}}}
\,\one_{\mathcal{N}^+_i}\Bigr) \;>\; \delta^+
\quad \mbox{and} \quad
\liminf_{i\to\infty}\mu\Bigl( f\,\one_{C_{\omega_{ \Lambda_{N_i}}}}
\,\one_{\mathcal{N}^-_i}\Bigr) \;<\; \delta^-
\end{equation}
%
\item[(b)] If $\mu$ is non-null, then it is
not quasilocal at $\omega$ if there exist a $\delta>0$ and a diverging sequence $(N_i)_{i\ge 1}$
of natural numbers such that for each $i\ge 1$ there exist
a natural number $R_i>N_i$
and two configurations $\eta^+,\eta^-$ such that
\begin{equation}\label{eq:3.73}
\limsup_{i\to\infty}\,
\Bigl| \mu(f\mid
\omega_{\Lambda_{N_i}}\,\eta^+_{\Lambda_{R_i}\setminus\Lambda_{N_i}}\sigma^+)
- \mu(f\mid
\omega_{\Lambda_{N_i}}\,\eta^-_{\Lambda_{R_i}\setminus\Lambda_{N_i}}\sigma^-)
\Bigr| \;\ge\;\delta
\end{equation}
for every $\sigma^\pm\in \Omega$.
\end{itemize}
\end{proposition}
As we have seen, condition \reff{eq:3.73} is a stronger form of
non-quasilocality [(b) of Definition \ref{def:4.19}]. In this case
it is appropriate to say that $\mu$
is \emph{strongly non-quasilocal}, or \emph{strongly non-Feller}
\cite[Definition 4.14]{vEFS_JSP}.
To obtain \reff{eq:3.72} we have used consistency.
In practice, the lack of quasilocality has been detected
by proving \reff{eq:3.73} for functions of
the form $f(\sigma)=\sigma_\Lambda$.
Furthermore, only single-site regions need to be checked
due to Proposition \ref{pro:3.1}. In the presence of translation
invariance, then, non-quasilocality proofs
typically refer to \reff{eq:3.73}
for $\Lambda=\{0\}$ and $f(\sigma)=\sigma_0$.
(This is not always the case, see for instance Section
4.3.5 in \cite{vEFS_JSP}.)
\bigskip
After all these mathematical considerations,
it is natural to wonder about the \emph{physical}
reasons for non-quasilocality.
For quasilocal measures, instead of \reff{eq:3.73},
the corresponding difference tends to zero as $N_i\to\infty$. This means
that the influence of $\sigma^\pm$ is shielded off
if the intermediate spins are frozen in some
configuration $\omega$. In heuristic terms,
in quasilocal measures the influence of far
away regions is carried by the fluctuations
of the spins in between; if these fluctuations
are stopped so is the connection between the regions.
Non-quasilocality means, thus, that there is
some mechanism connecting distant regions
that remains active even in the absence
of fluctuations.
For measures obtained as images of a transformation the
mechanism is clear; it goes under the keywords
``hidden variables''. While the measure acts on the
space of ``unhidden'' \emph{image variables}
$\Omega'$, it is also determined by the ``hidden" \emph{object
variables} in $\Omega$ acting through the
transformation. In such a situation, the freezing
of an image spin configuration acts as a conditioning
on the object spin variables,
under which the latter may still keep
a certain amount of freedom
to fluctuate. For some choice $\omega'$ of image variables, the
conditioned object system may exhibit a \emph{phase transition}
which causes a long-range order that correlates local
behavior to what happens at infinity. This produces
non-quasilocality ---that is, nonzero differences in \reff{eq:3.73}---
for this particular $\omega'$.
This ``hidden variables'' scenario explains non-quasilocality for
renormalized measures and for measures obtained through cellular
automata or spin-flip dynamics. While the non-quasilocality of joint
measures of disordered models is of a different nature, still phase
transitions are behind it \cite{kul99,entetal00b}, as we shall discuss
in Section \ref{sec:dis} below.
The actual proofs of the failure of quasilocality are typically very
technical. They combine a number of analytical tools (correlation
inequalities, Pirogov-Sinai theory, strict convexity of
thermodynamical potentials,\dots) with particular properties of each
model in question. A systematic exposition of them is well beyond the
scope of this course and may not be pedagogically useful. I prefer to
discuss, instead, the overall strategy of the proof of
non-quasilocality for block-renormalized measures, and illustrate
other mathematical features through examples. These examples are
relatively simple to analyze, and, in part due to this simplicity,
have played a benchmark role
in the understanding of the different
manifestations of non-Gibbsianness.
%Furterhmore, they each of the models correspond to a ``surprising"
%situation where non-quasilocality emerged
%rather unexpectedly. I hope this will help
%convincing the reader that non-Gibbsianness is
%a phenomenon worth understanding.
\subsection{Surprise number one: renormalization maps}
%\subsection{Non-quasilocality of block-renormalized measures}
\subsubsection{The scenarios}
Physicists define and work with renormalization transformations
at the level of interactions (they speak of Hamiltonians,
but they are really referring to interactions). Formally,
they consider maps $\mathcal{R}$ related to our measure
transformations $\tau$
according to the following diagram:
\begin{equation}\label{eq:5.1}
\begin{array}{ccc}
\mu &\stackrel{\textstyle\tau}{\longrightarrow} &\mu'\\
\uparrow & &\downarrow\\
\Phi &\stackrel{\textstyle\mathcal{R}}{\longrightarrow} &\Phi'
\end{array}
\end{equation}
The diagram gives hints as to the possible mathematical
complications of computing $\mathcal{R}$. While the upwards
arrow on the left roughly corresponds to an exponential
(Boltzmann prescription), the downwards arrow on the
right corresponds to a log. This step is at the origin
of the complicated diagrammatics associated to
renormalization transformations. In contrast, the
transformation $\tau$ is a linear object, much cleaner
and more straightforward at the mathematical level.
In fact, from a computational standpoint,
$\tau$ and $\mathcal{R}$ have complementary
disadvantages: $\mathcal{R}$ involves logarithms,
but $\tau$ acts on spaces of much larger dimensions.
Conceptually, however, $\tau$ has the advantage
of being always well defined while the status of
$\mathcal{R}$ is less clear.
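In one dimension, by contrast, $\mathcal{R}$ is unproblematic: summing out every second spin of the Ising chain with nearest-neighbour coupling $K$ yields again a nearest-neighbour coupling $K'$ satisfying the classical recursion $\tanh K'=\tanh^2 K$. This standard computation (my own numerical check, not taken from the text) can be verified directly:

```python
import numpy as np

K = 0.8                                # original nearest-neighbour coupling

def blocked(s1, s3):
    """Sum over the decimated middle spin s2 of exp[K s1 s2 + K s2 s3]."""
    return sum(np.exp(K * s1 * s2 + K * s2 * s3) for s2 in (-1, 1))

# The renormalized coupling K' is read off from the ratio of aligned to
# anti-aligned block weights: blocked(s1, s3) = A * exp(K' * s1 * s3).
K_prime = 0.25 * np.log(blocked(1, 1) * blocked(-1, -1)
                        / (blocked(1, -1) * blocked(-1, 1)))
assert abs(np.tanh(K_prime) - np.tanh(K) ** 2) < 1e-12
```

The recursion drives $K'$ towards zero, consistently with the absence of a phase transition in $d=1$; the pathologies discussed below require $d\ge 2$.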
Renormalization transformations were initially
devised to study critical points, approaching them
from the high-temperature
side where there is only one measure to contend with.
But quickly physicists started to apply the successful
renormalization ideas to first-order
phase transitions, where there are several
measures consistent with the same interaction.
In these cases it is natural to wonder whether
the different renormalized measures are
associated to the same or different potentials:
\begin{equation}\label{eq:5.2}
\begin{array}{ccc}
\{\mu_1,\cdots\}
&\build{\longrightarrow}_{\textstyle\longrightarrow}^{\textstyle\longrightarrow}
&\{\mu'_1,\cdots\}\\
\uparrow\uparrow\uparrow & &\searrow\downarrow\swarrow\\
\Phi &\longrightarrow & \Phi'
\end{array}
\qquad\mbox{or}\qquad
\begin{array}{ccc}
\{\mu_1,\cdots\} &\build{\longrightarrow}_{\textstyle\longrightarrow}^{\textstyle\longrightarrow}
&\{\mu'_1,\cdots\}\\
\uparrow\uparrow\uparrow & &\downarrow\downarrow\downarrow\\
\Phi &\build{\longrightarrow}_{\textstyle\longrightarrow}^{\textstyle\longrightarrow}
&\{\Phi'_1,\cdots\}
\end{array}
\quad ?
\end{equation}
While the leftmost scenario would be consistent
with the renormalization paradigm, the rightmost
one would indicate a \emph{multivalued} map $\mathcal{R}$
quite contradictory to usual ideas. In fact, some
numerical evidence did suggest the actual occurrence
of this last scenario. To add to the
confusion, the celebrated work of Griffiths and Pearce~\cite{gripea79}
pointed to the possible presence of ``peculiarities''
that would prevent any reasonable definition of $\mathcal{R}$.
(The reader is referred to \cite[Section 1.1]{vEFS_JSP} for
historical references.)
Non-Gibbsian theory provided the necessary clarifications.
It led to the following conclusions:
\begin{itemize}
\item[(a)] The ``multivaluedness scenario'' [rightmost
possibility in \reff{eq:5.2}] is impossible within reasonable
spaces of interactions \cite[Theorem 3.6]{vEFS_JSP}.
\item[(b)] In many instances, however,
as initially shown by Israel~\cite{isr79},
renormalized measures may fail to be quasilocal. That is,
the downwards arrows in \reff{eq:5.2} may fail to exist.
\item[(c)] If the interaction $\Phi$
and measures $\mu_i$ are translation invariant,
either the renormalized measures $\mu'_i$ are all Gibbsian for
the \emph{same} interaction, or they are \emph{all}
non-Gibbsian \cite[Theorem 3.4]{vEFS_JSP}.
\end{itemize}
In conclusion, instead of those in \reff{eq:5.2}, the two competing
scenarios are
\begin{equation}\label{eq:5.3}
\begin{array}{ccc}
\{\mu_1,\cdots\}
&\build{\longrightarrow}_{\textstyle\longrightarrow}^{\textstyle\longrightarrow}
&\{\mu'_1,\cdots\}\\
\uparrow\uparrow\uparrow & &\searrow\downarrow\swarrow\\
\Phi &\longrightarrow & \Phi'
\end{array}
\qquad\mbox{or}\qquad
\begin{array}{ccc}
\{\mu_1,\cdots\}
&\build{\longrightarrow}_{\textstyle\longrightarrow}^{\textstyle\longrightarrow}
&\{\mu'_1,\cdots\}\\
\uparrow\uparrow\uparrow & &\not\,\downarrow\\
\Phi &\not\!\!\longrightarrow & ??
\end{array}
\end{equation}
Both of these scenarios occur ---the left one
probably more often--- but I will concentrate
on the general strategy to prove the
validity of the rightmost scenario.
I will only sketch the different steps, relying on two
examples as an illustration: $2\times 2$-decimation
and Kadanoff transformations of the translation-invariant
states of the two-dimensional Ising model in zero magnetic
field at low enough temperature. The decimation example,
carefully analyzed by Israel~\cite{isr79}, is the first and
simplest example of a non-quasilocal renormalized
measure, and is the genesis of the non-Gibbsianness work in
\cite{vEFS_JSP}. Kadanoff transformations, on
the other hand, illustrate transformations with
strictly positive kernels and they were already considered
by Griffiths and Pearce as sources of ``pathologies''. I will skip
all fine calculational details ---which are fully
given in \cite[Section 4.1.2]{vEFS_JSP}---
and concentrate on the main brush strokes (which
are already complicated enough).
The strategy, which is naturally divided into four steps,
in fact shows that the non-quasilocality of the
renormalized measures $\mu'=\mu\tau$ satisfies the stronger
property \reff{eq:3.73}.
\subsubsection{Step zero: Understanding
the conditioned measures}\label{ssec:under}
To understand the meaning of $\mu'_{\Lambda'}(\,\cdot\mid\omega')$,
for $\Lambda'\subset\lat'$, $\omega'\in\Omega'$, we introduce
the measure on
$(\Omega\times\Omega',\tribu\times\tribu')$ with marginals
(=projections on $\Omega$ and $\Omega'$) $\mu$ and
$\mu'$. Explicitly,
\begin{equation}\label{eq:5.4}
\widetilde\mu(\widetilde F) \;=\;
\int \widetilde F(\omega,\omega')\, \mu(d\omega)\,\tau(d\omega'\mid\omega)
\end{equation}
for every function $\widetilde F$ that is
$\tribu\times\tribu'$-measurable and bounded.
It is useful to visualize $\Omega\times\Omega'$ as configurations
on two parallel ``slices'', $\lat$ and $\lat'$. The
spins on the former are the \emph{original},
\emph{object} or \emph{internal} spins
and those on the latter the
\emph{renormalized} or \emph{image} spins.
A simple verification of
the properties determining Definition \ref{def:2.2} shows that
\begin{equation}\label{eq:5.5}
\mu'_{\Lambda'}(\,\cdot\mid\omega') \;=\;
\widetilde\mu_{\Lambda'\times\lat}(\,\cdot\mid
\omega'_{\comp{\Lambda'}})\;.
\end{equation}
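For readers who like to see \reff{eq:5.4} and its marginals in action, here is a minimal numerical sketch: a two-site toy system standing in for the infinite lattice, with an independent single-site spin-flip kernel playing the role of $\tau$ (all parameter values and names are illustrative, not part of the text).

```python
import itertools
import math

beta, eps = 0.8, 0.2
sites = range(2)
configs = list(itertools.product([-1, 1], repeat=2))

# Toy stand-in for mu: a two-spin Ising measure
w = {s: math.exp(beta * s[0] * s[1]) for s in configs}
Z = sum(w.values())
mu = {s: w[s] / Z for s in configs}

# Single-site kernel T(s'|s): keep the spin w.p. 1-eps, flip w.p. eps
def T(sp, s):
    return 1 - eps if sp == s else eps

# Joint measure mu~ on Omega x Omega', as in eq. (5.4) with a product kernel
mutilde = {(s, sp): mu[s] * math.prod(T(sp[i], s[i]) for i in sites)
           for s in configs for sp in configs}

# Marginal on Omega recovers mu; marginal on Omega' is mu' = mu tau
marg_omega = {s: sum(mutilde[(s, sp)] for sp in configs) for s in configs}
marg_omegap = {sp: sum(mutilde[(s, sp)] for s in configs) for sp in configs}
```

Summing out the image spins returns the original measure, while summing out the internal spins gives the transformed measure, which is exactly the two-slice picture described above.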
We see, then,
that $\mu'_{\Lambda'}(\,\cdot\mid\omega')$ is a measure
on an infinite spin space formed by the spins in $\lat$
plus those in the finite region $\Lambda'$. The proper
definition of measures for unbounded regions needs
some care. In our case we count on the help of specifications.
Indeed, we are interested in a measure $\mu$ that is
Gibbsian for some interaction $\Phi$ and in
product transformations \reff{eq:3.60}/\reff{eq:3.61}.
The measure $\widetilde\mu$ must then be
consistent with a specification defined by
the interaction $\Phi$ on the slice $\lat$
and ``conic bonds'' connecting $\lat$ and
$\lat'$ defined by the functions $T_{x'}$.
Rather than writing the full details for $\widetilde\mu$
let us focus on our target measure
$\widetilde\mu_{\Lambda'\times\lat}(\,\cdot\mid
\omega'_{\comp{\Lambda'}})$. To simplify matters
still further, let me advance the fact that the addition of the finitely
many spins in $\Lambda'$, being only a local modification,
does not produce any major change in the
properties we are after (we shall come back to
this in step 3 below). Hence, we look at
$\widetilde\mu_{\lat}(\,\cdot\mid\omega')$
which we interpret as the measure on $\Omega$
obtained by conditioning the original spins to
be ``compatible'' with the image configuration $\omega'$.
Our previous comment on a $\Phi$-$T$ interaction is
formalized, even more generally, as follows.
\begin{proposition}\label{pro:5.1}
Let $\mu$ be consistent with a specification $\Pi$
whose density functions are $\{\gamma_\Lambda:\Lambda\Subset\lat\}$
and let $\tau$ be a block transformation defined
by densities $\{T_{x'}(\omega'_{x'}\mid\omega_{B_{x'}})\}$.
For each $\omega'\in\Omega'$ let
\begin{equation}\label{eq:5.6}
\Omega^{\omega'} \;=\; \Bigl\{\omega\in\Omega:
T_{x'}(\omega'_{x'}\mid\omega_{B_{x'}})>0\;,\; x'\in\lat'\Bigr\}\;.
\end{equation}
Then, the measure
$\widetilde\mu_{\lat}(\,\cdot\mid\omega')$
is consistent with the specification $\Pi^{\tau,\omega'}$
on $\Omega^{\omega'}$
defined by the density functions
\begin{equation}\label{eq:5.7}
\gamma^{\omega'}_\Lambda(\sigma_\Lambda\mid
\omega_{\comp\Lambda} )\;=\;
\frac{1}{\mbox{Norm.}} \,\gamma_\Lambda(\sigma_\Lambda\mid
\omega_{\comp\Lambda})\,\prod_{x'\in B'_\Lambda}
T_{x'}\Bigl(\omega'_{x'}\Bigm|(\sigma_\Lambda\,\omega)_{B_{x'}}\Bigr)
\end{equation}
where $B'_{\Lambda}=\{x'\in\lat': B_{x'}\cap\Lambda\neq\emptyset\}$
and ``Norm'' stands for the sum over $\sigma_\Lambda$
of the numerator. The pair $(\Omega^{\omega'}, \Pi^{\tau,\omega'})$
is the \embf{$\omega'$ constrained internal-spin system}.
\end{proposition}
\begin{exercise}
Prove this proposition. (\emph{Hint:} The shortest route
to prove that \reff{eq:5.7} indeed defines a specification is through
property \reff{eq:2.19}.)
\end{exercise}
If $\Pi$ is defined by an interaction $\Phi$ and the functions
$T_{x'}$ are strictly positive, then $\Omega^{\omega'}=\Omega$
and $\Pi^{\tau,\omega'}$ is Gibbsian for the interaction
\begin{equation}\label{eq:5.8}
\phi^{\tau,\omega'}_B(\omega) \;=\;
\phi_B(\omega) +
\left\{\begin{array}{ll}
- \log T_{x'}(\omega'_{x'}\mid\omega_{B_{x'}})
& \mbox{if } B=B_{x'} \mbox{ for some } x'\in\lat'\\
0 & \mbox{otherwise}\;.
\end{array}
\right.
\end{equation}
Observe that if an inverse temperature $\beta$ is made explicit,
then the factor $\beta$ multiplies only
the terms $\phi_B$, but \emph{not} the last logarithm.
For example, for the $p$-Kadanoff transformation of
the Ising model with magnetic field $h$ at inverse temperature $\beta$,
the measure $\widetilde\mu_{\lat}(\,\cdot\mid\omega')$
is Gibbsian for the interaction with formal Hamiltonian
\begin{equation}\label{eq:5.9}
-\beta \biggl\{\sum_{\langle x,y\rangle} \omega_x\,\omega_y
-h \sum_x \omega_x
-p\,\beta^{-1}\sum_x \omega'_x\sum_{y\in B_x} \omega_y
+ \beta^{-1} \sum_x \log\Bigl[ 2 \cosh
\Bigl(p\sum_{y\in B_x} \omega_y\Bigr)\Bigr]\biggr\}
\end{equation}
which corresponds to an Ising model with an additional magnetic
field that is positive, block-dependent, and also
temperature-dependent, plus a multispin non-linear
antiferromagnetic term with temperature-dependent
couplings.
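As a consistency check on \reff{eq:5.8} and \reff{eq:5.9}, one can verify numerically that the $p$-Kadanoff single-block density normalizes to one, and that $-\log T_{x'}$ reproduces exactly the two $\beta$-independent terms inside \reff{eq:5.9}. A sketch, assuming the standard form $T_{x'}(\omega'\mid\omega_B)\propto \exp(p\,\omega'\sum_{y\in B}\omega_y)$ on a $2\times 2$ block (the value of $p$ is illustrative):

```python
import itertools
import math

p = 0.7  # Kadanoff parameter (illustrative value)

def T(wp, block):
    """p-Kadanoff single-block density T_{x'}(w' | w_B)."""
    S = sum(block)
    return math.exp(p * wp * S) / (2 * math.cosh(p * S))

for block in itertools.product([-1, 1], repeat=4):  # a 2x2 block of spins
    # a proper probability kernel: sums to 1 over the image spin
    assert abs(T(1, block) + T(-1, block) - 1) < 1e-12
    S = sum(block)
    for wp in (-1, 1):
        # -log T splits into the field term and the log-cosh term of (5.9)
        lhs = -math.log(T(wp, block))
        rhs = -p * wp * S + math.log(2 * math.cosh(p * S))
        assert abs(lhs - rhs) < 1e-12
```

The two extra terms are precisely the block-field and the nonlinear antiferromagnetic correction described above; note that neither carries a factor $\beta$ of its own.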
If the renormalization transformation is not strictly positive,
for instance if it is deterministic, we fall into the framework
of models with exclusions. Its analysis depends on the
type of exclusion. The example of decimation transformations
is particularly simple, as the constraints determined
by each $\omega'$ amount to fixing the spins at the
sites in $b\mathbb{Z}^d$. In such a situation it is
better simply to ignore these decimated sites
and consider the measure
$\widetilde\mu_{\mathbb{Z}^d\setminus b\mathbb{Z}^d}(\,\cdot\mid\omega')$
on the remaining internal spins.
That is, we take as internal spin system
$\Omega_{\mathbb{Z}^d\setminus b\mathbb{Z}^d}$
and the interaction obtained from the original one
by fixing the decimated spins. For the decimation
of the Ising model, this internal-spin interaction
corresponds to the same original
Ising interaction plus a field on neighbors to
decimated sites induced by the links to them. In Israel's
example, three sublattices arise naturally.
The decimated spins are on the \emph{even sublattice} $\lat_{\rm even}$
formed by sites with both coordinates even. The neighbors to
decimated sites occupy the \emph{odd/even sublattice}
$\lat_{\rm odd/even}$ where the two coordinates have different
parity. The remaining sites, with both coordinates odd,
form the \emph{odd sublattice} $\lat_{\rm odd}$.
The interaction defining
$\widetilde\mu_{\mathbb{Z}^2\setminus 2\mathbb{Z}^2}(\,\cdot\mid\omega')$
corresponds to the formal Hamiltonian
\begin{equation}\label{eq:5.10}
-\beta \biggl\{\sum_{x\in\lat_{\rm odd}}
\sum_{ y\in \lat_{\rm odd/even}\atop
\card{x-y}=1}\omega_x\,\omega_y
\quad+ \sum_{x\in\lat_{\rm odd/even}} \Bigl(
\sum_{y\in\lat_{\rm even}\atop \card{x-y}=1} \omega'_y\Bigr)
\,\omega_x\biggr\}\;.
\end{equation}
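The sublattice bookkeeping behind \reff{eq:5.10} can be encoded in a few lines (a sketch; the function names are mine, not standard notation):

```python
def sublattice(site):
    """Classify a site of Z^2 into Israel's three sublattices."""
    x, y = site
    if x % 2 == 0 and y % 2 == 0:
        return "even"      # decimated sites, 2Z^2
    if x % 2 == 1 and y % 2 == 1:
        return "odd"       # both coordinates odd
    return "odd/even"      # neighbours of decimated sites

def neighbours(site):
    x, y = site
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

# Every neighbour of an odd site is an odd/even site, and every odd/even
# site has exactly two even (decimated) and two odd neighbours -- the
# structure of the two sums in (5.10).
assert all(sublattice(n) == "odd/even" for n in neighbours((3, 5)))
counts = sorted(sublattice(n) for n in neighbours((2, 5)))
```

The first sum in \reff{eq:5.10} couples each odd site to its four odd/even neighbours; the second acts on odd/even sites only, as a field induced by their two fixed even neighbours.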
In conclusion, step zero teaches us that each conditioned measure
in question ---$\widetilde\mu_{\Lambda'\times\lat}(\,\cdot\mid
\omega'_{\comp{\Lambda'}})=
\mu'_{\Lambda'}(\,\cdot\mid\omega')$
or $\widetilde\mu_{\lat}(\,\cdot\mid\omega')$--- is determined
through consistency with some interaction. If the interaction
presents a first-order phase transition, there are
infinitely many measures to choose from.
The proof of non-Gibbsianness, in fact \emph{needs}
the presence of such phase transitions. Let us now move
to the remaining steps.
\subsubsection{The three steps of a non-quasilocality proof}
\paragraph{Step 1: Choice of an image configuration
producing a phase transition on the internal spins}
We need to choose some special configuration
$\spec$ for which the constrained internal spins
undergo a first-order phase transition. That is,
$\spec$ must be such that there exist two
different measures
$\mu^{\spec}_+,\mu^{\spec}_-\in\mathcal{G}(\Pi^{\tau,\spec})$.
Their being different means that there exists a local
observable $f$ such that
\begin{equation}\label{eq:5.10.1}
\bigl|\mu^{\spec}_+(f)-\mu^{\spec}_-(f)\bigr| \;\defby\; \delta\;> \;0\;.
\end{equation}
In such a situation, one may wonder which measure
has the right to be
denoted $\widetilde\mu_{\lat}(\,\cdot\mid\spec)$. While we
do not answer this, the rest of the argument shows that
whichever the choice, it leads to a discontinuity
at $\spec$.
The choice of $\spec$, of
course, depends on the problem. If the original model
already exhibits multiple phases, then the rule of
thumb is to choose $\spec$ so as not to favor any of
these phases. For the Kadanoff and decimation examples
this means that $\spec$ must be ``magnetically neutral''.
The simplest choice, the alternating configuration
$\spec_x=(-1)^{\card{x}}$, is already suitable.
For Israel's example, this choice causes the
cancellation of the
effective field due to neighboring decimated spins,
which corresponds to replacing the even spins by holes. Formally,
the second sum in the
internal-spin interaction \reff{eq:5.10} disappears.
This corresponds to an Ising model on the \emph{decorated} lattice
$\lat\setminus\lat_{\rm even}$,
formed by sites with four neighbors ---those in $\lat_{\rm odd}$---
and the ``decorations'' ---sites in $\lat_{\rm odd/even}$---
linked only to two other
sites. If we are only interested in observables on the
odd lattice we can sum first over the spins at the decorations.
A little bit of algebra shows that
\begin{equation}\label{eq:5.11}
\sum_{\sigma_d=\pm 1} \exp\bigl(\beta\,\sigma_1\sigma_d
+\beta\,\sigma_d\sigma_2\bigr) \;=\;
C \exp\bigl(\beta'\sigma_1\sigma_2\bigr)
\end{equation}
where $C$ is an uninteresting constant and
\begin{equation}\label{eq:5.12}
\beta'\;=\; {\textstyle\frac{1}{2}} \log\cosh 2\beta\;.
\end{equation}
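A quick numerical sketch confirms both \reff{eq:5.11} and \reff{eq:5.12}, and also shows that already at $\beta=1$ the renormalized coupling exceeds Onsager's critical value $\beta_c=\frac12\log(1+\sqrt2)$ (the constant $C$ below follows from checking the identity at aligned and anti-aligned pairs):

```python
import itertools
import math

beta = 1.0
beta_p = 0.5 * math.log(math.cosh(2 * beta))  # eq. (5.12)
C = 2 * math.sqrt(math.cosh(2 * beta))        # the 'uninteresting' constant

# eq. (5.11): summing out a decoration spin sigma_d produces a direct
# coupling beta' between its two neighbours sigma_1, sigma_2
for s1, s2 in itertools.product([-1, 1], repeat=2):
    lhs = sum(math.exp(beta * s1 * sd + beta * sd * s2) for sd in (-1, 1))
    assert abs(lhs - C * math.exp(beta_p * s1 * s2)) < 1e-12

# Onsager's critical coupling for the square-lattice Ising model
beta_c = 0.5 * math.log(1 + math.sqrt(2))
```

At $\beta=1$ one finds $\beta'\approx 0.66 > \beta_c\approx 0.44$, so the constrained internal-spin model is already in its two-phase region.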
This means that the internal-spin system, constrained
by the alternating configuration, becomes equivalent
to an Ising model at a lower temperature. If the initial
model is at a low enough temperature, $\beta'$
exceeds the critical Onsager value and the internal-spin
model acquires two different pure phases, respectively
supported on configurations formed by a percolating
sea of ``$+1$'' and a sea of ``$-1$'', with fluctuations
on finite and isolated ``islands''. These are our measures
$\mu^{\spec}_{+}$ and $\mu^{\spec}_{-}$; they
are characterized by the fact that
\begin{equation}\label{eq:5.13}
0\;<\; m(\beta')\;\bydef\; \mu^{\spec}_{+}(\sigma_0)
\;=\;-\mu^{\spec}_{-}(\sigma_0)\;.
\end{equation}
The analogous proof for the Kadanoff transformed measure
is much more involved. It demands a technical, but
widely used, perturbative approach
starting from the zero-temperature phase diagram.
Let me describe it briefly, while referring the reader to
\cite[Appendix B]{vEFS_JSP} for a detailed
presentation and all relevant definitions and references.
In a nutshell, the approach has two stages:
\begin{description}
\item[\emph{Stage 1:}] Determination of the translation-invariant
\emph{ground states} of the model. These are the translation-invariant
measures consistent with the specification obtained
as the zero-temperature ($\beta\to\infty$) limit
of the specification under study. Two types of conditions
must be met for the approach to be applicable. First, the
extremal points of this set of measures (\emph{pure
phases}) must be $\delta$-like, that is, supported
by single configurations. Second, the resulting
phase diagram (that is, the catalogue of ground
states for different values of parameters like $h$)
must be \emph{regular}, in some precise sense,
or have appropriate symmetry properties. In particular
the number of extremal translation-invariant
ground states must be finite throughout a whole region
of parameter values.
\item[\emph{Stage 2:}]
Stability of the zero-temperature phase diagram.
This is proven through a very powerful and sophisticated
theory due to Pirogov and Sinai. Its hypotheses
include the regularity features mentioned above
plus the so-called \emph{Peierls condition}
which roughly means that local fluctuations
of ground states are suppressed exponentially
in their volume. This allows one to show the stability of
phases by suitably generalizing the
Peierls contour argument for the Ising model.
\end{description}
It is relatively simple to verify that the translation-invariant
ground states of the interaction
\reff{eq:5.9} with an alternating block-field $\omega'_x$
are ($\delta$-measures on) the all-``$+1$'' or the all-``$-1$'' configurations,
depending on $h$, with coexistence for $h=0$ (by symmetry
reasons). The validity of the Peierls condition follows,
by continuity considerations, from that of the Ising model.
Some subtlety arises from the fact that
\reff{eq:5.9} has $\beta$-dependent parameters
(the last two). This requires a stronger (uniform)
version of Pirogov-Sinai theory. The conclusion
of all this analysis is that the interaction \reff{eq:5.9}
admits two different consistent measures
$\mu^{\spec}_{+}$ and $\mu^{\spec}_{-}$,
with properties similar to those of the decimation case. In particular
they satisfy \reff{eq:5.13}.
\paragraph{Step 2: Choice of discontinuity neighborhoods}
To prove \reff{eq:3.73} the measures consistent with $\Pi^{\tau,\spec}$
need to be approximated by measures obtained similarly for
image spins fixed in configurations of the form
$\spec_{\Lambda'}\eta'_{\Gamma'\setminus\Lambda'}\sigma'$.
The idea is to find configurations
$\eta^{\prime\,\pm}\in\Omega'$ and a sequence
of natural numbers $N_R$, with $N_R>R$,
such that \emph{all} measures $\mu^{R,\sigma^{\prime\,+}}$ and
$\mu^{R,\sigma^{\prime\,-}}$, respectively
consistent with the specifications
$\Pi^{\tau, \spec_{\Lambda'_R}\eta^{\prime\,+}_{\Lambda'_{N_R}
\setminus\Lambda'_R}\sigma^{\prime+}}$
and
$\Pi^{\tau, \spec_{\Lambda'_R}\eta^{\prime\,-}_{\Lambda'_{N_R}
\setminus\Lambda'_R}\sigma^{\prime-}}$,
satisfy that, for any choice of $\sigma^{\prime+},
\sigma^{\prime-}\in \Omega'$
\begin{equation}\label{eq:5.14}
\mu^{R,\sigma^{\prime\,\pm}}(f) \;\tend{R}{\infty}{}\;
\mu^{\spec}_{\pm}(f)
\end{equation}
where $f$ is the observable satisfying \reff{eq:5.10.1}.
Combining \reff{eq:5.10.1} with \reff{eq:5.14} we thus obtain that
\begin{equation}\label{eq:5.15}
\lim_{R\to \infty} \, \biggl|\widetilde\mu_\lat\Bigl(f\Bigm|
\spec_{\Lambda'_R}\,\eta^{\prime\,+}_{\Lambda'_{N_R}
\setminus\Lambda'_R}\,\sigma^{\prime+}\Bigr) -
\widetilde\mu_\lat\Bigl(f\Bigm|
\spec_{\Lambda'_R}\,\eta^{\prime\,-}_{\Lambda'_{N_R}
\setminus\Lambda'_R}\,\sigma^{\prime-}\Bigr)\biggr|
\;\ge\;\delta
\end{equation}
for any $\sigma^{\prime+}, \sigma^{\prime-}\in \Omega'$
for $R$ large enough. In view of \reff{eq:5.5},
this almost proves \reff{eq:3.73} for the renormalized measure
$\mu'$. The existence of configurations $\eta^{\prime\pm}$ with
the above properties is, as a matter of fact, a further condition for
the choice of $\spec$.
For the $2\times 2$-decimation of the Ising model it is
relatively simple to prove \reff{eq:5.14}. Indeed,
a short calculation shows that
if the decimated spins are fixed in the alternating
configuration inside a region $\Lambda_R$ and
equal to $+1$ in the annulus immediately outside,
the internal spins in the region $\Lambda_{R+1}$
are subjected to an Ising interaction with a
magnetic field at the boundary. This field is at least
equal to $\beta'$
\emph{whatever the configuration of
the spins further out} (internal or otherwise).
Hence, regardless of the image configuration
on $\Lambda_{R+1}^{\rm c}$,
the expected magnetization at the origin is
(by Griffiths inequalities) no smaller than that
of an Ising model on a square with ``$+$'' boundary
conditions, which converges
to that of the ``$+$'' Ising measure when the size of the square
diverges. An analogous
argument can be done for a ``$-1$'' boundary
condition. We conclude that \reff{eq:5.14}
is verified for $N_R=R+1$, $\sigma^{\prime\pm}=\pm$
and $f(\sigma)=\sigma_0$, and thus
\begin{equation}\label{eq:5.16}
\lim_{R\to \infty} \, \biggl|\widetilde\mu_\lat\Bigl(\sigma_0\Bigm|
\spec_{\Lambda'_R}(+1)_{\Lambda'_{R+1}
\setminus\Lambda'_R}\,\sigma^{\prime+}\Bigr) -
\widetilde\mu_\lat\Bigl(\sigma_0\Bigm|
\spec_{\Lambda'_R}(-1)_{\Lambda'_{R+1}
\setminus\Lambda'_R}\,\sigma^{\prime-}\Bigr)\biggr|
\;=\; 2\,m(\beta')
\end{equation}
which is nonzero if the temperature of the original Ising
model is low enough.
The argument for all other cases (including decimation
in higher dimensions) is less simple. The standard
strategy involves finding configurations
$\eta^{\prime+},\eta^{\prime-}\in\Omega'$ such that:
\begin{itemize}
\item[(i)] The specifications
$\Pi^{\tau,\eta^{\prime+}}$ and $\Pi^{\tau,\eta^{\prime-}}$
admit \emph{unique} consistent measures
respectively denoted by
$\mu^{\eta^{\prime+}}$ and $\mu^{\eta^{\prime-}}$.
\item[(ii)] For any $R>0$, all measures
$\mu^{R,\eta^{\prime+}}$ consistent with
$\Pi^{\tau,\spec_{\Lambda'_R}\,\eta^{\prime+}}$ and all measures
$\mu^{R,\eta^{\prime-}}$ consistent with
$\Pi^{\tau,\spec_{\Lambda'_R}\,\eta^{\prime-}}$
satisfy
\begin{equation}\label{eq:5.17}
\Bigl| \mu^{R,\eta^{\prime+}}(f) - \mu^{R,\eta^{\prime-}}(f)\Bigr|
\;\ge\; \Bigl| \mu^{\spec}_+(f) - \mu^{\spec}_-(f)\Bigr|
\end{equation}
(this is often done with the help of correlation
inequalities).
\end{itemize}
Property (i) implies that each of the specifications
$\Pi^{\tau,\spec_{\Lambda_R}\eta^{\prime\pm}}$, $R>0$, has also
a single consistent measure because it is obtained from
$\Pi^{\tau,\eta^{\prime\pm}}$ by a local change.
We also make use of the following fact: Let $(\mu_n)$ and
$(\Phi_n)$ be respectively sequences of
measures and interactions on $(\Omega,\tribu)$
such that $\mu_n\in\mathcal{G}(\Phi_n)$. Then, if
$\Phi_n$
converges (in $\buno$) to an interaction $\Phi$, every
convergent subsequence of $(\mu_n)$ is consistent
with $\Pi^\Phi$.
We apply this to the sequence of interactions
$\Phi^{\spec_{\Lambda'_R}\,\eta^{\prime\pm}_{\Lambda'_N}}$
which converges, as $N\to\infty$, to
$\Phi^{\spec_{\Lambda'_R}\,\eta^{\prime\pm}}$, to conclude, from
\reff{eq:5.17} and \reff{eq:5.10.1}, that for each $R>0$
one can choose $N_R$ sufficiently large so that
\reff{eq:5.15} is valid for a $\delta$ slightly smaller than
that in \reff{eq:5.10.1}.
\paragraph{Step 3: ``Unfreezing'' of $\Lambda'$}
The last step consists in showing that, as a consequence
of the previous steps, we can actually find an
observable $f'\in\tribu'_{\Lambda'}$, somehow related
to, or inspired by, $f$, so that the analogue of \reff{eq:5.15} holds
for $\widetilde\mu_{\Lambda'\times\lat}(f'\mid\cdot\,)$.
In fact, for each $\omega'\in\Omega'$
the specification $\Pi^{\tau,\Lambda',\omega'}$
defining the measures
$\widetilde\mu_{\Lambda'\times\lat}(\,\cdot\mid
\omega'_{\comp{\Lambda'}})$ is obtained from
the specification $\Pi^{\tau,\omega'}$ defining
$\widetilde\mu_{\lat}(\,\cdot\mid\omega')$ by
``unfreezing'' the factors
$ T_{x'}(\,\cdot\mid\omega_{B_{x'}})$ for $x'\in\Lambda'$.
This corresponds to a multiplication of the kernels of
$\Pi^{\tau,\omega'}$ by a local density, or to the addition
of a finite number of bonds to the interaction defining the latter.
Therefore there is a canonical bijection between
$\mathcal{G}(\Pi^{\tau,\Lambda',\omega'})$ and
$\mathcal{G}(\Pi^{\tau,\omega'})$ for each fixed $\omega'$.
In particular, the existence of unique or multiple phases in one
of them implies the same feature in the other one.
We conclude that the configurations $\spec$ and
$\eta^{\prime\pm}$ chosen above also allow
the successful completion of steps 1 and 2 for
$\widetilde\mu_{\Lambda'\times\lat}$ for every
$\Lambda'\Subset\lat'$. We only need
to show the existence of $f'$ such that
\begin{equation}\label{eq:5.18}
\bigl|\widetilde\mu^{\spec,\Lambda'}_+(f')-
\widetilde\mu^{\spec,\Lambda'}_-(f')\bigr| \;> \;0\;,
\end{equation}
where
\begin{equation}\label{eq:5.19}
\widetilde\mu^{\spec,\Lambda'}_\pm(f') \;=\;
\sum_{\sigma'_{\Lambda'}}
\int f'(\sigma'_{\Lambda'}) \prod_{x'\in\Lambda'}
T_{x'}\bigl(\sigma'_{x'}\bigm| \omega_{B_{x'}}\bigr)
\;\mu^{\spec}_\pm(d\omega)\;.
\end{equation}
The properties of $\mu^{\spec}_\pm$ must now be exploited.
For the decimation and Kadanoff examples, we have to consider
\begin{equation}\label{eq:5.20}
2\,m'\;\bydef\; \sum_{\sigma'_0}
\int \sigma'_0 \;T_0\bigl(\sigma'_0\bigm| \omega_{B_0}\bigr)
\;\Bigl[\mu^{\spec}_+(d\omega) - \mu^{\spec}_-(d\omega)\Bigr]\;.
\end{equation}
At low enough temperatures the measure $\mu^{\spec}_+$ favors ``$+1$''
spins, while $\mu^{\spec}_-$ favors ``$-1$'' (this can be seen by
correlation inequalities or contour arguments: the probability that a
finite region be inside or intersecting a contour goes to zero as
temperature decreases). The transformation density $T_0$, on the
other hand, favors alignment of $\sigma'_0$ with the majority of the
spins in $\omega_{B_0}$. Both effects combined lead to $m'>0$.
\subsubsection{Non-quasilocality throughout the phase diagram}
Following the preceding argument, non-quasilocality has been exhibited
for renormalizations of the Ising model at low temperature and zero
field, for all of the block transformations described in Section
\ref{ssec:ren}. The renormalized measures have subsequently been
shown to be weakly Gibbs \cite{brikuplef98}, while decimated measures
are, in fact, almost quasilocal \cite{ferlenred03,ferlenred03b}.
We see, however, that the above argument relies on the existence of
phase transitions \emph{for the constrained internal spin system}
rather than for the original system. Therefore, non-quasilocality
should be expected also outside the coexistence region of the original
model, in situations where the constraints produced by the
renormalized spins act like fields that bring the internal system into
a phase coexistence region. So we must add
the scenario
\begin{equation}\label{eq:5.21}
\begin{array}{ccc}
\mu &\stackrel{\textstyle\tau}{\longrightarrow} &\mu'\\
\uparrow & &\not\,\downarrow\\
\Phi &\not\!\!\longrightarrow & ??
\end{array}
\end{equation}
in competition with scenario \reff{eq:5.1}. Israel~\cite{isr79}
already exhibited such a phenomenon in his $2\times 2$-decimation
example: A small but non-zero magnetic field of the original Ising
model can be compensated by the (non translation-invariant) field
created by a suitable $\spec$ so that, at low (original) temperatures,
the non-decimated spins undergo a phase transition and the decimated
measure becomes non-quasilocal. This measure is, however,
almost-quasilocal~\cite{ferpfi96}, and its quasilocality can be
restored by further decimations~\cite{maroli93}. More dramatic
examples include block-averaging~\cite[Section 4.3.5]{vEFS_JSP} and
majority~\cite{entferkot95} transformations of the Ising model at high
magnetic field, and decimations of high-$q$-Potts models above the
critical temperature~\cite{entferkot95}. One can even design, for
each temperature, a perverse transformation such that the
renormalization of the Ising measure at this temperature is
non-quasilocal~\cite{ent97}.
There is a clear message coming from these examples: The choice of a
renormalization transformation is a touchy business.
Top-of-the-shelf choices may lead to non-Gibbsian renormalized
measures for which calculations of Hamiltonian parameters
---renormalized temperature, renormalized couplings--- have a doubtful
meaning. The transformation must be well-adapted to the problem, and
the questions, at hand. In particular, block-spin transformations may not
be a good idea at low temperatures, where long-range order
pervades. Rather, renormalization ideas should be applied at the level
of collective variables, like contours~\cite{gawkotkup86}.
\subsection{Surprise number two: spin-flip evolutions}
Metropolis and heat-bath algorithms have been instrumental for the
simulation of statistical mechanical systems. They are processes in
which each spin of a finite lattice is visited according to a certain
routine (sequentially, randomly, by random shuffling) and updated
stochastically by comparing energies before and after the proposed
flip. Their continuous-time counterparts are the \emph{Glauber
spin-flip dynamics} in which the updates are attempted according to
independent Poissonian clocks attached to each site. The dynamics are
tailored so as to converge to a target spin measure which is the object
of the simulation. Each simulation realization is started from some
initial configuration, and a sample configuration is collected after a
number of steps. If this number is sufficiently large, the samples
are distributed approximately according to the target measure. Often, the initial
configuration is a ground state, or zero-temperature measure, and the
simulation acts as a numerical furnace that heats it up
(``unquenches'' it) so as to bring it to a typical configuration at the
intended temperature.
These simulation schemes define a sequence of transformations of
measures as considered in Section \ref{ssec:ren}. Actual simulations
apply these transformations to Boltzmann measures in finite regions
(usually with periodic boundary conditions), but ideally they should
be applied to measures on the whole lattice. An ideal ``unquenching''
transformation is, then, a high-temperature Metropolis or Glauber
dynamics (that is, a dynamics converging to a high temperature Gibbs
state) applied to a low-temperature Gibbs state. We were surprised by
the fact that, if the temperature difference between the initial and
final states is big enough, non-Gibbsianness enters into the
picture~\cite{entetal02}.
To see how, let us consider a very simple updating process for
Ising spins, in which
at successive time units each spin is flipped independently with
probability $\epsilon\in(0,1)$. The invariant measure for this process
gives equal probability to each spin configuration, thus the process
can be interpreted either as an infinite-temperature parallel Metropolis
algorithm, or an infinite-temperature
discrete time Glauber dynamics. Mathematically,
this process is a block transformation with
$\Omega'=\Omega=\{-1,1\}^{\mathbb{Z}^d}$,
single-site blocks and kernel densities
\begin{equation}\label{eq:5.22}
\begin{array}{rcl}
T_{\{x\}}(\omega_x\mid\omega_x) &=& 1-\epsilon\\
T_{\{x\}}(-\omega_x\mid\omega_x) &=& \epsilon
\end{array}
\end{equation}
Such densities are better expressed as a matrix
$(T_x)_{\sigma\,\eta}\bydef T_{\{x\}}(\sigma\mid\eta)$
which takes the form
\begin{equation}\label{eq:5.23}
T_x\;=\; \left(
\begin{array}{cc}
1-\epsilon & \epsilon \\
\epsilon & 1-\epsilon
\end{array}
\right)
\;=\; \mathbb{I}-\epsilon
\left(
\begin{array}{cc}
1 & -1 \\
-1 & 1
\end{array}
\right)
\;\defby\; \mathbb{I}-\epsilon \,\mathbb{J}\;.
\end{equation}
The $n$-th iteration of such a transformation corresponds, thus,
to single-site kernels
$T^n_{\{x\}}(\sigma_x\mid\eta_x)= (T^n_x)_{\sigma_x\,\eta_x}$
where $T^n_x$ is the $n$-th power of the matrix $T_x$.
Given that $\mathbb{J}^\ell = 2^{\ell-1} \mathbb{J}$ if $\ell\ge 1$
(and equal to $\mathbb{I}$ if $\ell=0$), we obtain
\begin{eqnarray}\label{eq:5.24}
T^n_x &=& \sum_{\ell = 0}^{n} {n \choose \ell} (-\epsilon)^\ell
\,\mathbb{J}^\ell
\;=\; \mathbb{I} + \frac{1}{2}
\sum_{\ell = 1}^{n} {n \choose \ell} (-2\epsilon)^\ell\,
\mathbb{J}
\;=\; \mathbb{I} + \frac{1}{2}\bigl[(1-2\epsilon)^n-1\bigr]\,\mathbb{J}\\
&=& \frac{1}{2}
\left(
\begin{array}{cc}
1+a_n & 1-a_n \\
1-a_n & 1+a_n
\end{array}
\right)
\end{eqnarray}
with
\begin{equation}\label{eq:5.25}
a_n \;=\; (1-2\epsilon)^n.
\end{equation}
Therefore
\begin{equation}\label{eq:5.26}
T_{\{x\}}^n(\omega'_{x}\mid\omega_x) \;=\;
\frac{1}{2} + \frac{a_n}{2}\,\omega'_x\,\omega_x \;=\;
A_n\,\eee^{h_n\,\omega'_x\,\omega_x}
\end{equation}
where the factor $A_n=\bigl[2\cosh h_n\bigr]^{-1}$ will be
eaten up by normalizations and
\begin{equation}\label{eq:5.27}
h_n \;=\; \frac{1}{2}\,\log\Bigl(\frac{1+a_n}{1-a_n}\Bigr)\;.
\end{equation}
[In fact, $T^n$ is a Kadanoff transformation with
single-site blocks and $p=h_n$.]
Let me observe that
\begin{equation}\label{eq:5.28}
h_n\build{\searrow}_{n\to\infty}^{} 0 \qquad \mbox{and} \qquad
h_n\build{\nearrow}_{\epsilon\to 0}^{}\infty\;.
\end{equation}
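The algebra in \reff{eq:5.23}--\reff{eq:5.27} is easy to confirm by iterating the matrix $T_x$ directly (a sketch; the values of $\epsilon$ and $n$ are illustrative):

```python
import math

eps, n = 0.1, 7
# Iterate T = I - eps*J by repeated matrix multiplication
T = [[1 - eps, eps], [eps, 1 - eps]]
Tn = [[1, 0], [0, 1]]
for _ in range(n):
    Tn = [[sum(Tn[i][k] * T[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]

a_n = (1 - 2 * eps) ** n  # eq. (5.25)
assert abs(Tn[0][0] - (1 + a_n) / 2) < 1e-12
assert abs(Tn[0][1] - (1 - a_n) / 2) < 1e-12

# eqs. (5.26)-(5.27): Tn as A_n * exp(h_n * w' * w), with tanh(h_n) = a_n
h_n = math.atanh(a_n)
A_n = 1 / (2 * math.cosh(h_n))
for i, wp in enumerate((1, -1)):      # row index 0 <-> spin +1
    for j, w in enumerate((1, -1)):   # column index 0 <-> spin +1
        assert abs(Tn[i][j] - A_n * math.exp(h_n * wp * w)) < 1e-9
```

In particular $h_n=\operatorname{arctanh} a_n$, which makes both limits in \reff{eq:5.28} transparent: $a_n\to 0$ as $n\to\infty$, and $a_n\to 1$ as $\epsilon\to 0$.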
We can now make use of the analysis of the previous section.
For fixed $n$ we look at the pair of slices $\Omega\times\Omega'$
respectively formed by the initial configurations and those
at ``time'' $n$, that is at the $n$-th iteration of the process.
The non-quasilocality of the
transformed measure $\mu'$ is related to the
existence of some $\spec$ for which the resulting constrained
initial-spin system exhibits multiple phases. Such a system
corresponds to an interaction which includes, as additional
terms, the bonds \reff{eq:5.26}. Therefore, if we start with an Ising
measure, the condition of observing a configuration $\omega'$ at time $n$
is seen by the initial spins as a correction to the magnetic field leading
to a formal Hamiltonian
\begin{equation}\label{eq:5.29}
-\beta\,\biggl\{\sum_{\langle x, y \rangle} \omega_x\,\omega_y
- \sum_x\Bigl(h+\frac{h_n}{\beta}\omega'_x\Bigr)\,\omega_x
\biggr\}\;.
\end{equation}
We can distinguish three regimes:
\begin{itemize}
\item[(i)] \emph{Short times:} For $n$ small, the effective
magnetic field
$\bigl|h+\frac{h_n}{\beta}\omega'_x\bigr|$ is large if $\epsilon$ is sufficiently
small [rightmost observation in \reff{eq:5.28}].
Hence no phase transition is present and the time-$n$ measure
is expected to be quasilocal. This can be proven, for
$n$ small enough, through an argument that relies
on the existence of ``global'' specifications from
which the specifications
$\Pi^{\tau,\omega'}$ are derived. The argument exploits
FKG monotonicity and the Dobrushin uniqueness criterion. If
the initial model is itself at high temperature, then the
measure remains Gibbsian throughout the evolution.
\item[(ii)] \emph{Long times:} For $n$ large,
$h+\frac{h_n}{\beta}\omega'_x \sim h$
[leftmost observation in \reff{eq:5.28}], hence
no phase transition, and thus the quasilocality of $\mu'$,
is expected (and proven) if $h>0$, while for
large $\beta$ and $h=0$ a phase transition
makes the transformed measure discontinuous at
$\spec$ equal to the alternating configuration.
\item[(iii)] \emph{Intermediate times:} If $h>0$ and $\epsilon$
small, then for large enough $\beta$ there is a range of $n$
for which a configuration $\spec$ exists such that
$\frac{h_n}{\beta}\spec_x$ effectively compensates $h$.
The resulting phase transition leads to the non-quasilocality
of the evolved measure.
\end{itemize}
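To fix ideas, the compensation required in (iii) reads
$h_n/\beta\approx h$, which by \reff{eq:5.27}, written as
$h_n=2\tanh^{-1}(a_n)$, amounts to
\begin{equation*}
a_n \;\approx\; \tanh\Bigl(\frac{\beta h}{2}\Bigr)\;.
\end{equation*}
For $\epsilon$ small and $\beta$ large this singles out an intermediate
window of times $n$.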
\begin{figure}[h]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(14,4)
\thicklines
\put(0,3.4){($h=0$)}
\put(2,3.25){\line(0,1){.5}}
\put(1.9,2.9){$0$}
\put(2,3.5){\line(1,0){2}}
\put(2.5,3.6){Gibbs}
\put(4,3.25){\line(0,1){.5}}
\put(3.8,2.9){$n_1$}
\put(4.2,3.5){\ldots\ldots}
\put(5.3,3.5){\vector(1,0){7}}
\put(7,3.6){Non-Gibbs (NQL)}
\put(5.3,3.25){\line(0,1){.5}}
\put(5.1,2.9){$n_2$}
%
\put(0,1.4){($h>0$)}
\put(2,1.25){\line(0,1){.5}}
\put(1.9,0.9){$0$}
\put(2,1.5){\line(1,0){2.3}}
\put(2.6,1.6){Gibbs}
\put(4.3,1.25){\line(0,1){.5}}
\put(4.1,0.9){$n_1$}
\put(4.4,1.5){\ldots}
\put(5,1.5){\line(1,0){4}}
\put(5,1.25){\line(0,1){.5}}
\put(4.8,0.9){$n_2$}
\put(5.5,1.6){Non-Gibbs (NQL)}
\put(9,1.25){\line(0,1){.5}}
\put(8.8,1){$n_3$}
\put(9.2,1.5){\ldots}
\put(9.8,1.5){\vector(1,0){4}}
\put(11,1.6){Gibbs}
\put(9.8,1.25){\line(0,1){.5}}
\put(9.6,1){$n_4$}
\end{picture}
\end{center}
\caption{Proven regimes of Gibbsianness and non-Gibbsianness for
a low-temperature Ising measure subjected to fast heating}
\label{fig:5.1}
\end{figure}
The situation is summarized in Figure \ref{fig:5.1} when
the stirring probability $\epsilon$ is small. Larger values
of $\epsilon$ lead to larger changes in each time unit
and some of the initial regions may disappear (some $n_i$
may turn out to be smaller than one).
Through a more complicated but
similar analysis the same statements are proven for
general high-temperature stochastic dynamics
both in discrete and continuous time~\cite{entetal02}.
In these cases the effective Hamiltonian for
the evolved measure acquires some long-range terms
that decay exponentially with the diameter of the bond.
They must be controlled by perturbative arguments
(cluster expansions, Pirogov-Sinai theory).
The heuristic explanation of these results is as follows.
For short enough times the evolution causes only a few
changes. Therefore the evolved measure differs
little from the initial measure and, in particular, preserves
its Gibbsian character. This is true for more general
reversible dynamics, for instance for
dynamics of Kawasaki type ---which
conserve the total number of spins taking each value---
or mixtures of Glauber and Kawasaki
dynamics~\cite{lenred02}.
The onset of non-Gibbsianness
at later times ---and of the subsequent Gibbsianness
if $h>0$--- corresponds to a transition in the
\emph{most probable history of an improbable
configuration} (the expression is due to Aernout van Enter).
There are two competing mechanisms to explain
the presence of a droplet $\omega_\Lambda$ at some
instant of the evolution:
(i) It has been created by the dynamics, and (ii) the
droplet was already present initially and it survived
the evolutionary period. The probabilistic cost of the first
event increases, roughly, exponentially
with the volume $\card\Lambda$ of the droplet.
The second mechanism is even more costly
if the droplet is atypical for the initial measure,
because its initial presence already carries a cost
exponential in the volume. But if $\omega$ is typical of one
of the phases of the initial system, this factor
becomes exponential only in the surface
area $\card{\partial\Lambda}$. As droplets
at worst shrink at constant velocity, the second
mechanism is more probable for such a
droplet for intermediate times.
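Schematically, ignoring all prefactors and rate constants, the two
mechanisms carry probabilities of the order of
\begin{equation*}
{\rm e}^{-c_1\,\card\Lambda}
\quad\mbox{(creation by the dynamics)}
\qquad\mbox{versus}\qquad
{\rm e}^{-c_2\,\card{\partial\Lambda}}
\quad\mbox{(survival of a typical initial droplet)}
\end{equation*}
for some constants $c_1,c_2>0$; for large droplets the surface cost
always wins.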
Suppose now
that at some not-too-short time we observe a
configuration
$\spec_\Lambda\sigma'_{\Gamma\setminus\Lambda}$
with $\Lambda$ large and $\Gamma$
enormous,
$\spec$ atypical of any of the phases of the initial system
and $\sigma'$ typical of one of them. The most likely explanation
is, thus, that $\spec_\Lambda$ was formed during the
evolution, while $\sigma'_{\Gamma\setminus\Lambda}$ is a remnant of
the initial configuration. The initial gigantic $\sigma'$ droplet
causes a bias on the evolved configuration around the origin.
In this way, through the original (``hidden'') spins,
the far-away annulus
$\sigma'_{\Gamma\setminus\Lambda}$ determines
the evolved measure close to the origin; quasilocality
is lost. For non-zero magnetic field, the initial system
has only one phase. If the elapsed time is large
enough, only droplets typical of this phase
are able to survive; any other $\sigma'$
must have been created by the evolution.
This creation is a local phenomenon, so quasilocality
is recovered.
Whereas in a renormalization context, lack of
quasilocality implies that a renormalization group map does not exist,
here the physical interpretation is that, after some time, the evolved
(rapidly heated) measure cannot be described by a temperature.
This phenomenon has been the object of a numerical study~\cite{olipet05}.
\subsection{Surprise number three: disordered models}\label{sec:dis}
A statistical mechanical system is \emph{disordered} if there
are parameters in the interaction that are themselves random variables.
Its mathematical framework is as follows. Besides the space
of spin configurations $(\Omega=\sing^\lat,\tribu)$ there is another
space of \emph{disorder variables}
$\bigl(\Omega^*=(\sing^*)^{\lat^*},\tribu^*\bigr)$, where
$\sing^*$ is some space that need not be finite or discrete,
$\lat^*$ is a countable set and
$\tribu^*$ is the product $\sigma$-algebra of some natural
Borel measure structure of $\sing^*$. The disorder
variables come equipped with some \emph{disorder measure}
$\mathbb{P}$ that is often extremely simple, typically a product measure.
A \emph{disordered interaction} is a family of functions
$\bigl\{\phi_A(\,\cdot\mid\cdot\,)\in\tribu\times\tribu^*: A\Subset\lat\bigr\}$
such that $\phi_A(\,\cdot\mid\eta^*)\in\tribu_A$ for each
$A\Subset\lat$ and $\eta^*\in\Omega^*$. Often, the disorder
dependence is also local in the sense that for each
$A\Subset\lat$ there exists $A^*\Subset\lat^*$ such that
$\phi_A(\sigma\mid\cdot\,)\in\tribu^*_{A^*}$ for
each $\sigma\in\Omega$. A disordered interaction
defines for each value $\eta^*$ an interaction
$\Phi(\,\cdot\mid\eta^*)=
\bigl\{\phi_A(\,\cdot\mid\eta^*): A\Subset\lat\bigr\}$
on $(\Omega,\tribu)$ which, under the $\buno$-summability
condition
$\sum_{A\ni x}\norm{\phi_A(\,\cdot\mid\eta^*)}<\infty$,
$x\in\lat$, leads to Gibbsian
specifications
$\Pi^{\Phi(\,\cdot\mid\eta^*)}$ on $(\Omega,\tribu)$.
The study of \emph{quenched disorder} amounts to the
determination of features of the phase diagram
and properties of extremal measures of the models defined
by these specifications for \emph{fixed} typical choices of the disorder.
More precisely, the interest
focuses on features and properties valid
$\mathbb{P}$-almost surely, that is, for almost all
disorder configurations $\eta^*$. In contrast, in
the analysis of \emph{annealed disorder} there is
a prior $\mathbb{P}$-average over the Gibbs weights of the quantity
in question. This averaging makes annealed disorder
much easier to study than its quenched counterpart.
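At the level of free-energy densities, for instance, the two schemes
correspond formally (in suitable units) to
\begin{equation*}
f_{\rm quenched} \;=\; -\lim_{\Lambda\to\lat}\frac{1}{\card\Lambda}
\int \log Z_\Lambda^{\Phi(\,\cdot\mid\eta^*)}\;\mathbb{P}(d\eta^*)
\qquad\mbox{versus}\qquad
f_{\rm annealed} \;=\; -\lim_{\Lambda\to\lat}\frac{1}{\card\Lambda}
\log \int Z_\Lambda^{\Phi(\,\cdot\mid\eta^*)}\;\mathbb{P}(d\eta^*)\;;
\end{equation*}
averaging the partition function itself, before taking logarithms,
reduces the annealed computation to an ordinary, non-disordered one.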
Let me mention three well-known examples.
\begin{description}
\item[\emph{Random-field Ising model:}] It represents an
Ising model with a random independent magnetic field
at each site. That is, $\lat^*=\lat$, $\sing^*\subset\mathbb{R}$,
$\mathbb{P}$ is the product of reasonable single-site distributions
(e.g.\ Gaussian or of bounded support) and the disordered interaction
yields the formal Hamiltonians
\begin{equation}\label{eq:5.30}
-\sum_{\langle x,y \rangle}\sigma_x\sigma_y
-h\sum_x \eta^*_x\,\sigma_x\;.
\end{equation}
\item[\emph{Edwards-Anderson spin glass:}] It corresponds
to a zero-field Ising model with random independent couplings.
Therefore the disorder acts on the bond-lattice,
$\lat^*=\bigl\{\{x,y\}: x,y\in\lat, \card{x-y}=1\bigr\}$,
$\sing^*\subset\mathbb{R}$,
$\mathbb{P}$ is a product of reasonable single-bond distributions and
the formal disordered Hamiltonians are
\begin{equation}\label{eq:5.31}
-\sum_{\langle x,y \rangle}\eta^*_{\{x,y\}}\,\sigma_x\sigma_y\;.
\end{equation}
\item[\emph{Griffiths singularity (GriSing) random field:}] It describes
an Ising model on the random lattice determined by independent
site-percolation. Thus $\lat^*=\lat$, $\sing^*=\{0,1\}$ and
$\mathbb{P}$ is the product of Bernoulli variables taking value $1$ with
probability $p$ and $0$ with probability $1-p$. The formal Hamiltonians
are
\begin{equation}\label{eq:5.32}
-\sum_{\langle x,y \rangle}\eta^*_x\eta^*_y\,\sigma_x\,\sigma_y
-h\sum_x \eta^*_x\sigma_x\;.
\end{equation}
This model was introduced by Griffiths to illustrate the appearance
of singularities, now known as Griffiths singularities, which prevent
the disordered free energy, though infinitely differentiable, from being analytic.
\end{description}
A natural approach to the study of quenched disorder is
to place spin and disorder variables on the same footing
and consider a ``grand-canonical ensemble'' on
the product space $\Omega\times\Omega^*$
from which quenched measures are obtained as projections
on $\Omega$. In this way quenching is incorporated within
the grand-canonical average and hence constitutes an
``annealed approach to quenched disorder''. Such an approach
was first advocated by Morita in the sixties~\cite{mor64}.
Formally, this corresponds to considering the
\emph{skew space}
$(\Omega\times\Omega^*,\tribu\times\tribu^*)$
and joint-variable measures $K$ obtained as weak limits
\begin{equation}\label{eq:5.33}
K(d\omega,d\eta^*) \;=\; \lim_{n\to\infty}\lim_{m\to\infty}
\mathbb{P}_{\Lambda^*_{r_m}}(d\eta^*\mid
\alpha^*)\;
\pi_{\Lambda_{s_n}}^{\Phi(\,\cdot\mid\eta^*_{\Lambda^*_{r_m}}\alpha^*)}
(d\omega\mid\sigma)
\end{equation}
where $(r_m)$ and $(s_n)$ are diverging sequences of box sizes and
$\alpha^*$ and $\sigma$ are
disorder and spin boundary conditions. Such limits always exist,
by compactness, if $\sing^*$ is compact.
Morita's theory supposed the existence of an effective
Hamiltonian for the joint variables, that is, the
Gibbsianness of these measures $K$. It is now
known that this assumption is
false in general~\cite{kul99,entetal00b,entkulmae00}.
A rough explanation is that
a joint effective Hamiltonian must deal with terms of
the form
\begin{equation}\label{eq:5.34}
\log\biggl(\frac{\mathbb{P}(\eta^*_{\Lambda^*})}
{Z_\Lambda^{\Phi(\,\cdot\mid\eta^*)}}\biggr)
\end{equation}
which become ill-defined, in the limit $\Lambda\to\lat$, precisely
when there are Griffiths singularities (or other phase transitions).
As an illustration, let us consider the conditional
probability at the origin of a measure of type \reff{eq:5.33} for
the GriSing model. After a brief verification we see that
\begin{eqnarray}\label{eq:5.35}
\lefteqn{K_{\{0\}}\bigl(\eta^*_0=+1\bigm|\sigma\,\eta^*\bigr)}\nonumber\\
&=& \lim_{n\to\infty}\;
\frac{\mathbb{P}(\eta^*_0=1)\;
\gamma_{\Lambda_n}^{\Phi(\,\cdot\mid 1_{\{0\}} \eta^*_{\Lambda_n\setminus\{0\}})}
(\sigma_{\Lambda_n}\mid\sigma_{\Lambda_n^{\rm c}})}
{\mathbb{P}(\eta^*_0=1)\;
\gamma_{\Lambda_n}^{\Phi(\,\cdot\mid 1_{\{0\}} \eta^*_{\Lambda_n\setminus\{0\}})}
(\sigma_{\Lambda_n\setminus\{0\}}\mid\sigma_{\Lambda_n^{\rm c}})
\;+\; \mathbb{P}(\eta^*_0=0)\;
\gamma_{\Lambda_n}^{\Phi(\,\cdot\mid 0_{\{0\}}\eta^*_{\Lambda_n\setminus\{0\}})}
(\sigma_{\Lambda_n\setminus\{0\}}\mid\sigma_{\Lambda_n^{\rm c}})}\nonumber\\[8pt]
&=& \frac{p}{p\,\Delta_{\rm QL} + (1-p)\,\Delta_{\rm NQL}}
\end{eqnarray}
(if necessary, $\Lambda_n$ should be replaced by $\Lambda_{s_n}$). The term
\begin{equation}\label{eq:5.36}
\Delta_{\rm QL} \;=\; \lim_{n\to\infty}\;\frac{
\gamma_{\Lambda_n}^{\Phi(\,\cdot\mid 1_{\{0\}} \eta^*_{\Lambda_n\setminus\{0\}})}
(\sigma_{\Lambda_n\setminus\{0\}}\mid\sigma_{\Lambda_n^{\rm c}})}
{\gamma_{\Lambda_n}^{\Phi(\,\cdot\mid 1_{\{0\}} \eta^*_{\Lambda_n\setminus\{0\}})}
(\sigma_{\Lambda_n}\mid\sigma_{\Lambda_n^{\rm c}})}
\;=\;1+\frac{
\gamma_{\{0\}}^{\Phi(\,\cdot\mid 1_{\{0\}} \eta^*_{\Lambda_n\setminus\{0\}})}
(-\sigma_0\mid\sigma_{\comp{\{0\}}})}
{\gamma_{\{0\}}^{\Phi(\,\cdot\mid 1_{\{0\}} \eta^*_{\Lambda_n\setminus\{0\}})}
(\sigma_0\mid\sigma_{\comp{\{0\}}})}
\end{equation}
is perfectly continuous with respect to both $\eta^*$ and $\sigma$.
The discontinuity appears in
\begin{equation}\label{eq:5.37}
\Delta_{\rm NQL} \;=\; \lim_{n\to\infty}\;\frac{
\gamma_{\Lambda_n}^{\Phi(\,\cdot\mid 0_{\{0\}} \eta^*_{\Lambda_n\setminus\{0\}})}
(\sigma_{\Lambda_n\setminus\{0\}}\mid\sigma_{\Lambda_n^{\rm c}})}
{\gamma_{\Lambda_n}^{\Phi(\,\cdot\mid 1_{\{0\}} \eta^*_{\Lambda_n\setminus\{0\}})}
(\sigma_{\Lambda_n\setminus\{0\}}\mid\sigma_{\Lambda_n^{\rm c}})}
\end{equation}
because of the presence of the ratio
\begin{equation}\label{eq:5.38}
%\Delta_n(\eta^*,\sigma) \;\bydef\;
\frac{
Z_{\Lambda_n}^{\Phi(\,\cdot\mid 1_{\{0\}} \eta^*_{\Lambda_n\setminus\{0\}})}
(\sigma_{\Lambda_n^{\rm c}})}
{Z_{\Lambda_n}^{\Phi(\,\cdot\mid 0_{\{0\}} \eta^*_{\Lambda_n\setminus\{0\}})}
(\sigma_{\Lambda_n^{\rm c}})}
\;=\; \pi_{\Lambda_n\setminus\{0\}}^{\Phi(\,\cdot\mid 0_{\{0\}} \eta^*_{\Lambda_n\setminus\{0\}})}
\Bigl(2\cosh\Bigl[\beta\sum_{\card{y}=1}\eta^*_y\,\sigma_y\Bigr]
\Bigm|\sigma_{\Lambda_n^{\rm c}}\Bigr)\;.
\end{equation}
The discontinuity takes place at disorder configurations $\eta^*$ with
more than one percolation cluster, none of which contains the origin.
It is then not hard to see~\cite{entetal00b} that a local
modification connecting two such clusters produces a finite
change in the expectation \reff{eq:5.38}. The absolute value of this
change is bounded below by a positive constant that does not depend
on the distance at which the connection is established. Quasilocality
is thereby lost. The point of discontinuity depends only on the
disorder variable $\eta^*$; the conditioning spin configuration
$\sigma$ is irrelevant.
\bigskip
This type of non-quasilocality is, in my opinion, more subtle
and surprising than those analyzed in the previous sections.
It appears for values of the disorder that are close to
those for which the quenched system has a phase transition.
These are precisely the disorder configurations triggering
the presence of arbitrarily long, but finite, order that leads
to Griffiths singularities. In the present model, these
configurations are unlikely if $p$ is smaller than the critical
percolation probability. Thus, the model is
almost quasilocal for those $p$~\cite{entetal00b}.
The random-field Ising model in three or more dimensions has
a more dramatic feature~\cite{kul99}. At low temperature there
is a full-measure set of random fields for which the quenched model
has a phase transition. Hence the joint measure is \emph{almost
non-quasilocal}, that is, the set of discontinuities has full
measure. On the other hand, the joint measures of finite-range
disordered models can be proven to be weakly Gibbsian~\cite{kul01b}, hence
we have here the largest possible divorce between the notions
of almost quasilocality and weak Gibbsianness.
There is another sense in which the non-Gibbsianness of joint
disorder measures is complementary to that caused by
renormalization transformations or spin dynamics.
In the previous cases there was a two-slice system, defined
on $\Omega\times\Omega'$ that was Gibbsian, and
non-Gibbsianness appeared upon projection to the $\Omega'$
variables. In the present case, the two-slice model
on $\Omega\times\Omega^*$ is non-Gibbsian, while
projections to each of the slices can restore Gibbsianness.
[The $\Omega$ projection is the quenched average of Gibbsian
measures which can be Gibbsian, while the $\Omega^*$-projection is the disorder
measure $\mathbb{P}$ which is usually a product measure, and thus trivially Gibbsian.]
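Indeed, at least formally, the $\Omega$-marginal of a joint measure
\reff{eq:5.33} is the quenched mixture
\begin{equation*}
K(d\omega\times\Omega^*) \;=\; \int_{\Omega^*}
\mu^{\eta^*}(d\omega)\;\mathbb{P}(d\eta^*)\;,
\end{equation*}
where $\mu^{\eta^*}$ denotes the corresponding limit of measures
consistent with the specification $\Pi^{\Phi(\,\cdot\mid\eta^*)}$.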
In fact, the non-Gibbsianness of the joint measures turned out to be
beneficial to Morita's approach. Indeed, besides the hypothetical
joint Hamiltonian, Morita's theory included other assumptions
equally inconsistent with Gibbsianness. And yet, the approach was
undeniably successful. Non-Gibbsianness
resolves this paradox~\cite{kul04}. First, it removes
inconsistencies related with the untenable Gibbsian hypothesis, and
second, it allows for a rigorous justification of the
equations solving the model. This has
been a remarkable
achievement of non-Gibbsianness theory.
%\section{Final comments}
%%Physicist's calculations = weak potential?
\section*{Acknowledgments}
Our field is blessed by the fact that its founding fathers
set up a friendly, open and unassuming style of work, where
ideas are discussed generously and freely. I thank wholeheartedly
Aernout, Anton, Fran\c{c}ois and Frank for the immense task of
setting up a school fully inscribed in such a tradition.
I also thank Aernout van Enter for an invaluable critical reading of the
manuscript.
%\bibliography{gibbs}
\begin{thebibliography}{10}
\bibitem{brikuplef98}
J.~Bricmont, A.~Kupiainen, and R.~Lefevere.
\newblock Renormalization group pathologies and the definition of {G}ibbs
states.
\newblock {\em Comm. Math. Phys.}, 194(2):359--388, 1998.
\bibitem{brikuplef01}
J.~Bricmont, A.~Kupiainen, and R.~Lefevere.
\newblock Renormalizing the renormalization group pathologies.
\newblock {\em Phys. Rep.}, 348(1-2):5--31, 2001.
\newblock Renormalization group theory in the new millennium, II.
\bibitem{dacnah01}
S.~Dachian and B.~S. Nahapetian.
\newblock Description of random fields by means of one-point conditional
distributions and some applications.
\newblock {\em Markov Proc. Rel. Fields}, 7:193--214, 2001.
\bibitem{dacnah04}
S.~Dachian and B.~S. Nahapetian.
\newblock Description of specifications by means of probability distributions
in small volumes under condition of very weak positivity.
\newblock {\em J. Stat. Phys.}, 117:281--300, 2004.
\bibitem{dob95}
R.~L. Dobrushin.
\newblock A {G}ibbsian representation for non-{G}ibbs fields.
\newblock Lecture given at the workshop {``Probability and Physics''}, Renkum,
The Netherlands, 1995.
\bibitem{dob68b}
R.~L. Dobrushin.
\newblock The description of a random field by means of conditional
probabilities and conditions of its regularity.
\newblock {\em Theor. Prob. Appl.}, 13:197--224, 1968.
\bibitem{dobshl97}
R.~L. Dobrushin and S.~B. Shlosman.
\newblock Gibbsian description of {``non-{G}ibbsian''} fields.
\newblock {\em Russian Math. Surveys}, 52:285--97, 1997.
\bibitem{dobshl98}
R.~L. Dobrushin and S.~B. Shlosman.
\newblock ``{N}on-{G}ibbsian'' states and their {G}ibbs description.
\newblock {\em Comm. Math. Phys.}, 200(1):125--179, 1999.
\bibitem{dorvan89}
T.~C. Dorlas and A.~C.~D. van Enter.
\newblock Non-{G}ibbsian limit for large-block majority-spin transformations.
\newblock {\em J. Stat. Phys.}, 55:171--181, 1989.
\bibitem{ferlenred03b}
R.~Fern{\'a}ndez, A.~Le~Ny, and F.~Redig.
\newblock Restoration of {G}ibbsianness for projected and {FKG} renormalized
measures.
\newblock {\em Bull. Braz. Math. Soc. (N.S.)}, 34(3):437--455, 2003.
\bibitem{ferlenred03}
R.~Fern{\'a}ndez, A.~Le~Ny, and F.~Redig.
\newblock Variational principle and almost quasilocality for renormalized
measures.
\newblock {\em J. Statist. Phys.}, 111(1-2):465--478, 2003.
\bibitem{fermai04}
R.~Fern\'andez and G.~Maillard.
\newblock Chains with complete connections and one-dimensional {G}ibbs
measures.
\newblock {\em Electron. J. Probab.}, 9:145--76, 2004.
\bibitem{fermai05}
R.~Fern\'andez and G.~Maillard.
\newblock Construction of a specification from its singleton part, 2005.
\newblock Paper 05-288 at {\tt http://www.ma.utexas.edu/mp\underline{\ }arc}.
\bibitem{ferpfi96}
R.~Fern\'andez and C.-Ed. Pfister.
\newblock Global specifications and non-quasilocality of projections of {G}ibbs
measures.
\newblock {\em Ann. Probab.}, 25:1284--1315, 1997.
\bibitem{fertoo03}
R.~Fern\'andez and A.~Toom.
\newblock Non-{G}ibbsianness of the invariant measures of non-reversible
cellular automata with totally asymmetric noise.
\newblock {\em Ast\'erisque}, 287:71--87, 2003.
\bibitem{fis83}
M.~E. Fisher.
\newblock Scaling, universality and renormalization group theory.
\newblock In F.~J.~W. Hahne, editor, {\em Critical Phenomena (Stellenbosch
1982)}, pages 1--139. Springer-Verlag (Lecture Notes in Physics \#186),
Berlin--Heidelberg--New York, 1983.
\bibitem{gawkotkup86}
K.~Gaw{\c{e}}dzki, R.~Koteck{\'y}, and A.~Kupiainen.
\newblock Coarse-graining approach to first-order phase transitions.
\newblock In {\em Proceedings of the symposium on statistical mechanics of
phase transitions---mathematical and physical aspects (Trebon, 1986)},
volume~47, pages 701--724, 1987.
\bibitem{geo88}
H.-O. Georgii.
\newblock {\em Gibbs Measures and Phase Transitions}.
\newblock Walter de Gruyter (de Gruyter Studies in Mathematics, Vol.\ 9),
Berlin--New York, 1988.
\bibitem{gol92}
N.~Goldenfeld.
\newblock {\em Lectures on Phase Transitions and the Renormalization Group}.
\newblock Addison-Wesley (Frontiers in Physics 85), 1992.
\bibitem{gri81}
R.~B. Griffiths.
\newblock Mathematical properties of renormalization-group transformations.
\newblock {\em Physica}, 106A:59--69, 1981.
\bibitem{gripea78}
R.~B. Griffiths and P.~A. Pearce.
\newblock Position-space renormalization-group transformations: {S}ome proofs
and some problems.
\newblock {\em Phys. Rev. Lett.}, 41:917--920, 1978.
\bibitem{gripea79}
R.~B. Griffiths and P.~A. Pearce.
\newblock Mathematical properties of position-space renormalization-group
transformations.
\newblock {\em J. Stat. Phys.}, 20:499--545, 1979.
\bibitem{gri73}
G.~Grimmett.
\newblock A theorem about random fields.
\newblock {\em Bull. London Math. Soc.}, 5:81--4, 1973.
\bibitem{gri95}
G.~Grimmett.
\newblock The stochastic random-cluster process and the uniqueness of
random-cluster measures.
\newblock {\em Ann. Prob.}, 23:1461--510, 1995.
\bibitem{hag96}
O.~H{\"a}ggstr{\"o}m.
\newblock Almost sure quasilocality fails for the random-cluster model on a
tree.
\newblock {\em J. Stat. Phys.}, 84:1351--61, 1996.
\bibitem{isr79}
R.~B. Israel.
\newblock Banach algebras and {K}adanoff transformations.
\newblock In J.~Fritz, J.~L. Lebowitz, and D.~Sz{\'a}sz, editors, {\em Random
Fields -- Rigorous Results in Statistical Mechanics and Quantum Field
Theory}, volume~II, pages 593--608. North-Holland, Amsterdam, 1981.
\newblock Colloquia Mathematica Societatis Janos Bolyai {\bf 27}, Esztergom
(Hungary), 1979.
\bibitem{koz74}
O.~K. Kozlov.
\newblock Gibbs description of a system of random variables.
\newblock {\em Probl. Inform. Transmission}, 10:258--65, 1974.
\bibitem{kul99}
C.~K{\"u}lske.
\newblock ({N}on-) {G}ibbsianness and phase transitions in random lattice spin
models.
\newblock {\em Markov Process. Related Fields}, 5(4):357--383, 1999.
\bibitem{kul01}
C.~K{\"u}lske.
\newblock On the {G}ibbsian nature of the random field {K}ac model under
block-averaging.
\newblock {\em J. Statist. Phys.}, 104(5-6):991--1012, 2001.
\bibitem{kul01b}
C.~K{\"u}lske.
\newblock Weakly {G}ibbsian representations for joint measures of quenched
lattice spin models.
\newblock {\em Probab. Theory Related Fields}, 119(1):1--30, 2001.
\bibitem{kul03}
C.~K{\"u}lske.
\newblock Analogues of non-{G}ibbsianness in joint measures of disordered mean
field models.
\newblock {\em J. Statist. Phys.}, 112(5-6):1079--1108, 2003.
\bibitem{kul04}
C.~K{\"u}lske.
\newblock How non-{G}ibbsianness helps a metastable {M}orita minimizer to
provide a stable free energy.
\newblock {\em Markov Process. Related Fields}, 10(3):547--564, 2004.
\bibitem{kullenred04}
C.~K{\"u}lske, A.~Le~Ny, and F.~Redig.
\newblock Relative entropy and variational properties of generalized {G}ibbsian
measures.
\newblock {\em Ann. Probab.}, 32(2):1691--1726, 2004.
\bibitem{lanrue69}
O.~E. {Lanford III} and D.~Ruelle.
\newblock Observables at infinity and states with short range correlations in
statistical mechanics.
\newblock {\em Commun. Math. Phys.}, 13:194--215, 1969.
\bibitem{lebmae87}
J.~L. Lebowitz and C.~Maes.
\newblock The effect of an external field on an interface, entropic repulsion.
\newblock {\em J. Stat. Phys.}, 46:39--49, 1987.
\bibitem{lebsch88}
J.~L. Lebowitz and R.~H. Schonmann.
\newblock Pseudo-free energies and large deviations for non-{G}ibbsian {FKG}
measures.
\newblock {\em Prob. Th. Rel. Fields}, 77:49--64, 1988.
\bibitem{lef99}
R.~Lefevere.
\newblock Variational principle for some renormalized measures.
\newblock {\em J. Statist. Phys.}, 96(1-2):109--133, 1999.
\bibitem{lor98}
J.~L{\"o}rinczi.
\newblock Non-{G}ibbsianness of the reduced {SOS}-measure.
\newblock {\em Stoch. Proc. Appl.}, 74:83--8, 1998.
\bibitem{lorvel94}
J.~L{\"o}rinczi and K.~Vande Velde.
\newblock A note on the projection of {G}ibbs measures.
\newblock {\em J. Stat. Phys.}, 77:881--7, 1994.
\bibitem{lorwin92}
J.~L{\"o}rinczi and M.~Winnink.
\newblock Some remarks on almost {G}ibbs states.
\newblock In N.~Boccara, E.~Goles, S.~Martinez, and P.~Picco, editors, {\em
Cellular Automata and Cooperative Systems}, pages 423--432, Dordrecht, 1993.
Kluwer.
\bibitem{redmaemof98}
C.~Maes, A.~Van Moffaert, and F.~Redig.
\newblock Almost {G}ibbsian versus weakly {G}ibbsian measures.
\newblock {\em Stoch. Proc. Appl.}, 79:1--15, 1998.
\bibitem{maeetal00}
C.~Maes, F.~Redig, F.~Takens, A.~van Moffaert, and E.~Verbitskiy.
\newblock Intermittency and weak {G}ibbs states.
\newblock {\em Nonlinearity}, 13(5):1681--1698, 2000.
\bibitem{maevel94}
C.~Maes and K.~Vande Velde.
\newblock The (non-){G}ibbsian nature of states invariant under stochastic
transformations.
\newblock {\em Physica A}, 206:587--603, 1994.
\bibitem{maevel97}
C.~Maes and K.~Vande Velde.
\newblock Relative energies for non-{G}ibbsian states.
\newblock {\em Commun. Math. Phys.}, 189:277--86, 1997.
\bibitem{mak97}
D.~Makowiec.
\newblock Gibbsian versus non-{G}ibbsian nature of stationary states for {T}oom
probabilistic cellular automata via simulations.
\newblock {\em Phys. Rev. E}, 55:6582--8, 1997.
\bibitem{mak99}
D.~Makowiec.
\newblock Stationary states of {T}oom cellular automata in simulations.
\newblock {\em Phys. Rev. E}, 60:3787--95, 1999.
\bibitem{maroli93}
F.~Martinelli and E.~Olivieri.
\newblock Some remarks on pathologies of renormalization-group transformations.
\newblock {\em J. Stat. Phys.}, 72:1169--1177, 1993.
\bibitem{maroli94}
F.~Martinelli and E.~Olivieri.
\newblock Instability of renormalization-group pathologies under decimation.
\newblock {\em J. Stat. Phys.}, 79:25--42, 1995.
\bibitem{marsco91}
F.~Martinelli and E.~Scoppola.
\newblock A simple stochastic cluster dynamics: rigorous results.
\newblock {\em J. Phys. A}, 24:3135--57, 1991.
\bibitem{mor64}
T.~Morita.
\newblock Statistical mechanics of quenched solid solutions with applications
to magnetically diluted alloys.
\newblock {\em J. Math. Phys.}, 5:1402--5, 1964.
\bibitem{lenred02}
A.~Le Ny and F.~Redig.
\newblock Short time conservation of {G}ibbsianness under local stochastic
evolutions.
\newblock {\em J. Statist. Phys.}, 109(5-6):1073--1090, 2002.
\bibitem{olipet05}
M.~Oliveira and A.~Petri.
\newblock Boltzmann temperature in out-of-equilibrium lattice gas.
\newblock {\tt ArXiv cond-mat{/}0511263}, 2005.
\bibitem{pfivan95}
C.-E. Pfister and K.~Vande Velde.
\newblock Almost sure quasilocality in the random cluster model.
\newblock {\em J. Stat. Phys.}, 79:765--74, 1995.
\bibitem{pfi02}
C.-Ed. Pfister.
\newblock Thermodynamical aspects of classical lattice systems.
\newblock In {\em In and out of equilibrium (Mambucaba, 2000)}, volume~51 of
{\em Progr. Probab.}, pages 393--472. Birkh\"auser Boston, Boston, MA, 2002.
\bibitem{pre76}
C.~Preston.
\newblock {\em Random Fields}.
\newblock Springer-Verlag (Lecture Notes in Mathematics \#534),
Berlin--Heidelberg--New York, 1976.
\bibitem{sch89}
R.~H. Schonmann.
\newblock Projections of {G}ibbs measures may be non-{G}ibbsian.
\newblock {\em Commun. Math. Phys.}, 124:1--7, 1989.
\bibitem{sok81}
A.~D. Sokal.
\newblock Existence of compatible families of proper regular conditional
probabilities.
\newblock {\em Z. Wahrscheinlichkeitstheorie verw. Geb.}, 56:537--548, 1981.
%\bibitem{sok82}
%A.~D. Sokal.
%\newblock More surprises in the general theory of lattice systems.
%\newblock {\em Commun. Math. Phys.}, 86:327--336, 1982.
\bibitem{sul73}
W.~G. Sullivan.
\newblock Potentials for almost {M}arkovian random fields.
\newblock {\em Commun. Math. Phys.}, 33:61--74, 1973.
\bibitem{ent97}
A.~C.~D. van Enter.
\newblock Ill-defined block-spin transformations at arbitrarily high
temperatures.
\newblock {\em J. Stat. Phys.}, 83:761--5, 1996.
\bibitem{ent00}
A.~C.~D. van Enter.
\newblock A remark on the notion of robust phase transitions.
\newblock {\em J. Statist. Phys.}, 98(5-6):1409--1416, 2000.
\bibitem{vanfer89}
A.~C.~D. van Enter and R.~Fern\'andez.
\newblock A remark on different norms and analyticity for many-particle
interactions.
\newblock {\em J. Stat. Phys.}, 56:965--972, 1989.
\bibitem{entetal02}
A.~C.~D. van Enter, R.~Fern{\'a}ndez, F.~den Hollander, and F.~Redig.
\newblock Possible loss and recovery of {G}ibbsianness during the stochastic
evolution of {G}ibbs measures.
\newblock {\em Comm. Math. Phys.}, 226(1):101--130, 2002.
\bibitem{entferkot95}
A.~C.~D. van Enter, R.~Fern\'andez, and R.~Koteck\'y.
\newblock Pathological behavior of renormalization-group maps at high fields
and above the transition temperature.
\newblock {\em J. Stat. Phys}, 79:969--92, 1995.
\bibitem{vEFS_JSP}
A.~C.~D. van Enter, R.~Fern{\'a}ndez, and A.~D. Sokal.
\newblock Regularity properties and pathologies of position-space
renormalization-group transformations: Scope and limitations of {G}ibbsian
theory.
\newblock {\em J. Stat. Phys.}, 72:879--1167, 1993.
\bibitem{entkulmae00}
A.~C.~D. van Enter, C.~K\"ulske, and C.~Maes.
\newblock Comment on: ``{C}ritical behavior of the randomly spin diluted 2{D}
{I}sing model: a grand ensemble approach'', by {R}.~{K}\"uhn.
\newblock {\em Phys. Rev. Lett.}, 84:6134, 2000.
\bibitem{entetal00b}
A.~C.~D. van Enter, C.~Maes, R.~H. Schonmann, and S.~B. Shlosman.
\newblock The {G}riffiths singularity random field.
\newblock In {\em On Dobrushin's way. From probability theory to statistical
physics}, volume 198 of {\em Amer. Math. Soc. Transl. Ser. 2}, pages 51--58.
Amer. Math. Soc., Providence, RI, 2000.
\bibitem{entetal00}
A.~C.~D. van Enter, C.~Maes, and S.~B. Shlosman.
\newblock Dobrushin's program on {G}ibbsianity restoration: weakly {G}ibbs and
almost {G}ibbs random fields.
\newblock In {\em On Dobrushin's way. From probability theory to statistical
physics}, volume 198 of {\em Amer. Math. Soc. Transl. Ser. 2}, pages 59--70.
Amer. Math. Soc., Providence, RI, 2000.
\bibitem{entshl98}
A.~C.~D. van Enter and S.~B. Shlosman.
\newblock ({A}lmost) {G}ibbsian description of the sign fields of {SOS} fields.
\newblock {\em J. Statist. Phys.}, 92(3-4):353--368, 1998.
\bibitem{entver04}
A.~C.~D. van Enter and E.~A. Verbitskiy.
\newblock On the variational principle for generalized {G}ibbs measures.
\newblock {\em Markov Process. Related Fields}, 10(3):411--434, 2004.
\bibitem{wil91}
D.~Williams.
\newblock {\em Probability with Martingales}.
\newblock Cambridge University Press, Cambridge, 1991.
\end{thebibliography}
\end{document}