\documentstyle[leqno]{article}
\topmargin -1.1in
\textheight 10in
\oddsidemargin -0.4in
\textwidth 7.2in
\parindent 0in
\parskip 1em
\newcounter{bibno}
\makeatletter
\def\reusecounter#1{\@nmbrlisttrue\def\@listctr{#1}}
\def\@bibitem#1{\item\immediate\write\@auxout
{\string\bibcite{#1}{\the\c@bibno}}}
\makeatother
\newenvironment{biblike}[2]{\subsubsection*{#2}\begin{list}
{[\arabic{bibno}]}{\settowidth\labelwidth{[#1]}
\leftmargin\labelwidth\reusecounter{bibno}}}{\end{list}}
\def\sec#1{\section{\hskip -1em. #1}}
\def\subsec#1{\subsection{\hskip -1em. #1}}
\newtheorem{PROBLEM}{}[subsection]
\def\QUIZ#1#2#3{\begin{PROBLEM}\hskip -0.9ex
{\bf. #1.\rm #2}\label{#3}\end{PROBLEM}}
\def\Qex#1#2{\QUIZ{Exercise}{#2}{#1}}
\def\Qpr#1#2{\QUIZ{Problem}{#2}{#1}}
\def\Qun#1#2{\QUIZ{Unsolved problem}{#2}{#1}}
\def\RB{\char'051~~}
\newcounter{NAS}
\newenvironment{assert}{\begin{list}%
{\hbox{\kern5ex}\arabic{NAS}\RB\hbox{\kern0.5ex}}%
{\usecounter{NAS}\labelwidth=8ex\parskip=0em}}{\end{list}}
\newcounter{NL}
\def\LN{\item\quad}
\newenvironment{pscode}[1]%
{\begin{list}{\arabic{NL}}{\usecounter{NL}}%
\setcounter{NL}{#1}\addtocounter{NL}{-1}%
\parskip=0em\itemsep=0em\parsep=0em}%
{\end{list}}
\def\CODE#1#2#3{
\begin{equation}
\parbox{5.6in}{\begin{pscode}{#2}#3\end{pscode}}\label{#1}
\end{equation}}
\def\FORM#1#2{\begin{equation}#2\label{#1}\end{equation}}
\def\SYST#1#2{{\begin{equation}\cases{#2}\label{#1}\end{equation}}}
\def\answer{{\bf Answer: }}
\def\com{{\bf Commentary: }}
\def\proof{{\bf Proof: }}
\def\qed{~~\raisebox{-0.3ex}{$\Box$}~}
\def\all#1{\hbox{\sl `all #1'} }
\def\az{\all{zeroes}}
\def\ao{\all{ones}}
\def\amo{\all{minus ones}}
\def\fine{f\hskip -0.07em ine}
\def\fast{\hbox{\small\rm fast}}
\def\slow{\hbox{\small\rm slow}}
\def\rift{{\it ri\hskip -0.1em ft}}
\def\LEFT{le \hskip -0.05em f\hskip -0.05em t}
\def\mem{m}
\def\A{{\cal A}}
\def\B{{\cal B}}
\def\C{{\cal C}}
\def\D{{\cal D}}
\def\H{{\cal H}}
\def\M{{\cal M}}
\def\P{{\cal P}}
\def\R{{\cal R}}
\def\S{{\cal S}}
\def\T{{\cal T}}
\def\Z{{\cal Z}}
\def\Xo{X_0}
\def\bfP{\hbox{\bf P}}
\def\U{\tilde{U}}
\def\DIM{{d}}
\def\tr{\hbox{tr}}
\def\TR{\hbox{TR}}
\def\MIN{{\min}}
\def\MAX{{\max}}
\def\origin{{\cal O}}
\def\epsil{\varepsilon}
\def\*{\hspace*{2em}}
\def\iff{\Longleftrightarrow}
\def\implies{\Longrightarrow}
\def\ot{\leftarrow}
\def\bz{\bar 0}
\def\tz{\rightarrow 0}
\def\ti{\rightarrow\infty}
\def\CORNER{\sl ~~~~~~~~~~}
\def\HL{\\ \hline}
\def\TABLE{
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|c|}\hline
{\CORNER themes}&computer&attractors&proofs&proofs&~critical~~&rate &chaos\\
&results &and & of &of non-&values &of con-&~approxi-\\
{\sl examples}& &eroders&ergodicity&ergodicity& &vergence&mation \HL
Percolations & + & + & + & + & + & + & + \HL
Flattening & - & + & + & + & - & + & - \HL
NEC-Voting & + & + & + & + & + & + & + \HL
1-Dim. Votings& + & + & + & - & - & - & + \HL
Windroses & + & + & + & + & - & - & + \HL
Soldiers & + & + & + & - & - & + & - \HL
2-Line Voting & - & + & + & - & - & - & + \HL
\end{tabular}
\end{center}
}
\begin{document}\large
\title{Cellular Automata With Errors:\\
Problems for Students of Probability}
\author{Andrei Toom}
\maketitle
\begin{abstract}
The paper treats of uniform interacting random processes
with discrete space and time, which may also be called
probabilistic cellular automata with memory.
Some notions and results about them, particularly about
non-ergodicity vs. ergodicity and slow vs. fast convergence,
are presented in a form understandable for undergraduates.
Special attention is paid to examples and
problems of various levels of difficulty.
\end{abstract}
We consider probabilistic systems with local interaction
which may be thought of as a special kind of Markov processes.
Informally speaking, they describe functioning of a finite or
infinite system of automata, all of which update their states at
every moment of the discrete time according to some deterministic
or stochastic rule depending on their neighbors' states.
From statistical physics came interest in the uniqueness
of the limit ($t\ti$) behavior of the infinite system.
Existence of at least one limit follows from the famous
fixed-point theorem and is proved here as Theorem 1.
Systems which have the unique limit behavior and converge to it
from any initial condition may be said to forget everything, when
time tends to infinity, while the others remember something forever.
The former systems are called {\em ergodic}, the latter {\em non-ergodic}.
In statistical physics the non-ergodic systems help to
understand conservation of non-symmetry in macro-objects.
Computer scientists prefer to consider finite systems.
Although non-degenerate finite systems certainly converge,
some of them converge enormously slower than others.
Some converge {\em fast}, i.e. lose practically all the information
about the initial state in a time which does not depend on their size
(or grows as logarithm of their size, according to another definition),
whereas some others converge {\em slow}, i.e. can retain information
during time which grows exponentially in their size.
The latter may help to solve the problem of designing
reliable systems consisting of unreliable elements.
There is much in common between the discrete-time
and continuous-time approaches to interacting processes.
American readers are better acquainted with the continuous-time
approach, summarized, for example, in \cite{Liggett}.
Methods to prove ergodicity are similar in
the continuous and discrete time cases.
Our Theorem 2 is a simpler version of the well-known
result about ergodicity \cite{Vaserstein}.
The same result in continuous time is given as
Theorem 4.1 in Chapter 1 of \cite{Liggett}.
Methods to prove non-ergodicity, however, are
till now better developed for the discrete time case.
I think this is because the discrete time case is technically simpler.
For example, the theoretical problems of definition
provide no essential difficulties in this case.
In a fairly general case every question about the discrete-time systems
can be reformulated as a question about `hidden' independent random variables,
and we use this possibility throughout this paper.
Theory of interacting systems is a relatively new branch of mathematics
where efficient general methods are still in the process of development.
In these circumstances it may be useful to pay special attention
to individual cases which defy all known methods.
Thus we speak of several more or less specific {\em Examples}
to which we give special names for reference purposes.
Pluses in the following table show which combinations (boxes) of
examples (rows) and general themes (columns) receive attention here.
\TABLE
We would like to share with the reader the flavor
of connecting theoretical work with computer simulation.
Computer results are placed in the first column of our table,
as a natural prerequisite for theoretical work.
In fact all of them were obtained using the Monte Carlo method, that is
generation of realizations of the process in question using random numbers.
In the vein of this method we shall illustrate functioning
of our systems with pseudo-codes similar to computer programs.
In the sixties and seventies Ilya Piatetski-Shapiro,
who consulted with Roland Dobrushin and Yakov Sinai,
challenged members of a seminar of Moscow mathematicians including me
with results of computer simulations of several carefully chosen examples
of interacting random processes with discrete time
and proposed to prove the observed properties,
particularly switches from ergodic to non-ergodic behavior
as a result of continuous change of parameters.
\cite{Survey} is the most complete survey of these developments.
Some of our examples were first proposed at that seminar.
Examples first proposed in \cite{Stavskaya} and \cite {VPP}
and described as 2-Percolation and NEC-Voting systems here,
moved me to develop a method to prove non-ergodicity
\cite{Toom-68,Toom-74,Toom-80,Toom-82}.
One version of this result is formulated here as Theorem 3.
Presenting results here, we prefer them to be
explicit and understandable rather than general.
More general versions of them can be found
in original papers to which we refer.
The only prerequisite for this paper is the standard course of probability.
Our potential readers are students who love
to learn a theory by solving problems.
The idea to write a paper with problems and examples about
time-discrete interacting random processes comes from Laurie Snell.
Problems of different levels of difficulty can be found here:
{\bf Exercises} are technical; they are for students
who strive to become professional mathematicians.
{\bf Problems} need more of non-trivial reasoning, and some of
them were or might be considered publishable results in the past.
I try to refer to papers which contain their solutions,
but some have never been published.
Some proofs and commentaries are given here;
the sign\qed marks the ends of the longer ones.
{\bf Unsolved problems} are the most feasible
of those whose solutions are unknown to me now.
The boundaries between the three certainly are not absolute.
After all, our common purpose is to turn
every unsolved problem into a series of exercises.
\sec{Systems and Deterministic Case}
\subsec{Systems of Neighborhood}
A {\em system of neighborhood} or just a {\em system} is defined as follows.
We assume that there is a finite or countable set which we call $Space$.
We call a system finite if its $Space$ is finite, and infinite otherwise.
A natural number $\mem$ is chosen and called the {\em memory}.
(All our examples belong to the simplest case $\mem=1$.)
The set $Time=\{-\mem,\ldots,-1,~0,~1,~2,~\ldots\}$ is the set
of values of the discrete time variable $t$.
The product $V=Space \times Time$ is called {\em volume}
whose elements $(s,t)$ are called {\em points}.
Points with $t<0$ are called {\em initial} and
their set $I$ is called {\em initial volume}.
$V_t=\{(s,t),~s\in Space\}$ is called $t$-volume.
Any subset $W \subset V$ may be called a {\em subvolume}.
For every non-initial point $w$ there is a finite sequence
$N_1(w), \ldots ,N_n(w)$ of points which are called its {\em neighbors}.
The set $N(\cdot)$ of neighbors of a point is called its {\em neighborhood}.
The number $n$ of neighbors is one and the same for all non-initial points.
It is essential that all the neighbors of a point $(s,t)$
have values of their time variable less than $t$.
We also denote $N_i(s,t)=(n_i(s,t),~t-t_i(s,t))$ where $t_i(s,t)>0$.
Thus, to specify a {\em system} $\S$ one has to choose
$Space$, naturals $\mem$ and $n$ and $n$ neighbors of
every non-initial point: $\S= \{ Space,~\mem,~n,~N(\cdot) \}$.
\Qex{neighbors}{
Call neighbors of a point its 1-st degree neighbors.
Define a point's $k$-th degree neighbors as
neighbors of its $(k-1)$-th degree neighbors.
What are the greatest and the smallest possible numbers of a point's
$k$-th degree neighbors in systems with a given $n$ ?
}
\answer The smallest number is 1.
($n$ if we assume that a point's neighbors may not coincide.)
The greatest number is $n^k$.
The latter takes place (for all $k$) for the {\em Tree system}
which can be defined as follows:
$Space$ is the set of vertices of an infinite directed tree,
every vertex of which serves as the end point of $n$ edges
and as the starting point of one edge.
Neighbors of a point $(s,t)$ have their time components equal
to $t-1$ and those space components whence edges go to $s$.
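To get a feeling for these counts, here is a small Python sketch (our illustration, not part of the text) that computes a point's $k$-th degree neighbors in a {\em standard} system by iterating the neighbor vectors. Since shifts commute, a sum of $k$ vectors does not depend on their order, so a standard system cannot attain the bound $n^k$ for $k \geq 2$, while the Tree system does.

```python
def kth_degree_neighbors(vectors, k):
    """k-th degree neighbors of the origin in a standard system:
    each neighbor step adds one of the neighbor vectors, so a k-th
    degree neighbor is the origin plus a sum of k neighbor vectors."""
    points = {(0,) * len(vectors[0])}
    for _ in range(k):
        points = {tuple(p + v for p, v in zip(pt, vec))
                  for pt in points for vec in vectors}
    return points

# hypothetical 1-dimensional example with n = 2 neighbor vectors
# (-1, -1) and (0, -1): neighbors of (s, t) are (s-1, t-1) and (s, t-1)
vs = [(-1, -1), (0, -1)]
print(len(kth_degree_neighbors(vs, 3)))  # 4 points, far below n^k = 8
```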
\subsec{Standard Systems and Uniformity}
Let $Z=( \ldots -1,0,1, \ldots )$ stand for the ring of integers
and $Z_\kappa$ stand for the ring of residues modulo $\kappa$.
Thus, $Z^\DIM$ is the $\DIM$-dimensional discrete space
and $Z_\kappa^\DIM$ is the $\DIM$-dimensional torus.
Most of our examples belong to the following {\em standard} case:
$Space$ is $Z^\DIM$ or $Z_\kappa^\DIM$ and there are such $n$
{\em neighbor vectors} $v_1, \ldots v_n \in Z^{\DIM +1}$ that
for all non-initial $(s,t)$ and for all $i=1,\ldots,n$
$$
N_i(s,t)=(s,t)+v_i.
$$
(Time components of all the neighbor vectors are negative.)
In the standard case it is sufficient to state neighbors of
the origin $\origin=(\bz,0)$ of $Space \times Time$, to define neighbors
of all the points, and this is what we shall typically do.
Standard systems are a special case of uniform systems
in which `all automata are equal'. Call a one-to-one map
$a:Space \mapsto Space$ a system's {\em automorphism} if
$$
\forall s \in Space,~t \geq 0,~i=1,\ldots,n~:~N_i(a(s),t)=a(N_i(s,t)).
$$
Automorphisms of a system form a group $\A$. Call $\A$ transitive if
$$
\forall s_1,s_2 \in Space~~\exists a \in \A ~:~ a(s_1)=s_2.
$$
Call a system {\em uniform} if it has a transitive group of automorphisms.
Call a system {\em commutative} if it has a
commutative transitive group of automorphisms.
All the systems treated of in our paper are uniform
and all except the 2-Line Voting are commutative.
In theory this seems not to be very restrictive.
I can mention only one theoretical result
\cite{Zirelson} where non-uniformity is essential.
However, computer simulations sometimes have to be performed
on a finite subvolume of $Z^\DIM$, and in these non-uniform cases
one has to specify what is going on at the boundaries.
In uniform systems $n_i(s,t)$ and $t_i(s,t)$ do not actually depend on $t$,
and we shall call them $n_i(s)$ and $t_i(s)$:
\FORM{n-neighbors}{
N_i(s,t)=(n_i(s),~~t-t_i(s)).
}
Call a uniform system {\em $\DIM$-dimensional} if
\FORM{dimensional}{
Space=Z^\DIM \times S=\{ (i,j)~:~i \in Z^\DIM,~j \in S \}
}
where $S$ is finite, and the automorphism group $\A$ contains the subgroup
$\Z^\DIM$ whose elements are shifts by constant vectors $V \in Z^\DIM$:
\FORM{shifts}{
\Z^\DIM = \{ z ~:~(i,j) \mapsto (i+V,j),~V \in Z^\DIM \}.
}
\Qex{sta-un}{
Prove that any standard system is $\DIM$-dimensional and commutative.
}
\Qex{tree-comm}{
Prove that the Tree system which we defined in the answer
to exercise \ref{neighbors} is commutative but non-standard.
}
\com The group of all automorphisms of a Tree system is not commutative.
But it has transitive commutative subgroups.
\Qex{neighbors-automorphisms}{
In a commutative system every map $n_i ~:~Space \mapsto Space$
is an automorphism, where $i=1,\ldots,n$ and $n_i(\cdot)$ is that
of formula (\ref{n-neighbors}).
}
\proof Choose some $s_0 \in Space$.
We can choose such $a_i \in \A$ that $a_i(s_0)=n_i(s_0)$.
Now let us prove that $a_i(s)=n_i(s)$ for any $s \in Space$.
Take such $a \in \A$ that $s=a(s_0)$. Then
$$
a_i(s)=a_i(a(s_0))=a(a_i(s_0))=a(n_i(s_0))=n_i(a(s_0))=n_i(s) \qed
$$
\Qex{commutative-subgroup}{
Given a commutative system.
Prove that $\A$ contains such a subgroup $\A_0$ that for any
$s,~s'~\in Space$ there is exactly one $a \in \A_0$ such that $a(s)=s'$.
}
\com For any $s \in Space$ denote the subgroup $\A_s=\{a \in \A~:~a(s)=s\}$.
Generally, $\A_s$ may depend on $s$, but in the commutative case it does not,
and $\A_0=\A / \A_s$.
Using this exercise, we can choose a {\em reference element} $s_0 \in Space$
which defines the one-to-one correspondence between $\A_0$ and $Space$
by the rule $a \leftrightarrow a(s_0)$.
\subsec{States and Configurations}
Let us call elements of $Space$ {\em automata}.
Every automaton at any time has one and the same finite
set $\Xo$ of possible states. We may denote $\Xo=\{0,\ldots,k\}$.
The simplest case $\Xo=\{0,1\}$ alone provides so many
unsolved problems that we shall enjoy it as much as possible.
To every point $(s,t) \in V$ there corresponds a variable $x_{s,t}\in \Xo$
which represents the state of automaton $s$ at time $t$.
Set $X=X_V=\Xo^V$ is the configuration space
whose elements are called configurations.
Any configuration consists of {\em components}, that is
elements of $\Xo$ corresponding to all points.
`All $a$-s' will mean any configuration, all components of which equal
one and the same element $a \in \Xo$.
$X_I=\Xo^I$ stands for the set of {\em initial conditions},
that is configurations on $I$.
Elements of the configuration space $X_t=\Xo^{V_t}$
at time $t$ are called $t$-configurations.
Generally, to any subvolume $W$ there corresponds its configuration space
$X_W=\Xo^W$ whose elements are called configurations on $W$.
Also $X^\mem=\Xo^{Space \times [1,\mem]}$ will be used to speak of operators.
We call a point $w$ a {\em point of difference} between
two configurations $x$ and $x'$ iff $x_w \neq x'_w$.
{\em Set of difference} between two configurations
is the set of their points of difference.
Call a configuration a {\em finite perturbation} of
another configuration if their set of difference is finite.
\subsec{Deterministic Systems}
To turn a system into a {\em deterministic system} we
need to choose a set $\Xo$ of states of every point and a
{\em transition function} $\tr: \Xo^{n} \mapsto \Xo$.
This function gives the state of a non-initial
point using states of its neighbors as arguments.
For any deterministic system we define a map $\TR: X_I \mapsto X$ as follows:
an initial configuration $x_I$ being given,
first, initial components of $\TR(x_I)$ are the same as of $x_I$,
and, second, all the non-initial components of $y=\TR(x_I)$
are determined inductively according to the formula $y_w = \tr(y_{N(w)})$.
We call $\TR(x_I)$ the {\em trajectory}
resulting from the initial condition $x_I$.
To any deterministic system $\D$ there corresponds a
{\em (deterministic) operator} $D~:~X^\mem\mapsto X^\mem$ defined
in the following way: the result of application $D$ to a configuration
$x=(x_{s,t}),~s\in Space,~t\in [1,\mem]$ has components
$$
D(x)_{s,t}=\cases{ x_{s,t+1} &if $t<\mem$,\cr
\tr(x_{N(s,t)}) &if $t=\mem$.}
$$
(If $\mem=1$, the upper line is not used.)
Call a configuration $x \in X^\mem$ {\em fixed} for an operator $D$
(and thereby for the corresponding system $\D$) if $D(x)=x$.
\Qex{fixed}{
Prove: If $x$ is fixed then components
of $x$ and $\TR(x)$ do not depend on $t$.
}
\Qex{determ-uniform}{
Prove that any deterministic system $\D$ and operator $D$
are uniform in the following sense:
For any automorphism $a \in \A$ of the system, the corresponding
one-to-one maps of $X$ and $X^\mem$ commute with $\TR$ and $D$ respectively.
}
\Qpr{commutative-not-depend}{
For any commutative $\DIM$-dimensional system with $Space$
given by formula (\ref{dimensional}), prove that if components
of an initial state do not depend on $j$, then components of
the resulting trajectory also do not depend on $j$.
}
{\bf Sketch of proof:}
From exercise \ref{neighbors-automorphisms}, every $n_i$ is an automorphism.
Call $\H ~:~ \A \mapsto \Z^\DIM$ the homomorphism of the
commutative transitive group $\A$ of the system's automorphisms
to the subgroup $\Z^\DIM$ of shifts (\ref{shifts}).
Thus $\H(n_i)$ is a shift by some vector $V_i$.
Now let $z(w)$ stand for the first coordinate of any point
$w \in V$ in the (\ref{dimensional}) representation of $Space$.
The statement in question is assured by the fact that
the difference $z(n_i(s))-z(s)$ does not depend on $s$.
This is because this difference equals $V_i$ \qed
This problem shows that commutative $\DIM$-dimensional systems
boil down to standard ones if we apply them only to
configurations whose components do not depend on $j$.
\subsec{Pseudo-Codes}
Since this article emphasizes the computer approach,
we illustrate the functioning of our systems with
pseudo-codes of imaginary programs which model them.
These pseudo-codes do not obey strict syntactical rules
of any actually existing programming language;
they explain the algorithm.
$Space$ and the range of $t$ are assumed
to be finite whenever we write a pseudo-code.
Lines of every pseudo-code are numbered for reference purposes.
For example, functioning of a deterministic system can
be expressed by the following pseudo-code:
\CODE{det}{1}{
\LN for $t=-\mem$ to $-1$ do
\LN \* for~ all $s \in Space$ do
\LN \*\* $x_{s,t} \ot x_{s,t}^{initial}$
\LN for $t=0$ to $t_{max}-1$ do
\LN \* for~ all $s \in Space$ do
\LN \*\* $x_{s,t} \ot \tr(x_{N(s,t)})$
}
Here lines 1-3 assign the initial configuration and
lines 4-6 calculate $t_{max}$ subsequent $t$-configurations.
In our imagination, values of $x_{s,t}$ for all $s \in Space$
are assigned simultaneously, at one moment $t$.
However, in the actual programming this is not essential,
and practically the line `for all $s \in Space$ do' can be implemented
using $\DIM$ nested cycles if $Space$ is $\DIM$-dimensional.
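Pseudo-code (\ref{det}) translates almost line for line into an executable language. The following Python sketch is our own illustration (the XOR rule on the torus $Z_5$ is a made-up example, and memory $\mem=1$ is assumed); it keeps the essential feature noted above, namely that all components of the new $t$-configuration are computed from the previous one, i.e. "simultaneously".

```python
def run(tr, neighbors, initial, t_max):
    """Synchronous simulation of a deterministic system with memory 1.
    tr maps the tuple of neighbor states to the new state;
    neighbors(s) lists the neighbor sites of s (all at time t-1)."""
    x = list(initial)                       # the initial configuration
    for _ in range(t_max):
        # the whole new t-configuration is built from the old one,
        # so all automata are updated simultaneously
        x = [tr(tuple(x[v] for v in neighbors(s))) for s in range(len(x))]
    return x

# made-up example: Space = Z_5 (a torus), neighbors of s are s-1 and s,
# transition function = XOR of the two neighbor states
K = 5
nbr = lambda s: [(s - 1) % K, s]
xor = lambda ns: ns[0] ^ ns[1]
print(run(xor, nbr, [0, 0, 1, 0, 0], 1))   # [0, 0, 1, 1, 0]
```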
The well-known Cellular Automata (see, for example, \cite{Cellular})
are standard deterministic systems with $\mem=1$ and $Space=Z^\DIM$.
\Qex{life}{
Write a pseudo-code for the well-known Game of Life,
that is a cellular automaton with $Space=Z^2$ and $\Xo=\{0,1\}$.
Here neighbors of a cell mean its eight nearest neighbors.
A cell in the state 0 is called `dead' and
a cell in the state 1 is called `alive'.
Transition rule: a dead cell becomes alive at the next moment
only if it has exactly three live neighbors, and a live cell
stays alive only if it has two or three live neighbors.
}
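Not the requested pseudo-code itself, but an executable variant of it: the following Python sketch (our illustration) implements the same transition rule on a finite torus $Z_K \times Z_K$; the wrap-around boundary is our assumption, while the exercise itself concerns $Space=Z^2$.

```python
def life_step(grid):
    """One synchronous step of the Game of Life on the torus Z_K x Z_K."""
    K = len(grid)
    def live_nbrs(i, j):
        # the eight nearest neighbors, with wrap-around
        return sum(grid[(i + di) % K][(j + dj) % K]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0))
    return [[1 if (grid[i][j] == 0 and live_nbrs(i, j) == 3) or
                  (grid[i][j] == 1 and live_nbrs(i, j) in (2, 3))
             else 0
             for j in range(K)] for i in range(K)]

# a "blinker": a vertical triple of live cells becomes a horizontal one
g = [[0] * 5 for _ in range(5)]
for i in (1, 2, 3):
    g[i][2] = 1
g2 = life_step(g)
```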
\sec{Measures}
\subsec{Measures and Topology} \label{Measures and Topology}
For any configuration space $X_W$ we shall consider the
set $\M_W$ of probabilistic, i.e. normed measures on the $\sigma$-algebra
generated by cylinder sets in the product-space $X_W$.
Most often we treat of $\M_V$ which we call simply $\M$.
Call a measure {\em degenerate}
if it has a zero value on at least one cylinder set.
Let $\delta_s$ stand for the $\delta$-measure
concentrated in one configuration $s$.
A product-measure on any configuration space
$X_W=\Xo^W$ is a product of measures for coordinates.
A {\em mixture} of several normed measures is their linear combination
with non-negative coefficients whose sum equals 1.
A measure is called uniform if it is fixed for any map on $\M$
corresponding to an automorphism.
Let {\em weak topology} correspond to convergence on all the cylinder sets
and let {\em strong topology} correspond to convergence on all the elements
of the $\sigma$-algebra generated by them.
The following exercises show why the weak topology is typically
used when treating of infinite interacting systems:
\Qex{weak-strong}{
Prove that if $W$ is finite, the weak and strong topologies on $\M_W$
coincide, but in the infinite case they are different.
}
\Qex{strong}{
Let $B(p)$ stand for the uniform product-measure on
the product-space $X=\{0,1\}^Z$ in which every component
equals 1 with probability $p$ independently of others.
Prove that when $p\tz$, the measure $B(p)$ tends to the measure
concentrated in the configuration \az in the weak topology,
but does not tend in the strong topology.
}
\Qex{compact}{ Prove:
\begin{assert}
\item For any configuration space $\Xo^W$ where $\Xo$ is
finite (or a compact) and $W$ is finite or countable, $\M_W$ is
compact in the weak topology.
\item For any configuration space $\Xo^W$ where $|\Xo|>1$ and
$W$ is infinite, $\M_W$ is not compact in the strong topology.
\end{assert}
}
For any two normed measures $\mu_{1}$ and $\mu_{2}$
on one and the same measurable space $X$,
call a measure on $X \times X$ {\it a coupling} of
$\mu_1$ and $\mu_2$ if its marginals are $\mu_1$ and $\mu_2$.
Coupling of measures is the simplest version of couplings.
Below we shall speak of couplings of random systems.
(See \cite{Griffeath-78,Liggett}.)
\subsec{A Distance between Measures}
As long as we are interested only in convergence vs.
non-convergence, the weak topology is sufficient for us.
But as soon as we want to speak about how fast
systems converge, we need a distance between measures.
For any coupling $c$ of $\mu_1$ and $\mu_2$ on one
and the same configuration space $X_W=\Xo^W$, let
$$
\rift(c)=\sup\{c(x^1_w \neq x^2_w),~w \in W\}
$$
where $x^1=(x^1_w)$ and $x^2=(x^2_w)$ stand for the first
and second components in $X^1 \times X^2=X_W \times X_W$.
Define $dist(\mu_1,\mu_2)$, i.e. the {\em distance} between
$\mu_1$ and $\mu_2$, as the infimum of $\rift(c)$ over all
the couplings $c$ of $\mu_1$ and $\mu_2$.
\Qex{[0,1]}{
Prove: Any value of $\rift(\cdot)$ or $dist(\cdot)$ belongs to $[0,1]$.
}
\Qex{projections}{
Let $\mu_1,\mu_2 \in \M_W$ where $X_W=\prod_{w \in W}X_w$.
Let $\mu_1|_w$ and $\mu_2|_w$ stand for projections of $\mu_1$ and $\mu_2$
to $X_w$. Prove:
$$
dist(\mu_1,\mu_2)=\sup\{ dist(\mu_1|_w,\mu_2|_w)~:~w \in W \}.
$$
}
\Qex{cylinder}{
Let $S$ be a cylinder subset of $X_W=\{(x_w),~w \in W\}$ indicated
by a condition involving some $r$ components $x_1,\ldots,x_r$. Prove:
$$
\forall \mu_1,\mu_2 \in \M_W~:~|\mu_1(S)-\mu_2(S)| \leq r \cdot dist(\mu_1,\mu_2).
$$
}
\Qpr{metric}{
Prove that $dist$ is a metric, that is
\begin{assert}
\item $dist(\mu_1,\mu_2) \geq 0$.
\item $dist(\mu_1,\mu_2)=dist(\mu_2,\mu_1)$.
\item $dist(\mu_1,\mu_2)=0 \iff \mu_1=\mu_2$.
\item $dist(\mu_1,\mu_3) \leq dist(\mu_1,\mu_2) + dist(\mu_2,\mu_3)$.
\end{assert}
}
\proof We leave proof of 1) and 2) to the reader and
3) follows from exercise \ref{cylinder}.
The following proof of 4) for the finite case was proposed by Eugene Speer.
Take any $\mu_1$, $\mu_2$ and $\mu_3$ on $X_W$.
Let $c_{12}$ be any coupling of $\mu_1$ and $\mu_2$ and
$c_{23}$ be any coupling of $\mu_2$ and $\mu_3$.
All we have to do is to present a coupling $c_{13}$ of
$\mu_1$ and $\mu_3$ for which
\FORM{triangle}{
\rift(c_{13}) \leq \rift(c_{12}) + \rift(c_{23}).
}
Take the following measure $c_{123}$ on $X^1 \times X^2 \times X^3 = X_W^3$:
$$
c_{123}(x^1, x^2, x^3)=\cases{
0 &if $\mu_2(x^2)=0$,\cr
c_{12}(x^1, x^2) \times c_{23}(x^2, x^3) /\mu_2(x^2) &otherwise.}
$$
Since $\mu_2$ is a marginal of $c_{23}$, the sum of
$c_{23}(x^2, x^3)$ over $x^3$ gives $\mu_2(x^2)$.
Therefore the sum of $c_{123}(x^1, x^2, x^3)$
over $x^3$ gives $c_{12}(x^1, x^2)$.
Analogously the sum of $c_{123}(x^1, x^2, x^3)$
over $x^1$ gives $c_{23}(x^2, x^3)$.
Thus $c_{123}$ is a normed measure and the following is
a coupling of $\mu_1$ and $\mu_3$:
$$
c_{13}(x^1, x^3) = \sum_{x^2} c_{123}(x^1, x^2, x^3).
$$
Thus defined $c_{13}$ fits our claim (\ref{triangle}) because
$$
\forall w \in W~:~c_{13}(x_w^1 \neq x_w^3)
\leq c_{12}(x_w^1 \neq x_w^2) + c_{23}(x_w^2 \neq x_w^3) \qed
$$
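The construction of $c_{123}$ is easy to check numerically. The following Python sketch (an illustration with made-up two-point measures, not part of the proof) glues two couplings along their common marginal exactly as above and verifies the claim (\ref{triangle}).

```python
def rift(c):
    """For |W| = 1 the rift of a coupling is simply the probability
    that its two components differ (the off-diagonal mass)."""
    return sum(p for (a, b), p in c.items() if a != b)

def glue(c12, c23, X):
    """Build c_123 as in the proof and sum out the middle coordinate."""
    mu2 = {b: sum(c23.get((b, z), 0.0) for z in X) for b in X}
    c13 = {}
    for a in X:
        for b in X:
            for z in X:
                if mu2[b] > 0:
                    p = c12.get((a, b), 0.0) * c23.get((b, z), 0.0) / mu2[b]
                    c13[(a, z)] = c13.get((a, z), 0.0) + p
    return c13

# made-up couplings of mu1=(.5,.5) with mu2=(.7,.3), and of mu2 with mu3=(.5,.5)
X = (0, 1)
c12 = {(0, 0): 0.5, (1, 0): 0.2, (1, 1): 0.3}
c23 = {(0, 0): 0.5, (0, 1): 0.2, (1, 1): 0.3}
c13 = glue(c12, c23, X)
assert rift(c13) <= rift(c12) + rift(c23)   # the triangle bound holds
```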
\Qex{triangle-infinite}{
Extend this proof to the infinite case.
}
Call a coupling $c$ of $\mu_1$ and $\mu_2$ their {\it fine coupling}
if $\rift(c)=dist(\mu_1, \mu_2)$.
\Qex{metric-finite}{
Let $W$ be finite.
Prove that our metric $dist$ is continuous both in the weak and
strong topology and that $\M_W$ is a compact with our metric.
Prove also that in this case for every two measures
there exists a fine coupling.
}
\Qex{non-compact}{
Let $W$ be infinite and $|\Xo|>1$.
Prove that $\M$ is not compact with $dist$ as metric.
Prove also that $dist$ is not continuous in the weak topology:
there exist a measure $\mu$ and a sequence $\mu_1,\mu_2,\ldots$
which tends to $\mu$ weakly while $dist(\mu_n,\mu)$ does not tend to 0.
}
\Qun{existence}{
Is it true that for any two normed measures
on $X_W$ there exists a fine coupling ?
}
\com The difficulty lies in non-compactness of $\M$.
However, I cannot present measures for which there is no fine coupling.
\Qex{non-unique}{
Prove that fine coupling may be non-unique.
}
\proof For example, let some $x_w$
certainly equal 0 in $\mu_1$ and certainly equal 1 in $\mu_2$.
Then $\rift$ of any coupling of these measures equals 1,
and all their couplings are fine \qed
The following are some cases when fine coupling exists
and sometimes is unique even with an infinite $W$:
\Qex{fine-same}{
If $\mu_1=\mu_2$ then their fine coupling $c_{\fine}$ exists and is unique.
Prove this and write an explicit formula
for $c_{\fine}(S)$ for any cylinder set $S \subset X_W \times X_W$.
}
\Qex{fine-delta}{
Let $\mu_1$ be an arbitrary measure on $X_W$
and let $\mu_2$ be a $\delta$-measure $\delta_y$
concentrated in one configuration $y\in X_W$.
Then $\mu_1$ and $\mu_2$ have only one coupling.
Therefore their fine coupling exists and is unique and
$dist(\mu_1,\mu_2)=\sup\{\mu_1(x_w \neq y_w)~:~w \in W\}$.
}
\Qpr{asterisk}{
Let $|W|=1$, i.e. $X_W=\Xo$, and $\mu_1$ and $\mu_2$ be any measures on $X_W$.
Denote $m(a)=\min(\mu_1(a),\mu_2(a))$ for all $a\in \Xo$. Then:
\begin{assert}
\item \begin{eqnarray*}
dist(\mu_1,\mu_2)&=&\max\{ \mu_1(S)-\mu_2(S)~:~S \subset \Xo \}\\
&=&\max\{ \mu_2(S)-\mu_1(S)~:~S \subset \Xo \}\\
&=&\sum_{a\in \Xo}(\mu_1(a)-m(a))
= \sum_{a\in \Xo}(\mu_2(a)-m(a)).
\end{eqnarray*}
\item The set of fine couplings of $\mu_1$ and $\mu_2$ is non-empty
and coincides with the set of those couplings of $\mu_1$ and $\mu_2$
which have the form $c_{\fine}=c_0+c_1$ where
$$
c_0(x_1,x_2)=\cases{m(x_1) &if $x_1=x_2$,\cr 0 &otherwise.}
$$
and $c_1$ is any measure on $\Xo \times \Xo$, for which
\end{assert}
}
\SYST{asterisk-add}{
\forall x_1~:~\sum_{x_2 \in \Xo }c_1(x_1,x_2)=\mu_1(x_1)-m(x_1),\cr
\forall x_2~:~\sum_{x_1 \in \Xo }c_1(x_1,x_2)=\mu_2(x_2)-m(x_2).
}
\com One example of $c_1$ which satisfies (\ref{asterisk-add})
(assuming $\mu_1 \neq \mu_2$, so that the denominator below is positive) is
$$
c_1^*(x_1,x_2)=\frac{(\mu_1(x_1)-m(x_1)) \times (\mu_2(x_2)-m(x_2))}
{1-\sum_{a\in \Xo}m(a)},
$$
and the resulting $c^*_{\fine}=c_0+c_1^*$ is Vaserstein's $\ast$-operation
which was defined in \cite{Vaserstein} for any measurable $\Xo$.
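The $\ast$-operation can be checked by direct computation. The following Python sketch (made-up distributions, our illustration) builds $c^* = c_0 + c_1^*$; note that we normalize the off-diagonal part by $1-\sum_a m(a)$, which is exactly what makes the marginal conditions (\ref{asterisk-add}) hold.

```python
def star_coupling(mu1, mu2):
    """Vaserstein's *-operation for two distributions on a finite set
    (dicts state -> probability); returns the coupling c* and the
    distance D = 1 - sum_a min(mu1(a), mu2(a))."""
    X = list(mu1)
    m = {a: min(mu1[a], mu2[a]) for a in X}
    D = 1.0 - sum(m.values())               # = dist(mu1, mu2)
    c = {(a, b): 0.0 for a in X for b in X}
    for a in X:
        c[(a, a)] += m[a]                   # the diagonal part c_0
    if D > 0:
        for a in X:
            for b in X:                     # the off-diagonal part c_1^*
                c[(a, b)] += (mu1[a] - m[a]) * (mu2[b] - m[b]) / D
    return c, D

mu1 = {0: 0.5, 1: 0.3, 2: 0.2}
mu2 = {0: 0.2, 1: 0.3, 2: 0.5}
c, D = star_coupling(mu1, mu2)
# the off-diagonal mass of c* equals D, so this coupling is fine
```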
\Qpr{gen-product}{
Let $\mu_1$ and $\mu_2$ be product-measures on the product-space $X_W$:
$$
\mu_1=\prod \mu_1^s,~~\mu_2=\prod \mu_2^s,~~s \in Space.
$$
Then any product of fine couplings of $\mu_1^s$ and $\mu_2^s$
is a fine coupling of $\mu_1$ and $\mu_2$.
}
\Qpr{uniform-product}{
Let $\Xo=\{0,\ldots,k\}$, let $X=\Xo^W$ and
let $\B(p)$ stand for the product-measure on the product-space $X$, every
factor of which is one and the same measure $p=(p_0,\ldots,p_k)$ on $\Xo$.
Prove that for any $\B(p)$ and $\B(q)$ there exists a fine coupling and
$$
dist(\B(p), \B(q))=\max\{ p(S)-q(S)~:~S \subset \Xo \}.
$$
}
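The right-hand side above, i.e. assertion 1) of problem \ref{asterisk}, can be checked by brute force for a small $\Xo$. The sketch below (made-up distributions, our illustration) enumerates all subsets and compares the maximal gap with the sum of positive parts.

```python
from itertools import combinations

def max_subset_gap(p, q):
    """max over subsets S of Xo of p(S) - q(S), by brute force."""
    X = list(p)
    best = 0.0
    for r in range(len(X) + 1):
        for S in combinations(X, r):
            best = max(best, sum(p[a] - q[a] for a in S))
    return best

p = {0: 0.5, 1: 0.3, 2: 0.2}
q = {0: 0.2, 1: 0.3, 2: 0.5}
# this agrees with the sum of positive parts: sum_a max(p(a)-q(a), 0)
gap = max_subset_gap(p, q)
```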
Sure, there are many ways to define a distance between measures. The
distance we describe here seems to be useful at least in the present context.
The facts that it is not continuous in the weak topology and that $\M$
is not compact seem to be an inevitable trade-off for relevance to problems
about how fast or slow large interacting systems converge.
The idea of this definition of distance was present in \cite{Vaserstein}
(in the form of that definition on p. 50 which speaks of estimate $\alpha_k$).
However, the $\ast$-operation proposed in \cite{Vaserstein},
generally speaking, does not provide a fine coupling.
For example, the $\ast$-operation applied directly to any two
different uniform product-measures on $\Xo^W$ with $|\Xo|\geq 2$
and infinite $W$ has $\rift$ equal to 1.
\sec{Random Systems}
We shall define {\em random systems} using (every time the same)
mutually independent {\em hidden} variables $h_w$ for all points
$w \in V$, every one of which is uniformly distributed on $[0,1]$;
thus we have the {\em hidden} product-measure $\eta$
on the hidden configuration space $H=[0,1]^V$ of the hidden variables.
In pseudo-codes the same idea is expressed in
generating and using random numbers $rnd$.
We assume that every call of $rnd$ in a pseudo-code generates
a random number uniformly distributed on $[0, 1]$ which is
independent from all the past events, including past calls of $rnd$.
\subsec{Special case: $\Xo=\{0,1\}$}
In this case all we need to define a random system is a function
$p_0:\Xo^n \mapsto [0,1]$, the conditional probability
for a point to be in the state 0 given the states of its neighbors.
A point's probability to be in the state 1 is $p_1(\cdot)=1-p_0(\cdot)$.
Values of $p_0(\cdot)$ and $p_1(\cdot)$
are called {\em transition probabilities}.
If at least one of them equals 0,
we call the random system {\em degenerate}.
The following pseudo-code describes functioning of an
arbitrary random system with $\Xo=\{0,1\}$ and a finite $Space$:
\CODE{bin-ran-code}{1}{
\LN for $t=-\mem$ to $-1$ do
\LN \* for all $s \in Space$ do
\LN \*\* $x_{s,t} \ot x_{s,t}^{initial}$
\LN for $t=0$ to $t_{max}-1$ do
\LN \* for all $s \in Space$ do
\LN \*\* if $rnd < p_0(x_{N(s,t)})$ then $x_{s,t} \ot 0$ else $x_{s,t} \ot 1$
}
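In executable form, pseudo-code (\ref{bin-ran-code}) might look as follows; the concrete transition probabilities below are our own made-up illustration, not one of the Examples of this paper.

```python
import random

def simulate(p0, neighbors, initial, t_max, seed=0):
    """Monte Carlo simulation of a binary random system with memory 1.
    p0 maps the tuple of neighbor states to the conditional
    probability of the state 0; every call of the generator plays
    the role of one hidden variable."""
    rng = random.Random(seed)
    x = list(initial)
    for _ in range(t_max):
        x = [0 if rng.random() < p0(tuple(x[v] for v in neighbors(s))) else 1
             for s in range(len(x))]
    return x

# hypothetical rule on the torus Z_10: neighbors of s are s and s+1;
# p0 = eps if both neighbors are in state 1, and 1 - eps otherwise
K, eps = 10, 0.1
nbr = lambda s: [s, (s + 1) % K]
p0 = lambda ns: eps if ns == (1, 1) else 1 - eps
final = simulate(p0, nbr, [1] * K, 100)
```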
\Qex{gen-code}{
Write a pseudo-code for a random system with $\Xo=\{0,1,\ldots,k\}$.
}
\Qex{mixture}{
Given a random system.
Prove that a mixture of two processes is a process too.
}
\Qex{limit}{
Prove that if a sequence of processes of a random system has a limit
(in the weak topology),
this limit is also a process for the same random system.
}
\Qex{det-ran}{
For any deterministic system with a transition function $\tr(\cdot)$
let us define the corresponding random system with the same
$Space$, $\mem$ and neighborhoods as follows:
$$
p(a_w | b_{N(w)})=\cases{1 &if $a_w=\tr(b_{N(w)})$,\cr 0 &otherwise.}
$$
Prove that for any deterministic initial condition
the resulting process is concentrated in the resulting trajectory.
}
\subsec{Deterministic Systems with Noise}
Here we consider random systems which result from
deterministic ones by adding a random noise.
First assume $\Xo=\{0,1\}$. In this case we need only two
non-negative parameters $\epsil$ and $\delta$ where $\epsil+\delta\leq 1$.
Take the process induced by the hidden measure $\eta$
with the following map from $H$ to $X$:
\SYST{epsil-delta-form}{
\forall s \in Space,~t \in [-\mem,-1]: x_{s,t}=x_{s,t}^{initial},\cr
\forall s \in Space,~t \in [~~~0,~\infty): x_{s,t}=
\cases{0 &if $h_{s,t}< \epsil$,\cr
1 &if $h_{s,t}>1-\delta$,\cr
\tr(x_{N(s,t)}) &otherwise.}
}
We shall say that this system results from the deterministic one
by adding an $\epsil$-$\delta$ noise.
To obtain the corresponding pseudo-code it is sufficient
to add the following two lines to the pseudo-code (\ref{det}):
\CODE{epsil-delta-code}{7}{
\LN \*\* $h \ot rnd$
\LN \*\* if $h< \epsil$ then $x_{s,t} \ot 0$
else if $h>1-\delta$ then $x_{s,t} \ot 1$
}
The resulting pseudo-code (\ref{det}+\ref{epsil-delta-code})
imitates a process in which every automaton first follows
the deterministic rule, then makes a two-way random error:
turns 0 with probability $\epsil$ and turns 1 with probability $\delta$.
We call the $\epsil$-$\delta$ noise {\em symmetric} if $\epsil=\delta$,
{\em biased} if $\epsil\neq\delta$, and
{\em one-way} or {\em degenerate} if $\epsil=0$ or $\delta=0$.
\Qex{epsil-delta-gen}{
Show that for any $\tr(\cdot)$,~ $\epsil$ and $\delta$
there is such $p(\cdot)$ that the process (\ref{bin-ran-form})
is the same as the process (\ref{epsil-delta-form}).
}
Now the general case: We need non-negative parameters
$\epsil_i$ for all $i\in \Xo$ whose sum does not exceed 1.
The process is induced by any initial measure and
the hidden measure $\eta$ with the following map:
\SYST{noise-form}{
\forall s \in Space,~t \in [-\mem,-1]: x_{s,t}=x_{s,t}^{initial},\cr
\forall s \in Space,~t \in [~~~0,~\infty): x_{s,t}=
\cases{0 &if $h_{s,t}<\epsil_0$,\cr
i &if $\sum_{j=0}^{i-1}\epsil_j \leq h_{s,t} <
\sum_{j=0}^i\epsil_j$, for $i>0$,\cr
\tr(x_{N(s,t)}) &otherwise.}
}
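The map defined by the formula above (choosing the state $i$ when the hidden variable lands in the $i$-th of the stacked intervals of lengths $\epsil_i$) can be sketched in Python as follows; the value 7 below is an arbitrary stand-in for the deterministic value $\tr(x_{N(s,t)})$:

```python
def noisy_value(h, eps, tr_value):
    """Map a hidden variable h in [0,1) to a state, as in the formula
    above: return the state i when h lands in the interval of length
    eps[i] (the intervals are stacked starting from 0); otherwise keep
    the deterministic value tr_value."""
    theta = 0.0
    for i, e in enumerate(eps):
        theta += e
        if h < theta:
            return i
    return tr_value

# With eps = [0.1, 0.2]: h = 0.05 -> 0, h = 0.15 -> 1, h = 0.5 -> tr_value
assert noisy_value(0.05, [0.1, 0.2], 7) == 0
assert noisy_value(0.15, [0.1, 0.2], 7) == 1
assert noisy_value(0.50, [0.1, 0.2], 7) == 7
```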
To obtain the corresponding pseudo-code we may
add the following lines to the pseudo-code (\ref{det}):
\CODE{noise-code}{7}{
\LN \*\* $h \ot rnd$
\LN \*\* $i \ot 0;~\theta \ot \epsil_0$
\LN \*\* while $i<k$ and $h \geq \theta$ do
\LN \*\*\* $i \ot i+1;~\theta \ot \theta+\epsil_i$
\LN \*\* if $h<\theta$ then $x_{s,t} \ot i$
}
\subsec{Invariant Processes and Ergodicity}
Let $\T$ stand for the operator of shift by one unit of time,
which maps any process $\mu$ of a random system
into a process $\T(\mu)$ of the same system.
Call a process $\mu$ {\em invariant} if $\T(\mu)=\mu$.
{\bf Theorem 1.} Every random system has at least one invariant process.
\proof Take any process $\mu$ and consider the averages
$\mu_n=(\T^0(\mu)+\cdots+\T^{n-1}(\mu))/n$,
which are processes due to exercise \ref{mixture}.
Since the set of processes is compact,
the sequence $\mu_n$ has limit points.
From exercise \ref{limit} every limit point of $\mu_n$
is an invariant process \qed
{\bf Definition.} Say that a random system is {\em ergodic} if
it has only one invariant process $\mu_{inv}$ and for all its processes
$\mu$ the limit $\lim_{n\ti}\T^n(\mu)$ exists and equals $\mu_{inv}$.
\Qex{one}{
Does there exist a non-ergodic random system which has only one invariant process ?
}
\answer Yes. Take a deterministic system with $Space$
consisting of one element 0 with $N(0,t)$ consisting of one
point $(0,t-1)$, in which $\Xo=\{0,1\}$ and $\tr(a)=1-a$.
\Qun{nondeg-one-?}{
Does there exist a non-degenerate non-ergodic
random system which has only one invariant process ?
}
\Qex{fin-erg}{
Prove that any non-degenerate finite random system is ergodic.
}
\subsec{Patterns}
Let us say that all those standard systems belong to one {\em pattern}
which have one and the same $\DIM$, $\mem$, and $N(\origin)$.
In the same vein,
a deterministic pattern is a quadruple $(\DIM,~\mem,~N(\origin),~\tr(\cdot))$
and a random pattern is a quadruple $(\DIM,~\mem,~N(\origin),~ p(\cdot))$.
This allows us to compare behavior of the infinite system
and of finite systems with various $\kappa$ belonging to
one pattern ($\DIM$, $\mem$, $N(\origin)$).
Let $\mu_{inv}$ stand for the invariant process of
any system which has only one invariant process.
For any random pattern let us define $\rho(\kappa,t)$ as the supremum of
$dist(\T^t(\mu),\mu_{inv})$ over all processes $\mu$ of the finite
system with $Space=Z^\DIM_\kappa$ that belongs to the given pattern.
Let us say that a pattern {\em converges fast}
if there is such a positive constant $q<1$ that
$$
\forall \kappa,t~:~\rho(\kappa,t) \leq q^t.
$$
Let us say that a pattern {\em converges slow}
if there are such $\epsil>0$ and $Q>1$ that
$$
\forall \kappa,t~:~t \leq Q^\kappa \implies \rho(\kappa,t) \geq \epsil.
$$
\Qun{fin-inf-?}{
It is conjectured that any random pattern
converges fast iff the infinite system is ergodic and it
converges slow iff the infinite system is non-ergodic. Is it always true ?
}
\com Note that computer simulations refer directly to finite systems.
(Typically they imitate functioning of some
finite system using the Monte Carlo method.)
Thus we certainly need to know (or assume) some relation
between finite and infinite systems whenever we interpret
results of computer simulations as telling us something
about ergodicity or non-ergodicity of infinite systems.
\subsec{Operators}
Equivalently, it is possible to define ergodicity in the following way.
Let $\M^\mem$ stand for the set of normed measures on $X^\mem$.
To any random system there corresponds a linear {\em operator}
$P:\M^\mem \mapsto \M^\mem$
whose definition is based on the following assumption:
application of $P$ to a $\delta$-measure concentrated in a state
$y=(y_{1},\ldots,y_{\mem})\in X^\mem$
is a product of the following measures pertaining to components:
\begin{center}
the distribution of $x_{s,t}$ is
$\cases{\delta_{y_{s,t+1}} &if $t<\mem$,\cr p(y_{N(s,t)}) &otherwise.}$
\end{center}
(If $\mem=1$, the upper line is not actually used.)
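For a finite system with $\mem=1$ the operator $P$ is simply a stochastic matrix indexed by configurations, and ergodicity can be observed numerically. A Python sketch with a hypothetical non-degenerate binary rule (the convergence of the two starting $\delta$-measures to one invariant measure illustrates exercise \ref{fin-erg}):

```python
from itertools import product

def build_operator(p0, size, neighbors):
    """Stochastic matrix of P for a binary system with mem=1 on the
    ring Z_size: entry P[a][b] is the probability of passing from
    the configuration states[a] to states[b] in one step."""
    states = list(product((0, 1), repeat=size))
    P = [[0.0] * len(states) for _ in states]
    for a, y in enumerate(states):
        for b, x in enumerate(states):
            prob = 1.0
            for s in range(size):
                q0 = p0(tuple(y[(s + d) % size] for d in neighbors))
                prob *= q0 if x[s] == 0 else 1.0 - q0
            P[a][b] = prob
    return P

def iterate(P, mu, steps):
    """Apply the operator `steps` times to the measure mu (row vector)."""
    n = len(mu)
    for _ in range(steps):
        mu = [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]
    return mu

# Hypothetical non-degenerate rule on a ring of 3 points.
P = build_operator(lambda nbh: 0.8 if 0 in nbh else 0.2, 3, [-1, 0])
mu_a = iterate(P, [1.0] + [0.0] * 7, 100)  # delta-measure at 000
mu_b = iterate(P, [0.0] * 7 + [1.0], 100)  # delta-measure at 111
gap = max(abs(a - b) for a, b in zip(mu_a, mu_b))
```

Since all transition probabilities of this rule lie strictly between 0 and 1, the matrix contracts total variation distance, and `gap` is negligible after 100 iterations.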
Call a measure $\mu$ {\em invariant} for an operator $P$ if $P(\mu)=\mu$.
\Qex{has}{
Prove that any operator has at least one invariant measure.
}
\proof Analogous to the proof of Theorem 1.
Let us say that operator $P$ of a random system is {\em ergodic}
if it has only one invariant measure $\mu_{inv}$ and for all $\mu$
the limit $\lim_{n\ti}P^n(\mu)$ exists and equals $\mu_{inv}$.
\Qex{erg-erg}{
Prove that a random system is ergodic iff its operator is ergodic.
}
\subsec{Monotony}
We shall use signs ~$\preceq$~ and ~$\succeq$~ to speak about
(perhaps, partial) ordering of sets.
For example, any set $S$ of real numbers is said to be {\em naturally ordered}
if~ ~$\forall i, j \in S: i \preceq j \iff i \leq j$.~
Call a function $f$ from one ordered set to another {\em monotonic}~
if $s \preceq s' \implies f(s) \preceq f(s')$.
Let $\Xo$ be ordered (for example, naturally).
Then any configuration space $\Xo^W$ is partially ordered by the rule
$s \preceq s' \iff \forall i \in W~:~ s_i \preceq s'_i$.
Call a deterministic system {\em monotonic} if
$$\forall s,s'\in X_I : s \preceq s' \implies \TR(s) \preceq \TR(s').$$
\Qex{sys-fun}{
Prove that a deterministic system is monotonic
iff its transition function is monotonic.
}
For any two measures $\mu_1,~\mu_2 \in \M_W$ on any configuration
space $X_W$ we say that $\mu_1 \preceq \mu_2$ iff
$\mu_1 f \leq \mu_2 f$ for any monotonic function $f:X_W \mapsto R$.
Call a random system {\em monotonic} if
$$
\forall \mu_1,\mu_2\in\M_I~:~
\mu_1 \preceq \mu_2 \implies~ proc(\mu_1) \preceq proc(\mu_2).
$$
Call an operator $P$ {\em monotonic} if
$$
\forall \mu_1,\mu_2\in\M^\mem~:~
\mu_1 \preceq \mu_2 \implies P(\mu_1) \preceq P(\mu_2).
$$
\Qex{monsys-monop}{
Prove that a random system is monotonic iff its operator is monotonic.
}
\Qex{monmap-monsys}{
Prove: if a map (\ref{gen-ran-form}) from $\M_I \times H$ to $X$ is monotonic
($[0,1]$ naturally ordered) then the resulting random system is monotonic.
}
\Qex{reversal}{
Any order can be reversed by turning ~$\preceq$~ into ~$\succeq$~
and vice versa. Prove that this reversal keeps all our kinds of monotony.
}
\Qpr{monotonic-ergodic}{
Let $\Xo$ be ordered and have the smallest element
$a_\MIN$ and the largest element $a_\MAX$.
Thus $X$ has the smallest state $s_\MIN=$`all $a_\MIN$'
and the largest state $s_\MAX=$`all $a_\MAX$'.
Then for any monotonic operator $P$ the limits
$\lim_{t\ti}P^t(s_\MIN)$ and $\lim_{t\ti}P^t(s_\MAX)$ exist.
These limits are equal iff $P$ is ergodic.
}
\sec{Proofs of Ergodicity and Fast Convergence}
\subsec{Case $\Xo=\{0,1\}$}
To prove ergodicity and fast convergence of a system, it is convenient
to use its {\em coupling} (see \cite{Griffeath-78,Liggett})
which we shall first illustrate by the following pseudo-code:
\CODE{coupling}{1}{
\LN for $t=-\mem$ to $-1$ do
\LN \* for all $s \in Space$ do
\LN \*\* $x_{s,t} \ot x_{s,t}^{initial}$
\LN \*\* $y_{s,t} \ot y_{s,t}^{initial}$
\LN \*\* $m_{s,t} \ot 0$ \label{unmarked}
\LN for $t=0$ to $t_{max}-1$ do
\LN \* for all $s \in Space$ do
\LN \*\* $x_{s,t} \ot f(x_{N(s,t)},rnd)$ \label{xf}
\LN \*\* $y_{s,t} \ot f(y_{N(s,t)},rnd)$ \label{yf}
\LN \*\* $m_{s,t} \ot \min(m_{N_1(s,t)},\ldots,m_{N_n(s,t)})$ \label{prolif}
\LN \*\* $h \ot rnd$
\LN \*\* if $h< \epsil$ then
\LN \*\*\* $x_{s,t}\ot 0$ \label{x0}
\LN \*\*\* $y_{s,t}\ot 0$ \label{y0}
\LN \*\*\* $m_{s,t}\ot 1$ \label{die0}
\LN \*\* else if $h>1-\delta$ then
\LN \*\*\* $x_{s,t}\ot 1$ \label{x1}
\LN \*\*\* $y_{s,t}\ot 1$ \label{y1}
\LN \*\*\* $m_{s,t}\ot 1$ \label{die1}
}
Let us first ignore all the lines which deal with the values $m_{s,t}$ and
concentrate our attention on those which deal with $x_{s,t}$ and $y_{s,t}$.
Then we see that we are modelling simultaneously two processes of one
and the same system using for them a common source of random noise.
In both processes every automaton every time does the following:
first, due to line \ref{xf} or \ref{yf}, it assumes some
value which depends in a random way on its neighbors and,
second, it makes a random error, thereby
becoming 0 with probability $\epsil$ due to line \ref{x0} or \ref{y0} and
becoming 1 with probability $\delta$ due to line \ref{x1} or \ref{y1}.
(We assume that $\epsil + \delta \leq 1$.)
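An executable sketch of this coupling in Python (the deterministic core $f$ used at the end, a noisy minimum, is a hypothetical example): the invariant to observe is that a marked point never carries a difference between the two runs.

```python
import random

def coupled_run(f, neighbors, size, eps, delta, t_max, x0, y0, seed=1):
    """Couple two runs of one binary system (mem=1) on the ring Z_size,
    driven by a common source of noise, and mark the points whose value
    was forced by the noise (so they cannot carry a difference)."""
    rng = random.Random(seed)
    x, y, m = list(x0), list(y0), [0] * size
    for t in range(t_max):
        nx, ny, nm = [0] * size, [0] * size, [0] * size
        for s in range(size):
            r = rng.random()  # shared by both runs (lines xf / yf)
            nx[s] = f(tuple(x[(s + d) % size] for d in neighbors), r)
            ny[s] = f(tuple(y[(s + d) % size] for d in neighbors), r)
            nm[s] = min(m[(s + d) % size] for d in neighbors)
            h = rng.random()
            if h < eps:
                nx[s], ny[s], nm[s] = 0, 0, 1
            elif h > 1 - delta:
                nx[s], ny[s], nm[s] = 1, 1, 1
        x, y, m = nx, ny, nm
    return x, y, m

# Hypothetical deterministic core: the minimum of two neighbors.
f = lambda nbh, r: min(nbh)
x, y, m = coupled_run(f, [-1, 0], 16, 0.3, 0.3, 40, [0] * 16, [1] * 16)
# Wherever a point is marked, the two runs are guaranteed to agree.
assert all(x[s] == y[s] for s in range(16) if m[s] == 1)
```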
A coupling $c$ of $k$ random systems with a common
system of neighborhoods $\S$ is a random system with the same $\S$,
with $\Xo(c)=\Xo(1) \times \Xo(2) \times \cdots \times \Xo(k)$,
and with a transition probability which is a coupling
(in the sense of coupling of measures defined in section
\ref{Measures and Topology}) of their transition probabilities.
Note that according to this definition the coupled systems need
not be identical: it is sufficient for them to correspond
to one and the same system of neighborhoods $\S$,
that is, to have the same $Space$, $\mem$ and $N(\cdot)$.
Couplings typically used in the literature are,
in our terms, couplings of a system with itself.
The pseudo-code (\ref{coupling}) describes coupling of a
given system with itself and with another system which
deals with the values $m_{s,t}$ (which we comment below).
We assume that the function $f:\Xo^n \times [0,1] \mapsto \Xo$ is chosen in
such a way that each marginal process coincides with a certain given one.
This assumption is substantiated below in the form of exercise \ref{bin-mark}.
\Qex{marginals}{
Prove that marginals of any process of a coupling of several
random systems are processes of the coupled random systems.
}
Generally, any system has many different couplings.
But only those of them in which points of difference die out
fast enough for any pair of initial conditions help to prove ergodicity.
To check this, our pseudo-code assumes that at every point $w$ we have
a special {\em mark} $m_w$ which may equal 0 or 1.
This is to mark loss of the memory (when we are sure about it).
Let us say that a point $w$ is {\em marked} iff $m_w=1$.
Initially all the points are unmarked (line \ref{unmarked}) and
become marked (lines \ref{die0} and \ref{die1}) whenever they are
assigned values which certainly do not depend on the prehistory
and therefore cannot be points of difference.
Let us estimate the percentage $U_t$ of unmarked points at time $t$
in this process. Let $k_t$ stand for the number of the origin's
neighbors whose time coordinate equals $-t$.
(Note that $k_1+\cdots+k_\mem=n$.)
The percentage of unmarked points which proliferate into $V_t$ as a result of
line \ref{prolif} does not exceed $k_1 U_{t-1}+\cdots+k_\mem U_{t-\mem}$.
Then a fraction $\epsil+\delta$ of them die as a result of lines \ref{die0} and \ref{die1}.
Thus
$$
U_{-\mem}=\cdots=U_{-1}=1, \qquad
U_t \leq (1-\epsil-\delta)(k_1 U_{t-1}+\ldots+k_\mem U_{t-\mem}).
$$
Therefore $U_t \leq \U_t$ where
$$
\U_{-\mem}=\cdots=\U_{-1}=1, \qquad
\U_t=(1-\epsil-\delta)(k_1 \U_{t-1}+\ldots+k_\mem \U_{t-\mem}).
$$
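The majorant $\U_t$ is computed by a one-line recurrence; a Python sketch, with parameters chosen on both sides of the threshold $1-\epsil-\delta=1/n$:

```python
def U_bar(eps, delta, k, t_max):
    """Iterate the majorant recurrence above:
    U_t = (1 - eps - delta) * sum_j k[j] * U_{t-1-j},
    starting from U = 1 on the mem initial layers."""
    mem = len(k)
    U = [1.0] * mem  # U_{-mem} = ... = U_{-1} = 1
    for _ in range(t_max):
        U.append((1.0 - eps - delta) *
                 sum(k[j] * U[-1 - j] for j in range(mem)))
    return U[mem:]

# mem = 1, n = k_1 = 2, so the threshold is 1 - eps - delta < 1/2.
decaying = U_bar(0.3, 0.3, [2], 60)  # factor 0.8 per step -> 0
growing = U_bar(0.1, 0.1, [2], 60)   # factor 1.6 per step -> infinity
```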
\Qex{U-to-0}{
Prove that $\U_t\tz$ ~iff~
$(1-\epsil-\delta)(k_1+\cdots+k_\mem)<1$, ~that is~ $1-\epsil-\delta<1/n$.
}
\Qpr{dying-erg-fast}{
For any (finite or infinite) system modelled by the
pseudo-code (\ref{coupling}) prove that dying out of
unmarked points guarantees ergodicity and fast convergence.
}
Now let us apply our results to a general random system with $\Xo=\{0,1\}$.
This is done by the following:
\Qex{bin-mark}{
Prove that for any two processes of one random system described by
the pseudo-code (\ref{bin-ran-code}) and formula (\ref{bin-ran-form})
there is a coupling which can be represented by
the pseudo-code (\ref{coupling}) with a suitable $f$ and
$$
\epsil=\min\{p_0(x_{N(s,t)}),~x_{N(s,t)}\in X_{N(s,t)}\},\qquad
\delta=\min\{p_1(x_{N(s,t)}),~x_{N(s,t)}\in X_{N(s,t)}\}.
$$
}
\Qpr{max-min}{
Prove that if $\Xo=\{0,1\}$ and
$$
\max\{p_0(x_{N(s,t)}),~x_{N(s,t)}\in X_{N(s,t)}\}-
\min\{p_0(x_{N(s,t)}),~x_{N(s,t)}\in X_{N(s,t)}\}<1/n
$$
then the system is ergodic and converges fast.
}
\com This follows from the three preceding exercises.
\subsec{General case $\Xo=\{0,\ldots,k\}$}
Let $\bfP$ stand for the set of functions from
$\Xo^n$ to the set of normed measures on $\Xo$.
Let a $\delta$-function $\delta_a \in \bfP$ stand for a constant
function which maps any argument into one and the same
$\delta$-measure on $\Xo$ concentrated in some $a \in \Xo$.
To prove ergodicity of a random system it makes sense to represent its
transition probability $p$ as a mixture of elements of $\bfP$, in which
$\delta$-functions have the maximal possible sum of coefficients.
Let $\Sigma(p)$ stand for this maximal possible sum.
\Qex{sigma}{
Prove that
$\Sigma(p)=1-\max\{|p(B|s_1)-p(B|s_2)|,~ s_1,s_2\in \Xo^n,~ B \subset \Xo \}$.
}
\Qex{1-sigma}{
Prove that if $1-\Sigma(p)<1/n$ then the system is ergodic and converges fast.
}
Based on these two exercises, the following holds:
{\bf Theorem 2.} If
$$
\max\{|p(B|s_1)-p(B|s_2)|,~ s_1,s_2 \in \Xo^n,~ B \subset \Xo \}<1/n
$$
then the random system is ergodic and converges fast.
\com See a similar result in \cite{Vaserstein}.
\sec{Percolation Systems}
\subsec{General Percolation}
Since marking proved useful, it makes sense to examine it in its own right.
This leads us to our first examples.
The following pseudo-code is obtained from (\ref{coupling})
by deleting everything which pertained to $x_{s,t}$ and $y_{s,t}$
and denoting $\theta=\epsil+\delta$:
\CODE{perc-code}{1}{
\LN for $t=-\mem$ to $-1$ do
\LN \* for all $s \in Space$ do
\LN \*\* $m_{s,t} \ot 0$
\LN for $t=0$ to $t_{max}-1$ do
\LN \* for all $s \in Space$ do
\LN \*\* $m_{s,t} \ot \min(m_{N_1(s,t)},\ldots,m_{N_n(s,t)})$
\LN \*\* if $rnd< \theta$ then $m_{s,t} \ot 1$
}
The process described by this code can be called a {\em Percolation process}
because of the following interpretation.
Assume that some fluid is supplied to all the initial points.
Every non-initial point is connected with its neighbors with pipes which
pass the fluid in one direction: to the point from its neighbors.
But in every non-initial point there is a tap which is closed with
probability $\theta$ independently from all the other taps.
Then a point is unmarked iff the fluid percolates to this point.
Lines 1-3 in this pseudo-code correspond to the initial condition \az.
Instead of this, we can take an arbitrary initial condition,
and thereby obtain an arbitrary {\em random Percolation system}.
Lines 4-6 describe the {\em deterministic Percolation system}
and line 7 adds the one-way random noise.
\Qex{perc-mon}{
Prove that any Percolation system is monotonic.
}
Note that the measure concentrated in \ao is an invariant process
for any Percolation system. Therefore, ergodicity of a Percolation system
amounts to convergence to \ao from any initial state.
\Qex{percsyst-op}{
Let $P$ stand for the {\em Percolation operator}, i.e.
operator of the Percolation system. Prove that any Percolation system
is ergodic iff $P^n(\az)$ tends to \ao when $n\ti$.
}
Let us discuss ergodicity of a Percolation system $\P(\theta)$ as
depending on $\theta$ with a given $Space$, $\mem$ and neighborhoods.
We are interested in {\em critical} values $\theta^*$
which separate ergodicity from non-ergodicity of $\P(\theta)$.
If a Percolation system is ergodic for all positive $\theta$,
we say that $\theta^*=0$ is its only critical value.
\Qex{0-for-finite}{
Prove that any finite Percolation system is ergodic for all $\theta>0$.
}
\Qex{collinear}{
Prove that any standard Percolation system in which all the neighbor
vectors are mutually collinear, is ergodic for all $\theta>0$.
}
\Qex{theta-prec}{
Prove that $\theta_1 < \theta_2 \implies \P(\theta_1) \preceq \P(\theta_2)$.
}
\Qex{one-crit}{
Prove that any Percolation system $\P(\theta)$ has only one critical value.
}
\Qex{perc-major}{
Let two Percolation processes $proc_1(\theta)$ and $proc_2(\theta)$
be defined with one and the same $Space$, $\mem$,
and initial condition and let $N_1(\origin) \subset N_2(\origin)$.
Prove that $\forall\theta~:~proc_1(\theta) \succeq proc_2(\theta)$
and that $\theta^*_1 \leq \theta^*_2$.
}
\Qpr{non-collinear}{
Prove: A standard infinite Percolation system has $\theta^*>0$
iff it has at least two non-collinear neighbor vectors.
}
\com Due to the preceding exercise, it is sufficient to prove $\theta^*>0$
for 2-Percolation (see below) which is the simplest Percolation process.
\Qun{erg-crit-?}{
It is conjectured that Percolation systems are ergodic at
the critical point whenever it is positive. Is it true ?
}
\com Some infinite Percolation systems, including 2-Percolation,
are proved to be ergodic at the critical point, see
\cite{BGN-1,BGN-2,BG-1,BG-2,BG-3}.
\subsec{2-Percolation}
The 2-Percolation system has $\DIM=1$, $\mem=1$, $n=2$ and
$N(0,0)=\{(-1,-1),(0,-1)\}$.
The following pseudo-code models 2-Percolation on $Z_\kappa$:
\CODE{2-perc-code}{1}{
\LN for all $s \in Z_\kappa$ do
\LN \* $x_{s,-1} \ot 0$
\LN for $t=0$ to $t_{max}-1$ do
\LN \* for all $s \in Z_\kappa$ do
\LN \*\* $x_{s,t} \ot \min(x_{s-1,t-1}, x_{s,t-1})$
\LN \*\* if $rnd<\theta$ then $x_{s,t} \ot 1$
}
This system, in its finite and infinite versions, was first
introduced in \cite{Stavskaya} and received much attention.
A proof of its non-ergodicity for small $\theta$ by the well-known
contour method can be found in \cite{Toom-68} or \cite{Survey}.
\subsec{Different Rates of Convergence of Finite Percolation Systems}
For Percolation systems the following special quantity is used
to describe how fast or slowly they converge:
let $T(\theta,\kappa)$ stand for the mean time at which a given
standard finite Percolation system with $Space=Z_\kappa$ and
a given $\theta$ first reaches the state \ao,
having started from the state \az.
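$T(\theta,\kappa)$ is easy to estimate by the Monte Carlo method mentioned above; here is a Python sketch for 2-Percolation (the parameter values in the usage line are arbitrary):

```python
import random

def hitting_time(theta, kappa, rng):
    """One run of 2-Percolation on Z_kappa started from `all zeroes';
    returns the first time at which the state `all ones' is reached."""
    x = [0] * kappa
    t = 0
    while not all(x):
        t += 1
        # the new layer is computed from the previous one simultaneously
        x = [1 if rng.random() < theta else min(x[s - 1], x[s])
             for s in range(kappa)]
    return t

def mean_T(theta, kappa, runs=200, seed=2):
    """Monte Carlo estimate of T(theta, kappa)."""
    rng = random.Random(seed)
    return sum(hitting_time(theta, kappa, rng) for _ in range(runs)) / runs
```

For large $\theta$ (such as $\theta=0.9$) the estimate grows only slowly with $\kappa$, in line with exercise \ref{M-log}; for small $\theta$ a run of this sketch becomes infeasible already for moderate $\kappa$, in line with problem \ref{M-exp}.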
\Qex{M-log}{
Prove that if $\theta$ is large enough (say, $\theta>1-1/n$),
then the Percolation pattern converges fast and $T(\theta,\kappa)$
grows logarithmically as a function of $\kappa$.
}
\Qpr{M-exp}{
Prove that if $\theta$ is small enough,
then the Percolation pattern converges slow and $T(\theta, \kappa)$
grows exponentially as a function of $\kappa$.
}
\com See \cite{Toom-68} where it is proved for 2-Percolation.
For other Percolation patterns it follows from monotony.
\Qun{crit=?}{
For any Percolation pattern the following critical values may be defined:
\vskip 0.5em
\begin{tabular}{lll} \hbox{\qquad}
&$\theta^*_\infty=\theta^*$
& - the boundary between ergodicity and
non-ergodicity of the infinite system,\\
&$\theta^*_{\fast}$
& - infimum of those values of $\theta$ with
which finite systems converge fast,\\
&$\theta^*_{\slow}$ & - supremum of those values of $\theta$
with which finite systems converge slow,\\
&$\theta^*_{\log}$ & - infimum of those values of $\theta$ for which
$T(\theta, \kappa)$ grows logarithmically,\\
&$\theta^*_{\exp}$ & - supremum of those values of $\theta$ for which
$T(\theta, \kappa)$ grows exponentially.
\end{tabular}
\vskip 0.5em
It is conjectured that for any Percolation pattern
all of these critical values are equal. Is it true ?
}
\com This conjecture is not proved (or disproved) even for 2-Percolation.
However, some relations between these critical values can be stated:
\Qpr{perc-crit}{
Prove for all Percolation patterns:
$$
\left.
\begin{array}{rl}
\theta^*_{\slow} \leq &\theta^*_{\exp} \\
&\theta^*_\infty
\end{array} \right\} \leq \theta^*_{\log} \leq \theta^*_{\fast}.
$$
}
\com See \cite{Survey}, pp. 72-84 where several critical values
and relations between them are discussed.
\sec{Non-Ergodicity and Slow Convergence}
Theorem 2 has shown that any `random enough' system is ergodic.
Thus, to be non-ergodic, a system has to be `deterministic enough'.
We shall prove non-ergodicity of some systems obtained by `spoiling'
a deterministic system with a small random noise.
Naturally, properties of the deterministic system are essential
with such approach, and this is what we start with.
\subsec{Attractors and Eroders}
Let $\Xo$ have an element called 0.
Let an {\em island} stand for a finite perturbation of \az.
Call a deterministic system an {\em eroder} if for any initial island
$x_I$ the corresponding trajectory $\TR(x_I)$ is also an island.
\Qex{0-0}{
Prove that in any eroder $\TR(\az)=\az$.
}
We concentrated our attention on \az for simplicity; generalization is easy.
Say that an initial configuration $s_I$ is an {\em attractor}
for a deterministic system or {\em attracts} it if
for any finite perturbation $s'_I$ of $s_I$
the trajectory $\TR(s'_I)$ is a finite perturbation of $\TR(s_I)$.
For example, a system is an eroder iff it is attracted
by the initial configuration \az.
Note that according to our definition, any finite perturbation
of an attractor is an attractor too.
So it is better to speak of {\em bunches} of attractors,
a bunch being an equivalence class, two initial conditions
being declared equivalent iff their set of difference is finite.
Let us call a bunch an {\em attractor} iff all of its elements are attractors.
\Qex{attr-attr}{
Prove that if a bunch contains an attractor, it is an attractor.
}
\Qex{perc-attr}{
Prove that in any deterministic Percolation system
the only attractors are those in the bunch of \az.
}
\Qex{fin-non}{
Can a finite deterministic system have more than one bunch of attractors ?
}
\Qun{non-periodic-?}{
Can an infinite standard monotonic deterministic system
with $\Xo=\{0,1\}$ have a non-periodic fixed attractor ?
}
\com The answer is unknown even in the one-dimensional case with $\mem=1$.
\Qun{eroders-?}{
Consider all infinite standard deterministic monotonic systems
with $\Xo = \{0,1,\ldots,k\}$ naturally ordered.
Formulate checkable criteria for their initial
condition \az: 1\RB to be an attractor, 2\RB to be stable.
}
\com The answer is unknown even for the case
with $\mem=1$, $\DIM =2$ and $|\Xo|=3$.
A criterion for arbitrary $\DIM$,
$\Xo=\{0,1\}$, and $\mem=1$ was given in \cite{Toom-76monot}.
The same criterion generalized to any $\mem$ was given in
Theorem 6 of \cite{Toom-80} and is repeated here as Theorem 3.
A criterion for $\DIM=1$ and arbitrary $k$
and $\mem$ was given in \cite{Galperin}.
Monotony is essential for the problem to be solvable:
It is known that without demanding monotony, the problem of discerning
eroders is algorithmically unsolvable even with $\mem=k=1$ \cite{Petri}.
There are also similar results in \cite{Toom-Mityushin}.
\Qex{not-equivalent}{
Prove that items 1\RB and 2\RB of the
problem \ref{eroders-?} are not equivalent to each other.
}
Our interest in attractors is motivated by the following consideration.
It seems plausible (and sometimes it is true) that if an initial
configuration is an attractor, the system remains in the vicinity of
the resulting trajectory even in the presence of a small random noise.
Hence it is a good idea to construct systems which are attracted by more
than one bunch, and to see how they behave with a small random noise added.
In the next section we give exact definitions.
\subsec{Stable trajectories}
As before, $\M=\M_V$ stands for the set of normed measures on $X=\Xo^V$.
To every initial configuration $s$ there corresponds $\M^s \subset \M$ which
consists of those measures whose projection to $I$ is concentrated in $s$.
To every value of the parameter $\epsil\in [0, 1]$
there corresponds $\M_\epsil \subset \M$ defined as follows:
a measure $\mu$ belongs to $\M_\epsil$ if
$$
\hbox{for all finite } W \subset V~:~
\mu(\forall w\in W~:~x_w \neq \tr(x_{N(w)}))\leq\epsil^{|W|}
$$
where $|W|$ is the cardinality of $W$.
Finally, $\M_\epsil^s=\M_\epsil \cap \M^s$.
An initial condition $s$ and the resulting trajectory $\TR(s)$
are termed {\em stable} if
$$
\lim_{\epsil\tz}\sup\{\mu(x_w\neq \TR(s)_w):\mu\in \M_\epsil^s,~w\in V\}=0.
$$
This definition always makes sense:
the supremum makes sense because the set $\M_\epsil^s$ is non-empty
as it contains the measure concentrated in $\TR(s)$,
and the limit makes sense because the set $\M_\epsil^s$
decreases when $\epsil$ decreases.
\Qpr{stables}{
Prove that a finite system can not have more than one
stable initial configuration.
}
To show that our definition is non-trivial, that is, that the set
$\M_\epsil^s$ is rich, we propose the following exercise
which constructs explicitly some multi-parametric subset of $\M_\epsil^s$.
\Qex{rich}{
Let us have a parameter $\epsil_w\leq\epsil$ for any
non-initial point $w$ and let the hidden measure $\eta$
induce a measure $\mu \in \M$ with the following map:
for all initial $w$ we set $x_w=s_w$ and for all non-initial $w$
the value of $x_w$ is defined inductively as follows:
$x_w=\tr(x_{N(w)})$ if $h_w \leq 1-\epsil_w$
and is assigned an arbitrary value otherwise.
Prove that any measure $\mu$ defined in this way belongs to $\M_\epsil^s$.
Show also that measures of this sort do not exhaust $\M_\epsil^s$.
}
We use Theorem 3 (below) to prove non-ergodicity, in fact
non-uniqueness of the invariant process, in the following way.
\Qex{ex1/3}{
Prove that for any deterministic system $\D$
which has a fixed attractor $y=(y_{s,t})$
(where `fixed' means that $\TR(y)_{s,t}$ does not depend on $t$)
there is such $\epsil >0$ that any random system resulting from $\D$
by adding a random noise all of whose parameters $\epsil_i$ are less
than or equal to $\epsil$, has an invariant process $\mu$ for which
\vskip 0.5em
\FORM{form1/3}{
\forall~ s \in Space,~~t \in Time~:~\mu(x_{s,t}\neq y_{s,t})<1/3.
}
}
\Qex{many-fixed}{
Suppose that a deterministic system $\D$ has $k$ different fixed attractors.
Prove that there is such $\epsil >0$ that any random system resulting from
$\D$ by adding a random noise all of whose parameters $\epsil_i$ are less
than or equal to $\epsil$, has at least $k$ different invariant processes.
}
\proof Let $\D$ have fixed attractors $y^1,\ldots,y^k$.
According to the preceding exercise, there are processes which
are close to $\TR(y^1),\ldots,\TR(y^k)$ in the sense of (\ref{form1/3}).
Applying to them an argument like that in the proof of Theorem 1,
we see that there are invariant processes $\mu^1,\ldots,\mu^k$
which are close to $y^1,\ldots,y^k$ in the same sense,
which assures that they are different \qed
\subsec{A theorem about attraction and stability}
The following theorem treats of standard deterministic systems with
$Space=Z^\DIM$. In this case the volume $V=Space \times Time$ is
a half of the $(\DIM +1)$-dimensional integer space.
Let us consider $V$ as a subset of the continuous space $R^{\DIM +1}$.
Let $conv(S)$ stand for the convex hull of any set $S$ in $R^{\DIM +1}$.
For any set $S \subset R^{\DIM +1}$ and any number $k$ we define:
$$
kS=\{ks:s\in S\},~~~ray(S)=\bigcup\{kS:~k \geq 0\}.
$$
We call $ray(S)$ the {\em ray} of $S$.
Let a {\em zero-set} stand for such a subset
$z \subseteq N(\origin)$ for which
$$
(\forall w \in z : x_w=0) \implies \tr(x_{N(\origin)})=0.
$$
Let $\sigma(\D)$ stand for the intersection of rays of convex hulls
(in the continuous space) of all the zero-sets of $\D$.
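For a concrete system the zero-sets, and hence $\sigma(\D)$, can be found mechanically. A Python sketch which enumerates the minimal zero-sets of a binary transition function (shown on 2-Percolation, whose minimal zero-sets are the two singletons):

```python
from itertools import product, combinations

def minimal_zero_sets(tr, neighbors):
    """Enumerate the minimal zero-sets of a binary transition function:
    subsets z of the neighborhood such that fixing 0 on z forces tr = 0
    whatever the other neighbors are, and no proper subset of z does."""
    n = len(neighbors)

    def forces_zero(z):
        for vals in product((0, 1), repeat=n):
            if all(vals[i] == 0 for i in z) and tr(vals) != 0:
                return False
        return True

    found = []
    for size in range(n + 1):  # increasing size guarantees minimality
        for z in combinations(range(n), size):
            if forces_zero(z) and not any(set(w) <= set(z) for w in found):
                found.append(z)
    return [[neighbors[i] for i in z] for z in found]

# 2-Percolation: tr is the minimum of the two neighbors, so each
# single neighbor is already a (minimal) zero-set.
zs = minimal_zero_sets(lambda v: min(v), [(-1, -1), (0, -1)])
```

By exercise \ref{sigma-minimal}, the minimal zero-sets suffice for computing $\sigma(\D)$.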
{\bf Theorem 3.}
Let $\D$ be any monotonic standard deterministic system with $\Xo=\{0, 1\}$
and $Space=Z^\DIM$. The following four conditions are mutually equivalent:
\begin{assert}
\item $\D$ is an eroder.
\item $\sigma(\D)=\{\origin\}$.
\item There exist such homogeneous linear functions
$L_1,\ldots,L_r:R^{\DIM +1}\mapsto R$ (where $r\leq \nu+1$)
that $L_1+\cdots +L_r \equiv 0,$ and for every $i=1,\ldots,r$
the set $\{w\in N(\origin):L_i(w)\geq 1 \}$ is a zero-set.
\item The initial condition \az is stable.
\end{assert}
From now on $r(\D)$ stands for the minimal value of $r$
with which the condition 3) holds for an eroder $\D$.
\Qex{2-perc-eroder}{
Prove that 2-Percolation is an eroder and find $L_1,\ldots,L_r$
with which the condition 3) is fulfilled.
}
\answer with $r=2,~L_1(s,t)=2s-t,~L_2(s,t)=-2s+t.$
\subsec{About the proof of Theorem 3}
The most difficult part to prove is that 1), 2) or 3) implies 4)
and this is non-trivial even for particular examples.
For 2-Percolation this proof boils down to examination of a directed planar
percolation which has been described in \cite{Toom-68} or survey \cite{Survey}.
Case $r=2$ (which is the smallest possible value of $r$) is more difficult
than 2-Percolation, but still is essentially simpler than the general case.
In this case `contours' also are used,
but there is no percolation interpretation.
For any system with $r>2$ there is a combinatorial general proof,
but the topological objects it involves are more complicated than contours.
The most general versions of this proof
are in \cite{Toom-80} and \cite{Toom-82}.
It seems that the best way to understand its main idea is to read
\cite{LMS} where the proof is reworded for the NEC-Voting (for which $r=3$).
There is another method \cite{Bramson-Gray} to obtain similar results,
but we do not discuss it here.
Here we prove only one part of Theorem 3: ``2) implies 3)''
to show how this proof uses the theory of convex sets.
Assume that $\sigma(\D)=\{\origin\}$.
For any finite set $S\subset R^{\DIM +1}$ the set $ray(conv(S))$
can be represented as an intersection of several halfspaces.
(A halfspace is a set in $R^{\DIM +1}$ on which some
homogeneous linear function is non-negative.)
Apply this to zero-sets, and you have a finite list
of zero-halfspaces whose intersection is $\{\origin\}$.
(A zero-halfspace is a halfspace whose
intersection with $N(\origin)$ is a zero-set.)
For every one of these zero-halfspaces we introduce a homogeneous
linear function $f_i$ which is non-positive on it and only on it.
We know that the origin is the only point in
$R^{\DIM +1}$ where all $f_i$ are non-positive.
This allows us to apply Theorem 21.3 of \cite{Rockafellar}
(a variant of Helly's theorem) to the hyperplane $\Pi=\{(s,t)~:~t=-1\}$,
restrictions $f_i|_\Pi$ of our functions to $\Pi$ and
any non-empty closed convex set $C \subseteq \Pi$. We take $C=\Pi$.
Then, according to Theorem 21.3 of \cite{Rockafellar},
there exist such non-negative real numbers $\lambda_i$,
at most $\DIM +1$ of which are positive, that for some $\epsil>0$
$$
\forall w\in C : \sum_i \lambda_i f_i (w)\geq\epsil.
$$
Since the left-hand side is linear and bounded from below by a
positive constant on $C$, it has to be a positive constant on $C$:
$$
\forall w\in C : \sum_i \lambda_i f_i (w)=\delta=const\geq\epsil.
$$
Hence
$$
\forall w\in R^{\DIM +1} : \sum_i \lambda_i f_i (w)=-\delta t.
$$
Then the functions $L_i=-(t + n\lambda_i f_i/\delta)$ fit our claim \qed
\Qex{sigma-minimal}{
Prove that $\sigma(D)$ equals the intersection
of convex hulls of rays of {\em minimal} zero-sets.
We call a zero-set {\em minimal} iff none of its proper subsets is a zero-set.
This fact simplifies checking of condition 2) for particular systems.
}
\Qex{Theorem-3-for-ones}{
Reformulate Theorem 3 as a criterion for
attraction and stability of \ao.
}
\Qex{simplify}{
Simplify conditions 2) and 3) of the Theorem for the case $\mem=1$.
}
\Qpr{4)implies}{
Prove that item 4) of Theorem 3 implies 1), 2), and 3).
}
\Qpr{1)2)3)equivalent}{
Prove that items 1), 2) and 3) of Theorem 3 are equivalent.
}
\Qpr{Theorem-3-for-3}{
Formulate and prove an analogue of Theorem 3 for all
standard monotonic deterministic systems with $\Xo=\{-1,~0,~1\}$.
In other words, prove that in all of these systems the initial state
\az is stable iff it is an attractor and give a criterion for that.
}
\Qpr{Theorem-3-gen}{
Generalize Theorem 3 to any $\DIM$-dimensional
commutative system with $\Xo=\{0,1\}$.
}
\com See \cite{Toom-86}.
\subsec{Example: Flattening}
This example illustrates an application of Theorem 3.
It is a deterministic system with $\DIM=2,~\mem=1$ with
$\epsil$-$\delta$ noise added. The following pseudo-code shows
neighborhoods and transition probabilities of this system:
\CODE{flat-code}{1}{
\LN for all $(i,j) \in Z_\kappa^2$ do
\LN \* $x_{i,j,-1} \ot x_{i,j}^{initial}$
\LN for $t=0$ to $t_{max}-1$ do
\LN \* for all $(i,j) \in Z^2_\kappa$ do
\LN \*\* $x_{i,j,t} \ot \max(\min(x_{i ,j,t-1}, x_{i ,j-1,t-1}),~
\min(x_{i-1,j,t-1}, x_{i-1,j-1,t-1}))$
\LN \*\* if $rnd < \epsil$ then $x_{i,j,t} \ot 0$
else if $rnd > 1-\delta$ then $x_{i,j,t} \ot 1$
}
The infinite Flattening has the same pattern and transition
probabilities and is an eroder. Condition 3) is fulfilled with
$r=2,~L_1(i,j,t)=2i-t,~L_2(i,j,t)=-2i+t$.
Theorem 3 guarantees that for $\epsil$ and $\delta$ small enough
the infinite version of this system is non-ergodic.
More specifically, Theorem 3 proves the following:
\begin{itemize}
\item[] $\forall (i,j) \in Z^2 : x_{i,j}^{initial}=0 \implies
\sup_{t,\epsil} Prob(x_{i,j,t}=1) \to 0$ when $\delta\tz$.
\item[] $\forall (i,j) \in Z^2 : x_{i,j}^{initial}=1 \implies
\sup_{t,\delta} Prob(x_{i,j,t}=0) \to 0$ when $\epsil\tz$.
\end{itemize}
(Neither of these probabilities actually depends on $i$ or $j$.)
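The pseudo-code above translates directly into a short simulation. The following Python sketch (names are mine; the noise is read as in the pseudo-code: with probability $\epsil$ the new state is forced to 0, otherwise with probability $\delta$ it is forced to 1) performs one synchronous step of Flattening on the torus $Z_\kappa^2$:

```python
import random

def flattening_step(x, eps, delta):
    """One synchronous step of Flattening on the torus Z_k x Z_k:
    the deterministic max-min transition followed by the eps-delta
    noise (eps forces a zero, delta forces a one)."""
    k = len(x)
    y = [[0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            a = min(x[i][j], x[i][j - 1])          # pair in row i
            b = min(x[i - 1][j], x[i - 1][j - 1])  # pair in row i-1
            y[i][j] = max(a, b)
            r = random.random()
            if r < eps:
                y[i][j] = 0          # noise toward zero
            elif r > 1.0 - delta:
                y[i][j] = 1          # noise toward one
    return y
```

With $\epsil=\delta=0$ the step is deterministic, and one can check directly that \az and \ao are fixed and that a single one in a sea of zeroes is erased in one step.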
\Qex{Flattening-fast}{
For which values of $\epsil$ and $\delta$ does our Theorem 2 prove
fast convergence of the finite version of Flattening?
}
\Qex{Flattening-01}{
Prove that in the Flattening systems both bunches
of \az and \ao are attractors.
}
\Qpr{Flattening-fixed}{
Prove that although deterministic Flattening has infinitely many fixed initial
conditions, there are only two bunches of attractors: those of \az and \ao.
}
\Qpr{Flattening-slow}{
Prove that the finite version of Flattening converges slow
for small enough but positive $\epsil$ and $\delta$.
}
\com This can be proved by a method analogous to that of \cite{Berman-Simon}.
The general proof is in \cite{Toom-92}.
\Qun{Flattening-attractors-?}{
Does Flattening have any attractors besides
the bunches of \az and \ao?
}
\subsec{Quasi-Stability}
In section \ref{slow&fast} we have defined
slow convergence as a finite analogue of non-ergodicity.
In the same vein we can define quasi-stability
as a finite analogue of stability. Take some $a \in A$.
Take a pattern in which `all $a$-s' is a trajectory
and call this trajectory {\em quasi-stable} for the given pattern
if there exist $\epsil >0$ and $Q>1$ such that for the processes
resulting from the initial condition `all $a$-s'
$$
\forall \kappa,t~:~ t \leq Q^\kappa
\implies Prob(x_{s,t} \neq a) \leq\epsil.
$$
\Qex{quasi-slow}{
Prove that if an ergodic system has two non-equivalent
quasi-stable trajectories, it converges slow.
}
\Qpr{quasi}{
Prove that \az is quasi-stable for all
patterns for which the conditions of Theorem 3 hold.
}
\com \cite{Berman-Simon} proved this for the NEC-Voting.
The general proof is in \cite{Toom-92}.
\sec{Standard Votings}
For any odd $n$, the Boolean function $Voting$ with $n$ arguments
equals $1$ iff the majority of its arguments are ones.
Equivalently, $Voting$ equals $0$ iff the majority of its arguments are zeros.
Let us consider the following systems which use $Votings$:
a deterministic Voting is a deterministic system which has $Voting$
as its transition function, and a random Voting is a system which
results from a deterministic Voting by adding the $\epsil$-$\delta$ noise.
The first computer simulations of random Votings were
done in \cite{VPP} for the symmetric case $\epsil=\delta$.
\subsec{Symmetrical Votings}
Call a standard Voting {\em symmetric} if $N(\origin)$
is symmetric with respect to $ray(\bar 0,-1)$.
\Qex{Votings-noneroders}{
Prove that no deterministic symmetric Voting is an eroder
and that all of them have $\sigma=ray(\bar 0,-1)$.
}
\Qex{Votings-no-attractors}{
Prove that deterministic symmetric Votings have no fixed attractors.
}
\Qpr{Votings-oneway}{
Prove that all random symmetric Votings are ergodic if
$\epsil=0$ and $\delta>0$ (or $\epsil>0$ and $\delta=0$).
}
\com A similar statement is proved
as Proposition 1 in \cite{Toom-76unstable}.
\Qex{Votings-ergodicity}{
For which values of $\epsil$ and $\delta$
does our Theorem 2 prove ergodicity of random symmetric Votings?
}
\Qun{Votings-1dim}{
Prove that all one-dimensional Votings with a non-degenerate noise
are ergodic.
}
\com Computer simulations \cite{VPP} of one-dimensional symmetric Votings with
\FORM{line-vot}{N(\origin)=\{(s,-1)~:~s=-1,~1,~0\}}
showed ergodicity in any non-degenerate case.
It is very plausible that in fact all non-degenerate
one-dimensional Votings are ergodic.
However, a rigorous proof \cite{Gray-82} exists now
only for the continuous-time analogue of (\ref{line-vot}).
Let us call a symmetric Voting a {\em Windrose}
if its neighbor vectors are non-coplanar.
\Qun{Windroses-nonergodic-?}{
Prove that all Windroses are non-ergodic with small enough $\epsil=\delta$.
}
\Qun{Windroses-some-?}{
Prove ergodicity of some Windrose with
some positive values of $\epsil$ and $\delta$.
}
\com Both of the last problems are unsolved even
for the following simplest two-dimensional cases:
\begin{itemize}
\item[] 5-Windrose has $n=5$ and
$N(\origin)=\{(s,-1)~:~s=(0,0),(-1,0),(1,0),(0,-1),(0,1)\}$.
\item[] 9-Windrose has $n=9$ and
$N(\origin)=\{(i,j,-1)~:~i,j=-1,0,1\}$.
\end{itemize}
\subsec{NEC and other Non-Symmetric Votings}
One of the non-symmetric Votings, the NEC-Voting, has attracted
special attention. Here NEC means North, East, Center.
It has $N(\origin)=\{(s,-1)~:~s=(0,0),(1,0),(0,1)\}$.
NEC-Voting was first introduced in \cite{VPP} with a symmetric noise,
and the results of computer simulation showed that it
is non-ergodic if $\epsil=\delta$ is small enough.
Now this non-ergodicity is proved for any small enough $\epsil$ and $\delta$:
it follows from our Theorem 3 because NEC-Voting is an eroder.
Condition 3) is fulfilled with
$$
r=3,~~~L_1(i,j,t)=-3i-t,~~~L_2(i,j,t)=-3j-t,~~~L_3(i,j,t)=3i+3j+2t.
$$
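The erosion property behind this argument is easy to observe numerically. The following Python sketch (names are mine) iterates the deterministic NEC-Voting on the finite set of cells in state one in a sea of zeroes, using that the new state of a cell $(i,j)$ is the majority of the old states at $(i,j)$, $(i+1,j)$ and $(i,j+1)$:

```python
def nec_step(ones):
    """One step of deterministic NEC-Voting, applied to the (finite) set
    of cells in state one in a sea of zeroes: the new state of (i, j) is
    the majority of the old states at (i, j), (i+1, j) and (i, j+1)."""
    candidates = set()
    for (i, j) in ones:
        # only a cell that sees at least one 'one' can become a 'one'
        candidates.update({(i, j), (i - 1, j), (i, j - 1)})
    return {(i, j) for (i, j) in candidates
            if ((i, j) in ones) + ((i + 1, j) in ones) + ((i, j + 1) in ones) >= 2}

def erosion_time(ones, bound=1000):
    """Number of steps until the island disappears (None if it has not
    disappeared after `bound` steps)."""
    for t in range(bound):
        if not ones:
            return t
        ones = nec_step(ones)
    return None
```

For instance, a single one disappears in one step, and a $2\times 2$ island of ones disappears in three steps.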
\Qex{NEC-01}{
Prove that in the deterministic NEC-Voting both bunches
of \az and \ao are attractors.
}
\Qpr{NEC-fixed}{
Prove that although deterministic NEC-Voting has
infinitely many fixed initial conditions, it has only two bunches
of attractors: those of \az and \ao.
}
\Qun{NEC-attractors-?}{
Does NEC-Voting have any attractors besides the bunches of \az and \ao?
}
\Qpr{Voting-3}{
Consider any Voting with $\mem=1$ and $n=3$
whose neighbor vectors are non-coplanar.
Prove that it is ergodic and non-ergodic with
the same values of $\epsil$ and $\delta$ as NEC-Voting.
}
\Qpr{NEC-use}{
Use NEC-Voting to construct deterministic systems with
at least $m$ stable fixed trajectories for any natural $m$.
In more detail: For any finite set of periodic configurations
in $Space=Z^\DIM$, propose a system for which all of the
corresponding fixed configurations are stable trajectories.
(You can even propose a monotonic system with this property.)
}
\com See \cite{Toom-80}.
\Qpr{Voting-eroder}{
Prove that a standard Voting is not an eroder iff there exists
a ray $\rho \subset R^{\DIM+1}$ (whose starting point is $\origin$)
such that for any hyperplane $\pi$ that contains $\rho$,
$$
BEL > DIF_{out} + DIF_{in}.
$$
Here $BEL$ is the number of neighbor vectors that belong to $\rho$,~~~
$DIF_{out} > 0$ is the difference between
the number of neighbor vectors on one side of $\pi$ and
the number of neighbor vectors on the other side of $\pi$,~~~
$DIF_{in } > 0$ is the difference between
the number of neighbor vectors that belong to $\pi$
and are on one side of $\rho$ and
the number of neighbor vectors that belong to $\pi$
and are on the other side of $\rho$.
}
In the case when all the neighbor vectors are mutually non-collinear,
this means that there exists such a neighbor vector $V \in R^{\DIM +1}$
that all the other neighbor vectors form pairs, the two vectors of
each pair having the opposite $V$-direction projections to $Space$.
\com Attention was drawn to the incompatibility of symmetry
and erosion in \cite{Pippenger}.
\sec{One-Dimensional Conservators}
Let a {\em forget-me-not} stand for a non-ergodic non-degenerate process.
For $\DIM >1$ forget-me-nots certainly exist,
Flattening and NEC-Voting for example.
The question whether one-dimensional forget-me-nots exist
was very intriguing for years.
The {\em positive rates conjecture} proposed by several authors claimed
that all non-degenerate one-dimensional random systems
are ergodic (see for example \cite{Liggett}, Chapter 4, section 3).
Now this seems to be refuted:
after some preliminary work \cite{Zirelson,Kurdyumov},
P\'{e}ter G\'{a}cs published an elaborate construction \cite{Gacs-86},
which presents a one-dimensional forget-me-not,
but his construction is not yet examined sufficiently,
and we shall not discuss it here.
Our purpose is more modest. Note that from the practical point of view,
it is not always necessary to remember forever:
it may be sufficient to keep information for a finite but long time.
So it seems worthwhile to look for deterministic systems
which converge slow in the presence of a small noise.
With this idea in mind, the term `conservator' and first
examples of conservators were proposed in \cite{GKL}.
A {\em conservator} stands for a deterministic system
with at least two different fixed initial attractors.
The first one-dimensional conservators were proposed in \cite{GKL};
the following is the simplest of them.
\subsec{Soldiers}
This is a standard system in which
$$
N(\origin)=\{(s,-1)~:~s=-3,-1,~0,~1,~3\}.
$$
Let us call every point of $Z$ a {\em soldier}. Every soldier has only
two possible states, $-1$ and $1$. The transition function equals
$$
\tr(a_{-3}, a_{-1}, a_0, a_1, a_3)=
\cases{-1 &if $a_0=~~1,~~a_1~=a_3~~=-1$,\cr
~~1 &if $a_0=-1,~a_{-1}=a_{-3}=~~1$,\cr
~a_0 &otherwise.}
$$
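A direct transcription of this transition function into Python (names are mine; the circle $Z_\kappa$ stands in for $Z$) may be useful for experiments with Soldiers:

```python
def soldier_tr(am3, am1, a0, a1, a3):
    """The Soldiers transition function on states {-1, +1}."""
    if a0 == 1 and a1 == -1 and a3 == -1:
        return -1          # turns minus
    if a0 == -1 and am1 == 1 and am3 == 1:
        return 1           # turns plus
    return a0              # keeps its state

def soldiers_step(x):
    """One synchronous step on the circle Z_k, k = len(x)."""
    k = len(x)
    return [soldier_tr(x[(i - 3) % k], x[(i - 1) % k], x[i],
                       x[(i + 1) % k], x[(i + 3) % k]) for i in range(k)]
```

Both \ao and \amo are fixed under this step, and the step commutes with the flip $(a(x))_s = -x_{-s}$, which is the symmetry discussed below.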
\Qex{Soldiers-non-monotonic}{
Prove that the Soldiers system is non-monotonic,
whether we assume $-1 \prec 1$ or $-1 \succ 1$.
}
\Qex{Soldiers-symmetric}{
Prove that the Soldiers system is symmetric in the following sense:
If we define a one-to-one map $a~:~X \mapsto X$ by the rule
$(a(x))_{s,t}=-x_{-s,t}$ then $\forall x_I~:~ a(\TR(x_I))=\TR(a(x_I))$.
}
\Qex{Soldiers-half}{
Prove that if the Soldiers system has only one invariant process $\mu_{inv}$ then
$$
\mu_{inv}(x_i=-1)=\mu_{inv}(x_i=1)=1/2.
$$
}
\com This follows from the symmetry of Soldiers.
\Qpr{Soldiers-conservator}{
Prove that the Soldiers system is a conservator.
In more detail: prove that both \ao and \amo are
attractors. (From symmetry these two facts are equivalent.)
}
\com This was first proved in \cite{GKL} and reinforced in \cite{deSa-Maes}.
\Qun{Soldiers-attractors-?}{
Is it true that \ao and \amo are the only
non-equivalent attractors for the Soldiers system?
}
\com \cite{deSa-Maes} proved that \ao and \amo
are the only fixed attractors of Soldiers.
See also \cite{Li}.
\Qun{Soldiers-oneway-?}{
Prove ergodicity of Soldiers with the one-way
$\epsil$-$\delta$ noise in which $\epsil=0$ or $\delta=0$.
}
\subsec{2-Line Voting} \label{2-Line Voting}
This is a non-standard Voting with 3 neighbors.
Here $Space=Z\times \{1,-1\}$, that is, automata form two parallel rows
and are indexed $(i,j)$ where $i \in Z$ and $j \in \{1,-1\}$, and
$$
N(i,j,t)=\{(s,t-1)~:~s=(i-2j,j),(i-j,j),(i,-j)\}.
$$
\Qex{2-line-uniform}{
Show that 2-Line Voting is uniform, but non-commutative.
}
\Qex{2-line-01}{
Prove that both \az and \ao are attractors for 2-Line Voting.
}
\Qpr{2-line-oneway}{
Prove that 2-Line Voting is ergodic in the presence of a one-way noise,
which turns zeros into ones with probability
$\epsil$ and never turns ones into zeros (or vice versa).
}
\com This can be proved by the method of \cite{Toom-76unstable}.
Due to monotonicity, it is sufficient to take only \az and \ao as
initial states, because all the others are between them.
\Qpr{2-line-fixed}{
Does 2-Line Voting have any other fixed attractors
besides the bunches of \az and \ao?
}
\answer No. To prove it, note that there are only six fixed states:
~~1) \az,
~~2) \ao,
~~3) `zeroes if $j= 1$, ones if $j=-1$',
~~4) `zeroes if $j=-1$, ones if $j= 1$',
~~5) `zeroes if $i$ is even, ones if $i$ is odd' and
~~6) `zeroes if $i$ is odd, ones if $i$ is even'.
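One can confirm by a direct computation that each of these six states is fixed. A Python sketch (names are mine), with $Z$ replaced by the circle $Z_k$ of even length $k$ so that the parity states make sense:

```python
def two_line_step(x, k):
    """One step of 2-Line Voting on Z_k x {1, -1}; x maps (i, j) to {0, 1}.
    The new state of automaton (i, j) is the majority of the old states
    at (i - 2j, j), (i - j, j) and (i, -j)."""
    return {(i, j): int(x[((i - 2 * j) % k, j)]
                        + x[((i - j) % k, j)]
                        + x[(i, -j)] >= 2)
            for (i, j) in x}

# the six fixed states on the circle of even length k
k = 8
sites = [(i, j) for i in range(k) for j in (1, -1)]
fixed_states = [
    {s: 0 for s in sites},                       # all zeroes
    {s: 1 for s in sites},                       # all ones
    {(i, j): int(j == 1) for (i, j) in sites},   # ones on the row j = 1
    {(i, j): int(j == -1) for (i, j) in sites},  # ones on the row j = -1
    {(i, j): i % 2 for (i, j) in sites},         # ones at odd i
    {(i, j): 1 - i % 2 for (i, j) in sites},     # ones at even i
]
```

Applying `two_line_step` to each state in `fixed_states` returns the state unchanged.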
\Qun{2-line-attr-?}{
Does 2-Line Voting have any other attractors besides the bunches of \az and \ao?
}
\subsec{Relaxation Time}
\cite{GKL} reported that if the simulation of Soldiers begins in \ao or \amo,
the system remains in the vicinity of this state for a long time
(in fact, all the computer time which was available to the authors).
Moreover, the system approached one of these states
if started from a chaotic initial condition.
A recent computer simulation \cite{deSa-Maes} of Soldiers suggests ergodicity
in all non-degenerate cases (which indeed is very plausible), but it also
suggests some non-trivial dependence of the rate of convergence on $\epsil$.
In more detail: \cite{deSa-Maes} defines the `relaxation time'
as the mean time at which the number of ones first gets into the range
$(\kappa/2 -\sqrt{\kappa},~\kappa/2+\sqrt{\kappa})$
when the process starts from the state \amo.
This definition is convenient for all systems with $\Xo=\{-1,1\}$ and the kind
of symmetry which was shown for Soldiers in exercise \ref{Soldiers-symmetric}.
The computer results of \cite{deSa-Maes} seem to show that
the relaxation time for Soldiers does not depend on $\kappa$
(which corresponds to fast convergence in our sense),
but behaves as $\sim e^{const/\epsil}$ for $\epsil\tz$
which is enormously greater than that of $\D_\equiv$ (see the next exercise).
This means that the Soldiers system, although ergodic, still
effectively checks dissent when $\epsil$ is small enough.
\Qex{identity}{
Let $\D_\equiv$ stand for the identity standard deterministic system
with $Space=Z$, in which automata never change their states.
Show that the relaxation time for $\D_\equiv$ with an
$\epsil$-$\delta$ noise is $O(1/(\epsil+\delta))$
when $\epsil$ and $\delta$ tend to 0.
}
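A crude way to see this behavior is to follow the expected fraction of ones, which for $\D_\equiv$ with the $\epsil$-$\delta$ noise satisfies $p_{t+1}=(1-\epsil-\delta)p_t+\delta$ (labeling assumption: $\delta$ is the rate of the noise toward ones). A Python sketch, intended for $\epsil=\delta$ so that the loop terminates:

```python
import math

def identity_relaxation(eps, delta, k):
    """Mean-field estimate of the relaxation time of the identity system
    with eps-delta noise: iterate the expected fraction p of ones
    (start: all minus ones, p = 0) until the expected number of ones
    enters the range (k/2 - sqrt(k), k/2 + sqrt(k)).
    Intended for eps = delta, so that the loop terminates."""
    p, t = 0.0, 0
    while abs(k * p - k / 2) >= math.sqrt(k):
        p = (1 - eps - delta) * p + delta   # one step of the mean dynamics
        t += 1
    return t
```

Halving $\epsil=\delta$ roughly doubles the computed time, in agreement with the $O(1/(\epsil+\delta))$ estimate of the exercise.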
\Qun{relax-2-Line}{
How do relaxation times of various Votings, especially of 2-Line Voting,
behave as functions of $\epsil$ when $\epsil\tz$ ?
}
{\bf Conjecture:} Relaxation times of one-dimensional votings
with $N(\origin)=\{(i,-1),~i \in [-R,~R]\}$ behave as
$O((\epsil+\delta)^{-(R+1)})$ and the relaxation time
of 2-Line Voting behaves like that of Soldiers.
\subsec{Further Problems}
\Qpr{no-one-dimensional}{
Prove that there are no commutative one-dimensional
monotonic conservators with $|\Xo|=2$.
}
{\bf Sketch of Proof:} Assume the contrary and come to a contradiction.
Formula (\ref{dimensional}) here becomes
$$
Space=Z \times S=\{ (i,j)~:~i \in Z,~j \in S \}.
$$
Define two initial configurations:
$$
L(i,j)=\cases{0, &if $i<0$,\cr ~~1 &otherwise,}\qquad
R(i,j)=\cases{1, &if $i<0$,\cr ~~0 &otherwise.}
$$
In each of the two resulting deterministic processes the \rift\ between
zeroes and ones moves with a constant velocity; denote these velocities
by $V_{\LEFT}$ and $V_{right}$.
\begin{assert}
\item If $V_{\LEFT} < V_{right}$ then the bunch of \ao is the only attractor.
\item If $V_{\LEFT} > V_{right}$ then the bunch of \az is the only attractor.
\item If $V_{\LEFT} = V_{right}$ then there are no attractors \qed
\end{assert}
Note that we did more than the problem asked: we described
all attractors completely, rather than only the fixed ones.
Thus, to have a conservator we need either a greater-than-one
dimension, or non-monotonicity, or non-commutativity, or $|\Xo|>2$.
In fact, in either case there is a conservator.
For a greater dimension it is Flattening or NEC-Voting.
For non-monotonicity it is the Soldiers system.
For non-commutativity it is the 2-Line Voting.
An example for $|\Xo|>2$ is the subject of the following problem.
\Qpr{3-conservator}{
Propose a one-dimensional standard monotonic
deterministic system with $|\Xo|=3$ which is a conservator.
}
\com You can take $\Xo=\{-1,~0,~1\}$ and arrange the interaction
so that both \ao and \amo are attractors.
You can make this system symmetric in a sense similar to Soldiers.
\sec{Chaos Approximation}
Let $\mem=1$ and let $\M^1$ be the set of normed measures on $\Xo^{Space}$.
For this case let us alternate our system's operator $\P~:~\M^1 \mapsto \M^1$
with the map $Chaos:\M^1 \mapsto \M^1$ which turns
every measure into the product measure with the same
distribution of every single variable $x_s,~s\in Space$.
Thus, $Chaos(\P)$, that is $Chaos$ applied after $\P$,
is the new operator which hopefully approximates $\P$.
The chaos approximation is an exact solution for the Tree system
which we defined in the answer to exercise \ref{neighbors}.
We shall illustrate the chaos approximation by two examples.
\subsec{Percolations}
Take any Percolation operator $\P(\theta)$ with $\mem=1$ and take
the measure concentrated in \az as the initial condition.
Let $P_t(\theta)$ and $\tilde P_t(\theta)$ stand for
the percentage of ones at the $t$-th step in the measures
$\P^t(\az)$ and $(Chaos(\P))^t(\az)$.
The sequence of $\tilde P_t(\theta)$ satisfies the simple iteration formula:
$$
\tilde P_{t+1}(\theta)= \theta +(1- \theta ) \tilde P^n_t(\theta).
$$
This allows us to examine the behavior of
$\tilde P_\infty(\theta)=\lim_{t\ti}\tilde P_t(\theta)$
and to compare it with the behavior of
$P_\infty(\theta)=\lim_{t\ti}P_t(\theta)$.
\Qex{chaos-perc-growth}{
Prove that when $\theta$ grows from 0 to 1,
$\tilde P_\infty(\theta)$ starts at 0 as $\theta+o(\theta)$,
grows monotonically and becomes equal to 1 at
the critical value $\theta^*_{chaos}=1-\frac{1}{n}$.
}
This exercise shows that, crude as it is, the chaos approximation
may behave similarly to the original process.
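The recursion for $\tilde P_t(\theta)$ is also easy to iterate numerically; the following Python sketch (function name mine) approximates $\tilde P_\infty(\theta)$ and can be used to observe the critical value $1-\frac{1}{n}$:

```python
def chaos_percolation_limit(theta, n, steps=100000, tol=1e-12):
    """Iterate the chaos-approximation recursion
    P <- theta + (1 - theta) * P**n, starting from P = 0, and return
    a numerical approximation of its limit P_infinity(theta)."""
    p = 0.0
    for _ in range(steps):
        q = theta + (1 - theta) * p ** n
        if abs(q - p) < tol:
            return q
        p = q
    return p
```

For $n=3$, for example, the limit stays strictly below 1 for $\theta<\frac{2}{3}$ and equals 1 above this value.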
\Qun{chaos-perc-tend-?}{
Let $\P$ be a Percolation operator.
Let us apply $Chaos$ after every $T$ steps of $\P$.
Does the critical point of $Chaos(\P^T)$ tend to that of $\P$ when $T \ti$ ?
}
\subsec{Votings}
It is instructive to examine the chaos approximation for random Votings.
As before, $\tilde P_t$ stands for the percentage of ones at the $t$-th step.
The iteration formula for $n=3$ is
$$
\tilde P_{t+1}=
\delta +(1-\epsil -\delta )(\tilde P_t^3 +3\tilde P_t^2 (1-\tilde P_t )).
$$
Let us compare two sequences generated by this formula with two
different initial conditions $\tilde P_0=0$ and $\tilde P_0=1$.
We must distinguish the case when the limits of these sequences
are equal from the case when they are different.
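A numerical sketch of this comparison in Python (names are mine; labeling assumption: $\delta$ is the rate of the noise toward ones and $\epsil$ toward zeros):

```python
def chaos_voting_limits(eps, delta, steps=10000):
    """Chaos approximation of a Voting with 3 neighbors: iterate
    P <- delta + (1 - eps - delta) * (P**3 + 3 * P**2 * (1 - P))
    from P = 0 and from P = 1 and return the pair of limits.
    Labeling assumption: delta is the rate of the noise toward ones,
    eps the rate toward zeros."""
    def step(p):
        return delta + (1 - eps - delta) * (p ** 3 + 3 * p ** 2 * (1 - p))
    limits = []
    for p in (0.0, 1.0):
        for _ in range(steps):
            p = step(p)
        limits.append(p)
    return tuple(limits)
```

For small equal noise rates the two limits stay far apart, while for $\epsil+\delta$ close to 1 they coincide.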
\Qex{chaos-voting-same}{
Show that these limits are equal if $\epsil+\delta$ is close
enough to 1 and are different if $\epsil$ and $\delta$ are small enough.
}
We see that the behavior of the chaos approximation of a random Voting
with 3 neighbors is similar to that of NEC-Voting, but different from that of
one-dimensional Votings, which seem to be ergodic in all non-degenerate cases.
\Qex{chaos-write}{
Write and examine the iteration formula for the chaos approximation of
a voting with 5 neighbors and compare it with the behavior of the 5-Windrose.
}
\begin{biblike}{10}{Books and surveys}
\bibitem{Rockafellar} R. T. Rockafellar. Convex Analysis.
Princeton Univ. Press, 1970.
\bibitem{Griffeath-78} David Griffeath.
Coupling Methods for Markov Processes. {\it Studies in Probability
and Ergodic Theory; Advances in Mathematics, Supplementary Studies},
Vol. 2, pp. 1-43. Academic Press, New York, 1978.
\bibitem{Liggett} Thomas M. Liggett.
Interacting Particle Systems. N.Y., Springer-Verlag, 1985.
\bibitem{Durrett} Richard Durrett.
Lecture Notes on Particle Systems and Percolation.
Wadsworth \& Brooks/Cole Statistics/Probability Series, 1988.
\bibitem{Survey} Discrete Local Markov Systems. A. L. Toom, N. B. Vasilyev,
O. N. Stavskaya, L. G. Mityushin, G. L. Kurdyumov and S. A. Pirogov.
{\it Stochastic Cellular Systems : ergodicity, memory, morphogenesis}.
Ed. by R. Dobrushin, V. Kryukov and A. Toom.
Nonlinear Science: theory and applications,
Manchester University Press, 1990.
\bibitem{Cellular} Cellular Automata, Theory and Experiment. Amsterdam, 1990.
%\bibitem{Media} Mathematics of Random Media.
%Lectures in Applied Mathematics. Volume 27.
%Werner E. Kohler, Benjamin S. White, Eds., 1991.
\end{biblike}
\begin{biblike}{10}{Papers}
\bibitem{Stavskaya} O. N. Stavskaya and I. I. Piatetski-Shapiro.
[Uniform networks of spontaneously active elements.]
{\it Problemy Kibernetiki}, V. 20 (1968), pp. 91-106 (in Russian).
\bibitem{Toom-68} Andrei Toom.
A family of uniform nets of formal neurons.
{\it Soviet Math. Dokl.}, V. 9 (1968), N. 6, pp. 1338-1341.
\bibitem{Vaserstein} Leonid Vaserstein. Markov Processes over
Denumerable Products of Spaces, Describing Large Systems of Automata.
{\it Problems of Information Transmission}, v. 5 (1969), N. 3, pp. 47-52.
\bibitem{VPP}
N. B. Vasilyev, M. B. Petrovskaya and I. I. Piatetski-Shapiro.
Simulation of voting with random errors.
{\it Automatica i Telemekhanika}, V. 10 (1969), pp. 103-107 (in Russian).
\bibitem{VL} Leonid Vaserstein and Andrei Leontovitch. Invariant Measures
of Certain Markov Operators Describing a Homogeneous Random Medium.
{\it Problems of Information Transmission}, V. 6 (1970), N. 1, pp. 61-69.
\bibitem{VP} N. B. Vasilyev and I. I. Pyatetskii-Shapiro.
The Classification of One-Dimensional Homogeneous Networks.
{\it Problems of Information Transmission}, V. 7 (1971), N. 4, pp.340-346.
\bibitem{Toom-74} Andrei Toom.
Non-ergodic multidimensional systems of automata.
{\it Problems of Information Transmission}, v. 10 (1974), pp. 239-246.
\bibitem{Galperin} Gregory Galperin.
One-Dimensional Local Monotone Operators With Memory.
{\it Soviet Math. Dokl.}, V. 17 (1976), N. 3, pp.688-692.
\bibitem{Toom-76monot} Andrei Toom. Monotonic Binary Cellular Automata.
{\it Problems of Information Transmission}, V. 12 (1976), N. 1, pp. 33-37.
\bibitem{Toom-Mityushin} Andrei Toom and Leonid Mityushin.
Two Results Regarding Non-computability for Univariate Cellular Automata.
{\it Problems of Information Transmission}, V. 12 (1976), N. 2, pp. 135-140.
\bibitem{Toom-76unstable} Andrei Toom.
Unstable Multicomponent Systems.
{\it Problems of Information Transmission}, V. 12 (1976), N. 3, pp. 220-225.
\bibitem{GKL} P\'{e}ter G\'{a}cs, George Kurdjumov and Leonid Levin.
One-Dimensional Homogeneous Media, which Erode Finite Islands.
{\it Problems of Information Transm.}, V. 14 (1978), N. 3, pp. 223-226.
\bibitem{Zirelson} Boris S. Zirelson. Non-Uniform Local Interaction
Can Produce `Far-Range Order' in a One-Dimensional System.
{\em Theor. Probability Appl.}, 21 (3), 681-683, 1976.
\bibitem{Kurdyumov} George Kurdyumov.
An example of a nonergodic one-dimensional homogeneous
random medium with positive transition probabilities.
{\it Soviet Math. Dokl.}, V. 19 (1978), N. 1, pp. 211-214.
\bibitem{Petri} N. V. Petri.
The unsolubility of the problem of discerning of annuling iterative nets.
{\it Researches in the Theory of Algorithms and Mathematical Logic.}
Moscow, Nauka, 1979 (in Russian).
\bibitem{Toom-80} Andrei Toom.
Stable and Attractive Trajectories in Multicomponent Systems.
{\it Multicomponent Random Systems, (R.L.Dobrushin, Ya.G.Sinai, Eds.)}
Advances in Probability and Related Topics, V. 6 (1980), Dekker, pp. 549-575.
\bibitem{Toom-82} Andrei Toom. [Estimations for measures that describe
the behavior of random systems with local interaction.
{\it Interactive Markov Processes and their Application to the
Mathematical Modelling of Biological Systems}].
Pushchino, 1982, pp. 21-33 (in Russian).
\bibitem{Toom-86} Andrei Toom. On Reliable Simulation in Real Time.
{\it Preprints of the 1-st World Congress of the Bernoulli Society
for Mathematical Statistics and Probability Theory}, 1986, v. 2, p. 676.
\bibitem{Pippenger} Nicholas Pippenger.
Symmetry and Self-Repair (Extended Abstract).
\bibitem{Gray-82} Lawrence F. Gray.
The Positive Rates Problem for Attractive Nearest Neighbor Spin Systems on $Z$.
{\it Z. Wahrscheinlichkeitstheorie verw. Gebiete}, V. 61 (1982), pp. 389-404.
\bibitem{Bennett-Grinstein} Charles H. Bennett and G. Grinstein.
Role of Irreversibility in Stabilizing Complex and Nonergodic Behavior
in Locally Interacting Discrete Systems.
{\it Phys. Rev. Letters}, v. 55 (1985), n. 7, pp. 657-660.
\bibitem{Gacs-86} P\'{e}ter G\'{a}cs.
Reliable Computation with Cellular Automata.
{\it Journal of Computer and System Sciences},
V. 32, N. 1 (February 1986), pp. 15-78.
\bibitem{LMS} Joel L. Lebowitz, Christian Maes and Eugene R. Speer.
Statistical Mechanics of Probabilistic Cellular Automata. (Preprint.)
\bibitem{Berman-Simon} Piotr Berman and Janos Simon.
Investigations of Fault-Tolerant Networks of Computers
(preliminary version), 1988.
\bibitem{BGN-1}
David J. Barsky, Geoffrey Grimmett and Charles M. Newman.
Dynamic Renormalization and Continuity of the Percolation Transition
in Orthants. To appear in {\it Spatial Stochastic Processes},
K. Alexander and J. Watlings, Eds., Birkhauser, Boston.
\bibitem{BGN-2}
David J. Barsky, Geoffrey Grimmett and Charles M. Newman.
Percolation in Half-Spaces: Equality of Critical Densities
and Continuity of the Percolation Probability.
To appear in {\em Prob. Theory and Related Fields.}
\bibitem{BG-1}
Carol Bezuidenhout and Geoffrey Grimmett.
The Critical Contact Process Dies Out. (Preprint.)
\bibitem{BG-2}
Carol Bezuidenhout and Geoffrey Grimmett.
Critical Attractive Spin Systems. (Preprint.)
\bibitem{BG-3}
Carol Bezuidenhout and Lawrence Gray.
Critical Attractive Spin Systems. (Preprint.)
\bibitem{Bramson-Gray} Maury Bramson and Lawrence Gray.
A Useful Renormalization Argument. (Preprint.)
\bibitem{deSa-Maes} Paula Gonzaga de Sa and Christian Maes.
The Gacs-Kurdyumov-Levin Automaton Revisited.
(To appear in {\em Journal of Statistical Physics}.)
\bibitem{Li} Wentian Li.
Non-Local Cellular Automata.
1991 Lectures in Complex Systems, SFI Studies in the Sciences of Complexity.
Lect. Vol. IV, Eds. L. Nadel \& D. Stein, Addison-Wesley, 1992.
\bibitem{Toom-92} Andrei Toom.
A Critical Phenomenon in Growth Systems.
(Submitted to {\em Journal of Statistical Physics}.)
\end{biblike}
\end{document}