0$,
$\a^N_{Nt}$ converges in probability to a deterministic measure
$u(x,t)dx$ as $N\to\infty$. The function $u(x,t)$
is the unique entropy solution of {\rm (1.18)} with
$$f(u)=\frac12\lbrak 1+u-\bigl[(1+u)^2-4pu\bigr]^{1/2}\rbrak,$$
and the concave flux function $f(u)$ is the monotone conjugate
of $-h(z)$.
\endproclaim
A coordinate change transforms the
curve $w=h(z)$ into an arc of the ellipse
$$\frac{x^2}p+\frac{y^2}{1-p}=1.$$
Theorem 3A was first proved as the ``arctic ellipse
theorem'' for domino tilings of the Aztec diamond by Jockusch,
Propp, and Shor \cite{JPS}. They employed a Rost-like analysis of the
discrete time exclusion process, and covered the case
$p\le 1/2$. The case $p>1/2$ was done by
Cohn, Elkies, and Propp \cite{CEP} with complex analytic
tools.
\hbox{}
\subhead 3.2. Proofs
\endsubhead
We start by outlining the proof of some facts
about a discrete time queueing system.
\proclaim{Lemma 3.1} The i.i.d. queues $(\eta_i)_{i\in\mmZ}$ with
common distribution defined by {\rm (3.1)} are invariant for the
customer process $\eta(\cdot)$ with jump probability $p$.
In equilibrium,
a customer passing through the system experiences i.i.d.
waiting times $\{\sigma_i\}$ with common distribution
specified by $\sigma_i-1\sim\text{Geom}((p-r)/(1-r))$.
\endproclaim
\demo{Proof} The argument follows the development of \cite{Ke}
for continuous time queues.
Consider first a single server queue operating
as follows:
During the time interval $(\tau-1,\tau)$, $\tau=1,2,3,\ldots$,
one service
is completed with probability $p$ (provided the queue is nonempty)
and one new customer arrives
with probability $r$. An easy calculation shows that the distribution $P$ of (3.1)
is invariant for this Markov chain, and in equilibrium
the departure process is Bernoulli($r$). Next one verifies
that, in equilibrium, the size of the queue at time $\tau$
is independent of the departure process up to time $\tau$.
From this it follows that the i.i.d. queues are invariant
for the process. To describe the waiting times,
one first checks that a customer arriving
in an equilibrium system experiences a waiting time
distributed as $\sigma_0$ at the first server.
That waiting times at subsequent servers
are independent of $\sigma_0$
follows from proving that the departure process up
to the random time $\sigma_0-1$ is independent of $\sigma_0$.
\qed
\enddemo
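Lemma 3.1 can be probed numerically. The sketch below simulates the single server chain; the specific form of (3.1) is an assumption here (it is not displayed in this section): $P(\eta=0)=(p-r)/p$ with a geometric tail of ratio $r(1-p)/(p(1-r))$, the form consistent with the equilibrium computation in the proof of Theorem 3A.

```python
import random

def step(q, p, r, rng):
    """One time step: a service completes with probability p (if the
    queue is nonempty), and one new customer arrives with probability r."""
    served = 1 if q > 0 and rng.random() < p else 0
    arrived = 1 if rng.random() < r else 0
    return q - served + arrived, served

def sample_stationary(p, r, rng):
    """Sample from the assumed form of (3.1): P(eta=0) = (p-r)/p and,
    conditionally on eta >= 1, a geometric tail of ratio
    rho = r(1-p)/(p(1-r))."""
    rho = r * (1 - p) / (p * (1 - r))
    if rng.random() < (p - r) / p:
        return 0
    q = 1
    while rng.random() < rho:
        q += 1
    return q

rng = random.Random(1)
p, r, steps = 0.6, 0.3, 200_000
q = sample_stationary(p, r, rng)
zeros = departures = 0
for _ in range(steps):
    q, served = step(q, p, r, rng)
    zeros += (q == 0)
    departures += served

# In equilibrium P(queue empty) = (p-r)/p = 0.5, and the departure
# process is Bernoulli(r), so the empirical rate should be near 0.3.
print(zeros / steps, departures / steps)
```

Over the long run the empirical frequencies stay close to their equilibrium values, in line with the invariance claim of the lemma.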
The server process $z(\cdot)$ has the same state space
$\ZZ$ as in (1.7). It evolves according to this rule:
At each time step $\tau$, each server $z_k$ jumps one step
to the left with probability $p$, provided
$z_{k-1}(\tau-1)\le z_k(\tau-1)-1$.
Its construction via the graphical
representation is the same as in Example 1 except that
now the Poisson point processes of potential jump
times are replaced by Bernoulli($p$) variables that
signify possible jumps at integer times $\tau$.
The construction of the variables $\xi^\ell(i,\tau)$
in terms of the special server process $x(\cdot)$ is
done as for Example 1, except for the discrete time:
Initially $x_i=\ell$ for all $i\ge 0$, and $\xi^\ell(i,\tau)$
is the number of customers served by
(equivalently, the displacement of) $x_i$ by time $\tau$.
Then we can think of $T_{0,(n,k)}$ as the time when the $k$th
server $x_{k-1}$ has served $n$ customers. Comparing this
with the equilibrium system gives moment bounds:
If instead of queueing up immediately at server $x_0$
with all other queues empty, customers arrive
in a Bernoulli($r$) stream to a system in equilibrium
with parameter $r$,
we find that
$$ET_{0,(n,k)}\le n\cdot\frac1{r}+k\cdot\frac{1-r}{p-r}.
$$
The first term on the right is the expected time of the $n$th
success in a sequence of Bernoulli($r$) trials, and the
second term the expected sum of the waiting times $\sigma_0+\cdots+\sigma_{k-1}$
from Lemma 3.1. For $\la=(\la_1,\la_2)\in\mmN\times\mmN$
and $n\in\mmN$ we get, by optimizing over $r\in(0,p)$,
$$\frac1n ET_{0,n\la}\le
p^{-1}\bigl[\la_1+\la_2+2\sqrt{(1-p)\la_1\la_2}\,\bigr].
\tag 3.2
$$
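For completeness, here is a sketch of the minimization behind (3.2). With
$F(r)=\la_1/r+\la_2(1-r)/(p-r)$ for $r\in(0,p)$,
$$F'(r)=-\frac{\la_1}{r^2}+\frac{(1-p)\la_2}{(p-r)^2}=0
\quad\Longleftrightarrow\quad
\frac{p-r}r=\Bigl(\frac{(1-p)\la_2}{\la_1}\Bigr)^{1/2}=:c,$$
so the minimizer is $r=p/(1+c)$. Substituting back,
$$F\bigl(p/(1+c)\bigr)=\frac1p\Bigl[\la_1+\la_2+\la_1c+\frac{(1-p)\la_2}c\Bigr]
=p^{-1}\bigl[\la_1+\la_2+2\sqrt{(1-p)\la_1\la_2}\,\bigr].$$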
After the limit $n\to\infty$ and the approximations that
extend it to all $(x_1,x_2)\in\rplus$ we combine (3.2) with a
trivial lower bound to write
$$p^{-1}(x_1+x_2)\le \tau(x_1,x_2)\le
p^{-1}\bigl[ x_1+x_2+2\sqrt{(1-p)x_1x_2}\,\bigr].
\tag 3.3
$$
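The upper bound in (3.3) can be probed by direct simulation of the tandem queues behind $T_{0,(n,k)}$. The parallel-update convention below (services decided from start-of-step queue lengths) is one possible reading of the dynamics, and the parameters are arbitrary.

```python
import random

def passage_time(n, k, p, rng):
    """Time until the k-th server has served n customers, when all n
    customers start in the first queue with all other queues empty.
    Parallel update: services in each step are decided from the
    start-of-step queue lengths."""
    q = [0] * k
    q[0] = n
    out = t = 0
    while out < n:
        t += 1
        served = [1 if q[j] > 0 and rng.random() < p else 0
                  for j in range(k)]
        for j in range(k):
            q[j] -= served[j]
            if j + 1 < k:
                q[j + 1] += served[j]
        out += served[k - 1]
    return t

rng = random.Random(2)
n = k = 30
p = 0.6
mean_T = sum(passage_time(n, k, p, rng) for _ in range(300)) / 300

# Right side of (3.2) with la_1 = la_2 = 1 (n customers, n servers).
bound = (n / p) * (2 + 2 * (1 - p) ** 0.5)
print(mean_T, bound)   # mean_T should fall below the bound
```

Since the comparison with the equilibrium system only adds customers ahead, the empirical mean stays below the bound at every finite $n$.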
From this follow the continuity properties again, and
in particular $h(z)$ is well-defined by (1.4) as a
continuous convex function for $z\ge 0$.
The microscopic Lax-Oleinik formula
$$z_k(\tau)=\inf_{i\le k}\{z_i-\xi^{z_i}(k-i,\tau)\}
$$
is valid as before.
The details of the proof of the scaling limit for the
server process are the same as for Example 1, so we skip them and
consider this verified:
$$\lim_{N\to\infty} \frac1N z^N_{[Nx]}([Nt])=U(x,t)
%\tag
$$
holds in probability for $t>0$ and $x\in\mmR$, whenever
(1.12) is assumed and $U(x,t)$ is defined by (1.11)
for a nondecreasing right-continuous function $U_0(q)$.
\demo{Proof of Theorem 3A}
Let
$$f(u)=\inf_{z\ge 0}\{uz+h(z)\},\quad u\ge 0,$$
denote the monotone
conjugate of $-h(z)$. For $u\ge 0$, pick the
parameter $r=r(u)$ in the invariant distribution (3.1) of $\eta(\cdot)$
so that $E[\eta_i]=u$. Let $(\eta_i)$ be i.i.d. queues with
this common distribution, and
define the initial server configuration by
$z_0=0$, $z_i-z_{i-1}=\eta_i$ for
$i\in\mmZ$.
Because the queues remain in equilibrium,
$$E[z_0(\tau)]=-p\sum_{m=0}^{\tau-1} P[\eta_0(m)>0]
=-p\tau\biggl(1-\frac{p-r}p\biggr)=-\tau r.$$
By the reasoning of the
proof of Thm. 1A, and after some algebra,
$$f(u)=r=\frac12\lbrak 1+u-\bigl[(1+u)^2-4pu\bigr]^{1/2}\rbrak.$$
From this the function $h(z)$ is recovered by
$$h(z)=-\inf_{u\ge 0}\{uz-f(u)\},\quad z\ge 0,$$
and $\tau(x_1,x_2)$ by homogeneity.
\qed
\enddemo
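The ``algebra'' alluded to in the proof can be sketched as follows; it assumes, consistently with the equilibrium computation above, that the mean of (3.1) is $E[\eta_i]=r(1-r)/(p-r)$. Setting
$$u=\frac{r(1-r)}{p-r}$$
and clearing denominators yields the quadratic
$$r^2-(1+u)r+pu=0,$$
whose root lying in $(0,p)$ is
$$r=\frac12\lbrak 1+u-\bigl[(1+u)^2-4pu\bigr]^{1/2}\rbrak,$$
which is the stated formula for $f(u)$.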
Theorem 3B follows exactly as Theorem 1B did.
\vskip .5in
%\input sec4ex
\head 4. Fourth Example
\endhead
\subhead 4.1. Results
\endsubhead
Our last growth model does not involve squares, but it too takes
place on the positive quadrant of the plane.
The parameters $n$ and $N$ of the
superadditive process and the scaling limit, respectively,
are now
identical, so we omit $N$ in favor of $n$. Thus the reader
should take $N=n$ when reference is made to formulas from
past sections. The vertical axis will serve
as the time axis for the particle model, so to anticipate this
we denote points in $\mmR\times[0,\infty)$ by $(x,t)$.
Let $\ga>0$ be a parameter. Give each vertical half-line
$\{r\}\times(0,\infty)$, $r\in\ga\mmZ$
$=\{\ldots,-2\ga,-\ga,0,\ga,2\ga,\ldots\}$, a rate $\ga$ Poisson
point process. We shall refer to this point process
as Poisson points on the $\ga$-grid.
For $-\infty<a<b<\infty$ and $t>0$, let $L_\ga((a,0),(b,t))$ denote
the maximal number of Poisson points on the $\ga$-grid that an
up-right path from $(a,0)$ to $(b,t)$ can pass through. The
superadditive limit
$$\lim_{n\to\infty}\frac1n L_\ga\bigl((0,0),(nx,nt)\bigr)
=\psi_\ga(x,t)
\tag 4.1
$$
exists, and the asymptotic
shape is given by
$$B_\ga=\{(x,t)\in\rplus: \psi_\ga(x,t)\le 1\}.$$
The server process $z^n(\cdot)$ evolves by the following rule.
Fix $k\in\mmZ$ and $t>0$. For each $z^n_i$ to the left of $z^n_k$,
a potential location for $z^n_k$ at time $t$ is given by the endpoint of
an up-right path of length $k-i$ starting at $(z^n_i,0)$ and staying
below the time-$t$ line. Of these
potential locations, $z^n_k$ chooses the leftmost.
Some restrictions have to be placed on $z^n$ to prevent
$z^n_k$ from reaching $-\infty$ in finite time. The
right choice for a state space is
$$\ZZ_n=\lbrak z\in(\ga_n \mmZ)^\mmZ: \text{$z_{k-1}\le z_k$ for all
$k\in\mmZ$, $\lim_{k\to-\infty}k^{-2}z_k=0$}\rbrak.
\tag 4.2
$$
(The superscript $n$ appears in $z^n$ whenever we refer to
the process, as opposed to a generic element of a state space.)
To rigorously define the above evolution, set
$$\Ga_{\ga_n}(r,\ell,t)=\inf\{b>0:
L_{\ga_n}((r,0),(r+b,t))\ge\ell\}
\tag 4.3
$$
for $r\in\mmR$,
$\ell\in\{0,1,2,\ldots\}$,
and $t>0$. And then
$$z^n_k(t)=\inf_{i\le k}\{ z^n_i+ \Ga_{\ga_n}(z^n_i,k-i,t)\}.
\tag 4.4
$$
Formula (4.4) defines the server process as a $\ZZ_n$-valued
Markov process. The rule it realizes is this:
A Poisson point appears at rate $\ga_n$ at each site
of $\ga_n\mmZ$, and whenever that happens, the nearest
server particle to the right is moved to the location
of the Poisson point. This particle
evolution was originally derived by Aldous
and Diaconis \cite{AD} from Hammersley's ideas in \cite{Ha},
so the process
$z^n(\cdot)$ could be called Hammersley's process on a lattice.
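As an illustration of this verbal rule, the sketch below runs a finite-window version of the lattice Hammersley dynamics. The finite site range, the server count, and the parameter choices are artifacts of the illustration, and boundary effects are ignored.

```python
import bisect
import random

def simulate(z, gamma, T, sites, rng):
    """Run the lattice Hammersley rule on a finite window: each site s
    in `sites` (a finite piece of gamma*Z) carries a rate-gamma Poisson
    clock; when it rings, the nearest server strictly to the right of s
    jumps to s.  `z` is a sorted list of server positions."""
    events = []
    for s in sites:
        t = rng.expovariate(gamma)
        while t < T:
            events.append((t, s))
            t += rng.expovariate(gamma)
    events.sort()
    z = list(z)
    for _, s in events:
        j = bisect.bisect_right(z, s)   # first server with z[j] > s
        if j < len(z):
            z[j] = s                    # move it to the Poisson point
    return z

rng = random.Random(3)
gamma = 0.5
sites = [gamma * i for i in range(-40, 40)]
z0 = [gamma * i for i in range(0, 20)]   # 20 servers at 0, 0.5, 1, ...
z1 = simulate(z0, gamma, 5.0, sites, rng)

# the ordering is preserved and every server can only have moved left
print(z1 == sorted(z1))
print(all(a <= b for a, b in zip(z1, z0)))
```

Each jump replaces the leftmost server to the right of the Poisson point, so the configuration stays ordered and every server's trajectory is nonincreasing, matching the monotonicity built into (4.4).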
By homogeneity, to deduce Theorem 4A it suffices to
identify the function $h_\ga(x)=\psi_\ga(x,1)$, $x>0$. This
function $h_\ga(x)$ is nonnegative, concave and strictly increasing,
so it has a convex inverse $g_\ga(x)$. To have $g_\ga(x)$
defined and continuous for all $x\ge 0$, set $g_\ga(x)=0$ for
$0\le x\le h_\ga(0+)$. The limit (4.1) implies that
$$\lim_{n\to\infty}\frac 1n \Ga_{\ga_n}(r_n, [nx], nt)
=t\, g_\ga\biggl(\frac xt\biggr)
\tag 4.5
$$
holds in probability, for $x\ge 0$, $t>0$, and any sequence
$\{r_n\}$ of base points.
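Heuristically, the inversion behind (4.5) can be sketched in one line, using the homogeneity $\psi_\ga(x,t)=t\,h_\ga(x/t)$ noted above: by (4.1),
$$L_{\ga_n}\bigl((r_n,0),(r_n+b,nt)\bigr)\approx\psi_\ga(b,nt)
=nt\,h_\ga\Bigl(\frac b{nt}\Bigr),$$
so the smallest $b$ with $L_{\ga_n}((r_n,0),(r_n+b,nt))\ge [nx]$
satisfies $h_\ga(b/(nt))\approx x/t$, that is,
$b\approx nt\,g_\ga(x/t)$, which is (4.5) after division by $n$.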
From the server process we
construct the customer process by
$$\eta^n_i(t)=z^n_{i+1}(t)-z^n_{i}(t).
\tag 4.6
$$
The process $\eta^n(\cdot)$ is a coupled random walk where each
customer particle has `weight' $\ga_n$. At each site
$i\in\mmZ$ the size $\eta_i$ of the queue is a multiple
of $\ga_n$. Particles jump from $i$ to $i+1$ according
to the rule: At rate $\eta_i$, pick a number $k\ga_n$
uniformly at random from $\{\ga_n, 2\ga_n,\ldots,\eta_i\}$,
and move amount $k\ga_n$ from site $i$ to $i+1$.
The generator is
$$\LL_nf(\eta)=\sum_{i\in\mmZ} \ga_n
\sum_{k=1}^{\eta_i/\ga_n}[f(\eta^{k\ga_n,i,i+1})-f(\eta)],
\tag 4.7
$$
where
$$\eta^{r,i,i+1}_j=
\cases \eta_i-r, &j=i\\
\eta_{i+1}+r, &j=i+1\\
\eta_j, &j\ne i,i+1.
\endcases
%\tag
$$
We shall not enter into a discussion of the rigorous existence
of the generator; we offer it only as an intuition-enhancing
tool for readers who prefer to read the dynamics off
the generator. Equation (4.6) is the rigorous definition
of the process $\eta^n(\cdot)$.
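For readers who do want to read the dynamics off the generator, here is a minimal Gillespie-style sketch of one jump of (4.7) on a finite configuration; the dictionary representation and the parameter values are illustrative only.

```python
import random

def gillespie_step(eta, gamma, rng):
    """One jump read off the generator (4.7): site i rings at total
    rate eta[i]; when it rings, an amount k*gamma, with k uniform on
    {1, ..., eta[i]/gamma}, moves from site i to site i+1.
    `eta` is a dict {site: mass} with masses multiples of gamma."""
    rates = {i: m for i, m in eta.items() if m > 0}
    total = sum(rates.values())
    wait = rng.expovariate(total)        # time until the next jump
    u = rng.random() * total             # choose site i w.p. eta[i]/total
    for i, m in rates.items():
        u -= m
        if u <= 0:
            break
    k = rng.randint(1, int(round(m / gamma)))
    eta[i] -= k * gamma
    eta[i + 1] = eta.get(i + 1, 0.0) + k * gamma
    return wait

rng = random.Random(4)
gamma = 0.25
eta = {0: 4 * gamma, 1: 2 * gamma}
mass = sum(eta.values())
for _ in range(10):
    gillespie_step(eta, gamma, rng)

# total mass is conserved and every queue stays a multiple of gamma
print(abs(sum(eta.values()) - mass) < 1e-12)
print(all(abs(m / gamma - round(m / gamma)) < 1e-9 for m in eta.values()))
```

Note that each site's total jump rate is $\ga_n\cdot(\eta_i/\ga_n)=\eta_i$, and the uniform choice of $k\ga_n$ mirrors the inner sum in (4.7).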
Paralleling (4.2), the state space for $\eta^n(\cdot)$ becomes
$$\YY_n=\lbrakk\eta\in\{0,\ga_n,2\ga_n,3\ga_n,\ldots\}^\mmZ:
\lim_{k\to-\infty}k^{-2}\sum_{i=-1}^k\eta_i=0\,\rbrakk.
$$
The distributions on $\YY_n$ under which the
$(\ga_n^{-1}\eta_i)_{i\in\mmZ}$ are geometrically
distributed i.i.d. random variables
are invariant for the process.
The initial macroscopic profile for the customer process
is a Radon measure $m_0$
on $\mmR$, assumed to satisfy
$$\lim_{x\to-\infty}x^{-2}m_0[x,0)=0.
\tag 4.8
$$
Pick a left-continuous function $U_0(x)$
such that
$$m_0[x,y)=U_0(y)-U_0(x)
\tag 4.9
$$
for all $-\infty<x<y<\infty$.