\documentclass{article}
\begin{document}
\parindent=0pt
\parskip=10pt
\newcommand{\pa}{\partial}
\newcommand{\vphi}{\varphi}
\newcommand{\eps}{\varepsilon}
\newcommand{\G}{{\cal G}}
\newcommand{\I}{{\cal I}}
\newcommand{\K}{{\cal K}}
\newcommand{\grad}{\nabla}
\newcommand{\Ker}{{\rm Ker}}
\newcommand{\Ran}{{\rm Ran}}
\def\~#1{{\widetilde #1}}
\newfont{\smaller}{cmbx10 scaled1200}
\title{
LIE-POINT SYMMETRIES AND NONLINEAR DYNAMICAL SYSTEMS\\
{(Symmetry and approximate symmetries of nonlinear equations: bifurcations,
center manifolds, and normal form reduction)}
}
\author{
Giampaolo Cicogna\\[1.0ex]
{\em Dipartimento di Fisica, Universit\`a di Pisa}\\ {\em Piazza Torricelli 2,}
{\em 56126 Pisa, Italy}\\[1.0ex]
{\tt cicogna@ipifidpt.difi.unipi.it}\vspace{.2truein}\\ Giuseppe Gaeta\\[1.0ex]
{\em Department of Mathematics, Loughborough University}\\ {\em Loughborough
LE11 3TU, England}\\[1.0ex] {\tt g.gaeta@lut.ac.uk}
}
\date{}
\maketitle
\begin{abstract}
Nonlinear symmetries of finite dimensional dynamical systems are related to
nonlinear normal forms and center manifolds in the neighbourhood of a singular
point. Certain abstract results can be used algorithmically to construct the
normal forms and/or the center manifold up to a given order in the perturbation
expansion. We also argue that for this task, approximate symmetries are as
useful as exact ones. \end{abstract}
\section{Introduction}
The purpose of this paper is to review some recent results [1--4] (see also
[5] for a comprehensive
discussion) concerning the symmetry analysis of {\it nonlinear dynamical
systems}; more
specifically, these are connected with perturbative analysis in the
vicinity of a known solution. We
will discuss how (nonlinear) symmetries are related to the Normal Forms
[6--8] expansion in the
neighbourhood of a singular point, and how these results are connected with
the construction of
Center Manifolds [8--10]; in the algorithmic procedures, one can also take advantage of the so-called {\it approximate symmetries}, and we will argue that in a variety of problems these are as useful as exact ones.
It should be stressed that, unlike other contributions in the present volume, we deal only with {\it finite dimensional} systems, i.e., with ODEs. We would like to be able to state that the results outlined here extend to evolution PDEs -- and we believe this to be the case -- but at the moment we are not able to provide such extensions, nor to judge to what extent the present results could be generalized. On the other hand, the results presented here are relevant to evolution PDEs in at least one respect, namely through {\it bifurcation theory} [10--12];
we will not discuss this here, but just refer to [13,14], where the basic
results of equivariant
bifurcation theory [15--19] are extended to the case of nonlinear
(Lie-point) symmetries.
The paper is organized as follows: first of all we will fix notations and
recall some basic
definitions, notions and their use; in section 3 we will recall some
general theorems obtained
recently on these objects for systems with (nonlinear) Lie-point
symmetries; section 4 is dedicated
to recalling definitions and results on approximate symmetries. In section 5 we will briefly recall the averaging theorems, which provide a firm foundation for the use of truncated expansions (and symmetries) in view of rigorous results. We will then be ready to
discuss, in section 6, the
algorithmic study and construction of symmetries (exact and approximate),
of Normal Forms and of Center Manifolds. Finally, in section 7 we give some
simple examples of application of
the proposed procedure.
We will avoid reproducing proofs available in the literature, concentrating instead on the meaning of the results on the one hand, and on their algorithmic implementation on the other.
\section{Notation and basic definitions}
We deal with (smooth) dynamical systems, i.e., with systems of ODEs of the
form $$ {\dot x} = f(x)
\eqno(1) $$ where $x \in M$, $f:M \to TM$, and $M$ is a smooth manifold of dimension $m$ that we think of as immersed in $R^n$. Thus, we can also write (1) in components as $$ {\dot x}^i = f^i (x) ~~~~ i = 1,\ldots,n \eqno(2) $$ although we will use as much as
possible the vector notation (1). Notice that in (1),(2) we have implicitly
assumed that the
system is time-autonomous; in the following we will only consider this case.
It is sometimes convenient to say that (1) defines a vector field $X_f$ on
$M$, whose components are the $f^i$'s, i.e., $$ X_f = f^i (x) \frac{\pa }{
\pa x^i }\;. \eqno(3) $$
The (Lie-point) symmetry algebra of (1) is defined as usual [20--22] as the
algebra of vector fields
on $M$ whose first prolongation transforms solutions to (1) into --
generally, different --
solutions to (1). Here we are specially interested in the Lie-Point
Time-Independent (LPTI) symmetry
vector fields; their algebra will be denoted by $\G_f$. The LPTI vector
fields are those of the form
$$ X_\vphi = \vphi^i (x) \frac{\pa }{ \pa x^i} \eqno(4) $$ (see the above
remark for their generality),
and $$ \G_f = \{ X_\vphi \ : \ [X_\vphi , X_f ] = 0 \ \} \eqno(5) $$ where
$[\cdot,\cdot]$ is the
usual Lie bracket (commutator) of vector fields.
One can also introduce the Lie-Poisson bracket $\{\cdot,\cdot\}$ on
functions, defined as
$$ \{ f,g \}^i = (f^j \cdot \grad_j ) g^i - (g^j \cdot \grad_j ) f^i \equiv
X_f (g) - X_g (f)
\eqno(6) $$ which is obviously related to the commutator: indeed, $$
\{f,g\} = h \
\Longleftrightarrow \ [X_f , X_g ] = X_h \eqno(7) $$ Thus, we also have $$
\G_f = \big\{ X_\vphi \ : \ \{\vphi , f \} = 0 \ \big\} \eqno(8) $$
It should be stressed that for first order ODEs such as (1) one often encounters {\it
infinite dimensional} symmetry algebras. However, the set of symmetry
vector fields has naturally,
together with the algebra structure, the structure of a {\it module}.
Indeed, if $\rho : M \to R$ is
a constant of motion for (1), i.e., if
$$ X_f (\rho) = 0 \eqno(9) $$ and $X_\vphi \in \G_f$, it is immediate to
check -- from (5) or (8) --
that also $X_{\rho
\vphi} \in \G_f$. Thus, if we denote by $\I_f$ the algebra of constants of
motion of (1), i.e., of
real functions satisfying (9), we have that $\G_f$ is a module over the
algebra $\I_f$ [22]. Notice that $\G_f$ can -- and in general will -- be
infinite dimensional as an algebra,
but finite dimensional as a module.
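These definitions are straightforward to verify on a toy example. The following sketch (our own illustration, not from the references; it assumes the Python library {\tt sympy}) checks the symmetry condition (8) for the planar rotation and dilation fields, the constant of motion condition (9), and the module property just discussed:

```python
import sympy as sp

x, y = sp.symbols('x y')
X = [x, y]

def bracket(f, g):
    # Lie-Poisson bracket (6): {f,g}^i = f^j d_j g^i - g^j d_j f^i
    return [sp.expand(sum(f[j]*sp.diff(g[i], X[j]) - g[j]*sp.diff(f[i], X[j])
                          for j in range(2))) for i in range(2)]

f = [-y, x]            # rotation field: xdot = -y, ydot = x
g = [x, y]             # dilation field
print(bracket(f, g))   # [0, 0]: X_g is a symmetry of X_f, cf. (8)

rho = x**2 + y**2      # constant of motion for the rotation, cf. (9)
print(sp.expand(f[0]*sp.diff(rho, x) + f[1]*sp.diff(rho, y)))   # 0

rg = [rho*g[0], rho*g[1]]   # module property: rho*g is again a symmetry
print(bracket(f, rg))       # [0, 0]
```

The same {\tt bracket} routine can be reused, order by order, in the algorithmic constructions of section 6.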
Let us now assume that there is a point $x_0$ such that $f(x_0 ) = 0$, and
that we are interested in
the behaviour of the system around $x_0$. This means in particular that, for the purposes of local analysis, we can as well assume the system to be defined in $R^m$. We can then
expand $f$ around $x_0$ (we take $x_0 = 0$ for ease of notation, and with
no loss
of generality), and we write $$ {\dot x} = \sum_{k=0}^\infty f_k (x)
\eqno(10)$$ where $f_k$ is
homogeneous of degree $k$ in $x$, $f_0 \equiv 0$ (because of $f(0)=0$), and
we will find it useful
in the following to denote $f_1 (x) $ in a special way, i.e., $$ f_1 (x) =
Ax ~~~~ f_1^i = A_{ij} x^j
\eqno(11) $$
We will assume that the matrix $A$ is semisimple, i.e., that its eigenvalues have the same algebraic and geometric multiplicities. Up to a (possibly complex) linear change of coordinates, we can then also assume that $A$ is normal, i.e., that $[A,A^+ ] = 0$. Let us
consider the (linear) eigenspaces of $A$ spanned by eigenvalues with real
part, respectively,
smaller than zero, greater than zero, and zero. These are called,
respectively, the stable
eigenspace $E^s$, the unstable eigenspace $E^u$, and the center eigenspace
$E^c$. Under quite
general hypotheses -- which we do not want to discuss here, see e.g. [9,10]
for a general discussion
-- there also exist a local stable manifold $W^s$, a local unstable
manifold $W^u$, and a local
center manifold $W^c$, with the property that $W^i$ is tangent to $E^i$ in
the singular point $x_0$.
The stable and unstable manifolds are unique, while the center manifold is
not; however, if an
analytic center manifold exists (which is not the case in general), this is
unique.
While along the stable and unstable manifolds the local dynamics is
essentially trivial,
corresponding to exponential contraction and expansion, the ``interesting''
local dynamics takes
place on the center manifold. Also, in bifurcation problems [9--12] one is reduced to studying the dynamics on the center manifold. Thus, for these and other reasons [8--12]
the determination of the
center manifolds is an important task in the study of nonlinear dynamics. Unfortunately, it is not always an easy one; we will see in the next section that $W^c$ is necessarily invariant under $\G_f$, which can help in its explicit construction.
More generally, we can aim at a classification of the possible (local)
behaviour of the dynamical
system (1) in the vicinity of the fixed point $x_0$, once the linear part
$A$ is fixed. That is, we
ask what behaviour is possible if we do not know all of $f$, but only its
linear (around $x_0$) part $f_1 (x) = Ax$. This problem is closely related
to that
of simplifying the (local) expression for $f$. In both cases, we seek to transform $f$ into a simpler form, so as to obtain equivalence classes of vector fields and of the behaviour of their flows.
It was shown by Poincar\'e that an appropriate way to proceed is to
transform $f$ into {\it Normal
Form} by means of a series of (in general, only formal) near-identity
changes of coordinates; we
refer to [6--8] for a discussion of Normal Form theory and its applications.
More specifically, one considers changes of coordinates of the form $$ x =
y + h_m (y) \eqno(12) $$
with $h_m$ homogeneous of degree $m$. Under such a change of coordinates,
(10) is transformed into
$$ {\dot y} = \sum_k {\~f}_k (y) \eqno(13) $$ with ${\~f}_k = f_k$ for $k < m$, and ${\~f}_m$ being given by
$$ {\~f}_m (x) = f_m (x) - \{ Ax , h_m (x) \} \eqno(14) $$ where $\{ \cdot,\cdot \}$ is the Lie-Poisson bracket defined above. It is customary to introduce the {\it homological operator} $L_A$ associated to $A$, defined simply as
$$ L_A (\cdot) = \{ Ax ,\cdot \} \eqno(15) $$ so that (14) reads $$ {\~f}_m = f_m - L_A ( h_m ) \eqno(16) $$ and one can eliminate terms in the range of $L_A$ by appropriately choosing the $h_m$ (this is done by solving the homological equation). Thus, one
considers transformations of the form (12) successively for $m=2,3,\ldots$,
arriving at the {\it normal
form} for (1),
$$ {\dot x} = g(x) = Ax + \sum_{k=2}^\infty g_k (x) \eqno(17) $$ Notice
also that one can always add
to the ``appropriate'' $h_m$ a term $\delta h_m$ belonging to $\Ker (L_A
)$, leading to the same
${\~f}_m$; this freedom will be useful in the following.
The Normal Form is therefore characterized by the fact that necessarily $$
g_k \in \Ker (L_{A^+} ) =
\Ker (L_A ) \eqno(18) $$ where in the last equality we have used the semisimplicity of $A$ (the theory is more complicated when the Jordan decomposition of $A$ has a nilpotent part).
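To make the role of $\Ker (L_A )$ concrete, the following sketch (again our own toy illustration in {\tt sympy}, for the standard $1:2$ resonant linear part $A = {\rm diag}(1,2)$) computes the matrix of $L_A$ on quadratic vector fields and extracts its kernel, i.e., the resonant terms surviving in the Normal Form:

```python
import sympy as sp

x, y = sp.symbols('x y')
A = sp.diag(1, 2)                       # eigenvalues (1,2): a 1:2 resonance

monos = [x**2, x*y, y**2]
basis = [[m, 0] for m in monos] + [[0, m] for m in monos]   # basis of P_2

def L_A(h):
    # homological operator (15): L_A(h) = {Ax, h} = (Dh)(Ax) - A h
    Ax = [A[i, 0]*x + A[i, 1]*y for i in range(2)]
    return [sp.expand(Ax[0]*sp.diff(h[i], x) + Ax[1]*sp.diff(h[i], y)
                      - (A[i, 0]*h[0] + A[i, 1]*h[1])) for i in range(2)]

def coords(v):
    # coefficient vector of a quadratic vector field in the chosen basis
    return [sp.Poly(v[i], x, y).coeff_monomial(m) for i in range(2) for m in monos]

M = sp.Matrix([coords(L_A(b)) for b in basis]).T    # L_A as a 6x6 matrix on P_2
ker = M.nullspace()
print(len(ker), list(ker[0]))   # 1 [0, 0, 0, 1, 0, 0]
```

The kernel is spanned by the single field $(0, x^2)$, i.e., the well-known resonant Normal Form ${\dot x} = x$, ${\dot y} = 2y + c\, x^2$.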
Notice that (18) implies that $X_g$ commutes with the vector field $$ X_A
\equiv X_{Ax} = A_{ij} x^j
\frac{\pa }{ \pa x^i} \eqno(19) $$ so that the determination of the Normal
Form for (1) requires the
determination of $\G_A$, where $\G_A$ stands for $\G_{Ax}$. Notice also
that for $X_g$ to commute
with $X_f$, $g$ must vanish at the points at which $f$ does. As will be remarked in the following, $\G_f \subseteq \G_A$.
Finally, we will often use for the symmetry vector fields $X_s$ the
representation
$$ X_s = s^i (x) \frac{\pa }{ \pa x^i } ~~;~~ s(x) = Bx + \sum_{k=2}^\infty
s_k (x)
\eqno(20) $$
\section{Some abstract theorems}
In this section, we recall some abstract theorems obtained in recent works.
The first group of
results concerns the Normal Form unfolding, i.e., the structure of the
possible Normal Forms for a
given linear part $Ax$ (see previous section). As remarked above, this
corresponds to the vector
fields $X_g$ in $\G_A$.
If the original dynamical system (1) admits some symmetry, this property is preserved under the (formal) changes of variables needed to pass to normal form. In particular, let us focus on a given
symmetry $X_s \in \G_f$; the expansion (20) for $s(x)$ will also be
modified due to the changes of
coordinates generated by $h_m$, but in the transformation of $s(x)$ the
relevant homological
operator will be $L_B$ rather than
$L_A$. Thus, one can use the freedom to add a term in $\Ker (L_A)$ in order
to simplify $\~s$, at
least for the part in ${\rm Ran} (L_B ) \cap \Ker (L_A)$. One has the
following theorem, proved in
[1]:
{\bf Theorem 1}. {\it Let $X_f$ and $X_s$ commute, with $f(x) = Ax + \sum
f_k (x)$ and
$s(x) = Bx + \sum_k s_k (x)$; let $A$ and $B$ be semisimple. Then, by means
of a series of formal
coordinates transformation of the form (12), $f$ and $s$ can be taken into
a {\rm joint normal form}
(JNF) $\~f (x) = Ax + \sum_k g_k (x)$, $\~s = Bx + \sum_k r_k (x)$, such
that both $g_k$ and $r_k$
(with $k \ge 2$) are in $\Ker (L_A ) \cap \Ker (L_B ) $.}
This can actually be extended in two directions. First of all, one can
consider not a single
symmetry vector field $X_s$, but a whole algebra, say with generators
$X_{s_j}$ and $s_j (x) = B_j x
+ \sum_k r_k^{(j)}$. On the other hand, one could remove the condition that $A$ and $B$ are semisimple. Here we consider these two extensions at the same time, quoting a result given in [2]; the notation $A_s$ denotes the semisimple part of $A$, and $L_{(j)} \equiv L_{A_{(j)}}$.
{\bf Theorem 2}. {\it Let us consider a $d-$dimensional algebra $\G$ of VFs
spanned by $X_j = f^i_j
(x) \frac{\pa }{ \pa x^i}$ ($j=1,\ldots,d$), where $f_j (x) = A_{(j)} x +
\sum_k f_k^{(j)} (x) = A_{(j)}
x + F_{(j)} (x) $. Then:
{\rm i)} If the algebra $\G$ is {\rm solvable}, then all the nonlinear
terms $F_{(j)}$ can be put in
parallel NF, namely $$F_{(j)} \in \Ker \big(L_{(j)}^+\big) \qquad {\rm for\
each}\ j=1,\ldots,d\ $$
{\rm ii)} If the algebra $\G$ is {\rm nilpotent} (in particular: abelian), then one can put all $F_{(j)}$ into a JNF, precisely (with obvious notations): $$F_{(j)} \in \Big(\bigcap_{i \not= j} \Ker\big( L_{(i),s}\big) \Big) \cap \Ker \big(L^+_{(j)}\big) \qquad {\rm for\ each}\ j=1,\ldots,d $$
{\rm iii)} In any solvable (resp.: nilpotent, or in particular abelian)
{\it subalgebra} of a
generic algebra $\G$, all nonlinear terms can be put in parallel NF as in
{\rm i} (resp.: in JNF as
in {\rm ii}).}
Notice that if (1) admits a LPTI symmetry $X_g$ with $g(x) = Bx + G(x)$,
and $f,g$ are in JNF, then
$X_g$ is also a symmetry for the linear semisimple part of $X_f$, i.e., for
${\dot x} = {A_s} x$; the
converse is not true, i.e., symmetries of this linear system are not
necessarily symmetries for the
full system. The linear semisimple part $B_s$ of $g$ provides another symmetry (if $B_s\ne 0$) for the dynamical system, i.e., we have $\{A_s x,g\}=0 $ and $ \{B_s x,f\}=0 $. It is perhaps worth remarking that the theorems quoted above also have the following consequence:
{\bf Theorem 3.} {\it If the system (1) can be linearized, then it admits
$m$ independent {\rm
commuting} symmetries, which can be simultaneously taken into {\rm linear}
form by a coordinate
transformation. If, in particular, the system has a diagonalizable $A$ with
real eigenvalues, then --
once it is linearized and $A$ is diagonal -- the dilations along each
direction $x^i$, are $m$ linear
commuting symmetries for the system. Conversely, if there is a coordinate
system where the system
admits $m$ independent linear commuting symmetries $X_{s_j}$ with $s_j =
B_j x$ such that all $B_j$
are semisimple, then the system can be linearized.}
We would like to mention that the relevance of the above results lies in
that -- for a system with
symmetry -- they greatly reduce the number of terms that can appear in the
Normal Forms. These
results extend and generalize the results known for the case of {\it
linear} symmetries; it turns
out that the presence of nonlinear symmetries is particularly effective in
the reduction of terms
(correspondingly, their presence is ``more exceptional'').
We now pass to recalling some results [3] concerning the relation between Center Manifolds for the system (1) and the symmetry algebra $\G_f$. With reference to the Corollary below, we recall that the analytic Center Manifold, if it exists, is unique. We also recall that by a {\it globally invariant} manifold we mean a manifold which is transformed into itself, not necessarily with each point left fixed.
{\bf Theorem 4.} {\it All the LPTI symmetries $X_s \in \G_f$ leave globally
invariant both the
stable and unstable local manifolds $W^s , W^u $; they also transform any
Center Manifold into a
Center Manifold (possibly the same).}
{\bf Corollary.} The analytic Center Manifold, if it exists, is globally
invariant under all the
analytic LPTI symmetries in $\G_f$.
{\bf Theorem 5.} {\it Given any Center Manifold $W^c_0$ of (1) there is,
generically, some
nontrivial LPTI symmetry $X_s \in \G_f$ leaving globally invariant this
$W^c_0$; and conversely any
LPTI symmetry of (1) leaves globally invariant some Center Manifold $W^c_0$.}
\section{Approximate symmetries}
We would now like to recall some definitions and results concerning
approximate symmetries [4].
First of all, we recall that we say that $X_g$ is an approximate symmetry
of order
$n$ for the system (1) if
$$ [X_f , X_g ] = X_h \ {\rm with} \ h(x) = \sum_k h_k (x) \ {\rm where} \
h_k (x)
\equiv 0 \ {\rm for} \ k \le n $$
We also define in a similar way approximate constants of motion: we say
that $\rho (x)$ is an
approximate constant of motion of order $n$ for the system (1) if $$ X_f
(\rho ) = \sum_k r_k (x) \
{\rm where} \ r_k (x) \equiv 0 \ {\rm for} \ k \le n $$
For a given system (1), i.e., for given $f$, we will denote by $\G_f^{(n)}$
the set of approximate
symmetries of order $n$, and similarly by $\I_f^{(n)}$ the set of
approximate constants of motion of
order $n$.
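As an illustration of the definition, consider the hypothetical toy system ${\dot x} = x$, ${\dot y} = 2y + x^2 + x^3$ (our own example, not from [4]) together with the candidate linear symmetry $s(x) = Ax$; a short {\tt sympy} check shows that it is an approximate symmetry of order 2 but not of order 3:

```python
import sympy as sp

x, y = sp.symbols('x y')
X = [x, y]

def bracket(f, g):
    # Lie-Poisson bracket (6)
    return [sp.expand(sum(f[j]*sp.diff(g[i], X[j]) - g[j]*sp.diff(f[i], X[j])
                          for j in range(2))) for i in range(2)]

# toy system: linear part A = diag(1,2), resonant x^2 term, plus a cubic term
f = [x, 2*y + x**2 + x**3]
# candidate symmetry: the linear field s = Bx with B = A
s = [x, 2*y]

h = bracket(f, s)
print(h)   # [0, -x**3]: h_2 = 0 but h_3 != 0, so X_s is an
           # approximate symmetry of order 2, and not of order 3
```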
It is easy to check that the following lemmas hold [4]:
{\bf Lemma 1.} The set $\G_f^{(n)}$ is a Lie algebra with the usual
commutator of vector fields. The
chain of Lie algebras $\G_f^{(k)}$ satisfies $\G_f^{(k+1)} \subseteq
\G_f^{(k)}$.
{\bf Lemma 2.} The set $\I_f^{(n)}$ is an (abelian) algebra. The chain of
algebras
$\I_f^{(k)}$ satisfies $\I_f^{(k+1)} \subseteq \I_f^{(k)}$.
{\bf Lemma 3.} The set $\G_f^{(n)}$ has, beyond the structure of Lie
algebra, the structure of a
module over $\I_f^{(n)}$.
The theorems given above can then be generalized to consider approximate
symmetries; in particular
we have [4] as generalization of the above theorems 1 and 3:
{\bf Theorem 6}. {\it Let $X_s \in \G_f^{(n)}$, with $f(x) = Ax + \sum f_k
(x)$ and
$s(x) = Bx + \sum_k s_k (x)$; let $A$ and $B$ be semisimple. Then, by means
of a series of formal
coordinates transformation of the form (12), $f$ and $s$ can be taken into
a {\rm joint normal form}
(JNF) up to order $n$; that is, they can be transformed into $\~f (x) = Ax
+ \sum_k g_k (x)$, $\~s =
Bx + \sum_k r_k (x)$, such that both $g_k$ and $r_k$ are in $\Ker (L_A )
\cap \Ker (L_B ) $ for $k
\le n$.}
{\bf Theorem 7.} {\it If the system (1) can be linearized up to order $n$,
then it admits $m$
independent {\rm commuting} approximate symmetries of order $n$, which can
be simultaneously taken
into {\rm linear} form by a coordinate transformation. Conversely, if there
is a coordinate system
where the system admits $m$ independent linear commuting approximate
symmetries of order $n$,
$X_{s_j}$ with $s_j = B_j x$ such that all $B_j$ are semisimple, then the
system can be linearized
up to order $n$.}
Notice that the fact that the system can be linearized up to order $n$
means that in the Normal Form
expansion the nonlinear terms are all of order higher than $n$.
We could also have generalizations of results for the non-semisimple case;
these go along the same
lines and are not reported here for the sake of simplicity. Also, in
section 6 we will limit the
algorithmic implementation of the results reported here to the semisimple
case for the sake of
simplicity, but it will be quite clear how to generalize these -- at the price of having rather more complicated formulas and procedures to implement them, but no further
conceptual difficulty -- to
the non-semisimple case.
\section{Truncated equations and approximate solutions}
The Normal Form method provides, in principle, a way to completely classify
the possible local behaviour of a dynamical system around a fixed point
with given linear behaviour; it also provides a constructive way to reduce
the system to a simpler one, ${\cal C}^\infty$ equivalent to the original
one. On the other hand, it has two major drawbacks.
First, it is only a formal method, in that all the series involved are in
general only formal, and
could fail to converge\footnote{It should be remarked, however, that the
symmetry properties
of the system can be sufficient to ensure, in some cases, the convergence
of the normalising
transformation, see [23--25]}; in general, we only have an {\it asymptotic}
series: that is, in
practice, the normalising procedure should be carried over only up to some
optimal order $N$ and not
to all orders. The second obvious drawback is that in any case, i.e., even if we are assured of the convergence of all the relevant series, to obtain the complete Normal Form we would have to perform an {\it infinite} sequence of transformations.
Thus, for reasons of principle or just for practical ones, in actual
computations we are obliged to
perform the Normal Form expansion (or transformation) only up to some fixed
order. In doing this, we
will in general have an infinite series, and again in actual computations
we will have to consider a
truncation of such a series\footnote{Obviously, if we have some exact
symmetry and/or
conservation laws, we can take full advantage of them; e.g., we can make a
preliminary change of
variables so that some of the new ones -- corresponding to ``symmetry
adapted'' ones -- are
invariant under the time evolution.}. We then have to extract information on the solutions to the full system from the solutions to the truncated system. This is the
fundamental problem of
perturbation theory, and this is obviously not the place to discuss it (not to mention that such a discussion would go beyond the expertise of the present authors); here we only want to recall
some results which are at the basis of such a theory, referring the reader
e.g. to [6--8, 26,27] for
a full discussion of averaging and related theories (such as averaging at
all orders, or Nekhoroshev
theory, and K.A.M. theory). The discussion and the theorems given below are
quoted from [8]. The
connection with the case at hand here is immediately obtained by the
rescaling $x = \eps \xi$.
We consider an ODE
$$ {\dot x} = \eps f(x,t) + \eps^2 g(x,t;\eps ) \eqno(21) $$ with initial
datum $x(0) = x_0$; we
also assume that $f$ is periodic in $t$ with period $T$ and denote the
average (in $t$) by $$ f^0
(y) = \frac{1 }{ T} \int_0^T f(y,t) dt \eqno(22) $$ This defines the
averaged equation
$$ {\dot y} = \eps f^0 (y) \eqno(23) $$
We denote by $\Phi (t; x_0 ) $ the flow of (21) with initial datum $x_0$,
and by $\Phi^0 (t; y_0 )$
the flow of the averaged equation (23) with initial datum $y(0) = y_0$.
{\bf Theorem 8.} {\it Let $f,g$ and $\pa f / \pa x$ be defined, ${\cal
C}^0$ and uniformly bounded
on $(x,t) \in D \times [0, \infty )$ for $D$ a domain in $R^n$; let $g$ be
Lipschitz continuous in
$x$ for $x \in D$. Let $x_0 \in D$, and let $y(t) = \Phi^0 (t;x_0 ) \in D_0
$ ($D_0$ an open
neighbourhood contained in $D$) for
$t \le O(1/\eps )$. Then, $\vert \Phi (t; x_0 ) - \Phi^0 (t; x_0 ) \vert = O
(\eps )$ for $t \le O (1/\eps )$.}
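Theorem 8 is easy to observe numerically. The following sketch (our own toy example, assuming {\tt numpy} and {\tt scipy}) integrates ${\dot x} = \eps \sin^2 (t)\, x$, whose average (22) is $f^0 (x) = x/2$, together with the averaged equation (23), and compares the two flows at $t = 1/\eps$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# toy example (ours, not from the references): xdot = eps*sin(t)^2 * x,
# whose time average (22) is f0(x) = x/2, giving the averaged equation (23)
eps, x0 = 0.01, 1.0
tmax = 1.0 / eps                       # the O(1/eps) time scale of Theorem 8

full = lambda t, x: eps * np.sin(t)**2 * x
avgd = lambda t, y: 0.5 * eps * y

xf = solve_ivp(full, (0.0, tmax), [x0], rtol=1e-10, atol=1e-12).y[0, -1]
ya = solve_ivp(avgd, (0.0, tmax), [x0], rtol=1e-10, atol=1e-12).y[0, -1]

print(abs(xf - ya))    # the discrepancy at t = 1/eps is O(eps)
```

For $\eps = 0.01$ the discrepancy at $t = 1/\eps$ is roughly of order $10^{-3}$, i.e., $O(\eps )$, in agreement with the theorem.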
Actually, this theorem also holds in the case where the average of $f$ exists
only in a generalized sense,
i.e., as
$$ f^0 (y) = \lim_{T \to \infty} \frac{1 }{ T} \int_0^T f(y,t) dt \eqno(24)
$$ and the averaged
equation is again (23), with $f^0$ given by (24), and $\Phi^0$ corresponds
to its solution.
{\bf Theorem 9.} {\it Let the same hypotheses as in theorem 8 be verified,
with $f^0$ given by (24).
Then, $\vert \Phi (t; x_0 ) - \Phi^0 (t; x_0 ) \vert = O (\delta (\eps ) )$
for $t \le O (1/\eps )$,
with $\delta (\eps ) = \sup_{x \in D} \sup_{0 \le t \le C/\eps } \eps \cdot
\vert \left[ \int_0^t
\vert f(x,s) - f^0 (x) \vert ds \right] \vert $.}
We remark that the two theorems above deal with the case where the ``unperturbed'' system -- i.e., the one with $\eps = 0$ -- is just the trivial one, ${\dot x} = 0$. This is good
enough for the case we are
considering, as via the above mentioned scaling the case of a stationary
point is indeed mapped into
this class of equations; however, it can be worth recalling what happens
when we perturb a
one-frequency periodic motion (the multi-frequency case is considerably more complicated, as
resonances and small denominators must be taken into account [6,27]).
Thus, we consider -- instead of (21) -- the equation $$ \begin{array}{rcl}
{\dot x}
&= & \eps f (x, \theta ) +
\eps^2 g(x, \theta ; \eps) \\
{\dot \theta} &= & \Omega (x) + \eps \nu (x, \theta ; \eps )
\end{array}
\eqno(25) $$
where $x \in D \subseteq R^n$ and $\theta \in S^1$. We consider now $$ f^0 (x) = \frac{1 }{ 2 \pi} \int_0^{2 \pi} f(x,\theta ) d \theta \eqno(26) $$ and $\Phi , \Phi^0$
represent the solutions,
respectively, of (25) and of the averaged and truncated equation $$
\begin{array}{rcl}
{\dot x}& = & \eps f^0 (x)\\
{\dot \theta} &= & \Omega (x)
\end{array}\eqno (27) $$
{\bf Theorem 10.} {\it Let $f,g, \Omega , \nu$ be ${\cal C}^1$ in $D \times
S^1$; let $\Phi^0 (t ;
x_0 ) \in D_0 $ for $t \in [0 , 1/ \eps )$, where $D_0$ is an open set
contained in the interior of
$D$. Then, $\vert \Phi (t; x_0 ) - \Phi^0 (t ; x_0 ) \vert = O (\eps ) $
for $t \le O (1/\eps )$. }
The above theorems -- and their generalizations [27] -- give a rigorous
foundation to the
consideration of truncated Normal Form expansions, as they state that, if we are interested only in solutions over a long but finite time $t \le 1/ \eps^k$, known up to an error $O (\eps )$, we can consider the normal form truncated at order $k$.
\section{Algorithmic implementation}
We now want to consider the algorithmic solution to three related problems,
considered above: 1)
transforming $f$ into its Normal Form; 2) determining its symmetry algebra
$\G_f$; 3) determining the invariant manifolds for the evolution under $f$.
We want to proceed in a perturbative way (but with analytical, not
numerical, procedures), and we
will work order by order in the expansions in powers of $(x-x_0 )$.
If we use the expansions (10) for $f$ and (17) for $g$, the commutation
condition $[X_f
, X_g ] = 0$ reads
$$ \sum_{j=0}^k \{ f_j , g_{k-j} \} = 0 ~~~ \forall k \eqno(28) $$ so that
in the case of an
approximate symmetry of order $n$ equation (28) only holds for $k \le n$. Let us look more closely at the terms in the sum in (28); as we know, $f_0 = g_0 = 0$, so
that the terms with $j=0$
and $j=k$ do not really matter. Isolating the terms with $f_1 = Ax$ and
with $g_1 = Bx$, we get $$
[A,B] = 0 \eqno(29) $$ in the case $k=2$, and in general, for $k>2$, $$ \{
Ax , g_{k-1} \} - \{ Bx ,
f_{k-1} \} = c_{k-2} (f,g) = - \sum_{j=2}^{k-2} \{ f_j , g_{k-j} \}
\eqno(30) $$ Notice that if
$f_j$ and $g_j$ are in $\Ker (L_A ) \cap \Ker (L_B )$ for all $j \le k-2$,
then we are guaranteed
that $c_{k-2} (f,g)$ is in $\Ker (L_A ) \cap \Ker (L_B )$ as well.
Let us look at the solution to our second problem, i.e., the determination
of $\G_f$ for given $f$.
Clearly, all we have to do is to solve (28) order by order; this means,
first determine the $B$
satisfying (29), then the $g_2$ satisfying (30) for $g_1 = Bx$, and so on.
Thus, we can solve our
problem recursively, just by solving at each order the equation $$ L_A
(g_{k-1} ) = L_B (f_{k-1} ) + c_{k-2} (f,g) \eqno(31) $$ The solutions to
this equation are
obviously determined up to a term in $\Ker (L_A )$; on the other side the
solution exists if and
only if $$ L_B (f_{k-1} ) + c_{k-2} (f,g) \in \Ran (L_A ) \eqno(32) $$ and
we are by no means
guaranteed that this condition is satisfied.
Thus, what we actually do is to determine $\G_f^{(1)}$; then, we determine
$\G_f^{(2)}$ using the
structure of $\G_f^{(1)}$, and so on. At any step, we find that in general not all the $X_g \in \G_f^{(k)}$ give rise to an $X_g \in \G_f^{(k+1)}$, the condition for this
to happen being precisely
(32). Some examples of explicit analytic computations (some of these
requiring the use of algebraic
manipulation programs) are given in [4].
We want to point out that it is actually convenient to solve problem 1) at
the same time as we
try to determine the symmetry algebra. We have seen above that a vector
field is in Normal Form if
and only if it commutes with its linear part (under the assumption this is
semisimple); similarly, a
set of vector fields (having semisimple linear parts) is in Joint Normal
Form if and only if each
one is commuting with the linear parts of all the other ones as well as its
own. Similar statements
hold for vector fields with non-semisimple linear part, but here we keep to
semisimple linear part
for ease of notation and discussion. We say that $X_f$ with $f(x) = Ax +
\sum_k f_k (x) $ is in Normal Form up to order $n$ if and only if $\{ Ax ,
f_k (x) \} = 0 $ for all
$k \le n$, and similarly for Joint Normal Forms.
Let us look again at our recursive procedure for determining (approximate)
symmetries, taking into
account the Normal Form construction as well. At the first step, we only
have to solve (28) for $B$.
With such a $B$, we have by definition that both $f_1 = Ax$ and $g_1 = Bx$
are in $\K \equiv \Ker
(L_A ) \cap \Ker (L_B )$.
We would like to recall that $[A,B] = 0 $ implies (under our blanket
hypothesis that
$A,B$ are semisimple) that $[L_A , L_B ] = 0$; also, the semisimplicity of
$A$ ensures that the set
$P_k$ of homogeneous polynomials (in $x \in R^m$) of degree $k$ has an
invariant splitting as
$$ P_k = [ \Ker (L_A ) \cap P_k ] \oplus [\Ran (L_A ) \cap P_k ] \eqno(33)
$$ (see e.g. [1] for a
more detailed discussion); i.e., $L_A : \Ran (L_A ) \to \Ran (L_A )$.
Obviously, the same
considerations apply to $L_B$.
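The splitting (33) is also easy to verify computationally; the sketch below (our own example, in {\tt sympy}) takes the resonant case $A = {\rm diag}(1,-1)$ and checks the direct sum decomposition on cubic vector fields by comparing ranks:

```python
import sympy as sp

x, y = sp.symbols('x y')
A = sp.diag(1, -1)      # semisimple, with a 1:-1 resonance at cubic order

k = 3
monos = [x**a * y**(k - a) for a in range(k + 1)]
basis = [[m, 0] for m in monos] + [[0, m] for m in monos]   # basis of P_3

def L_A(h):
    # homological operator (15): L_A(h) = {Ax, h}
    Ax = [A[i, 0]*x + A[i, 1]*y for i in range(2)]
    return [sp.expand(Ax[0]*sp.diff(h[i], x) + Ax[1]*sp.diff(h[i], y)
                      - (A[i, 0]*h[0] + A[i, 1]*h[1])) for i in range(2)]

def coords(v):
    return [sp.Poly(v[i], x, y).coeff_monomial(m) for i in range(2) for m in monos]

M = sp.Matrix([coords(L_A(b)) for b in basis]).T    # L_A as an 8x8 matrix on P_3

# the splitting (33): Ker and Ran are complementary invariant subspaces;
# for a semisimple L_A this is equivalent to rank(M^2) = rank(M)
print(M.rank(), len(M.nullspace()), (M*M).rank())   # 6 2 6, and 6 + 2 = dim P_3
```

Here $\Ker (L_A ) \cap P_3$ is two-dimensional, spanned by $(x^2 y, 0)$ and $(0, x y^2 )$, and the equality ${\rm rank}(M^2 ) = {\rm rank}(M)$ confirms that $\Ker (L_A ) \cap \Ran (L_A ) = \{ 0 \}$.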
At the second step, we have to solve
$$ L_A (g_2 ) = L_B (f_2 ) \eqno(34) $$ as an equation for $g_2$ (at this
stage $c_0 (f,g) = 0$).
However, we can first of all operate a change of variables of the form
(12), so that $f$
is brought to Normal Form at order 2, so that $f_2 \in \Ker (L_A )$. If a
solution
$g_2$ to (34) exists, the previous theorem 1 also ensures that we can, by
properly
choosing the $h_2$ which generates the transformation (12), actually also
ensure $g_2
\in \Ker (L_B )$; now, applying $L_A$ or $L_B$ to (34) we see that necessarily also $f_2 \in \Ker (L_B )$, $g_2 \in \Ker (L_A )$.
Thus, if we build Joint Normal Forms up to order two, we have also built
approximate symmetries of
order two, and vice versa.
At higher orders, the $c_{k-2} (f,g)$ terms come into play. However, as
remarked above, $\{
\cdot,\cdot \} : \K \times \K \to \K$; proceeding as above, we have that if
we can find a solution
for $g_k$, it has to be in $\K$.
Thus, we can solve the two problems algorithmically at the same time (we suppose the fixed point $x_0$ has already been mapped into the origin of $R^m$):
{\tt 1)} Fix $X_f$, i.e. $f(x) = \sum_{k=1}^\infty f_k (x) $; {\tt 2)}
Determine $A = (Df)(0)$; check that it is semisimple; {\tt 3)} Determine the
set $M (A)$ of matrices commuting with $A$, i.e. such that $[A,B] = 0$; {\tt
4)} Choose a $B \in M (A)$, and correspondingly $g_1 (x) = Bx$; {\tt 5)}
Solve the homological equation for $h_k$, with $k=2$; {\tt 6)} Implement the
change of coordinates (12), using the $h_k$ determined above; call again
$f_2$ the new expression for the second order term in the transformed
variables; {\tt 7)} Solve equation (30) for $g_k$, with the $g_\ell$ chosen
before for $\ell < k$; {\tt 8)} Solve the homological equation -- with $L_B$
in the place of $L_A$, $g$ in that of $f$, and $\delta h_k$ in that of $h_k$
-- requiring that $\delta h_k \in \Ker (L_A ) $ (in this way, we
automatically get $f_k$ and $g_k$ in $\K$); call $S_k (f,g)$ the set of such
solutions; {\tt 9)} Choose a $g_k \in S_k (f,g)$; {\tt 10)} Repeat steps
5)--9) for $k=3,4,\ldots,n$ up to the required order $n$.
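Steps {\tt 2)}--{\tt 3)} are pure linear algebra: the set $M(A)$ is the
nullspace of the linear map $B \mapsto [A,B]$ acting on the entries of $B$.
A minimal sketch in Python with sympy (the matrix used below is that of
eq. (42) with the assumed value $a=1$, chosen only for illustration):

```python
import sympy as sp

def commutant_basis(A):
    """Basis of M(A) = {B : [A, B] = 0}, via the nullspace of B -> A*B - B*A."""
    n = A.shape[0]
    syms = sp.symbols(f'b0:{n*n}')
    B = sp.Matrix(n, n, syms)
    comm = A*B - B*A
    # matrix of the linear map acting on the n^2 entries of B
    M = sp.Matrix([[sp.diff(comm[i], s) for s in syms] for i in range(n*n)])
    return [sp.Matrix(n, n, list(v)) for v in M.nullspace()]

# illustration: the matrix of eq. (42) with a = 1 (an assumed value)
A = sp.Matrix([[1, -1], [1, 1]])
basis = commutant_basis(A)
for B in basis:
    sp.pprint(B)   # the span is the family of matrices in eq. (43)
```

Here the two basis matrices returned span exactly the family (43), as expected.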
Notice that step 7) may turn out to be impossible, i.e., at any level $k$ the
procedure could be forced to stop.
In this way, we build recursively the (tree of the) approximate symmetries
of $X_f$ up to order $n$.
A slightly different procedure is also possible:
{\tt 1)} Determine $A = (Df)(0)$, and correspondingly $f_1 (x) = Ax$; check
that $A$ is semisimple; {\tt 2)} Determine the set $M (A)$ of matrices
commuting with $A$, i.e. such that $[A,B] = 0$; {\tt 3)} Choose a semisimple
$B \in M (A)$, and correspondingly $g_1 (x) = Bx$; {\tt 4)} Determine
$\K_k \equiv \Ker (L_A ) \cap \Ker (L_B ) \cap P_k$ for all $k \le n$; {\tt
5)} Choose $f_k$ and $g_k$ in $\K_k$ such that $f^{(k)} = \sum_{m=1}^k f_m$
and $g^{(k)} = \sum_{m=1}^k g_m$ satisfy eq. (30) up to order $k$, for $k=2$;
{\tt 6)} Repeat step 5) up to $k=n$.
Notice that in step 4) we are actually building the Normal Form expansion
up to order
$n$, while steps 5)--6) give the construction of approximate symmetries.
Once again, we stress that
it could be impossible to construct approximate symmetries up to order $n$,
i.e., -- depending on the choices made for $f_k$ and $g_k$ -- the procedure
could stop at
$k=\ell < n$.
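Step {\tt 4)} is also effectively algorithmic: on the monomial basis of
$P_k$, the operator $L_A$ becomes a matrix, and $P_k \cap \Ker (L_A )$ is its
nullspace. A minimal sketch with sympy, assuming the bracket convention
$L_A (h) = (Dh)(Ax) - Ah$ and using the matrix of eq. (42) below with $a=0$
(the kernel dimensions found, $0$ at order two and $2$ at order three, agree
with the example of the next section):

```python
import itertools
import sympy as sp

x, y = sp.symbols('x y')
VARS = [x, y]

def homological(A, h):
    """L_A(h) = (Dh)(Ax) - Ah  (sign convention assumed)."""
    X = sp.Matrix(VARS)
    return (h.jacobian(VARS) * (A * X) - A * h).expand()

def Pk_basis(k):
    """Vector fields with a single homogeneous monomial component of degree k."""
    monos = [sp.Mul(*c) for c in
             itertools.combinations_with_replacement(VARS, k)]
    basis = []
    for i in range(len(VARS)):
        for m in monos:
            v = sp.zeros(len(VARS), 1)
            v[i] = m
            basis.append(v)
    return basis

def kernel_dim(A, k):
    """Dimension of the kernel of L_A restricted to P_k."""
    basis = Pk_basis(k)
    coeffs = sp.symbols(f'c0:{len(basis)}')
    h = sum((c * b for c, b in zip(coeffs, basis)), sp.zeros(len(VARS), 1))
    img = homological(A, h)
    eqs = []
    for comp in img:
        # each monomial coefficient gives one linear equation in the c_i
        eqs.extend(sp.Poly(comp, *VARS).coeffs())
    M = sp.Matrix([[sp.diff(e, c) for c in coeffs] for e in eqs])
    return len(M.nullspace())

A = sp.Matrix([[0, -1], [1, 0]])   # eq. (42) with a = 0
print(kernel_dim(A, 2))            # 0: no resonant terms at order two
print(kernel_dim(A, 3))            # 2: the alpha, beta of eq. (47)
```

This is precisely the reduction of the PDEs to {\it algebraic} equations
mentioned below.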
Let us now focus on the problem of determining invariant manifolds; if we
identify these manifolds with the common zero level sets of a set of
functions, say $w_1 (x) = w_2 (x) = \ldots = w_r (x) = 0$,
we are led to look for a set $\{w_1 ,\ldots, w_r \}$ of functions $w_i :
R^m \to R$ such that
$$ X_f (w_i ) = K_{i,j} w_j (x) \eqno(35) $$ for some suitable matrix $K$
(sum over $j$ understood).
Obviously, a special and simple case of this condition is the one where
actually $$ X_f (w_i ) = 0. \eqno(36) $$
For the stable, unstable and center manifold, respectively, we require that
they pass through
$x_0$, and that there they are tangent to the appropriate linear subspaces;
thus we can choose
the $w_i$ to be equal, at first order, to an appropriate set of eigenvectors
of $A$. We can then
build the $w_i$ order by order: writing $$ w_i = C_{ij} x^j +
\sum_{k=2}^\infty w^{(i)}_k (x) \eqno(37) $$ we can proceed to solve (35)
-- or (36) -- order by order.
We do not want to discuss here the actual construction of invariant
manifolds in full generality,
and only consider the special case (36), which is nothing but the
perturbative
construction of constants of the motion; in this case, we can actually
consider a single function
$\rho : R^m \to R$, and write $$ \rho (x) = \sum_{k=1}^\infty \rho_k (x)
\eqno(38) $$ Inserting this series expansion and the one for $f$ into the
condition $X_f (\rho ) = 0$, we can proceed order by
order;
again, the task is simplified if at each order $k$ we first put $f$ in
Normal Form up
to order $k$, and then solve
$$ \sum_{j=1}^k ( f_j \cdot \grad ) \rho_{k-j+1} = 0 \eqno(39) $$ (where
$\rho_1,\ldots,\rho_{k-1}$
have been previously determined) for $\rho_k$. Indeed, by having $f$ in
Normal Form up to order $k$
we know automatically [5] that $$ I_f^{(k)} \subseteq I_A^{(k)} \eqno(40)
$$ so that we have to
consider only a relatively small space of functions\footnote{In some cases,
the presence of a
symmetry can actually simplify this problem further, see [5].}.
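As an elementary illustration of condition (39), one can check symbolically
that $\rho = x^2 + y^2$ is a constant of motion for a vector field already in
Normal Form; the field used below, $f = (1 + x^2 + y^2)\, Ax$ with $A$ the
standard rotation generator, is an assumed example chosen for this check and
is not the system analyzed in the next section:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# assumed illustrative field, already in Normal Form:
# f = A x + (x^2 + y^2) A x, with A the standard rotation generator
A = sp.Matrix([[0, -1], [1, 0]])
X = sp.Matrix([x, y])
f = (1 + x**2 + y**2) * (A * X)

rho = x**2 + y**2                     # candidate constant of motion
Xf_rho = sp.expand(f[0]*sp.diff(rho, x) + f[1]*sp.diff(rho, y))
print(Xf_rho)   # 0: X_f(rho) = 0, so rho is a constant of motion
```

Each term of the expansion of $f$ annihilates $\rho$ separately, which is
exactly the order-by-order structure of (39).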
It should be mentioned that in concrete computations one would use an
explicit basis
for $P_k$, made of homogeneous monomials of order $k$; the use of such a
basis transforms the PDEs determining $P_k \cap \Ker (L_A )$, $h_k$, and
so on, into {\it
algebraic} equations. Clearly, the dimension of $P_k$ increases rapidly
with $k$, so
that strong limitations exist on the viability of the procedure for higher
order computations. Notice also that at several points we have some freedom
left, e.g. in
the determination of $h_k$ (there could be free parameters), and each choice
produces a different branch of the algorithm.
\section{An explicit example}
In order to illustrate our discussion, we follow our procedure (up to
order three) in an
elementary example\footnote{Other explicit examples are given in [1--5].}:
consider the system
$$ \cases{ {\dot x} = ax - y + 2 x y + (x^2 + y^2 ) y + x^3 & \cr {\dot y}
= x + ay + 2 y^2 - (x^2 +
y^2 ) x + y^3 & \cr} \eqno(41) $$ for which obviously $$ A = \pmatrix{a &
-1 \cr 1 & a\cr} \eqno(42) $$ and $M(A)$ corresponds to matrices of the
form
($b,c$ real constants)
$$ B = \pmatrix{b & c \cr - c & b \cr}\; . \eqno(43) $$ It is easy to check
explicitly that (for any value of $a$) $ P_2 \cap \Ker (L_A ) = \{ 0 \}$,
so
that we can solve (uniquely!) the homological equation at order two, $$
L_A (h_2 ) = - f_2 = - \pmatrix{ 2 xy \cr 2 y^2 \cr}\; . \eqno(44) $$ The
explicit form of the solution
is
$$ \cases{
\displaystyle h_2^1 = \frac{2 }{ 1 + a^2} x^2 - \frac{2 a }{ 1 + a^2} x y &
\cr\cr \displaystyle h_2^2 = \frac{2 }{ 1 + a^2} x y - \frac{2 a }{ 1 +
a^2} y^2 & \cr} \eqno(45) $$
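This solution can be verified symbolically for arbitrary $a$; the sketch
below assumes the bracket convention $L_A (h) = (Dh)(Ax) - Ah$, which is
consistent with (44) and (45):

```python
import sympy as sp

x, y, a = sp.symbols('x y a', real=True)

A = sp.Matrix([[a, -1], [1, a]])                 # eq. (42)
f2 = sp.Matrix([2*x*y, 2*y**2])                  # quadratic part of eq. (41)
h2 = sp.Matrix([2*x**2 - 2*a*x*y,
                2*x*y - 2*a*y**2]) / (1 + a**2)  # eq. (45)

# homological operator L_A(h) = (Dh)(Ax) - Ah  (sign convention assumed)
X = sp.Matrix([x, y])
LA_h2 = h2.jacobian([x, y]) * (A * X) - A * h2

residual = (LA_h2 + f2).applyfunc(sp.cancel)     # vanishes iff L_A(h2) = -f2
print(residual)
```

The residual is the zero vector, so (45) indeed solves the homological
equation at order two for every $a$.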
In the new coordinates, corresponding to the transformation generated by
$h_2$, we get
$f_2 = 0$, and the symmetry equation (30) at order two is simply $$ \{ A x
, g_2 \} + \{ f_2 , Bx \} \equiv \{ Ax , g_2 \} \equiv L_A (g_2 ) = 0
\eqno(46) $$ which
yields $g_2 = 0$.
In these coordinates, we expect the term $f_3$ to assume a more complicated
expression. However,
notice that $P_3 \cap \Ker (L_A )$ is trivial for $a \not=0$ (so that we get
$g_3 = 0$ as well), while
for the more interesting case $a=0$ it is spanned by $$ \alpha = \pmatrix{
(x^2 + y^2 ) x \cr (x^2 + y^2 ) y \cr} ~~,~~ \beta= \pmatrix{ - (x^2 + y^2
) y
\cr (x^2 + y^2 ) x \cr}\; , \eqno(47) $$ i.e., by terms of the form $(x^2 +
y^2 ) Bx$ (this corresponds
to a general result, see [5]).
In the present case, when we consider the new coordinates, solve the
homological equation at order
three, and pass again to new coordinates corresponding to the $h_3$, we get
$$ f_3 = (x^2 + y^2 ) \pmatrix{ {3 \over 4} x + y \cr {3 \over 4} y - x \cr}
\eqno(48) $$ This can also be
seen without explicit
computations: both $f_2$ and $h_2$ are in $\Ran (L_A )$, and so is
therefore $\{ h_2 , f_2 \}$,
which is the term added to $f_3$ as a result of the change of coordinates
corresponding to $h_2$; hence the $f_3$ in (48) is just the projection of
the cubic term of (41) on $\Ker (L_A )$.
The symmetry equation at order three is therefore $$ \{ Ax , g_3 \} + \{
f_2 , g_2 \} + \{ f_3 , Bx \} = L_A (g_3 ) - L_B (f_3 ) = 0 \eqno(49) $$ If
in $B$, see (43), we choose $b \not= 0$, this has no solutions (i.e., the
corresponding approximate
symmetry is of order two only); for $b=0$ we get $L_B (f_3 ) = 0$, and
$g_3$ can be chosen as any
linear combination of the $\alpha , \beta$ in (47). Notice that if $g_3 = -
c f_3$, we get $X_\vphi
= - c X_f$ (up to order three).
Pursuing the computations to higher orders, we would encounter the same
situation at even and odd orders, respectively.
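The order-three statements can also be checked symbolically: with $b = 0$,
both $\alpha$ and $\beta$ of (47) are annihilated by $L_A$ and by $L_B$, so
any linear combination of them solves (49). A sketch, with the same assumed
bracket convention as before:

```python
import sympy as sp

x, y, c = sp.symbols('x y c', real=True)
X = sp.Matrix([x, y])

def L(M, v):
    """Homological operator L_M(v) = (Dv)(Mx) - Mv (sign convention assumed)."""
    return (v.jacobian([x, y]) * (M * X) - M * v).expand()

A = sp.Matrix([[0, -1], [1, 0]])              # eq. (42) with a = 0
B = sp.Matrix([[0, c], [-c, 0]])              # eq. (43) with b = 0
alpha = (x**2 + y**2) * sp.Matrix([x, y])     # eq. (47)
beta = (x**2 + y**2) * sp.Matrix([-y, x])

# both fields are annihilated by L_A and L_B, so any combination
# g_3 = p*alpha + q*beta solves the order-three symmetry equation (49)
for v in (alpha, beta):
    assert L(A, v) == sp.zeros(2, 1)
    assert L(B, v) == sp.zeros(2, 1)
print("alpha and beta lie in Ker(L_A) and Ker(L_B)")
```

In particular, since $f_3$ is itself a combination of $\alpha$ and $\beta$,
this confirms $L_B (f_3 ) = 0$ for $b = 0$.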
\begin{thebibliography}{27}
\bibitem{1} G. Cicogna and G. Gaeta, ``Poincar\'e normal forms and Lie point
symmetries'',
{\it J. Phys. A (Math. Gen.)} {\bf 27} (1994), 461.
\bibitem{2} G. Cicogna and G. Gaeta, ``Normal forms and nonlinear
symmetries'', {\it J. Phys. A (Math. Gen.)} {\bf 27} (1994), 7115.
\bibitem{3} G. Cicogna and G. Gaeta, ``Symmetry invariance and center
manifolds for dynamical systems'',
{\it Nuovo Cimento B} {\bf 109} (1994), 59.
\bibitem{4} G. Cicogna and G. Gaeta, ``Approximate symmetries in dynamical
systems'',
{\it Nuovo Cimento B} {\bf 109} (1994), 989.
\bibitem{5} G. Cicogna and G. Gaeta, ``On symmetry and normal form
theory'', Preprint 1995.
\bibitem{6} V.I. Arnold, {\it Geometrical methods in the theory of
differential equations};
Springer, Berlin, 1988.
\bibitem{7} V.I. Arnold and Yu.S. Il'yashenko, ``Ordinary differential
equations''; in
{\it Dynamical Systems - I}, D.V. Anosov and V.I. Arnold eds., E.M.S.,
Springer, Berlin, 1988.
\bibitem{8} F. Verhulst, {\it Nonlinear differential equations and
dynamical systems}, Springer, Berlin 1990.
\bibitem{9} J. Guckenheimer and P. Holmes, {\it Nonlinear oscillations,
dynamical systems, and bifurcations of vector fields}, Springer, New York,
1983.
\bibitem{10} D. Ruelle, {\it Elements of differentiable dynamics and
bifurcation theory}, Academic Press, London.
\bibitem{11} S.N. Chow and J. Hale, {\it Methods of bifurcation theory},
Springer, New York, 1982.
\bibitem{12} J.D. Crawford, ``Introduction to bifurcation theory'', {\it
Rev. Mod. Phys.} {\bf 63} (1991), 991.
\bibitem{13} G. Cicogna and G. Gaeta, ``Lie-point symmetries in bifurcation
problems'', {\it Ann. Inst. H. Poincar\'e} {\bf 56} (1992), 375.
\bibitem{14} G. Cicogna and G. Gaeta, ``Nonlinear symmetries in bifurcation
theory'', {\it Phys. Lett. A} {\bf 172} (1993), 361.
\bibitem{15} D.H. Sattinger,
{\it Group Theoretic Methods in Bifurcation Theory}, LNM 762, Springer,
Berlin, 1979;
{\it Branching in the Presence of Symmetry}, S.I.A.M., Philadelphia, 1984.
\bibitem{16} L. Michel, ``Symmetry defects and broken symmetry.
Configurations. Hidden symmetry'', {\it Rev. Mod. Phys.} {\bf 52} (1980),
617.
\bibitem{17} M. Golubitsky, D. Schaeffer and I. Stewart, {\it Singularities
and groups in bifurcation theory - vol. II}, Springer, Berlin, 1988.
\bibitem{18} G. Gaeta, ``Bifurcation and symmetry breaking'', {\it Phys.
Rep.} {\bf 189} (1990), 1.
\bibitem{19} J.D. Crawford and E. Knobloch, ``Symmetry and
symmetry-breaking bifurcations in fluid mechanics'', {\it Ann. Rev. Fluid
Mech.} {\bf 23} (1991), 341.
\bibitem{20} P.J. Olver, {\it Applications of Lie groups to differential
equations}, Springer, Berlin, 1986.
\bibitem{21} G.W. Bluman, S. Kumei, {\it Symmetries and differential
equations}, Springer, Berlin, 1989.
\bibitem{22} G. Gaeta, {\it Nonlinear symmetries and nonlinear equations},
Kluwer, Dordrecht, 1994.
\bibitem{23} A.D. Bruno, {\it Local methods in nonlinear differential
equations}, Springer, Berlin, 1989.
\bibitem{24} A.D. Bruno and S. Walcher, {\it J. Math. Anal. Appl.} {\bf
183} (1994), 571.
\bibitem{25} G. Cicogna, ``Symmetries in dynamical systems and convergent
normal forms'', {\it J. Phys. A} {\bf 28} (1995), L179.
\bibitem{26} J. Sanders and F. Verhulst, {\it Averaging methods in
nonlinear dynamical systems}, Springer, Berlin, 1985.
\bibitem{27} V.I. Arnold, ed., {\it Dynamical Systems - III: Classical
Mechanics}, E.M.S., Springer, Berlin, 1993.
\end{thebibliography}
\end{document}