\documentclass{article}
\begin{document}
\def\giorno{9 January 2001}
\def\sn{**}
\def\pa{\partial}
\def\a{\alpha}
\def\b{\beta}
\def\la{\lambda}
\def\s{\sigma}
\def\De{\Delta}
\def\Ga{\Gamma}
\def\F{{\cal F}}
\def\L{{\cal L}}
\def\M{{\cal M}}
\def\Y{{\cal Y}}
\def\X{{\cal X}}
\def\V{{\cal V}}
\def\W{{\cal W}}
\def\H{{\cal H}}
\def\h{{\cal H}}
\def\G{{\cal G}}
\def\sse{\subseteq}
\def\ss{\subset}
\def\ker{{\rm Ker}}
\def\ran{{\rm Ran}}
\def\({\left(}
\def\){\right)}
\def\[{\left[}
\def\]{\right]}
\def\~#1{{\widetilde #1}}
\def\^#1{{\widehat #1}}
\def\=#1{{\widetilde #1}}
\def\frac#1#2{{#1 \over #2}}
\def\hot{{\rm h.o.t.}}
\def\eb{{\bf e}}
\def\vb{{\bf v}}
\def\xb{{\bf x}}
\def\C{{\bf C}}
\def\N{{\bf N}}
\def\R{{\bf R}}
\def\Q{{\bf Q}}
\title{Poincar\'e renormalized forms and regular singular points of vector fields in the plane}
\author{Giuseppe Gaeta \\
{\it Dipartimento di Fisica, Universit\`a di Roma ``La Sapienza''} \\
{\it P.le A. Moro 5, I--00185 Roma (Italy)} \\
{\tt giuseppe.gaeta@roma1.infn.it} }
\maketitle
{\bf Summary.} We discuss the local behaviour of vector fields in the plane $\R^2$ around a singular point (i.e. a zero), on the basis of standard (Poincar\'e-Dulac) normal forms theory, and from the point of view of Poincar\'e renormalized forms \cite{IHP}. We give a complete classification for regular singular points and provide explicit formulas for non-degenerate cases. A computational error for a degenerate case of codimension 3 contained in previous work is corrected. We also discuss an alternative scheme of reduction of normal forms, based on Lie algebraic properties, and use it to discuss certain degenerate cases.
\bigskip
\section*{Introduction}
\def\sn{0}
The theory and method of normal forms \cite{Arn1,Arn2,CGs,Elp,Gle,IoA,Ver,Wal}, whose
origins go back to the work of Poincar\'e at the end of the XIX century, constitute a
fundamental tool to study the behaviour of dynamical systems locally near a known solution.
Here we will focus on the local study near a stationary solution, and on systems in two dimensions. We will thus consider systems of the type
$$ {\dot \xi} \ = f (\xi ) \ \equiv \ A \xi \, + \, \sum_{k=1}^\infty f_k (\xi ) \ , $$
where $\xi = (x,y) \in \R^2$, $A$ is a $(2 \times
2)$ real matrix, and the $f_k (\xi)$ are two-dimensional vectors whose components are homogeneous polynomials of degree $(k+1)$ in the $x,y$ variables (this can be thought of as a Taylor series).
Equivalently, we will consider the vector fields ($f$ as above)
$$ X_f \ := \ f^{(i)} (\xi) (\pa / \pa \xi^i ) \ \equiv \ f^{(1)} (x,y) \pa_x + f^{(2)} (x,y) \pa_y \ . $$
The normal form of the dynamical system (or equivalently of the vector field) given above depends on the properties of the linear part $A \xi$, and in particular on the eigenvalues of the matrix $A$.
As is well known, the normal form is unique -- and given simply by the linear part of the system -- when the eigenvalues are nonresonant (the definition of this and other notions will be recalled below in section 1), while for resonant eigenvalues the normal form is in general not unique and can depend on infinitely many arbitrary constants.
Needless to say, this wealth of normal forms for systems with a given linear part $A \xi$ reflects the variety of possible behaviours of nonlinear systems sharing the same linear part; however, it is also well known that this classification is to some extent redundant: indeed, a single system with resonant linear part does not have a unique normal form.
This lack of uniqueness is related to some freedom in the choice of the generating functions $h_1 , h_2 , ...$ for the coordinate transformations needed to take the system in normal form following the Poincar\'e normalization algorithm.
Indeed, such functions are determined up to elements of $\ker (\L_0 )$, where the operator $\L_0$ -- known as the homological operator -- is defined by the matrix $A$ and has a nontrivial kernel for resonant systems.
Thus several authors have tried to devise ways to reduce this
redundancy of the normal form classification, and on the other hand to take advantage of the freedom in the choice of the $h_k$ mentioned above; in this respect one should quote \cite{Bai,BaC,BaS,Bro,BrT,Brus,CDD,Kum,vdM,Tak,Ush}. This problem was actually already mentioned by Dulac \cite{Dul}.
One of these attempts, which I proposed in \cite{LMP,IHP}, is based
on a direct generalization of the Poincar\'e algorithm so as to control the effect of normalizing transformations at higher orders; this is obtained by considering higher order homological operators and the related homological equations (details on this approach will be given below in section 2).
As this is essentially based on repeated Poincar\'e normalizations, the resulting ``further simplified'' normal form has been called {\bf Poincar\'e renormalized form } (PRF).
It should be stressed that this approach is completely algorithmic and
constructive, i.e. we can -- as easily (or more precisely, with the same kind of computational difficulties) as in the standard normal form (NF) approach -- determine explicitly, by completely standardized computations\footnote{These are easily implemented via a symbolic manipulation computer language, such as Mathematica or Maple.}, the changes of coordinates needed to take the system in PRF.
On the other hand the PRF of a given system is not guaranteed, in general terms, to be unique.
In this respect, we should recall that other (previous)
approaches were able to obtain a unique normal form \cite{Bai,KOW}; however, these are of quite difficult practical implementation.
It should also be recalled that the PRF approach owes much to Broer's approach \cite{Bro}, which sets normal forms theory in the frame of Lie algebras; see also \cite{BrT,Tak}. This line was also developed by Baider and coworkers \cite{Bai,BaC,BaS}, and indeed the algebra $\G = \X \oplus \Y$ which will be central to our study below was already considered by Baider (who called this an ${\cal A} {\cal B}$ algebra).
In this note I want to use PRFs to analyze the behaviour of vector
fields (dynamical systems) in the plane $\R^2$ locally near singular points (equilibria). In particular I will focus on regular singular points (equilibria where the linearization of the system has at least one nonzero eigenvalue), as for non-regular ones normal forms theory does not produce relevant results, and one has to resort to other tools of singularity theory (see e.g. \cite{Arn2,Ily}).
This analysis will be on the formal level only (I will give
convergence results when possible, but this will not cover cases with nontrivial normal form). I recall that this is standard in normal
forms theory; formal results are nevertheless useful for the analysis of the system, in ways I will not discuss here; see e.g.
\cite{Arn1,Arn2,CGs,Gle,IoA,Ver,Wal} for this matter.
In some cases -- that is, for some classes of linear parts -- the standard normal form is unique (trivial) and thus standard theory gives a completely defined answer; in some other cases, the standard NF is not unique, and PRF theory is
able to improve the classification provided by the standard theory. We will discuss this matter in section 4 on the basis of a linear part classification.
Together with general results, I will also give detailed computations up to some finite order (typically up to terms of order six in the $x,y$ variables) with explicit identification of the transformations needed to take a system in
PRF, including closed-form expression of the numerical coefficients. This will
show that the required computations are actually easy to implement in practice.
\bigskip
The first part (sections 1--4) is devoted to a general discussion of normal forms, their structure and reduction. The second part (sections 5--10) discusses the two-dimensional case in full detail. Some conclusions and appendices are also presented.
The detailed {\bf plan of the paper } is as follows.
In the next section 1 I will briefly recall some basic aspects of (standard) normal forms, mainly to fix notation;
in section 2 I will recall some basic aspects and formulas of Poincar\'e renormalized forms, again fixing the notation to be freely used afterwards.
In section 3 we discuss some qualitative features of vector fields in NF and of the PRF reduction; it is remarked that when the linear part is semisimple and its spectrum satisfies a certain condition (which is the case for the linear parts we have to consider), the structure of the Lie algebra of vector fields in normal form is severely constrained and is indeed the same for all nontrivial two-dimensional cases. We also remark, in subsection 3.3, that this Lie algebraic structure can be used to obtain a more effective reduction of the NF than with the ``generic'' PRF algorithm discussed in \cite{LMP,IHP}; we call the normal form obtained in this way -- which is not necessarily a PRF -- a ``Lie renormalized form'' (LRF). In subsection 3.4 it is shown how to generalize the construction to more general cases. The discussion in this section is original, although strongly related to work by Broer and Takens, and Baider, Churchill and Sanders.
In section 4 I will give the (elementary) classification of linearizations of a
vector field around a singular point (a zero). Here I will also discuss the known results for each of these cases, concerning normal forms and PRFs, thus identifying the cases to be discussed to complete the existing results; it will turn out that we need to discuss only the three cases {\bf S2 -- S4} in the classification. In two of them ({\bf S3,S4}) the NF is nontrivial and the PRF has not been studied in previous work, while in case {\bf S2} the PRF has been studied previously \cite{LMP,IHP}, but the results given there contained a computational error in a codimension three degenerate case. In cases {\bf S1,N1} the PRF is trivial (i.e. identical to the standard NF), and in the non-regular case {\bf N2} it has been discussed in previous work \cite{IHP}. The following sections are then devoted to discussing the two cases {\bf S3} and {\bf S4} of the classification given in section 4; in sections 5, 6 and 7 we discuss case {\bf S3}, considering first the standard NF in section 5, then the PRF in section 6, and finally explicit formulas for the normalizing transformation up to order five (i.e. for the PRF up to order six) in section 7. Sections 8 and 9 are devoted to the study of case {\bf S4}, according to the same scheme. In section 10 we briefly recall, for the sake of completeness, the results previously obtained for the other cases where the PRF is nontrivial, i.e. {\bf S2} and {\bf N2}; we also correct the error mentioned above for case {\bf S2}.
We also add three appendices. Appendix A is devoted to discussing the Lie-Poincar\'e changes of coordinates we actually perform, and the determination of the analyticity domain for the transformation to PRF up to a finite order $k$. Appendix B is devoted to a system considered by Bruno and Petrovich, and to the application of the PRF scheme to it. Finally, in Appendix C we notice that the discussion of section 3 allows us to apply our present (two-dimensional) computations to higher dimensional cases as well, and we identify the three dimensional cases to which they directly apply.
\medskip
We use frequently the abbreviations {\bf NF}(s) for normal form(s), and {\bf PRF}(s) for Poincar\'e renormalized form(s). We also use, starting from subsection 3.3, the abbreviation {\bf LRF}(s) for Lie renormalized form(s). Equations are consecutively numbered in each section, and we omit the section number when referring to equations of the same section.
{\bf Remark 1.} It should be stressed that the computations given in \cite{LMP,IHP} for the case where the linear part of the vector field at the singular point is a pure rotation, and the nonlinear part is degenerate (the dilation part being more degenerate than the rotation one), could confuse the reader for two reasons: first, they were actually employing the LRF rather than the PRF scheme\footnote{Indeed, in the application sections of \cite{LMP,IHP} the term PRF was employed to denote general reduced normal forms obtained through transformations generated by solutions of higher homological equations, independently of the precise reduction scheme followed; in some codimension three (or higher) degenerate cases, this was not consistent with the definition given in the theory parts. This point is discussed in detail in \cite{GaL}.}, and second they contained a computational mistake: some of the coefficients cannot be eliminated. This point (which is not the point raised by Bruno, see below) is corrected in section 10. $\odot$
{\bf Remark 2.} The PRF approach was subject to some criticism by Bruno. I will discuss this matter, and the example discussed by him, in Appendix B. $\odot$
{\bf Remark 3.} The explicit formulas obtained below were all computed by using a simple {\it Mathematica } code, very far from being optimized, running on an AMD K6 processor (these processors are now obsolete and the CPU sells for around 50 dollars) in a Toshiba laptop computer with 64 MB RAM. Each of the cases considered required a CPU time of less than one minute for computing, formatting, and displaying (this of course does not take into account the -- machine and human -- time spent for development of the {\it Mathematica} programs). It is therefore clear that the method can be implemented, going to high order, without requiring a large computational apparatus. $\odot$
\subsection*{Acknowledgements}
I would like to thank Dario Bambusi and Giampaolo Cicogna for bringing
\cite{Brep} to my attention, and prof. A.D. Bruno for E-mailing me a copy of his second review and for sending me his preprint \cite{Bprep}. Last but not least, warm thanks go to prof. Todor Gramchev for translating the relevant parts of \cite{Bprep}.
\vskip 2 truecm
\section{Poincar\'e normal forms}
\def\sn{1}
We will consider vector fields in $\R^2$; we will use coordinates $(x,y)$ in
$\R^2$, corresponding to the basis $(\eb_1 , \eb_2)$, and denote a generic
vector as $\xi = (\xi^1 , \xi^2)$. We will also write $\pa_i \equiv \pa / \pa
\xi^i$.
We will denote by $\F$ the set of polynomial vector functions, i.e. of
polynomial functions $f : \R^2 \to \R^2$, having a zero in the origin; we
denote by $\F_k \ss \F$ ($k \ge 0$) the set of polynomial vector functions
homogeneous of degree $k+1$ in the $\xi$.
We denote by $\W$ the Lie algebra of polynomial vector fields in $\R^2$
equipped with the commutator operation.
If we focus on the coordinate expression of vector fields, the role of the
commutator is taken by the Lie-Poisson bracket $\{ . , . \}$ defined as
$$
\{ f,g \} \ := \ (f^j \pa_j ) g - (g^j \pa_j ) f \ . \eqno(\sn.1) $$
Indeed,
writing $X_f = f^i \pa_i$, $X_g = g^i \pa_i$, we have
$$ [ X_f , X_g ] \ = \
X_{\{f,g\} } \ . \eqno(\sn.2) $$
The set $\F$ equipped with the bracket $\{.,.\}$ is a Lie algebra.
Notice that $\{ ., . \} : \F_k \times \F_m \to \F_{k+m}$.
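As an elementary illustration of this grading, take for instance $f = x^2 \, \eb_1 \in \F_1$ and $g = x y \, \eb_2 \in \F_1$; then (1) gives
$$ \{ f , g \} \ = \ x^2 \, \pa_x (x y) \, \eb_2 \ - \ x y \, \pa_y (x^2) \, \eb_1 \ = \ x^2 y \ \eb_2 \ \in \ \F_2 \ , $$
in agreement with $\{ . , . \} : \F_1 \times \F_1 \to \F_2$.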
We will also, with an abuse of notation, denote by $\W_k$ the set of vector fields whose components are homogeneous of degree $k+1$ in the $\xi$, and by $W_k$ the homogeneous part of order $k+1$ of the vector field $W$. Obviously
these are not intrinsically defined notions, but depend on the coordinates we use; thus if we consider a vector field $W$, when we change coordinates the $W_k$ will also change (but near-identity changes of coordinates $\xi^i \to \=\xi^i = \xi^i + \psi^i (\xi)$, where $\psi \in \F_m$, will preserve the $W_k$ with $k < m$).
To the linear part $A \xi$ of a vector function $f \in \F$ we associate the homological operator $\L_0 = \{ A \xi , . \}$.
Notice that $\L_0 : \F_k \to \F_k$.
We can also define the homological operator in $\W$ rather than in $\F$, as follows. If the linear part of $f$ is given by $A\xi$, we will denote the linear part (in the $\xi$ coordinates) of
$X_f$ as $X_A$. To the linear part $X_A$ of a vector field $X_f$ we associate the homological operator $\L_0 = [X_A , . ]$; note that $\L_0 : \W_k \to \W_k$.
We will equip $\F_k$ (and thus all of $V = \F_0 \oplus \F_1
\oplus....$) with the Bargmann scalar product \cite{Elp,IoA}; this is defined as follows:
$$ ( x^{\mu_1} y^{\mu_2} \eb_\a \, , \, x^{\nu_1} y^{\nu_2} \eb_\b ) \ := \ \delta_{\a,\b} \, \langle \mu , \nu \rangle \ ; \ \ \ \langle \mu , \nu \rangle \ := \ \delta_{\mu , \nu} \ { \pa^{\mu_1 + \mu_2}\, x^{\nu_1} y^{\nu_2} \over \pa x^{\mu_1} \, \pa y^{\mu_2} } \ . \eqno(\sn.3) $$
With this choice\footnote{Using the standard scalar product \cite{Arn1} would differ, here and below, only in some coefficients.} of scalar product in $V$, the adjoint of $\L_0$ is given by
$\L_0^+ = \{ A^+ \xi , . \} $, where $A^+$ is the adjoint of $A$: $A^+_{ij} = A^*_{ji}$.
The operators $\L_0$ and $\L_0^+$ play a crucial role in discussing the properties of $f$ under Poincar\'e transformations, i.e. under near-identity changes of coordinates in $\R^2$,
given by $ {\=\xi}^i = \xi^i + h^i (\xi )$, with $ h \equiv h_k
\in \F_k$.
It is well known that by a careful use of Poincar\'e
transformations, i.e.
performing them for $k=1,2,...$ successively and
choosing the $h_k$'s as
solution to the homological equations (see below),
one can eliminate all terms
in the range of $\L_0$.
That is, one can pass to coordinates $\eta$ which
reduce the coordinate expression of $f$ to a form (the {\it Poincar\'e-Dulac normal form}, or simply normal form) ${\^f}$, where $\^f (\eta) = A \eta + \^F
(\eta )$ and $\^F \in \ker (\L_0^+)$.
It is also well known that $\ker (\L_0^+)$ is contained in (and, for $A$ semisimple, coincides with) the set of resonant vectors, which are defined as follows. Consider a basis in $\R^2$ such that $A$ is in Jordan normal form, and let $\la_1 , \la_2$ be its eigenvalues (possibly equal).
Then a resonant monomial vector $\xi^\mu \eb_\b$ of order $|\mu|$
is a vector $\vb$ with components (we write the
vector indices as lower ones for ease of notation) $\vb_\a = \xi_{(1)}^{\mu_1}
\xi_{(2)}^{\mu_2} \delta_{\a,\b} = x^{\mu_1} y^{\mu_2} \delta_{\a,\b}$,
where $|\mu| = \mu_1 + \mu_2 > 1$ and the
$\mu_i$ are non-negative integers satisfying the {\it resonance relation}
$$ \mu_1 \la_1 \, + \, \mu_2 \la_2 \ = \ \la_\b \ . \eqno(\sn.4) $$
The linear span of resonant monomial vectors is the space of resonant vectors, i.e. $\ker (\L_0^+) \backslash [ \ker (\L_0^+) \cap \F_0]$.
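As a concrete example, take $A = {\rm diag} (1,2)$, i.e. $\la_1 = 1$, $\la_2 = 2$. Then the only solution of (4) with $|\mu| > 1$ is
$$ \mu_1 \cdot 1 \ + \ \mu_2 \cdot 2 \ = \ \la_\b \ \ \Rightarrow \ \ (\mu_1 , \mu_2 ; \b) \ = \ (2,0;2) \ , $$
so that the only resonant monomial vector is $x^2 \eb_2$, and the normal form of any system with this linear part is $\dot x = x$, $\dot y = 2 y + c \, x^2$ for some constant $c$.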
The {\it homological equation} for $h_k$
is given as follows: let $\~f$ be the expression of $f$ obtained after
operating the previous Poincar\'e transformations, and let $\pi_k$ be the projection operator $\pi_k : \F_k \to \ran (\L_0) \cap \F_k$; then
$$ \L_0 (h_k ) \ = \ \pi_k \, {\~f}_k \eqno(\sn.5) $$
is the required homological equation for $h_k$; notice that the solution to this is uniquely defined up to elements of $\ker (\L_0 )$.
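When $A$ is diagonal, $A = {\rm diag} (\la_1 , \la_2)$, the solution of (5) is immediate: a direct computation from (1) shows that $\L_0$ is diagonal on monomial vectors,
$$ \L_0 ( x^{\mu_1} y^{\mu_2} \, \eb_\b ) \ = \ ( \mu_1 \la_1 + \mu_2 \la_2 - \la_\b ) \ x^{\mu_1} y^{\mu_2} \, \eb_\b \ , $$
so that $\ker (\L_0)$ is spanned by the resonant monomial vectors, and (5) can be solved monomial by monomial, dividing each component of $\pi_k {\~f}_k$ by the corresponding (nonzero) eigenvalue.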
We refer e.g. to \cite{Arn1,Arn2,CGs,Elp,Gle,IoA,Ver,Wal,Wal3} for
further detail on standard normal forms and the normalizing transformation.
\section{Poincar\'e renormalized forms}
\def\sn{2}
In order to discuss PRFs, it is convenient to use Lie-Poincar\'e -- rather than Poincar\'e -- transformations. Let us first of all briefly discuss these, referring to e.g. \cite{BGG,Dep,MiL,Wal3} or \cite{CGs,IHP} for further detail.
The function $h : \R^2 \to \R^2$, or
equivalently the vector field $H = h^{(1)} (x,y) \pa_x + h^{(2)} (x,y) \pa_y$,
generates a Lie-Poincar\'e transformation given by the time-one flow of $H$.
Thus under the Lie-Poincar\'e transformation generated by the vector field
$H$, the vector field $W$ is transformed into
$$ {\widetilde W} = e^H W
e^{-H} ; \eqno(\sn.1) $$
this can be computed up to any desired order by
means of the
classical Baker-Campbell-Hausdorff formula as
$$ {\widetilde
W} \ = \ \sum_{s=0}^\infty {1 \over s!} [[ H , W ]]^s ,
\eqno(\sn.2) $$
where we have defined the iterated commutators as
$$ [[H,W]]^0 := \ W \ \ ; \ \
[[H,W]]^s := \ \[ \, H \, , \, [[H,W]]^{s-1} \, \] \ (s\ge 1) \ .
\eqno(\sn.3) $$
If $h = h_k
\in \F_k$, from the above we have, denoting by $[a]$ the
integer part of
$a$ and with $\h (f) := \{ h , f \}$, that
$$ {\widetilde f}_m \ = \
\sum_{s=0}^{[m/k]} {1 \over s!} \, \h^s (f_{m-sk} )
\ . \eqno(\sn.4) $$
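The summation limit in (4) simply reflects the grading of the bracket: since $h_k \in \F_k$, we have
$$ \h \ : \ \F_j \ \to \ \F_{j+k} \ , \qquad \h^s ( f_{m-sk} ) \ \in \ \F_m \ , $$
and $m - sk \ge 0$ requires $s \le [m/k]$.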
We define the higher homological
operators $\L_k$ as $\L_k := \{ f_k , .
\}$; note that these make good sense only
after $f_k$ has been stabilized in
the procedure, as discussed in
\cite{LMP,IHP}.
We define the spaces $H^{(p)} \sse \F$ ($p\ge 0$) by
$H^{(0)} = \F$, and $H^{(p+1)} = H^{(p)} \cap \ker (\L_p)$ for $p\ge 0$. This
implies that $H^{(p+1)} \sse H^{(p)}$, and
$$ H^{(p)} \ = \ \ker (\L_0 )
\cap ... \cap \ker (\L_{p-1} ) \ \equiv \ {\bigcap }_{s=0}^{p-1} \, \ker
(\L_s ) \ . \eqno(\sn.5) $$
The restriction of $\L_p$ to $H^{(p)}$ will be
denoted as $\M_p$.
With this definition, we have $H^{(p+1)} = \ker (\M_p)$.
We
also define the spaces $F^{(p)} \sse \F$ ($p\ge 0$) as $F^{(0)} = \F$ and
$F^{(p)} = F^{(p-1)} \cap \ker (\M_p^+ )$ for $p \ge 1$. This implies that
$F^{(p+1)} \sse F^{(p)}$, and\footnote{The orthogonal complement must be understood in $\F$ equipped with the scalar product.}
$$ F^{(p)} \ = \
{\bigcap }_{s=0}^p \, \left[ \ran (\M_s ) \right]^\perp \ = \ {\bigcap
}_{s=0}^p \, \ker ( \M_s^+ ) \ . \eqno(\sn.6) $$
We also have $F^{(p+1)} =
F^{(p)} \backslash [\ran (\M_p) \cap F^{(p)} ]$.
We can also define the projection operators $\pi_k : \F \to \ker (\L_k )$, and $\Pi_s = \pi_{s-1} \circ ... \circ \pi_0$ for $s>0$, with $\Pi_0$ the identity operator. Similarly, we define the projection operators $P_s : \F \to \ran (\M_s )$. Notice that with these $H^{(p)} = \Pi_p \F$ and $\M_p = \L_p \circ \Pi_p$.
The function $f \in \F$ (the associated vector field $X_f \in \W$) is said to be in PRF if $f_k \in F^{(k)}$ (and then obviously $f_k \in F^{(k)}_k := F^{(k)} \cap \F_k$).
It can be shown
that any
function $f \in \F$ (any vector field $W \in \W$)
can be taken to
PRF by a sequence of suitably chosen Lie-Poincar\'e
transformations.
Let us now briefly describe two possible schemes for constructing the sequence of ``suitably chosen'' transformations; these were discussed in \cite{LMP,IHP}.
In the first case, denote by $f_k^{(0)}$ the term obtained after completing the procedure up to order $k-1$. Then operate a series of transformations with generators\footnote{The lower index will keep track of the subspace $\F_k$ to
which $h$ belongs; the upper index will keep track of the transformations already operated.} $h_k^{(0)} , h_{k-1}^{(1)},
... , h_1^{(k-1)}$, with $h_p^{(s)} \in H^{(s)} \cap \F_p$. These should be chosen as solutions to the higher order homological equations
$$ P_s f_k^{(s)} \ = \ \M_s \left( h_{k-s}^{(s)} \right) \ ; \eqno(\sn.7) $$
in other words,
$$ h_{k-s}^{(s)} \ = \ \Pi_s \circ \M_s^+ \circ P_s ( f_k^{(s)} ) \ .
\eqno(\sn.8) $$
Other schemes of further normalization are also possible; in particular, rather than putting $f_k^{(s)}$ in $F_k^{(s)}$ for $s=1,2,...,k$, and doing
this for all $k=1,2,...$, we can invert the order of iterations, i.e. put $f_k^{(s)}$ in $F_k^{(s)}$ for all $k \ge s$, and do that for all $s=1,2,...$;
in this case for $s=1$ we obtain the standard NF. Notice that the equations to
solve, and the spaces to which the functions belong, are the same in the two
cases; however, the form to which $f_k$ has been taken by previous parts of
the procedure when we deal with $f_k^{(s)}$ can be different.
Due to the non-uniqueness of PRFs, these two procedures can in principle give
different PRFs, i.e. the arbitrary coefficients which appear in the general
form of the PRF for a given system can take different values depending on the procedure we have followed.
We refer to \cite{CGs,IHP} for further detail concerning Poincar\'e
renormalized forms and related matters, including the Hamiltonian version of the theory and the role of (linear) symmetries.
{\bf Remark 4.} The idea of using $\L_k$ with the same role as $\L_0$ was already contained in \cite{Tak}; at the time of writing \cite{LMP,IHP} I had not realized this, and did not give proper credit. $\odot$
{\bf Remark 5.} The schemes mentioned here are ``generic'', i.e. do not take into account the Lie algebraic structure of the set $\G$ of vector fields in normal form (with respect to a given linear part). This point will be considered in section 3, where a ``$\G$-adapted'' procedure is discussed; this will also make transparent the relation between this approach and Broer's one \cite{Bro}.
\section{Reduction of normal forms and Lie algebras}
\def\sn{3}
We want to comment on the qualitative aspects of the reduction to PRF of a vector field already in standard NF. We will first discuss these in general, and then focus on the two-dimensional case. It will turn out that, making use of the Lie algebraic structure of the set of vector fields in normal form, one can obtain a better reduction of the normal form.
\subsection{General considerations}
There are several (equivalent) algebraic characterizations for the standard normal form, and we want here to use one of them, given by \cite{Elp}; see also \cite{CGs,Wal}.
We will rewrite a vector field $X$ in the form
$$ X \ = \ X_0 \ + \ Z \eqno(\sn.1) $$
where $X_0$ is the linear part of $X$ in the initial coordinates on $\R^2$, and $Z$ is the nonlinear part of $X$ in these coordinates (this splitting is invariant under Poincar\'e or Lie-Poincar\'e changes of coordinates, although the form of $Z$ will change as we change coordinates).
As mentioned above, $Z$ is a resonant vector field, and is in the kernel of $\L_0^+$. If the matrix $A$ identifying the linear part of $X$ is semisimple or normal, then $\ker (\L_0^+) = \ker (\L_0)$ and $X,X_0,Z$ all belong to the same set $\ker (\L_0)$; otherwise the linear and nonlinear parts will belong to the kernels of different operators on $\W$.
The considerations to be presented here will have to be applied, in the following sections, only to vector fields with semisimple linear part. Thus {\it we assume, in this section only, that $A$ is semisimple}\footnote{This point is inessential as long as we only want to describe the nonlinear part of $X$: we could write the linear part as $A^+$, and the result of this subsection would still apply to $Z$. However, to be able to use these in the PRF context -- see next subsections -- we have to assume $Z$ and the vector fields $H_k$ giving the Lie-Poincar\'e transformations are both in the Lie algebra $\G_A$ defined below; the assumption of semisimplicity becomes then relevant.}. We will thus denote by $X_A$ the vector field given by $X_A = (Ax)^i \pa_i$, and assume $[X_A,X] = [X_A,Z]=0$.
We denote by $\G_A \ss \W$ the set of vector fields in $\W$ commuting with $X_A$, $\G_A := \{ W \in \W :\, [X_A , W]=0\}$.
Let $I (A)$ be the set of (formal power series) constants of motion for the vector field $X_A$, i.e. of formal power series $\phi : \R^n \to \R$ such that $X_A (\phi) = 0$; let $I^* (A)$ be the set of meromorphic (i.e. ratios of formal power series) such constants of motion. Let $I_k (A)$ (respectively, $I_k^* (A)$) be the subset of the $\phi \in I (A)$ [respectively, of the $\phi \in I^* (A)$] which are homogeneous of degree $k+1$ in the $x$, $\phi (ax) = a^{k+1} \phi(x)$.
Let $C (A)$ be the centralizer of $X_A$ in the algebra $\W_0$ of linear vector fields in $\R^n$, and let $\{ K_1 , .... , K_c \}$ be a basis for this, say with $K_1 = A$. We write $X^{(s)} = (K_s x)^i \pa_i$.
{\bf Theorem} \cite{Elp,Wal}. {\it The set $\G_A \ss \W$ of vector fields in $\W$ commuting with $X_A$ is given by vector fields $W$ of the form
$$ W \ = \ \sum_{s=1}^c \, \mu_s (x) X^{(s)} \ \equiv \ f^i (x) {\pa \over \pa x^i} , \eqno(\sn.2) $$
with $\mu_s (x) \in I^* (A)$ and such that the $f^i (x) = \mu_s (x) (K_s)^i_j x^j$ are polynomials. }
Thus, the homogeneous part of degree $k$ in $W$, which we denote as $W_k$, will be given by $W_k = \sum_{s=1}^c \a_s^{(k)} (x) X^{(s)}$ with $\a_s^{(k)} \in I_k (A)$.
{\bf Remark 6.} In many cases of interest (and in those of interest here), we actually have $\mu_s \in I (A)$, $\a_s^{(k)} \in I_k (A)$; we say then that the normal form is {\it quasilinear}. In this case the (possibly infinite dimensional) Lie algebra of vector fields in normal form with respect to a given linear part has also the structure of a finitely generated ($c$ generators) module over $I (A)$ \cite{CGs,Elp,Wal}. $\odot$
This theorem implies that the structure of normal forms becomes specially simple when $I^* (A)$ is simple, in the sense we specify below.
It is clear that if the $\la_i$ satisfy a relation of the kind
$$ \sum_{i=1}^n \ m_i \la_i \ = \ 0 \eqno(\sn.3) $$
with the $m_i$ relatively prime among them (and where $m_i \in \N$ and $|m| = \sum_i m_i \ge 1$), then the $\la_i$ satisfy an infinity of resonance relations
$$ \sum_{i=1}^n \ \mu_i \la_i \ = \ \la_r \ \ \ \ (\mu_i = \kappa m_i + \delta_{i,r}, \ \kappa \in \N; \ |\mu | \ge 2 ). \eqno(\sn.4) $$
We say therefore that (3) is a {\it master resonance}\footnote{This is also called a {\it simple resonance} in part of the literature.} and the resonances (4) are associated to this. Notice that in a finite dimensional space this is the only way to have infinitely many resonance relations satisfied.
If (3) is satisfied, then there is a monomial, which in the coordinates where $A$ is diagonalized is simply
$$ \Psi \ = \ x_1^{m_1} ... x_n^{m_n} \ = \ \prod_{i=1}^n \ x_i^{m_i} \ , \eqno(\sn.5)$$
which is a constant of motion for $X_A$; we say this is a {\it basic invariant} for $X_A$.
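The standard two-dimensional example is the linear saddle $A = {\rm diag} (1,-1)$, for which $\la_1 + \la_2 = 0$ is a master resonance with $(m_1 , m_2) = (1,1)$; the basic invariant is
$$ \Psi \ = \ x y \ , \qquad X_A (\Psi) \ = \ x \, \pa_x (x y) \ - \ y \, \pa_y (x y) \ = \ x y \, - \, x y \ = \ 0 \ , $$
and the resonances associated to the master resonance correspond to the resonant monomial vectors $(x y)^k \, x \, \eb_1$ and $(x y)^k \, y \, \eb_2$.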
If the $\la_i$ satisfy a master resonance relation, and there is no resonance relation between the $\la_i$ apart from those associated to the master resonance, then the set $\G_A$ reduces to vector fields of the form
$$ W \ = \ \sum_{k=0}^\infty \left[ \sum_{s=1}^c \ a_k^{(s)} \Psi^k X^{(s)} \right] \ := \ \sum_{k=0}^\infty \sum_{s=1}^c a_k^{(s)} X_k^{(s)} \eqno(\sn.6) $$
where we have defined
$$ X_k^{(s)} \ := \ \Psi^k X^{(s)} \ \in \ \W_k \ . \eqno(\sn.7)$$
Notice that now the algebraic structure of $\G_A$ can be immediately read off from the structure of $C(A)$ and from computing $X^{(s)} (\Psi)$.
\subsection{The two-dimensional case}
When we work in $\R^2$, $C(A)$ is two dimensional\footnote{We can choose $A$ and the identity matrix $E$ as generators of $C(A)$, provided $A \not= E$; if $A=E$ there are no resonances at all.}. Also, in $\R^2$ there can be (unless $A=0$) at most one master resonance, i.e. at most one basic invariant; if there is a master resonance, then all resonances must be associated to this.
In this case we can easily determine the structure of the infinite dimensional Lie algebra $\G_A$: indeed,
$$ [ X_k^{(1)} , X_m^{(2)} ] \ = \ m \, [ X^{(1)} (\Psi) ] \, X_{k+m}^{(2)} \ - \ k \, [ X^{(2)} (\Psi) ] \, X_{k+m}^{(1)} \ , \eqno(\sn.8)$$
where $[ X^{(s)} (\Psi) ]$ denotes the constant $c_s$ such that $X^{(s)} (\Psi) = c_s \, \Psi$ (in the coordinates diagonalizing $A$ the matrices $K_s$ are diagonal as well, so that $\Psi$ is an eigenfunction of each $X^{(s)}$). But, by definition, $X^{(1)} (\Psi) = 0$: therefore
$$ [ X_k^{(1)} , X_m^{(2)} ] \ = \ - \, k \, [ X^{(2)} (\Psi) ] \ X^{(1)}_{k+m} \ . \eqno(\sn.9) $$
Similarly one obtains
$$ [ X_k^{(1)} , X_m^{(1)} ] = 0 \ \ {\rm and} \ \ [ X_k^{(2)} , X_m^{(2)} ] = (m-k) \, [X^{(2)} (\Psi) ] \ X_{k+m}^{(2)} \ . \eqno(\sn.10)$$
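These relations are easily checked in the saddle example $A = {\rm diag} (1,-1)$, where $X^{(1)} = X_A = x \pa_x - y \pa_y$, $X^{(2)} = E := x \pa_x + y \pa_y$ and $\Psi = x y$, so that $X^{(1)} (\Psi) = 0$ and $X^{(2)} (\Psi) = 2 \Psi$, i.e. $[X^{(2)} (\Psi)] = 2$. For instance, for $k=0$, $m=1$, the second relation in (10) gives
$$ [ X_0^{(2)} \, , \, X_1^{(2)} ] \ = \ [ E \, , \, \Psi E ] \ = \ E (\Psi) \, E \ = \ 2 \, \Psi \, E \ = \ (1-0) \cdot 2 \cdot X_1^{(2)} \ . $$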
Thus all the situations in which we have a basic invariant are expected to be equivalent from the point of view of (infinite dimensional) Lie algebras.
We will indeed find this structure in our discussion. We will therefore study in great detail the first case considered, {\bf S3}, while for the other ones it will be enough to study how the results for ${\bf S3}$ map onto them.
\subsection{PRF and Lie algebraic structure: dimension two}
This structure also suggests another consideration. The general ``further reduction'' procedure used to take a system into PRF, sketched in section 2 and discussed in \cite{LMP,IHP}, does not take into account the specific structure of the algebra $\G_A$, and is thus a ``generic'' algorithm.
On the other hand, eqs. (9),(10) show that (in the interesting case where there is a master resonance) the algebra $\G_A$ has a very specific structure, which in this case is given by $\G_A = \X \oplus \Y$, where the infinite dimensional Lie algebras $\X$, $\Y$ are spanned respectively by the $X^{(2)}_k$ and the $X^{(1)}_k$ vector fields.
Notice that (9) means that $\Y$ is an abelian ideal in $\G_A$.
As mentioned above, the generators for Lie-Poincar\'e transformations in any further normalization procedure should be chosen to be in $\ker (\L_0)$, so that we remain within the class of vector fields in normal form; that is, further normalization will be concerned with inner automorphisms of the algebra $\G_A$.
If the generator is in $\Y$, this will produce an action on $\Y$ alone, not on the $\X$ part of $\G_A$. On the other hand, generators
in $\X$ will produce effects on both the $\X$ and $\Y$ parts. One can then first further normalize the $\X$ part of the normal form, up to any desired order, by Lie-Poincar\'e transformations with generators in $\X$ (this will also change the $\Y$ part of the normal form). Once this has been done, one can pass to consider transformations with generators in $\Y$; these will be able to further reduce the $\Y$ part of the normal form, without affecting the part (already further normalized) in $\X$. Notice that, due to the abelian nature of $\Y$, this will be done via the action of homological operators ``associated to vector fields in the $\X$ part of $\G$'' only; hence, elimination of $X_k^{(2)}$ terms will reduce the possibility of eliminating $X_k^{(1)}$ terms.
It should be stressed that the reduced normal form obtained in this way is {\bf not} necessarily a PRF in the sense of the definition discussed in section 2. We will therefore use the name ``Lie renormalized form'' (LRF) to emphasize the fact that it is obtained using the Lie algebraic properties of the set of vector fields in normal form and, at the same time, the main idea behind the PRF procedure.
A concrete application of this ``$\G$-adapted procedure'' will be given below when considering certain subcases, see subsections 6.3 and 7.4; in this case, indeed, the generic PRF procedure given in \cite{LMP,IHP} would produce an infinite PRF (as shown in subsections 6.1 and 6.2), while the $\G$-adapted one produces a finite LRF (as shown in subsection 6.3). In this case, it will turn out that the LRF is not a PRF.
{\bf Remark 7.} It should also be emphasized that this procedure can be seen as an implementation of Broer's idea on reduction of normal forms as filtration of Lie algebras; see also the work of Baider and coworkers. Needless to say, these authors should not be held responsible for any shortcomings of the LRF procedure. $\odot$
\subsection{Reduction of NFs and Lie algebraic structure in general}
The procedure sketched in the previous subsection can be generalized to any finite dimension, as briefly discussed here. We assume the reader has some familiarity with basic concepts from the theory of Lie algebras; this subsection is not needed in the rest of the paper.
Consider also the descending central series \cite{Kir,NaS} of $\G$, i.e. the series of $\G_k$ given by
$$ \G_0 \equiv \G \ \ , \ \ \G_{k+1} := [\G , \G_k ] \ \ ; \eqno(\sn.12)$$
recall that if this terminates in zero (after $q$ steps), we say that $\G$ is a {\it nilpotent} algebra (of rank $q$; in the present case $\G$ is infinite dimensional and in general $q = \infty$).
We write $\De_k = \G_k / \G_{k+1}$, and $\rho_k : \G \to \De_k$ will denote the projection operator on $\De_k$; it is well known that $\De_k$ is abelian. Recall also that any nilpotent algebra is also solvable\footnote{This helps in making contact with symmetry reduction for general symmetric ODEs \cite{CGs,GaK,Olv}; the structure employed here for the reduction of the normal form is also of use to study its solutions.}. If $\G$ is solvable, there is a (generally, complex) representation in terms of triangular matrices.
Thus, assume $\G$ is nilpotent; we can then consider the infinite series $\De_k$ ($k=1,2,\dots$). Let $W$ be the normal form we want to simplify; decompose it as $W = \sum_{k=0}^\infty W_k$, where $W_k \in \De_k$ (no confusion should arise with the $W_k$ arising from the decomposition in homogeneous terms considered in other sections; the same holds for other quantities with indices $k$ used below). Then we can proceed to renormalize $W$ following the sequence $\De_k$: that is, we consider at each step a normal form $W^{(k)} \in \G$ obtained from $W$ via the previous $k$ further simplifications; we consider generators $H_k \in \G_k$, and try to eliminate (as far as possible) $W_k$ via the linear ``homological equation''
$$ \rho_k \ \( \[ W^{(k)} , H_k \] \) \ = \ \pi^k (W_k ) \eqno(\sn.13)$$
where $\pi^k$ is the projection from $\G$ to the range of the operator $L^{(k)}$ associated to $W^{(k)}$, $L^{(k)} (Y) := [W^{(k)} , Y ]$. This should be seen as an equation for $H_k$, determined up to an element in $\ker (L^{(k)} )$.
Proceeding recursively in this way for $k=0,1,2,....$ we arrive at a reduced normal form, which we will call the {\bf Lie renormalized form} (LRF).
Notice that in practical situations it can be convenient to consider truncated algebras, i.e. fix a homogeneity order $N$ up to which we want to compute quantities, and perform this procedure modulo vector fields in $\W_N$ and higher.
A particularly convenient situation is the one where $\G = \X_1 \oplus \dots \oplus \X_c$ (with the notation introduced above in this section), and the subalgebras
$$ \Ga_k = \bigoplus_{p=k}^c \X_p $$
satisfy the relation (12). In this case (notice that $\Ga_k / \Ga_{k+1} = \X_{k}$) we can proceed by blocks, i.e. reduce recursively the components of $W$ in $\X_1, \X_2 , \dots , \X_c$; each reduction, that of the $\X_p$ component, will be performed with generators in $\Ga_{p+1}$, and thus will not touch terms in the $\X_q$ components with $q < p$. The generators can be determined by linear equations as in the PRF procedure.
This is precisely the situation encountered in our study of two dimensional cases, and is more general than one would think at first sight.
Notice that such a structure cannot be immediately deduced from the corresponding structure of the matrix algebra $G=C(A)$, as we now briefly discuss in the general case of dimension $n$, with $r$ independent master resonances and hence $r$ independent invariants $\psi_i$, with no resonances apart from those associated to these.
We assume moreover that the quotients in the corresponding series for $G$ are spanned by single generators, $G_{k}/G_{k+1} = \{ X^{(k)} \}$.
In this case the most general resonant vector field will be in the form
$$ X \ = \ \sum_{\a=1}^n \ \mu_\a (\psi_1 , ... , \psi_r ) \ X^{(\a)} \ := \ \mu_\a (\Psi) X^{(\a)} \ . \eqno(\sn.14)$$
We will write $\X_\a$ for the infinite dimensional algebra of the $\mu_\a (\Psi) X^{(\a)}$. As mentioned above, this is a module over $I^* (A)$ with generator $X^{(\a)}$.
Let us now consider $[\G , \G_p]$; by direct computation we have:
$$ \begin{array}{l}
\[ \mu_\a (\Psi ) X^{(\a)} , \s_\b (\Psi ) X^{(\b)} \] \ = \\
\ \ = \ \( \mu_\a (\Psi) \, {\pa \s_\b \over \pa \psi_i } \, X^{(\a)} (\psi_i) \) X^{(\b)} \ - \
\( \s_\b (\Psi) \, {\pa \mu_\a \over \pa \psi_i } \, X^{(\b)} (\psi_i) \) X^{(\a)} \ + \\
\ \ + \ \( \mu_\a (\Psi) \s_\b (\Psi) \) \, \[ X^{(\a )} , X^{(\b)} \] \ . \end{array} \eqno(\sn.15)$$
Thus, in general $[\X_\a , \X_\b ]$ does not reduce to a module generated by $[X^{(\a)} , X^{(\b)} ]$. In particular, if here $\a = 1,...,c$ and $\b = p,...,c$ (i.e. $X^{(\a)} \in \G$, $X^{(\b)} \in \G_p$), then the last term in (15) belongs to $\G_{p+1}$, but in general the first two do not, and therefore the commutator does not reduce to terms in $\G_{p+1}$.
Thus the derived and descending central series of $G$ are not automatically\footnote{More precise results could be obtained by an analysis of the algebra of invariants $\Psi$ and its interrelation with $\G$; this would however lead us too far away from the subject of the present paper, and will be presented elsewhere.} mapped into corresponding series for $\G$.
{\bf Remark 8.} We stress that the LRF procedure is well defined and implementable without requiring $A$ to be in Jordan normal form; see \cite{ScW} for the relevance of this point. $\odot$
\section{Singular points of vector fields in the plane: the basic
classification of linear parts.}
\def\sn{4}
Let $A = (Df)(x_0)$ be the linear part of $X = f^i \pa_i$ at the equilibrium point $x_0$; we can and will always shift coordinates in $\R^2$ so that $x_0$ is at the origin.
After reduction to Jordan normal form, and up to
permutation of
coordinates, the following cases are possible for $A$ (all the
constants $\mu , \mu_i$ below are understood to be real and nonzero):
$$ \begin{array}{ll}
A = \pmatrix{\mu_1 + i \mu_2 & 0 \cr 0 & \mu_1 - i \mu_2 \cr} & (S1) \\
A = \pmatrix{i \mu & 0 \cr 0 & - i \mu \cr} & (S2) \\
A = \pmatrix{0 & 0 \cr 0 & \mu \cr} & (S3) \\
A = \pmatrix{\mu_1 & 0 \cr 0 & \mu_2 \cr} & (S4) \\
A = \pmatrix{\mu & 1 \cr 0 & \mu \cr} & (N1) \\
A = \pmatrix{0 & 1 \cr 0 & 0 \cr} & (N2) \\
A = \pmatrix{0 & 0 \cr 0 & 0 \cr} & (V) \end{array} $$
In cases {\bf S1}-{\bf S4} the matrix $A$ is semisimple, in case {\bf N1} it
has a semisimple part and a nilpotent one, in case {\bf N2} it is nilpotent, with zero semisimple part. The case {\bf V} corresponds to a vanishing linear part. Thus, cases {\bf N2} and {\bf V} correspond to non-regular singular points \cite{Arn2}; it is known that in these cases normal forms theory is not very effective \cite{Arn2,Wal}, and we will not deal with them. We recall that case {\bf N2} is studied from the point of view of PRFs in \cite{IHP}; however, the results that can be obtained are very poor\footnote{This singularity is better studied with different methods, see \cite{Tak} and \cite{BaS,KOW}.}.
\bigskip
We will now briefly recall the results obtained by standard NF theory
for each of the cases listed above. In several of them the PRFs are
either trivial (i.e. coincide with the standard NF) or have been studied in \cite{LMP,IHP}, as we also briefly mention below.
In the generic case {\bf (S1)} no resonance can be present (recall we assumed $\mu_i \not=0$), so the NF is linear; moreover the eigenvalues belong to a Poincar\'e domain, and we are thus guaranteed \cite{Arn1,Arn2} that the normalizing transformation is convergent in some sufficiently small neighbourhood of the origin.
The case {\bf (S2)}, which is generic for hamiltonian systems, has infinitely many resonances. The NF is written as $W = [ 1 + \a (|x|^2 ) ] Ax + \b(|x|^2) Ex$, where $|x|^2 = x_1^2 + x_2^2$, $\a$ and $\b$ are arbitrary polynomial functions with zero constant part, and $E$ is the identity matrix;
in the hamiltonian case we obviously have $\b \equiv 0 $. The further reduction of this NF has been considered by Siegel and Moser \cite{SiM} in the hamiltonian case (see \cite{FoM} for higher dimensions), while the generic case $\b \not\equiv 0$ has been studied via PRFs in \cite{LMP,IHP}; the results for this are recalled in section 10 below. We note that for $\b \equiv 0 $ the NF satisfies ``condition A'' and thus, provided the linear part satisfies the arithmetic condition known as ``condition $\omega$'' \cite{Bru,CGs}, the transformation to NF is guaranteed to be convergent on the basis of Markhashov-Bruno-Walcher-Cicogna theory \cite{BrW,Cic,CGs,Mar,Wal4}, while no convergence result is available in the generic case.
In case {\bf S3} the eigenvalues cannot belong to a Poincar\'e domain, and it is easy to see that the NF will depend on two infinite sequences of real constants,
$$ \begin{array}{rl} {\dot x} =& \sum_{k=1}^\infty a_k x^{k+1} \\
{\dot y} =& \sum_{k=1}^\infty b_k x^k y \ . \end{array} \eqno(\sn.1) $$
The PRF in this case will be studied in detail in sections 6 and 7 (see also appendix B).
In the case {\bf S4} we should distinguish several subcases according to two criteria: first, whether $\mu_1 / \mu_2$ is rational or irrational; and second, the sign of $\mu_1 \mu_2$.
For $\mu_1 \mu_2 > 0$ the eigenvalues belong to a Poincar\'e domain, and the transformation to NF is guaranteed to be convergent on the basis of the Poincar\'e criterion; if $\mu_1 / \mu_2$ is irrational, the NF is linear, otherwise it can include resonant nonlinear terms (see subsection 8.1 for limitations on these).
For $\mu_1 \mu_2 < 0$ the eigenvalues are not in a Poincar\'e domain, and convergence of the normalizing transformation is not guaranteed by the Poincar\'e criterion. If $\mu_1 / \mu_2$ is irrational, the NF reduces to the linear form, and no further normalization is needed; moreover, convergence can be guaranteed on the basis of the Pliss theorem \cite{Pli}.
If $\mu_1 / \mu_2 = p/q \in {\bf Q}$, there can be resonances; in this case Sternberg theorem \cite{Arn2,Bel,BeK,Che,Ste} guarantees that the NF is smoothly (but in general, not analytically) equivalent to the original system.
The PRF for this case has not been studied so far, and we study it later on in section 8.
In case {\bf N1} the eigenvalues are in a Poincar\'e domain, as $\mu \not= 0$; moreover there are no resonances, and thus we have a linear NF, with a convergent normalizing transformation.
Finally, in the nonregular case {\bf N2} the standard normal form is given by $$ \begin{array}{rl} {\dot x} =& \sum_{k=1}^\infty a_k x^{k+1} \\ {\dot y} =& \sum_{k=1}^\infty \( a_k x^k y + b_k x^{k+1} \) \ . \end{array} \eqno(\sn.2) $$
As already mentioned, the PRF for this case was studied in detail in \cite{IHP}. We briefly recall the (poor) results concerning this in section 10.
\bigskip
It follows from the above summary of known results that we need to study PRFs only in the cases {\bf S3} and {\bf S4} (and, as mentioned in remark 1, to correct a formula in case {\bf S2}). Actually, as discussed in the previous section, formal computations in one of these cases can be mapped to other cases as well. We will thus study the case {\bf S3} in full detail.
\section{The S3 case: standard normal forms}
\def\sn{5}
Let us consider a linear part given by
$$ A = \pmatrix{0&0\cr0&1\cr} \eqno(\sn.1)$$
i.e. corresponding to the vector field
$ y \pa_y $.
We note immediately that here $A = A^+$, so that the homological operator associated to $f_0 = Ax$ will satisfy $\L_0 = \L_0^+$ (recall we are using the Bargmann scalar product).
It is easy to see that the kernel of $\L_0$
is spanned by the arrays of vector fields
(with $k \ge 0$)
$$ X_k := x^{k+1} \, \pa_x \ \in \W_k \ \ {\rm and} \
\ Y_k := x^k y \, \pa_y \ \in \W_k \eqno(\sn.2) $$
(with this notation the linear part considered here is given by $Y_0$). These vector fields satisfy the commutation relations
$$ [ X_k , X_m ] = (m-k) \, X_{k+m} \ \ , \ \ [ X_k , Y_m ] = m \, Y_{k+m} \ \ , \ \ [ Y_k , Y_m ] = 0 \ . \eqno(\sn.3) $$
We denote by $\G$ the (infinite dimensional) Lie algebra spanned by the $X_k$'s and the $Y_k$'s; by $\X$ the algebra spanned by the $X_k$'s, and by $\Y$ the algebra spanned by the $Y_k$'s.
Obviously $\G = \X \oplus \Y$. Note that $\Y$ is an abelian ideal,
actually the maximal abelian ideal, in $\G$.
The (standard) normal form corresponding to the linear
part considered in this section will thus be given by a vector field
$$ W = Y_0 + \sum_{k=1}^\infty ( a_k X_k + b_k Y_k ) \eqno(\sn.4) $$
depending on the two infinite sequences of real constants $a_m , b_m$.
This is precisely the structure considered in section 3.
{\bf Remark 9.} The vector fields $Z_- := \pa_x$, $Z_0 := x \pa_x = X_0$ and $Z_+ := x^2 \pa_x = X_1$ act in each of $\X$ and $\Y$ as, respectively, a lowering operator, a counting one, and a raising one: that is, $[Z_-,X_n] = (n+1) X_{n-1}$, $[Z_-,Y_n] = n Y_{n-1}$; $[Z_+,X_n] = (n-1) X_{n+1}$, $[Z_+,Y_n] = n Y_{n+1}$; $[Z_0,X_n] = n X_{n}$, $[Z_0,Y_n] = n Y_{n}$. $\odot$
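The relations (5.3), and those of Remark 9, can be checked mechanically; the following self-contained sketch (ours -- the helper names are illustrative, and exact rational arithmetic is used throughout) realizes $X_k$ and $Y_k$ through their polynomial components and computes all brackets directly from the definition.

```python
from fractions import Fraction as F

# A polynomial in x, y is a dict {(i, j): coefficient} standing for x^i y^j;
# a vector field V = P d_x + Q d_y is the pair (P, Q).

def padd(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, F(0)) + c
    return {m: c for m, c in r.items() if c}

def pmul(p, q):
    r = {}
    for (i, j), c in p.items():
        for (k, l), d in q.items():
            m = (i + k, j + l)
            r[m] = r.get(m, F(0)) + c * d
    return {m: c for m, c in r.items() if c}

def pdiff(p, v):
    # partial derivative with respect to x (v = 0) or y (v = 1)
    r = {}
    for (i, j), c in p.items():
        if (i, j)[v]:
            r[(i - 1, j) if v == 0 else (i, j - 1)] = c * (i, j)[v]
    return r

def neg(p):
    return {m: -c for m, c in p.items()}

def bracket(V, W):
    # [V, W]^i = V^x d_x W^i + V^y d_y W^i - W^x d_x V^i - W^y d_y V^i
    return tuple(
        padd(padd(pmul(V[0], pdiff(W[i], 0)), pmul(V[1], pdiff(W[i], 1))),
             neg(padd(pmul(W[0], pdiff(V[i], 0)), pmul(W[1], pdiff(V[i], 1)))))
        for i in range(2))

def X(k, c=F(1)):  # X_k = x^{k+1} d_x
    return ({(k + 1, 0): c} if c else {}, {})

def Y(k, c=F(1)):  # Y_k = x^k y d_y
    return ({}, {(k, 1): c} if c else {})

Zm, Z0, Zp = X(-1), X(0), X(1)  # the ladder operators of Remark 9
```

Since all coefficients are rational, such checks are exact; note that $Z_0 = X_0$ and $Z_+ = X_1$, so the ladder relations are just special cases of (5.3).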
\section{The S3 case: Poincar\'e renormalized forms}
\def\sn{6}
We want now to consider the PRF corresponding to the linear part given by
(5.1). In the spirit of PRF, we should act on the NF (5.4) with
Lie-Poincar\'e transformations generated by homogeneous functions $h_m \in \ker
(\L_0) \cap V_m$. These will correspond to the action of vector fields of the
form $ H_m = \alpha X_m + \beta Y_m$.
We will first, in this section, consider the spaces defined in the PRF
procedure, and thus obtain the general form of the PRF in this case. Later on, in the next section, we will perform explicit computations up to order $N=5$ and show how the reduction of the standard NF operates explicitly (in non-degenerate cases).
We recall that $\ker (\L_0 )$ corresponds to the sum of the algebras $\X$ and $\Y$, and that here $\ker (\L_0 ) = \ker ( \L_0^+ )$.
We have then to consider $\L_1$; this depends on the coefficients of the quadratic part $W_1$ of the vector field $W$, which we write as
$$ W_1 = a_1 X_1 + b_1 Y_1 \ . \eqno(\sn.1) $$
The cases to be considered are
$$ \cases{
a_1 \not= 0
\, , \, b_1 = 0 \, ; & (a) \cr
a_1 = 0 \, , \, b_1 \not= 0 \, ; & (b) \cr
a_1
\not= 0 \, , \, b_1 \not= 0 \, ; & (c) \cr
a_1 = 0 \, , \, b_1 = 0 \, . & (d) \cr }
\eqno(\sn.2) $$
We refer to cases {\bf (a)},{\bf (b)},{\bf (c)} as nondegenerate
(although properly speaking only {\bf (c)} is such), and to {\bf (d)} as the degenerate (properly speaking, completely degenerate) case.
\subsection{The nondegenerate cases}
In case {\bf (a)} we have $W_1 = a_1 X_1$; we notice that
$$ [X_1 , X_k ] = (k-1) X_{k+1} \ \ , \ \ [X_1 , Y_k ] = k
Y_{k+1} \eqno(\sn.3) $$
and therefore for $\M_1$ -- the restriction of $\L_1$
to $\ker (\L_0)$ -- we have that $ \ker (\M_1)$ reduces to the linear span of $\{ Y_0 , X_1 \}$, i.e. of $W_0$ and $W_1$, so no further normalization employing $\L_2, \L_3,...$ is possible.
We also have that the range of $\M_1$ (the most relevant space for our
discussion) is the whole linear span of the
$\{ X_k \}$ (with $k > 2$) and of the $\{ Y_k \}$ (with $k \ge 2$).
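These statements about $\ker (\M_1)$ and $\ran (\M_1)$ can be double-checked mechanically from the commutation relations (5.3) alone; in the following sketch (ours; we set $a_1 = 1$, which is harmless since only linear subspaces are at stake) the basis elements of $\ker (\L_0)$ are encoded as pairs.

```python
from fractions import Fraction as F

# Basis elements of ker(L_0): ('X', k) for X_k, ('Y', k) for Y_k;
# an element of the algebra is a dict {basis element: coefficient}.

def bracket(V, W):
    # bilinear extension of (5.3): [X_k, X_m] = (m-k) X_{k+m},
    # [X_k, Y_m] = m Y_{k+m}, [Y_k, Y_m] = 0
    r = {}
    for (f1, k), c1 in V.items():
        for (f2, m), c2 in W.items():
            if f1 == 'Y' and f2 == 'Y':
                continue
            if f1 == 'X' and f2 == 'X':
                t, s = ('X', k + m), F(m - k)
            elif f1 == 'X':
                t, s = ('Y', k + m), F(m)
            else:                       # [Y_k, X_m] = -k Y_{k+m}
                t, s = ('Y', k + m), F(-k)
            r[t] = r.get(t, F(0)) + c1 * c2 * s
    return {t: c for t, c in r.items() if c}

# M_1 is the restriction of ad(W_1) = ad(a_1 X_1) to ker(L_0); take a_1 = 1
M1 = lambda H: bracket({('X', 1): F(1)}, H)

basis = [('Y', 0)] + [(f, k) for k in range(1, 9) for f in 'XY']
kernel = [t for t in basis if not M1({t: F(1)})]
image = {t for e in basis for t in M1({e: F(1)})}
```

The kernel consists of $Y_0$ and $X_1$ (i.e. of $W_0$ and $W_1$), and $X_2$ does not lie in the image, so that the resonant term $\^a_2 X_2$ survives all further reductions.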
{\bf Remark 10.} \def\rna{10}
We stress that for the sake of the present computations (which aim at identifying linear subspaces) we can
as well assume $a_1 =1$; the same remark would apply to other cases. Such a trivial remark will be of use later on in section 8. $\odot$
\bigskip
In case {\bf (b)} we have $W_1 = b_1 Y_1$.
We notice that
$$ [Y_1 , X_k ] = - Y_{k+1} \ , \ [Y_1 , Y_k ] = 0 \ , \eqno(\sn.4) $$
and therefore $\ker (\M_1 ) = \Y$. On the other hand,
$\ran (\M_1)$ also is given by $\Y$, and $\ker (\M_1^+ ) = \X $.
In this case we also have to consider higher order parts of $W$; the first
step of the PRF procedure can eliminate all terms in $\ran (\M_1 )$ and thus
we will only consider terms in $\ker (\M_1^+)$. Let $p$ be the first integer
for which $a_p \not= 0$, and let $W_p = a_p X_p$ (all the $Y_k$ parts with $k
\ge 2$ can be eliminated, as just recalled). Now $\M_p$ is the restriction of
$\L_p$ to $\ker (\M_1 ) = \ker (\L_0 ) \cap \ker (\L_1)$: indeed the $\L_m$
with $1 < m
< p$ are zero and put no restriction. We have
$$ [X_p , Y_k ] =
k Y_{k+p} \eqno(\sn.5) $$
and thus $\ker (\M_p ) = \{ 0 \} $: no further
normalization is possible.
\bigskip
In case {\bf (c)} the situation is quite similar to the one met in case {\bf (a)}: we have indeed $W_1 = a_1 X_1 +
b_1 Y_1 $ with nonzero constants $a_1 , b_1$;
we have immediately that
$$ [ W_1 , \a X_k + \b Y_k ] \ = \ a_1 \a (k-1)
X_{k+1} \, + \, (k a_1 \b - b_1 \a ) Y_{k+1} \eqno(\sn.6) $$
This shows that $\ker (\M_1)$
is just given by $\{ Y_0 , a_1 X_1 + b_1 Y_1 \}$, i.e. by the linear span of
$W_0$ and $W_1$: so again no further normalization using operators $\L_2, \L_3, ...$ is possible.
As for $\ran (\M_1)$, this is the linear span of $\{ X_k
\}$ with $k >2$, and of $\{ Y_k \}$ with $k \ge 2$.
\bigskip
We summarize the results of this discussion as follows, with $\^W$ the vector
field after the whole PRF procedure and omitting case {\bf (d)}. The hat on
constants $\^a_k$ will
indicate that coefficients are not the same as those of
the initial NF (5.4).
$$ \^W = \cases{
Y_0 + a_1 X_1 + \^a_2 X_2 & (a) \cr
Y_0 + b_1 Y_1 +
\sum_{k=2}^\infty \^a_k X_k & (b) \cr
Y_0 + a_1 X_1 + b_1 Y_1 + \^a_2 X_2 &
(c) \cr } \eqno(\sn.7) $$
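For case {\bf (b)} the reduction to the form above can also be followed concretely. The sketch below (ours; the coefficients $a_k , b_k$ are arbitrary sample rationals, and we assume the convention $W \mapsto \exp ({\rm ad}_H) W$ for the Lie-Poincar\'e transform) performs the successive eliminations of the $Y_k$ terms with generators $\a_k X_k$, each $\a_k$ being read off the current coefficients.

```python
from fractions import Fraction as F

N = 7  # truncation: discard X_k, Y_k with k > N

def bracket(V, W):
    # bilinear extension of the commutation relations (5.3), truncated at order N
    r = {}
    for (f1, k), c1 in V.items():
        for (f2, m), c2 in W.items():
            if f1 == 'Y' and f2 == 'Y':
                continue
            if f1 == 'X' and f2 == 'X':
                t, s = ('X', k + m), F(m - k)
            elif f1 == 'X':
                t, s = ('Y', k + m), F(m)
            else:                       # [Y_k, X_m] = -k Y_{k+m}
                t, s = ('Y', k + m), F(-k)
            if t[1] <= N:
                r[t] = r.get(t, F(0)) + c1 * c2 * s
    return {t: c for t, c in r.items() if c}

def exp_ad(H, W):
    # Lie-Poincare transform W -> exp(ad_H) W, truncated at order N
    out, term, n = dict(W), W, 1
    while term:
        term = {t: c / n for t, c in bracket(H, term).items()}
        for t, c in term.items():
            out[t] = out.get(t, F(0)) + c
        n += 1
    return {t: c for t, c in out.items() if c}

# standard NF (5.4) in case (b): a_1 = 0, b_1 != 0; sample rational coefficients
a = {2: F(1, 2), 3: F(-2), 4: F(3), 5: F(1, 3), 6: F(1), 7: F(2)}
b = {1: F(2), 2: F(1), 3: F(-1), 4: F(1, 2), 5: F(5), 6: F(-3), 7: F(1)}
W = {('Y', 0): F(1), ('Y', 1): b[1]}
for k in a:
    W[('X', k)] = a[k]
    W[('Y', k)] = b[k]

alphas = {}
for k in range(1, N):
    # [alpha_k X_k, b_1 Y_1] = alpha_k b_1 Y_{k+1}: choose alpha_k to kill Y_{k+1}
    alphas[k] = -W.get(('Y', k + 1), F(0)) / b[1]
    W = exp_ad({('X', k): alphas[k]}, W)
```

The first generators selected in this way are $\a_1 = - b_2 / b_1$, $\a_2 = (b_2^2 - b_1 b_3)/b_1^2$ and $\a_3 = -(2 b_2^3 - 3 b_1 b_2 b_3 + b_1^2 b_4)/b_1^3$, i.e. precisely those computed by hand in section 7; the $X_k$ coefficients instead keep changing and never disappear, in agreement with the infinite PRF of case {\bf (b)} in (6.7).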
We anticipate that a different reduction scheme (see section 3) can give a finite dimensional NF in case {\bf (b)}; we discuss this later on in subsection \sn.3.
\subsection{The degenerate case}
The discussion of the degenerate case {\bf (d)} requires considering the first $q$ such that $a_q^2 + b_q^2 \not=0$, and then repeating the considerations presented above in cases {\bf (a),(b),(c)}, obviously with the role of $a_1,b_1$ taken by $a_q,b_q$. This is done in the following lines.
We denote by $\mu > 1$ the first $k$ such that $a_k \not= 0$, and by $\nu > 1$ the first $k$ such that $b_k \not= 0$; at least one of these has to exist and be finite, or the system would already be linear and thus trivial.
We will have to consider three cases
$$ \cases{
\mu < \nu & (da) \cr
\mu > \nu & (db) \cr
\mu = \nu & (dc) \cr} \eqno(\sn.8) $$
\bigskip
In case {\bf (da)} the NF will be given by
$$ W \ = \ Y_0 \ + \ \sum_{k=\mu}^{\nu-1} a_k X_k \ + \
\sum_{k=\nu}^\infty (a_k X_k + b_k Y_k ) \ . \eqno(\sn.9) $$
We write again a $H_k \in \ker (\L_0 ) \cap \W_k$ as
$H_k = \a_k X_k + \b_k Y_k$, and we have
$$ \begin{array}{rl}
\L_\mu (H_k ) \ := & \ \[ W_\mu , H_k \] \ = \ a_\mu \[
X_\mu , \a_k X_k + \b_k Y_k \] \ = \\ & \ = \ a_\mu \a_k (k-\mu ) X_{\mu+k} \
+ \ a_\mu \b_k k Y_{\mu+k} \ . \end{array} \eqno(\sn.10) $$
Thus it suffices to operate successively transformations generated by $H_k$
(with $k=1,2,...$) and choose at each step
$$ \a_k \ = \ {\=a_{\mu+k} \over
(k-\mu) a_\mu } \ \ , \ \ \b_k \ = \ {\=b_{\mu+k} \over k a_\mu} \ ,
\eqno(\sn.11) $$
where $\=a_{\mu+k}, \=b_{\mu+k}$ denote the coefficients of
$X_{\mu+k}, Y_{\mu+k}$ in $\=W$, i.e. after the action of previous
transformations.
Notice that in this way we can eliminate all terms except
the $X_{2 \mu}$ one ($k = \mu$). Thus, the PRF in case {\bf (da)} results to be
$$ \^W \ = \ Y_0 \ + \ a_\mu \, X_\mu \ + \ \eta \, X_{2\mu} \eqno(\sn.12) $$
where $a_\mu$ is the same as in the NF and $\eta$ is a real number.
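The recursion just described can be followed concretely. In the sketch below (ours; we take $\mu = 2$, $\nu = 3$, sample rational coefficients, and the convention $W \mapsto \exp ({\rm ad}_H) W$ for the Lie-Poincar\'e transforms) the choices (6.11) are applied step by step, reading at each step the current coefficients $\=a_{\mu+k} , \=b_{\mu+k}$.

```python
from fractions import Fraction as F

N = 6  # truncation: discard X_k, Y_k with k > N

def bracket(V, W):
    # bilinear extension of the commutation relations (5.3), truncated at order N
    r = {}
    for (f1, k), c1 in V.items():
        for (f2, m), c2 in W.items():
            if f1 == 'Y' and f2 == 'Y':
                continue
            if f1 == 'X' and f2 == 'X':
                t, s = ('X', k + m), F(m - k)
            elif f1 == 'X':
                t, s = ('Y', k + m), F(m)
            else:                       # [Y_k, X_m] = -k Y_{k+m}
                t, s = ('Y', k + m), F(-k)
            if t[1] <= N:
                r[t] = r.get(t, F(0)) + c1 * c2 * s
    return {t: c for t, c in r.items() if c}

def exp_ad(H, W):
    # Lie-Poincare transform W -> exp(ad_H) W, truncated at order N
    out, term, n = dict(W), W, 1
    while term:
        term = {t: c / n for t, c in bracket(H, term).items()}
        for t, c in term.items():
            out[t] = out.get(t, F(0)) + c
        n += 1
    return {t: c for t, c in out.items() if c}

# case (da) with mu = 2 < nu = 3: sample rational coefficients, a_2 != 0
mu = 2
a = {2: F(1), 3: F(-1), 4: F(2), 5: F(1, 2), 6: F(3)}
b = {3: F(1), 4: F(-2), 5: F(1, 3), 6: F(1)}
W = {('Y', 0): F(1)}
for k in a:
    W[('X', k)] = a[k]
for k in b:
    W[('Y', k)] = b[k]

for k in range(1, N - mu + 1):
    # choice (11): alpha_k kills X_{mu+k} (impossible for k = mu), beta_k kills Y_{mu+k}
    ak = W.get(('X', mu + k), F(0)) / ((k - mu) * a[mu]) if k != mu else F(0)
    bk = W.get(('Y', mu + k), F(0)) / (k * a[mu])
    W = exp_ad({('X', k): ak, ('Y', k): bk}, W)
```

Only $Y_0$, $X_\mu$ and $X_{2 \mu}$ survive, in agreement with (6.12); the final $X_{2 \mu}$ coefficient is the constant called $\eta$ there.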
\bigskip
In case {\bf (db)} the NF is
$$ W \ = \ Y_0 \ + \ \sum_{k=\nu}^{\mu-1} b_k Y_k \ + \
\sum_{k=\mu}^\infty (a_k X_k + b_k Y_k ) \ . \eqno(\sn.13) $$
Now we have for $\L_\nu (H_k)$ that
$$ \L_\nu (H_k) \ = \ b_\nu \[ Y_\nu , \a_k X_k + \b_k Y_k \] \ = \ - \nu b_\nu \a_k \, Y_{\nu+k} \eqno(\sn.14) $$
and therefore we can eliminate all the $Y_{\nu+k}$
terms simply by choosing, with the same notation as before,
$$ \a_k \ = \ { -
\=b_{\nu+k} \over \nu b_\nu} \ ; \eqno(\sn.15) $$
we cannot eliminate any of
the $X_k$ terms.
Thus, the PRF in case {\bf (db)} is
$$ \^W \ = \ Y_0 \ + \ b_\nu Y_\nu \ + \ \sum_{k=\mu}^\infty \=a_k X_k \ . \eqno(\sn.16) $$
Similarly to what happens for the nondegenerate case {\bf (b)}, a different reduction scheme, discussed below, gives better results in this case.
\bigskip
In case {\bf (dc)} we have $\mu=\nu$; the NF is
$$ W \ = \ Y_0 \ + \ \sum_{k=\mu}^\infty (a_k X_k + b_k Y_k ) \ . \eqno(\sn.17) $$
In this case
$$ \begin{array}{rl}
\L_\mu (H_k) \ = & \ \[a_\mu X_\mu + b_\mu Y_\mu , \a_k X_k + \b_k Y_k \] \ =
\\ & = \ a_\mu \a_k (k-\mu) X_{\mu+k} \ + \ (k a_\mu \b_k - \mu b_\mu \a_k )
Y_{\mu+k} \ . \end{array} \eqno(\sn.18) $$
Thus for $k \not= \mu$ it
suffices to choose
$$ \a_k \ = \ {\=a_{\mu+k} \over (k-\mu) a_\mu} \ \ , \ \
\b_k \ = \ { (k-\mu) a_\mu \=b_{\mu+k} + \mu b_\mu \=a_{\mu+k} \over a_\mu^2 k
(k-\mu) } \eqno(\sn.19) $$
to eliminate both the $X_{\mu+k}$ and the
$Y_{\mu+k}$ terms.
For $k = \mu$, we choose $\a_k = 0$ and $\b_k = \=b_{\mu+k}
/ (k a_\mu)$ and eliminate the $Y_{2 \mu}$ term.
Thus, the PRF in case {\bf (dc)} is
$$ \^W \ = \ Y_0 \ + \ a_\mu X_\mu \ + \ b_\mu Y_\mu \ + \ \eta X_{2 \mu} \ . \eqno(\sn.20) $$
\subsection{A different further reduction scheme \\ for cases (b) and (db): LRF}
In the previous computations, we have followed the general PRF scheme for further normalizing the standard NF (5.4); this gave an infinite PRF in case {\bf (b)} and in the corresponding degenerate case {\bf (db)}.
However, as discussed in subsection 3.3, one can take advantage of the specific Lie algebraic structure of $\G = \X \oplus_\to \Y$ (here ``$\oplus_\to$'' recalls that $\X$ is acting on $\Y$, but not the other way round, by inner automorphisms) to obtain a more drastic reduction: indeed, one can obtain a reduction to a finite normal form (the Lie renormalized form), as we now discuss.
We use the same notation as in discussing the
case {\bf (db)} above.
We first operate a sequence of normalizations with
generators $h_k^{(a)} = \a_k X_k$, which we choose so as to eliminate higher order $X_k$ terms, i.e. $X_k$ for $k > \mu$ (as we know, this is not possible for $k = 2 \mu$). Notice this will change not only the (coefficients of the) $X_k$
terms, but the (coefficients of the) $Y_k$ terms as well; however, no terms of degree $k < \nu$ will be produced.
In this way, we arrive at a form of the
type (as usual the tilde indicates that the coefficients are not the same as the initial ones, but not yet final)
$$ \=W \ = \ Y_0 \ + \ a_\mu X_\mu + \=a_{2 \mu} X_{2 \mu}
\ + \ \sum_{k=\nu}^\infty \=b_k Y_k \ . \eqno(\sn.21) $$
Once this has been done, we pass to consider a second sequence of normalizations with
generators $h_k^{(b)} = \b_k Y_k $. As $\Y$ is an ideal in $\G$, in this way only $Y_k$ terms are generated, i.e. the $X_k$ terms are unaffected. On the
other side, $\Y$ is abelian, and so only the $X_\mu$ and $X_{2 \mu}$ are actually active in these transformations: that is, we can only eliminate terms
$Y_{\mu + 1}$ and higher (it is clear by the commutation relations that these can always be eliminated).
In this way, we arrive at the LRF: this is a NF depending on $(\mu - \nu + 3)$ constants\footnote{This agrees with the number of constants predicted by Bruno in sect. III.2.3 of \cite{Bru}; see also \cite{Brus}.}, of the form
$$ \^W \ = \ Y_0 \ + \ a_\mu X_\mu + \^a_{2 \mu} X_{2 \mu} \ + \
\sum_{k=\nu}^\mu \^b_k Y_k \ . \eqno(\sn.22) $$
It is also clear by this discussion that $\^b_k = \=b_k$, $\^a_{2 \mu } = \=a_{2 \mu}$.
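The two-stage reduction can be followed concretely. The sketch below (ours; $\mu = 3$, $\nu = 2$, sample rational coefficients, and the convention $W \mapsto \exp ({\rm ad}_H) W$ for the Lie-Poincar\'e transforms) first normalizes with generators in $\X$, then with generators in $\Y$.

```python
from fractions import Fraction as F

N = 7  # truncation: discard X_k, Y_k with k > N

def bracket(V, W):
    # bilinear extension of the commutation relations (5.3), truncated at order N
    r = {}
    for (f1, k), c1 in V.items():
        for (f2, m), c2 in W.items():
            if f1 == 'Y' and f2 == 'Y':
                continue
            if f1 == 'X' and f2 == 'X':
                t, s = ('X', k + m), F(m - k)
            elif f1 == 'X':
                t, s = ('Y', k + m), F(m)
            else:                       # [Y_k, X_m] = -k Y_{k+m}
                t, s = ('Y', k + m), F(-k)
            if t[1] <= N:
                r[t] = r.get(t, F(0)) + c1 * c2 * s
    return {t: c for t, c in r.items() if c}

def exp_ad(H, W):
    # Lie-Poincare transform W -> exp(ad_H) W, truncated at order N
    out, term, n = dict(W), W, 1
    while term:
        term = {t: c / n for t, c in bracket(H, term).items()}
        for t, c in term.items():
            out[t] = out.get(t, F(0)) + c
        n += 1
    return {t: c for t, c in out.items() if c}

# case (db) with nu = 2 < mu = 3: first nonzero b_k is b_2, first nonzero a_k is a_3
mu, nu = 3, 2
a = {3: F(1), 4: F(-2), 5: F(1, 2), 6: F(3)}
b = {2: F(1), 3: F(-1), 4: F(2), 5: F(1, 3), 6: F(1)}
W = {('Y', 0): F(1)}
for k in a:
    W[('X', k)] = a[k]
for k in b:
    W[('Y', k)] = b[k]

# first stage: generators in X kill all X_{mu+k} with k != mu (X_{2 mu} survives)
for k in range(1, N - mu + 1):
    if k != mu:
        ak = W.get(('X', mu + k), F(0)) / ((k - mu) * a[mu])
        W = exp_ad({('X', k): ak}, W)
x_part = {t: c for t, c in W.items() if t[0] == 'X'}

# second stage: generators in the abelian ideal Y kill all Y_{mu+k}, k >= 1,
# and (Y being an ideal) cannot touch the X part any more
for k in range(1, N - mu + 1):
    bk = W.get(('Y', mu + k), F(0)) / (k * a[mu])
    W = exp_ad({('Y', k): bk}, W)
```

The surviving coefficients are $a_\mu$, the final $X_{2\mu}$ coefficient, and the final $Y_\nu , \dots , Y_\mu$ coefficients: $\mu - \nu + 3 = 4$ constants in this example, as in (6.22).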
{\bf Remark 11.} It should be stressed that the LRF (22) is {\it not } a PRF. Indeed, in this case the spaces $F^{(k)}_k := F^{(k)} \cap \W_k$ with $\nu < k \le \mu$ reduce, as seen in subsection \sn.2, to multiples of $X_k$. Here we have therefore $W_k \not\in F^{(k)}_k$ for $\nu < k \le \mu$, and thus (see section 2) the LRF cannot be a PRF.
$\odot$
\section{The S3 case: explicit computations}
\def\sn{7}
As stressed in \cite{LMP,IHP}, the PRF procedure is completely constructive; indeed, the PRF procedure gives an algorithm (which is easy to implement on a computer, as I have indeed done in order to obtain the formulas reported in this section) to determine the coefficients $\a_k , \b_k$ of transformations needed to take
the system (5.4) into its PRF.
I want here to follow these computations (which are not needed if we are only interested in the most general PRF form) in at least this case of the classification given in section 4 (i.e. for this linear part $A$). I
will follow computations up to order six in $x$ and $y$, i.e. put $W$ in PRF up to terms $W_5$.
I will consider only nondegenerate cases; explicit formulas for a specific degenerate case are given in appendix B (up to order ten in $x$ and $y$).
I will always assume that a first Poincar\'e normalization has already been
performed, taking the system into its standard normal form $f^{(1)}$ ($W =
W^{(1)}$).
In order to display the rather long explicit formulas we obtain, it will be convenient to use the notation introduced above, with $W_k$ being the part of
the (coordinate expression of the) vector field $W$ homogeneous of degree $k+1$ in the coordinates we are using. As already stressed, these $W_k$ are not vectors -- as they depend on the coordinates in use -- and indeed will change under changes of coordinates; however, they provide a convenient compact notation.
The computations presented in this section have been performed using
Mathematica. We recall that these explicit expressions are computed using
the Baker-Campbell-Hausdorff formula (2.2),(2.4); i.e. by considering
Lie-Poincar\'e transformations, and not simply Poincar\'e ones.
\bigskip
\subsection{Case (a)}
We first operate a transformation with $h_1 \in \ker (\L_0) \cap \F_1$, i.e. with $H_1 = \a_1 X_1 + \b_1 Y_1 \in \W_1
\cap \ker (\L_0)$; after this, the quadratic part $W_1$ of the vector field is
unchanged, $W_1^{(2)} = W_1^{(1)}$, while the cubic one is given by
$$ W_2^{(2)} \ = \ a_2 X_2 \, + \, (b_2 - a_1 \b_1) Y_2 \ . \eqno(\sn.1)$$
We know from our previous general discussion that -- as indeed obvious from the
above formula -- the first component of this cannot be eliminated; to
eliminate the second, we have to choose $$ \b_1 \ = \ b_2 / a_1 \ ;
\eqno(\sn.2)$$ we can choose
$\a_1$ as we like, say $$ \a_1 \ = \ 0
\eqno(\sn.3)$$ for simplicity.
This choice of $\a_1,\b_1$ fixes the PRF after
the first renormalization,
i.e. $f_k^{(2)}$. We do not give the explicit
formulae.
Let us now operate a transformation with $h_2 \in \F_2 \cap \ker (\L_0)$;
the terms $f_0,f_1,f_2$ are unaffected. Using the explicit formulae for
$f_k^{(2)}$, we have that $W_3^{(2)}$ is changed into
$$ W_3^{(3)} \ = \
(a_3 - a_1 \a_2) \, X_3 \ + \
(b_3 - 2 a_1 \b_2 - a_2 b_2/a_1 ) \, Y_3 \ . \eqno(\sn.4)$$
From our previous discussion
we know we should be able to eliminate both
components of this vector; this
can indeed be obtained by choosing
$$
\a_2 \, = \, \frac{a_3}{a_1} \ \ , \ \
\b_2 \, = \, \frac{\left( a_1 b_3 - a_2 b_2 \right) }{2
a_1^2} \eqno(\sn.5)$$
This choice of $\a_2,\b_2$ fixes the PRF after the second renormalization,
i.e. the $f_k^{(3)}$. Again we do not give the explicit formulae.
Let us now operate a transformation with $h_3 \in \F_3 \cap \ker (\L_0)$;
the terms $f_0,...,f_3$ are unaffected. Using the explicit formulae for
$f_k^{(2)}$ and $f_k^{(3)}$, we have that $W_4^{(3)}$ is changed into
$$ \begin{array}{rl}
W_4^{(4)} \ =& \ \left( a_4 - 2 a_1 \a_3 \right) \ X_4 \ + \\
& \ + \ \left( \left[ a_2^2 b_2 - a_1 a_2 b_3 + a_1 \left( - a_3 b_2 + a_1
\left( b_4 - 3 a_1 \b_3
\right) \right) \right] / (a_1^2) \right) \ Y_4 \end{array} \eqno(\sn.6)$$
Again we know a priori that this can be eliminated, and indeed the above
formula shows that this is the case if we choose
$$ \begin{array}{ll}
\a_3 =& \ a_4/(2 a_1) \\
\b_3 =& \ [ a_2^2 b_2 - a_1 a_3 b_2 - a_1 a_2 b_3 + a_1^2 b_4 ] / (3 a_1^3)
\end{array} \eqno(\sn.7)$$
This choice of $\a_3,\b_3$ fixes $f_k^{(4)}$.
Let us now operate a transformation with $h_4 \in \F_4 \cap \ker (\L_0)$;
the terms $f_0,...,f_4$ are unaffected. Using the explicit formulae
for $f_k^{(2)}$, $f_k^{(3)}$ and $f_k^{(4)}$, we have that $W_5^{(4)}$ is changed into
$$ \begin{array}{ll}
W_5^{(5)} \ =& \
[ ( a_3^2 - a_2 a_4 + 2 a_1 ( a_5 - 3 a_1
\a_4 ) ) \, / \, (2 a_1) ] \ X_5 \\
& + \ [ ( a_1^2
( - a_4 b_2 + a_3 b_3 - a_2 b_4 + a_1 b_5 - 4 a_1^2 \b_4 ) \\
& \ \ - a_2^3 b_2 + a_1 a_2^2 b_3 ) \, / \, (a_1^3) ] \ Y_5 \end{array}
\eqno(\sn.8)$$
which goes to zero if we choose
$$ \begin{array}{ll}
\a_4\ =& \ \left( a_3^2 - a_2 a_4 + 2 a_1 a_5 \right) \, / \, ( 6 a_1^2 ) \\
\b_4\ =& \ - \left( a_2^3 b_2 + a_1^2 a_4 b_2 -
a_1 a_2^2 b_3 - a_1^2 a_3 b_3 + a_1^2 a_2 b_4 -
a_1^3 b_5 \right) \, / \, (4 a_1^4) \ . \end{array} \eqno(\sn.9)$$
Clearly, the computation could be performed up to any desired order,
compatibly with the computational power at our disposal, producing more and more complex but still completely explicit formulae; we will stop at this order.
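The chain of choices (7.2)--(7.9) lends itself to a mechanical verification. In the sketch below (ours; sample rational NF coefficients with $a_1 \not= 0$, $b_1 = 0$, and the convention $W \mapsto \exp ({\rm ad}_H) W$ for the Lie-Poincar\'e transform, which matches the signs of this section) each generator is determined adaptively from the current coefficients; the values obtained coincide with the closed formulas above.

```python
from fractions import Fraction as F

N = 5  # truncation: discard X_k, Y_k with k > N

def bracket(V, W):
    # bilinear extension of the commutation relations (5.3), truncated at order N
    r = {}
    for (f1, k), c1 in V.items():
        for (f2, m), c2 in W.items():
            if f1 == 'Y' and f2 == 'Y':
                continue
            if f1 == 'X' and f2 == 'X':
                t, s = ('X', k + m), F(m - k)
            elif f1 == 'X':
                t, s = ('Y', k + m), F(m)
            else:                       # [Y_k, X_m] = -k Y_{k+m}
                t, s = ('Y', k + m), F(-k)
            if t[1] <= N:
                r[t] = r.get(t, F(0)) + c1 * c2 * s
    return {t: c for t, c in r.items() if c}

def exp_ad(H, W):
    # Lie-Poincare transform W -> exp(ad_H) W, truncated at order N
    out, term, n = dict(W), W, 1
    while term:
        term = {t: c / n for t, c in bracket(H, term).items()}
        for t, c in term.items():
            out[t] = out.get(t, F(0)) + c
        n += 1
    return {t: c for t, c in out.items() if c}

# standard NF (5.4) in case (a): sample rational coefficients, a_1 != 0, b_1 = 0
a = {1: F(1), 2: F(1, 2), 3: F(-2), 4: F(3), 5: F(1, 3)}
b = {2: F(2), 3: F(-1), 4: F(1, 2), 5: F(5)}
W = {('Y', 0): F(1)}
for k in a:
    W[('X', k)] = a[k]
for k in b:
    W[('Y', k)] = b[k]

gens = {}
for k in range(1, N):
    # generator H_k = alpha_k X_k + beta_k Y_k, chosen from the current
    # coefficients so as to kill X_{k+1} (possible for k >= 2) and Y_{k+1}
    ak = W.get(('X', k + 1), F(0)) / ((k - 1) * a[1]) if k > 1 else F(0)
    bk = W.get(('Y', k + 1), F(0)) / (k * a[1])
    gens[k] = (ak, bk)
    W = exp_ad({('X', k): ak, ('Y', k): bk}, W)
```

All the resulting identities are exact rational ones; the procedure stops at order five here, but can be pushed further by simply increasing $N$.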
\subsection{Case (b)}
Let us now consider the (slightly more complex) case {\bf (b)}.
This was subject to some controversy (see remark 2 and appendix B), so that we will discuss it in full detail, following the transformation of the coordinate expression
of the vector field $W$ step by step.
With a first transformation generated by $H_1 = \a_1 X_1 + \b_1 Y_1$ we have
that $\~W_2^{(2)}$ is given by $a_2 X_2 +
(b_2 + \a_1 b_1) Y_2$; requiring the coefficient of $Y_2$ to vanish, we get
$$ \a_1 \ = \ - b_2 / b_1 \ , \eqno(\sn.10)$$ and for the sake of simplicity we
will take $\b_1 = 0$.
In this way we get
$$ \begin{array}{rl}
\~W^{(2)} \ = &\
Y_0 \ + \ b_1 \, Y_1 \ + \ a_2 \, X_2 \ + \\
& + \ [ a_3 - a_2 b_2 / b_1 ] \, X_3 \
+ \ [ b_3 - b_2^2 / b_1 ] \, Y_3 \ + \\
& + \ [ a_4 - 2 a_3 b_2 / b_1 + a_2 b_2^2 / b_1^2 ] \, X_4 \ + \\
& + \ [ b_4 + 2 b_2^3 / b_1^2 - 3 b_2 b_3 / b_1 ] \, Y_4 + \\
& + \ [ a_5 - 3 a_4 b_2 / b_1 + 3 a_3
b_2^2 / b_1^2 - a_2 b_2^3 / b_1^3 ] \, X_5 + \\
& + \ [ b_5 -3 b_2^4/b_1^3 + 6
b_2^2 b_3/b_1^2 - 4 b_2 b_4 /b_1 ] \, Y_5 \\ & + \ O(6) \end{array}
\eqno(\sn.11)$$
Here and in the following, $O(6)$ denotes terms in $\W_6$ and
higher.
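The coefficients appearing here can be reproduced mechanically: since all the fields involved lie in $\ker (\L_0)$, the Lie series $W \to W + [H_1 , W] + {1 \over 2} [H_1 , [H_1 , W]] + ...$ can be computed from the structure constants (4.3) alone, without ever writing the fields in coordinates. A minimal sketch in Python/sympy (the dictionary encoding and the truncation order are ours):

```python
import sympy as sp

a2, a3, a4 = sp.symbols('a2:5')
b1, b2, b3, b4 = sp.symbols('b1:5')
alpha1 = -b2/b1   # the choice (7.10), with beta_1 = 0

# A field in ker(L_0) is stored as {('X', k): coeff, ('Y', k): coeff}.
def bracket(V, W, cutoff=4):
    # Lie bracket from the structure constants (4.3):
    # [X_k,X_m] = (m-k) X_{k+m}, [X_k,Y_m] = m Y_{k+m}, [Y_k,Y_m] = 0.
    out = {}
    for (s1, k), c1 in V.items():
        for (s2, m), c2 in W.items():
            if k + m > cutoff:
                continue
            if (s1, s2) == ('X', 'X'):
                pairs = [(('X', k + m), m - k)]
            elif (s1, s2) == ('X', 'Y'):
                pairs = [(('Y', k + m), m)]
            elif (s1, s2) == ('Y', 'X'):
                pairs = [(('Y', k + m), -k)]
            else:
                pairs = []
            for key, f in pairs:
                out[key] = out.get(key, 0) + f*c1*c2
    return out

H = {('X', 1): alpha1}
W = {('Y', 0): 1, ('Y', 1): b1, ('X', 2): a2, ('Y', 2): b2,
     ('X', 3): a3, ('Y', 3): b3, ('X', 4): a4, ('Y', 4): b4}

# Lie series W -> W + [H,W] + (1/2)[H,[H,W]] + ..., truncated at order 4
new, term = dict(W), dict(W)
for n in range(1, 5):
    term = bracket(H, term)
    for key, c in term.items():
        new[key] = new.get(key, 0) + c/sp.factorial(n)

assert sp.simplify(new[('Y', 2)]) == 0                       # Y_2 eliminated
assert sp.simplify(new[('X', 3)] - (a3 - a2*b2/b1)) == 0     # X_3 coefficient in (7.11)
assert sp.simplify(new[('Y', 4)] - (b4 - 3*b2*b3/b1 + 2*b2**3/b1**2)) == 0  # Y_4 in (7.11)
```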
We will now operate a transformation generated by $H_2 = \a_2 X_2 + \b_2 Y_2$. This leaves lower order terms unaffected, while after this $W_3$ reads
$$ \~W_3^{(3)} \ = \
( a_3 - a_2 b_2 / b_1 ) \, X_3 \ + \
( b_3 - b_2^2 / b_1 + b_1 \a_2 ) \, Y_3 \ . \eqno(\sn.12)$$
Requiring the vanishing of the coefficient of the $Y_3$ term we get
$$ \a_2 \ = \ (b_2^2 - b_1 b_3) \, / \, (b_1^2) \ ; \eqno(\sn.13)$$
we will set again $\b_2 = 0$ for the sake of simplicity.
With these, we get
$$ \begin{array}{rl}
\~W^{(3)} \ = & \
Y_0 \ + \ b_1 \, Y_1 \ + \ a_2 \, X_2 \ + \
[ a_3 - a_2 b_2/b_1 ] \, X_3 \ + \\
& + \ [ a_4 - 2 a_3 b_2 / b_1 + a_2 b_2^2 / b_1^2 ] \, X_4 \ + \\
& + \ [ 2 b_2^3 / b_1^2 - 3 b_2 b_3 / b_1 + b_4 ] \, Y_4 \ + \\
& + \ [ a_5 - 3 a_4 b_2 / b_1 + 3 a_3 b_2^2 / b_1^2 - a_2 b_2^3 / b_1^3 + \\
& \ \ \ \ + \ (a_3 b_1 - a_2 b_2) (b_2^2 - b_1 b_3) / b_1^3 ] \, X_5 \ + \\
& + \ [ - 9 b_2^4 / (2 b_1^3) + 9 b_2^2 b_3 / b_1^2 - 3 b_3^2 / (2 b_1) - 4 b_2 b_4
/ b_1 + b_5 ] \, Y_5 \ + \ O(6) \ . \end{array} \eqno(\sn.14)$$
Let us now consider a transformation with generator $H_3 = \a_3 X_3 + \b_3
Y_3$. Now lower order terms are unaffected, while we get that the coefficient
of the $Y_4$ term is changed to
$$ (2 b_2^3)/(b_1^2) \, - \, (3 b_2
b_3)/(b_1) \, + \,
b_4 \, + \, b_1 \a_3 \ . \eqno(\sn.15)$$
Requiring this to vanish, we get
$$ \a_3 \ = \ - \, (2 b_2^3 - 3 b_1 b_2 b_3 + b_1^2 b_4) \, / \, (b_1^3) \ .
\eqno(\sn.16)$$
We will, as by now usual, set $\b_3 = 0$; with these we
obtain
$$ \begin{array}{rl}
\~W^{(4)} \ = &\
Y_0 \ + \ b_1 \, Y_1 \ + \ a_2 \, X_2 \ + \
[ a_3 - a_2 b_2/b_1 ] \, X_3 \ + \\
& + \ [ a_4 - 2 a_3 b_2 / b_1 + a_2 b_2^2 / b_1^2 ] \, X_4 \ + \\
& + \ [ a_5 - 3 a_4 b_2 / b_1 + 4 a_3 b_2^2 / b_1^2 -
a_3 b_3 / b_1 \ + \\
& \ \ \ \ - 2 a_2 b_2 b_3 / b_1^2 +
a_2 b_4 / b_1 ] \, X_5 \ + \\
& + \ [ - 9 b_2^4 /(2 b_1^3) + 9 b_2^2 b_3 / b_1^2 \ + \\
& \ \ \ \ - 3 b_3^2 / (2 b_1) - 4 b_2 b_4 / b_1 + b_5 ] \, Y_5 \ + \ O(6)
\ . \end{array} \eqno(\sn.17)$$
We now operate with $H_4 = \a_4 X_4 + \b_4 Y_4 $; the coefficient of $Y_5$
turns out to be
$$ -(9 b_2^4)/(2 b_1^3) \, + \, (9 b_2^2 b_3)/(b_1^2 ) \, - \,
(3 b_3^2)/(2 b_1) \, - \, (4 b_2 b_4)/(b_1) \, + \, b_5 \, + \, b_1 \a_4 \ .
\eqno(\sn.18)$$
Requiring this to vanish, we get
$$ \a_4 \ = \ (9 b_2^4 - 18 b_1 b_2^2 b_3 + 3 b_1^2 b_3^2 +
8 b_1^2 b_2
b_4 - 2 b_1^3 b_5) \, / \, (2 b_1^4) \ ; \eqno(\sn.19)$$ we also set $\b_4 = 0
$.
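Each of these normalization steps amounts to solving a linear equation for the relevant generator coefficient; e.g. for (7.18)-(7.19), a sympy sketch:

```python
import sympy as sp

b1, b2, b3, b4, b5 = sp.symbols('b1:6')
alpha4 = sp.symbols('alpha4')

# Coefficient of Y_5 after acting with H_4, eq. (7.18)
cY5 = (-9*b2**4/(2*b1**3) + 9*b2**2*b3/b1**2 - 3*b3**2/(2*b1)
       - 4*b2*b4/b1 + b5 + b1*alpha4)

sol = sp.solve(cY5, alpha4)[0]
target = (9*b2**4 - 18*b1*b2**2*b3 + 3*b1**2*b3**2
          + 8*b1**2*b2*b4 - 2*b1**3*b5) / (2*b1**4)   # eq. (7.19)
assert sp.simplify(sol - target) == 0
```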
We now have
$$ \begin{array}{rl}
\~W^{(5)} \ = &\ Y_0 \ + \ b_1 \, Y_1 \ + \ a_2 \, X_2 \ + \
[ a_3 - a_2 b_2 / b_1 ] \, X_3 \ + \\
& + \ [ a_4 - 2 a_3 b_2 / b_1 + a_2 b_2^2 / b_1^2 ] \, X_4 \ + \\
& + \ [ a_5 - 3 a_4 b_2 / b_1 + 4 a_3 b_2^2 / b_1^2 + \\
& \ \ \ \ - a_3 b_3 / b_1 - 2 a_2 b_2 b_3 / b_1^2 + a_2 b_4 / b_1 ] \, X_5
\ + \ O(6) \ . \end{array} \eqno(\sn.20)$$
Again we will stop at this order; the result of this explicit computation fits in the general result obtained in the previous section.
\subsection{Case (c)}
We could analyze the other case {\bf (c)} and produce explicit formulas
proceeding in the same way as in the previously considered cases {\bf (a)} and
{\bf (b)}; however, the procedure is by now clear and for the sake of brevity
we will just give the final formulas.
The coefficients $\a$ are chosen as
$$ \begin{array}{rl}
\a_1 \ =\ & 0 \\
\a_2 \ =\ & {a_3}/{a_1} \\
\a_3 \ =\ & {a_4}/(2 a_1) \\
\a_4 \ =\ & \left( a_3^2 - a_2 a_4 + 2 a_1 a_5 \right) \, / \, ( 6 a_1^2) \ ;
\end{array} \eqno(\sn.21)$$
the coefficients $\b$ are chosen as
$$ \begin{array}{rl}
\b_1 \ =\ & b_2 \, / \, a_1 \\
\b_2 \ =\ & \left( a_3 b_1 - a_2 b_2 + a_1 b_3 \right) \, / \, ( 2 a_1^2) \\
\b_3 \ =\ & -\left( 2 a_2 a_3 b_1 - a_1 a_4 b_1 - 2 a_2^2 b_2 + 2 a_1 a_3
b_2 + 2 a_1 a_2 b_3 - 2 a_1^2 b_4 \right) \, / \, ( 6 a_1^3) \\
\b_4 \ =\ &
( 3 a_2^2 a_3 b_1 - a_1 a_3^2 b_1 - 2 a_1 a_2 a_4 b_1 + a_1^2 a_5 b_1 - 3
a_2^3 b_2 - 3 a_1^2 a_4 b_2 + \\
& \ \ \ + 3 a_1 a_2^2 b_3 + 3 a_1^2 a_3 b_3
- 3 a_1^2 a_2 b_4 + 3 a_1^3 b_5 ) \, / \, ( 12 a_1^4 ) \ . \end{array}
\eqno(\sn.22)$$
In this way, we arrive at a PRF given by
$$ \=W^{(5)} \
= \ Y_0 \, + \, a_1 X_1 \, + \, b_1 Y_1 \, + \,
a_2 X_2 \, + \, O(6) \ .
\eqno(\sn.23)$$
This corresponds to the result obtained by our general
discussion above (moreover the coefficient of the $X_2$ term is
unchanged).
\subsection{The alternative scheme for case (b)}
As mentioned in subsection 6.4, in case {\bf (b)} the alternative scheme adapted to the structure of $\G$ described there makes it possible to obtain a finite NF (the LRF), and is thus to be preferred to the general one. Here we briefly discuss
the explicit computation to be performed according to this scheme. We deal with the nondegenerate (properly speaking, not completely degenerate) case, i.e. $\mu = 2$, $\nu = 1$; see subsection 6.4.
With a transformation $h_1 = \a_1 X_1$, the $W_3$ term reads
$$ \=W_3 \ = \ \[ a_3 + a_2 \a_1 \] X_3 \ + \ \[ b_3 + 2 b_2 \a_1 + b_1 \a_1^2 \] Y_3 \ . \eqno(\sn.24) $$
We disregard the $Y_3$ term and choose $\a_1$ so as to eliminate the $X_3$ term, i.e. $\a_1 = - a_3 / a_2$.
After computing the effect of this on higher order terms, we could perform a transformation with generator $h_2 = \a_2 X_2$. However, we know that there will be no way to eliminate the $X_4$ term, so we set $\a_2 = 0$. We perform a transformation with generator $h_3 = \a_3 X_3$. With this, the $W_5$ term reads
$$ \begin{array}{rl}
\=W_5 \ =& \ [ 2 a_3^3 /a_2^2 - 3 a_3 a_4 / a_2 + a_5 - a_2 \a_3 ] X_5 \ + \\
& + \ [ a_3^4 b_1 / a_2^4 - 4 a_3^3 b_2 / a_2^3 +
6 a_3^2 b_3 / a_2^2 - 4 a_3 b_4 / a_2 + \\
& + \ \ \ b_5 - 2 a_3 b_1 \a_3 / a_2 + 2 b_2 \a_3 ] Y_5 \ .
\end{array} \eqno(\sn.25) $$
Again we only aim at eliminating the $X_5$ term, and thus we choose
$$ \a_3 \ = \ { 2 a_3^3 - 3 a_2 a_3 a_4 + a_2^2\ a_5 \over a_2^3} \ . \eqno(\sn.26) $$
We will be satisfied with this order of normalization for the $X_k$ terms,
and take now care of the $Y_k$ ones.
We operate a transformation with generator $h_1 = \b_1 Y_1$; with this we
have that
$$ \=W_3 \ = \ \[ a_3^2 b_1 / a_2^2 - 2 a_3 b_2 / a_2 + b_3 - a_2
\b_1 \] \ Y_3 \ . \eqno(\sn.27) $$
By choosing
$$ \b_1 \ = \ { a_3^2 b_1 - 2 a_2 a_3 b_2 + a_2^2 b_3 \over a_2^3} \eqno(\sn.28) $$
we eliminate this. We compute the effect on higher order terms, and then consider a transformation with generator $h_2 = \b_2 Y_2$; with these, we have
$$ \begin{array}{rl}
\=W_4 \ =& \ \[ a_4 - a_3^2 / a_2 \] \ X_4 \ + \\
& \ + \ (1/a_2^3) \ [ a_3^3 b_1 + 3 a_2 a_3^2 b_2 -
3 a_2 a_3 (a_4 b_1 + a_2 b_3) + \\
& \ \ + a_2^2 (a_5 b_1 + a_2 (b_4 - 2 a_2 \b_2)) \, ] \ Y_4 \ .
\end{array} \eqno(\sn.29)$$
We want to eliminate the $Y_4$ term, and thus we choose
$$ \b_2 \ = \ {1 \over 2 a_2^4} \ ( a_3^3 b_1 - 3 a_2 a_3 a_4 b_1 + a_2^2 a_5 b_1 +
3 a_2 a_3^2 b_2 - 3 a_2^2 a_3 b_3 + a_2^3 b_4) \eqno(\sn.30)$$
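That (7.30) indeed kills the $Y_4$ term in (7.29) is again a one-line check in a computer algebra system; a sympy sketch:

```python
import sympy as sp

a2, a3, a4, a5 = sp.symbols('a2:6')
b1, b2, b3, b4 = sp.symbols('b1:5')

beta2 = (a3**3*b1 - 3*a2*a3*a4*b1 + a2**2*a5*b1 + 3*a2*a3**2*b2
         - 3*a2**2*a3*b3 + a2**3*b4) / (2*a2**4)    # eq. (7.30)

# Coefficient of Y_4 in (7.29)
cY4 = (a3**3*b1 + 3*a2*a3**2*b2 - 3*a2*a3*(a4*b1 + a2*b3)
       + a2**2*(a5*b1 + a2*(b4 - 2*a2*beta2))) / a2**3
assert sp.simplify(cY4) == 0
```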
Again we take into account the effect of this on higher order terms, and pass to consider a transformation with generator $h_3 = \b_3 Y_3$; we get
$$ \begin{array}{rl}
\=W_5 \ =& \ [ - 2 a_3^4 b_1 / a_2^4 +
5 a_3^2 a_4 b_1 / a_2^3 - 2 a_3 a_5 b_1 / a_2^2 -
2 a_3^3 b_2 / a_2^3 - 4 a_3 a_4 b_2 / a_2^2 + \\
& \ + 2 a_5 b_2 / a_2 + 7 a_3^2 b_3 / a_2^2 -
a_4 b_3 / a_2 - 4 a_3 b_4 / a_2 + b_5 - 3 a_2 \b_3 ] \ Y_5
\end{array} \eqno(\sn.31)$$
which can be eliminated by choosing
$$ \begin{array}{rl}
\b_3 \ =& \
- \ (1 /( 3 a_2^5 )) \
(2 a_3^4 b_1 - 5 a_2 a_3^2 a_4 b_1 +
2 a_2^2 a_3 a_5 b_1 + 2 a_2 a_3^3 b_2 + \\
& \ + 4 a_2^2 a_3 a_4 b_2 - 2 a_2^3 a_5 b_2 -
7 a_2^2 a_3^2 b_3 + a_2^3 a_4 b_3 +
4 a_2^3 a_3 b_4 - a_2^4 b_5 ) \ . \end{array} \eqno(\sn.32)$$
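The choice (7.32) can be checked against the coefficient (7.31) in the same spirit; a sympy sketch:

```python
import sympy as sp

a2, a3, a4, a5 = sp.symbols('a2:6')
b1, b2, b3, b4, b5 = sp.symbols('b1:6')

beta3 = -(2*a3**4*b1 - 5*a2*a3**2*a4*b1 + 2*a2**2*a3*a5*b1
          + 2*a2*a3**3*b2 + 4*a2**2*a3*a4*b2 - 2*a2**3*a5*b2
          - 7*a2**2*a3**2*b3 + a2**3*a4*b3 + 4*a2**3*a3*b4
          - a2**4*b5) / (3*a2**5)    # eq. (7.32)

# Coefficient of Y_5 in (7.31)
cY5 = (-2*a3**4*b1/a2**4 + 5*a3**2*a4*b1/a2**3 - 2*a3*a5*b1/a2**2
       - 2*a3**3*b2/a2**3 - 4*a3*a4*b2/a2**2 + 2*a5*b2/a2
       + 7*a3**2*b3/a2**2 - a4*b3/a2 - 4*a3*b4/a2 + b5 - 3*a2*beta3)
assert sp.simplify(cY5) == 0
```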
Summarizing, and having taken into account all higher order effects (up to order six), we have reached the LRF
$$ \^W \ = \ Y_0 + b_1 Y_1 + a_2 X_2 + [ b_2 - (a_3 b_1 / a_2 ) ] Y_2 + [a_4 - (a_3^2 / a_2 ) ] X_4 + \ O(6) \eqno(\sn.33) $$
We will be satisfied with this order of normalization.
\section{The S4 case: standard normal forms}
\def\sn{8}
We consider now the case {\bf S4}, i.e. the linear part of our vector field is now
given by
$$ A \ = \ \pmatrix{ \la & 0 \cr 0 & \mu \cr } \eqno(\sn.1) $$
with $\la \not= \mu$, $ \la \not= 0$ and $\mu \not= 0$.
As remarked in section 3, if $(\la \cdot \mu) > 0$ (both
eigenvalues have the same sign), we are in a Poincar\'e domain, so the
convergence of the transformation to NF is guaranteed; on the other
hand, if $\la \mu < 0$ (i.e. we have a hyperbolic saddle point at the
origin), we are not in a Poincar\'e domain. However, the Chen-Sternberg theorem \cite{Arn2,Bel,BeK,Che,Ste} guarantees that the system is $C^\infty$-conjugate to its normal form; as for analytic conjugacy, this is guaranteed when $|\la / \mu|$ is irrational, by Pliss' theorem \cite{Pli}.
We also noticed that if $\la / \mu $ is irrational, there are no
resonances, i.e. the NF is linear; in this case we do not need (nor does it make sense) to consider PRFs.
Let us thus focus on the rational case.
We will
assume $|\la / \mu| = q/p$, i.e. $ |\la|
= c q$, $|\mu| = c p$, with $p$ and
$q$ positive integers relatively prime.
\subsection{Eigenvalues having the same sign}
We should first of all notice that for $\la \mu > 0$, no resonances are
actually possible unless one of the eigenvalues is a multiple of the other,
and in this case we have only one
resonant term.
Indeed for $\la \mu > 0$ the only
possible resonant terms are given by
$$ \vb \ = \ \pmatrix{y^{\la / \mu}\cr 0\cr} \ (\la / \mu \in \N) \ \ {\rm or} \ \
\vb \ = \ \pmatrix{0\cr x^{\mu / \la}\cr} \ (\mu / \la \in \N) \ .
\eqno(\sn.2)$$
In order to see this, recall the resonance
relations are now
$m_1 \la + m_2 \mu = \la $ for the $x$ component, and $m_1
\la + m_2 \mu = \mu $ for the $y$ component. These give
$(m_1 -1)
\la + m_2 \mu = 0$ for the $x$ component, and $m_1 \la + (m_2 - 1)
\mu = 0$ for the $y$ component; here
$m_1 , m_2$ are non-negative integers, and $m_1 + m_2 \ge 2$. As $\la ,
\mu$ have the same sign, the only possibility in the $x$ case is $m_1 = 0$,
and $ \la = m_2 \mu$ with $m_2 \ge 2$. Similarly, in the $y$ case it must be
$m_2 = 0$, and
thus $ \mu = m_1 \la $ with $m_1 \ge 2$.
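This enumeration argument can be checked by brute force for any concrete pair of eigenvalues; a small sketch (the sample values $\la = 3$, $\mu = 1$ and the search bound are ours):

```python
# Resonance relations for lam, mu of the same sign; sample values lam = 3, mu = 1.
# x-component: m1*lam + m2*mu = lam ; y-component: m1*lam + m2*mu = mu ,
# with m1, m2 non-negative integers and m1 + m2 >= 2.
lam, mu = 3, 1
N = 20   # search bound; large enough since m1*lam + m2*mu grows with m1, m2
res_x = [(m1, m2) for m1 in range(N) for m2 in range(N)
         if m1 + m2 >= 2 and m1*lam + m2*mu == lam]
res_y = [(m1, m2) for m1 in range(N) for m2 in range(N)
         if m1 + m2 >= 2 and m1*lam + m2*mu == mu]
print(res_x, res_y)   # [(0, 3)] [] : the only resonant term is a y^3 term in the x equation
```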
We have thus proven that for $\la \mu > 0$ the standard NF for the case {\bf
S4} is given by
$$ \cases{
{\dot x} = \la x + \a y^k \ , \ {\dot y} = \mu y & for $\la/\mu = k \in \N$, $k
\ge 2$ \cr
{\dot x} = \la x \ , \ {\dot y} = \mu y + \b x^k & for $\mu/\la = k
\in \N$, $k \ge 2$ \cr
{\dot x} = \la x \ , \ {\dot y} = \mu y & otherwise
\cr} \eqno(\sn.3)$$
where $\a,\b$ are arbitrary real constants.
In each of these cases the PRF is trivial, i.e. it coincides with the
standard NF. We will thus give no further consideration to the case $\la \mu >
0$.
\subsection{Eigenvalues with opposite signs}
Consider now the rational case with $\la \mu < 0$. Assume
$$ \la = c q \ , \ \mu = - c p \eqno(\sn.4)$$
with $p,q$ positive integers, relatively prime (no common factor), and $c \not= 0$ a real number (notice we could have $p=q=1$, corresponding to $\mu = - \la$). For the sake of our discussion, we could as well take $c = 1$.
The resonance relations give now
$ (m_1 - 1) \la = - m_2 \mu$ for the $x$
component, and $m_1 \la = (1 - m_2) \mu$ for the $y$ component.
Hence we
must have, for the $x$ component,
$ m_2 / (m_1 -1) = - \la / \mu = q/p $, i.e.
$$ m_1 = k p + 1 \ , \ m_2 = k q \ . \eqno(\sn.5)$$
Similarly, for the $y$ component we have
$ (m_2 -1)/ m_1 = - \la / \mu = q/p $,
and therefore
$$ m_1 = k p \ , \ m_2 = k q + 1 \ . \eqno(\sn.6)$$
Thus, the resonant vectors are of two types:
$$ \vb_k^{(x)} \ = \pmatrix{(x^p y^q)^k \, x \cr 0 \cr}\ \ , \ \
\vb_k^{(y)} \ = \ \pmatrix{0\cr (x^p y^q)^k \, y \cr} \ . \eqno(\sn.7)$$
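The two families (8.5) and (8.6) are easily confirmed by direct enumeration; a sketch for the sample values $p = 1$, $q = 2$, $c = 1$ (the search bound is ours):

```python
# lam = c q, mu = -c p as in (8.4); sample values p, q = 1, 2 and c = 1.
p, q = 1, 2
lam, mu = q, -p
N = 12
res_x = sorted((m1, m2) for m1 in range(N) for m2 in range(N)
               if m1 + m2 >= 2 and m1*lam + m2*mu == lam)
res_y = sorted((m1, m2) for m1 in range(N) for m2 in range(N)
               if m1 + m2 >= 2 and m1*lam + m2*mu == mu)
# x-resonances are (k p + 1, k q), y-resonances (k p, k q + 1), k = 1, 2, ...
assert all((m1, m2) == (k*p + 1, k*q) for k, (m1, m2) in enumerate(res_x, start=1))
assert all((m1, m2) == (k*p, k*q + 1) for k, (m1, m2) in enumerate(res_y, start=1))
assert res_x and res_y   # both families are nonempty within the bound
```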
Correspondingly we consider vector fields
$$ \Phi_k \ = \ [(x^p y^q)^k \, x ] \, \pa_x \ \ , \ \
\Psi_k \ = \ [(x^p y^q)^k \, y ] \, \pa_y \ . \eqno(\sn.8)$$
The most general NF will be in the form
$W = W_0 + \sum( c_k^{(1)} \Phi_k +
c_k^{(2)} \Psi_k )$, i.e.
$$
{\dot x} \ = \ \la x \, + \, \sum_{k=1}^\infty \, c_k^{(1)} (x^p y^q)^k \, x
\ \ , \ \
{\dot y} \ = \ \mu y \, + \, \sum_{k=1}^\infty \, c_k^{(2)} (x^p
y^q)^k \, y \ . \eqno(\sn.9)$$
The corresponding vector field will be
denoted as $W$; its linear part is given by $W_0 = c ( q \Phi_0 - p \Psi_0 )$.
For our discussion it will actually be more convenient to consider linear combinations of the $\Phi_k , \Psi_k$, defined as
$$ \begin{array}{rll}
X_k \ = \ & \( { 1 \over 2 pq }\) \ (q \Phi_k + p \Psi_k) \ = \ & {1 \over
2pq} \, (x^p y^q)^k \, ( q x \pa_x \, + \, p y \pa_y ) \ , \\
Y_k \ = \ & \( { 1 \over 2 pq }\) \ (q \Phi_k - p \Psi_k) \ = \ &
{1 \over 2pq} \, (x^p y^q)^k \, ( q x \pa_x \, - \, p y \pa_y ) \ ;
\end{array} \eqno(\sn.10)$$
with this notation, the linear part $W_0$ of the
vector field $W$ corresponds to $W_0 = 2cpq \, Y_0 =: \zeta Y_0$.
We also rewrite the corresponding vector field $W$, in view of the use of the vector fields
$X_k$ and $Y_k$ and for further discussion, as
$$ W \ = \ \zeta Y_0 \ + \ \sum_{k=1}^\infty ( a_k X_k + b_k Y_k )
\eqno(\sn.11)$$
where
$$ a_k \ = \ (p c_k^{(1)} + q c_k^{(2)}) \ \ , \ \
b_k \ = \ (p c_k^{(1)} - q c_k^{(2)}) \ . \eqno(\sn.12)$$
{\bf Remark 12.} \def\rnn{12}
Notice that in the case {\bf S3} we had $X_k , Y_k \in \W_k$, while here,
with $z = p + q$, we have instead $X_k , Y_k \in \W_{kz}$. $\odot$
The vector fields $\Phi_k$ and $\Psi_k$ satisfy the commutation relations
$$ \begin{array}{lll}
\[ \Phi_k , \Phi_m \] & \ = \ & p (m-k) \, \Phi_{k+m} \\
\[ \Psi_k , \Psi_m \] & \ = \ & q (m-k) \, \Psi_{k+m} \\
\[ \Phi_k , \Psi_m \] & \ = \ & m p \, \Psi_{k+m} \ - \ k q \, \Phi_{k+m}
\end{array} \eqno(\sn.13)$$
and from these it follows that
$$ \begin{array}{lll}
\[ X_k , X_m \] & \ = \ & (m-k) \, X_{k+m} \\
\[ X_k , Y_m \] & \ = \ & m \, Y_{k+m} \\
\[ Y_k , Y_m \] & \ = \ & 0 \ . \end{array} \eqno(\sn.14)$$
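The relations (8.13), and hence (8.14), can be verified directly in coordinates; a sketch in Python/sympy (the sample values of $p$, $q$, $k$, $m$ are ours; the general case follows the same computation):

```python
import sympy as sp

x, y = sp.symbols('x y')
p, q = 2, 3   # sample coprime pair

def Phi(k):   # Phi_k = (x^p y^q)^k x d/dx , as in (8.8)
    return ((x**p * y**q)**k * x, sp.Integer(0))

def Psi(k):   # Psi_k = (x^p y^q)^k y d/dy
    return (sp.Integer(0), (x**p * y**q)**k * y)

def bracket(V, W):
    # Lie bracket of the fields V = V1 d/dx + V2 d/dy, W = W1 d/dx + W2 d/dy
    return tuple(V[0]*sp.diff(W[i], x) + V[1]*sp.diff(W[i], y)
                 - W[0]*sp.diff(V[i], x) - W[1]*sp.diff(V[i], y) for i in (0, 1))

def comb(c1, V, c2, W):   # c1 V + c2 W, componentwise
    return tuple(c1*V[i] + c2*W[i] for i in (0, 1))

def eq(V, W):
    return all(sp.simplify(V[i] - W[i]) == 0 for i in (0, 1))

k, m = 1, 2
assert eq(bracket(Phi(k), Phi(m)), comb(p*(m - k), Phi(k + m), 0, Psi(0)))
assert eq(bracket(Psi(k), Psi(m)), comb(q*(m - k), Psi(k + m), 0, Phi(0)))
assert eq(bracket(Phi(k), Psi(m)), comb(m*p, Psi(k + m), -k*q, Phi(k + m)))
```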
Notice that these are the same as those encountered in discussing the case
{\bf S3}: we have thus to deal again with the Lie algebra $\G = \X \oplus_\to \Y$. Thus, provided we take into account remark \rnn, the algebraic computations
considered there will immediately apply to this case as well.
This correspondence between cases {\bf S3} and {\bf S4} is, of course, the one discussed in section 3 above.
\section{The S4 case: Poincar\'e renormalized forms}
\def\sn{9}
As remarked above, the algebra $\ker (\L_0 )$ is spanned by vector fields $\{ X_k , Y_k \}$ (with $k \in \N$) which generate the same Lie algebra $\G = \X \oplus_\to \Y$ encountered in discussing the case {\bf S3}, as also discussed in section 3. The fact that the
linear part is now given by $\zeta Y_0 = 2cpq Y_0$, rather than simply by $Y_0$, has no consequence on the discussion of linear subspaces, since the constants $c,p,q$ are all nonzero and thus $\zeta \not= 0$: see remark \rna.
We can thus just repeat the discussion conducted in case {\bf S3}, modulo remark {\rnn} above; we write again $z := p + q$.
\subsection{General results}
We will thus consider the terms in $\W_z$, given by
$$ W_{z} = a_1
X_1 + b_1 Y_1 \ , \eqno(\sn.1)$$
and consider the different cases
$$
\cases{
a_1 \not= 0 \ , \ b_1 = 0 \ ; & (a) \cr
a_1 = 0 \ , \ b_1 \not= 0
\ ; & (b) \cr
a_1 \not= 0 \ , \ b_1 \not= 0 \ ; & (c) \cr
a_1 = 0 \ , \
b_1 = 0 \ . & (d) \cr } \eqno(\sn.2)$$
Notice that, for a system which is already in standard normal form, the operators $\L_1 , ... , \L_{z-1}$ vanish; the first nontrivial higher homological operator is
$$ \L_z := [ W_z , . ] \equiv [ a_1 X_1 + b_1 Y_1 , \, . \, ] \ .
\eqno(\sn.3)$$
With exactly the same argument as in the discussion of the
case {\bf S3} we have the following results.
In case {\bf (a)}, where $W_z = a_1 X_1$, the kernels $\ker(\M_1) , ... , \ker
(\M_{z-1})$ just coincide with $\ker (\L_0)$ (that is, the whole space on
which the trivial operators $\M_1 , ... , \M_{z-1}$ are defined), while $\ker
(\M_z)$ reduces to the linear span of $W_0$ and $W_1$, i.e. of $Y_0$ and
$X_1$. The range of $\M_z$ is the whole
linear span of the $\X , \Y$, except
the subspace spanned by $X_1 , X_2 , Y_1$. As any two vector fields $Z_1 , Z_2$ with
$Z_1 \in \ran (\M_z)$ and $Z_2 \in \ker (\M_1 )$ commute, no further
normalization is possible.
Thus, we obtain the same result as in case {\bf S3(a)}, with the role of
$\M_1$ now effectively played by $\M_z$ (which is the operator associated to
$X_1$, and more in general to $a_1 X_1 + b_1 Y_1$).
In cases {\bf (b)} and {\bf (c)} we similarly reproduce the discussion of the
corresponding cases of {\bf S3}, again with the role of $\M_1$ now effectively
played by $\M_z$.
Thus we obtain the following expressions for the PRF when $W_z \not= 0$:
$$ \cases{
\^W \ = \ \zeta Y_0 + a_1 X_1 + \^a_2 X_2 & (a) \cr
\^W \ = \ \zeta Y_0 + b_1 Y_1 + \sum_{k=2}^\infty \^a_k X_k & (b) \cr
\^W \ = \ \zeta Y_0 + (a_1 X_1 + b_1 Y_1 ) + \^a_2 X_2 & (c)
\cr}
\eqno(\sn.4)$$
In case {\bf (b)}, the $\G$-adapted procedure described in section 3 and subsection 6.3 will actually give a more reduced NF (the LRF), see below.
In the degenerate case {\bf (d)} we should again proceed as in case {\bf S3(d)}. With the same meaning for $\mu, \nu$ as there and the same splitting in
subcases {\bf (da)}, {\bf (db)}, {\bf (dc)}, we would obtain exactly the same expressions for the PRF as in (6.12), (6.16) and (6.20), with the exception of the linear part, which is now given by $\zeta Y_0$ rather than by $Y_0$.
In case {\bf (db)}, using the $\G$-adapted procedure one would get, see (6.22),
$$ \=W \ = \ \zeta Y_0 \, + \, a_\mu X_\mu \, + \, \^a_{2 \mu} X_{2 \mu} \, + \, \sum_{k=\nu}^\mu \, \^b_k Y_k \ ; \eqno(\sn.4') $$
this also applies to the case {\bf (b)}, with $\nu = 1$.
\subsection{Explicit computations}
The results of the explicit computations performed in the case {\bf S3} would
also extend to the present case. Indeed, once we have transformed the original
system into standard NF, the linear part $W_0$ does not enter in the PRF algorithm any more, and thus the presence of the constant $\zeta$ (rather than one) cannot affect the computations in any way. Again, when translating
the results obtained in case {\bf S3} to the present case, one has to take into account remark \rnn.
We will thus just follow the first steps of the computation in case {\bf (a)} to illustrate this.
We start from a system $W^{(1)}$ which has already been brought to
standard
NF. Transformations generated by $h_k \in \F_k$ for $0 < k < z$ are
necessarily trivial, as for such $k$ we have $\ker (\L_0) \cap \F_k = \{ 0
\}$, i.e. the Lie-Poincar\'e ``transformations'' reduce to the identity. With a
transformation generated by $h_z = \a_1 X_1 + \b_1 Y_1$ the term $W_z$ is
unchanged, while $W_{2z}^{(z)}$ is taken into
$$ W_{2z}^{(z+1)} \ = \
a_2
\, X_2 \ + \ (b_2 - a_1 \b_1 ) \, Y_2 \eqno(\sn.5)$$
and we can of
course eliminate the
$Y_2$ component by choosing $\b_1 = b_2 / a_1 $; we also
choose $\a_1 = 0 $.
Transformations with $h_k$, $z < k < 2z$ are trivial; thus we have
$W_m^{(2z)} \equiv W_m^{(z+1)}$ for all $m \ge 0$.
With a transformation generated by $h_{2z} = \a_2 X_2 + \b_2 Y_2 $ (so that
$h_{2z} \in \ker ( \L_0 ) \cap \F_{2z}$) the terms $W_k$, $k < 3z$, are
unchanged; the term $W_{3z}^{(2z)}$ is taken into
$$ W_{3z}^{(2z+1)} \ = \
[a_3 - a_1 \a_2 ] \, X_3 \ + \
[ ( a_1 b_3 - a_2 b_2 - 2 a_1^2 \b_2 ) / a_1
] \, Y_3 \ . \eqno(\sn.6)$$
This can be eliminated by choosing
$$ \a_2 \ = \ a_3 / a_1
\ \ , \ \ \b_2 \ = \
(a_1 b_3 - a_2 b_2)/(2 a_1^2) \ . \eqno(\sn.7)$$
Again, the transformations with generator $h_k \in \ker(\L_0 ) \cap \F_k$ are
necessarily trivial for $2z < k < 3z$, and thus $W_m^{(3z)} = W_m^{(2z+1)}$
for all $m \ge 0$.
The explicit formulas obtained can be compared with those of section 6;
they show the complete correspondence with the subcase {\bf S3(a)}. We
believe there is no need to give further explicit formulas for the present
case {\bf S4}, as they can be read off the corresponding ones for case {\bf
S3}.
Notice that also the expression of the PRF in terms of the vector fields $X_k
, Y_k$ will be (except of course for the linear term $W_0$, where the constant
$\zeta$ appears) exactly the same as in the case {\bf S3}.
\section{Summary of results for other cases}
\def\sn{10}
In this section we briefly recall, for the sake of completeness, the results
obtained in \cite{LMP,IHP} for the other cases where the PRF is nontrivial.
These are cases {\bf S2} and {\bf N2} of our basic classification of linear parts.
An error contained in \cite{LMP,IHP} for one
degenerate {\bf S2} subcase is also corrected.
\subsection{PRFs and LRFs for the case S2}
In case {\bf S2} one could work in ${\bf C}^2$, but we will stay within the framework of the present discussion and work in $\R^2$, i.e. we will deal with real matrices, and thus write
$$ A \ = \ \pmatrix{ 0 & -1 \cr 1 & 0 \cr} \ . \eqno(\sn.1) $$
Notice that in order to map this case into {\bf S3}, it suffices to pass to polar coordinates.
It is well known that, with $r^2 = x^2 + y^2$, the standard NF is then
$$ \begin{array}{rl}
{\dot x} \ =& \ - y \ + \ \sum_{k=1}^\infty r^{2k} ( a_k x - b_k y ) \\
{\dot y} \ =& \ \ x \ + \ \sum_{k=1}^\infty r^{2k} ( b_k x + a_k y )
\end{array} \eqno(\sn.2) $$
If the system is hamiltonian, then all the $b_k$ are zero; conversely, if all
the $b_k$ are zero, the NF (2) is hamiltonian. Further reduction of the NF in
this case has been studied by Siegel and Moser \cite{SiM} a long time ago
(their results have recently been shown to generalize to higher dimensions \cite{FoM}). We will consider the general (non-hamiltonian) case.
Let $\mu \ge 1$ be the smallest $k$ such that $a_k \not=0$, and $\nu \ge 1$ the
smallest $k$ such that $b_k \not=0$ (we assume both $\mu$ and $\nu$ are finite).
If $\mu < \nu$, then (see subcases {\bf (a)} and {\bf (da)} for {\bf S3}) the PRF is given by
$$ \begin{array}{rl}
{\dot x} \ = & \ - y \ + \ a_\mu r^{2\mu} x + \a r^{4\mu} x \\
{\dot y} \ = & \ \ x \ + \ a_\mu r^{2\mu} y + \a r^{4\mu} y \ . \end{array} \eqno(\sn.3)$$
If $\mu = \nu$, then (see subcases {\bf (c)} and {\bf (dc)} for {\bf S3}) the PRF is given by
$$ \begin{array}{rl}
{\dot x} \ = & \ - y \ + \ r^{2\mu} (a_\mu x - b_\mu y) + \a r^{4\mu} x \\
{\dot y} \ = & \ \ x \ + \ r^{2\mu} (b_\mu x + a_\mu y) + \a r^{4\mu} y \ . \end{array} \eqno(\sn.4)$$
Here $a_\mu \not= 0$ and $b_\mu \not= 0$ are the same as in (2),
and the coefficient $\a$ is a real number. A detailed proof of this result is contained in section 12 of \cite{IHP}; a shorter proof is also given in \cite{LMP}.
If $\nu < \mu$, then (see cases {\bf (b)} and {\bf (db)} for {\bf S3})
the PRF is given by
$$ \begin{array}{rl}
{\dot x} \ = & \ - (1 + b_\nu r^{2 \nu} ) \, y \ + \ \sum_{k=\mu}^\infty r^{2k} \^a_k \, x \\
{\dot y} \ = & \ \ (1 + b_\nu r^{2 \nu} ) x \ + \ \sum_{k=\mu}^\infty r^{2k} \^a_k \, y \ . \end{array} \eqno(\sn.5)$$
In this same case, the LRF is given by
$$ \begin{array}{rl}
{\dot x} \ = & \ - y \ + \ a_\mu r^{2\mu} x + \a r^{4\mu} x \ - \ \sum_{k=\nu}^\mu b_k \, r^{2k} \, y \\
{\dot y} \ = & \ \ x \ + \ a_\mu r^{2\mu} y + \a r^{4\mu} y \ + \
\sum_{k=\nu}^\mu b_k \, r^{2k} \, x \ . \end{array} \eqno(\sn.6)$$
Here $a_\mu \not= 0$ and $b_k$ (for $\nu \le k \le \mu$) are the same as in (2), and the coefficient $\a$ is a real number.
The computation for this case given in \cite{LMP,IHP} contained a mistake: the coefficients $b_k$ cannot be changed (via a PRF-like transformation) to eliminate the corresponding rotation terms without producing radial terms. It should also be stressed that the reduced NF obtained in these papers, even after correction of this mistake, is the LRF (and is not obtained with the generic PRF procedure); in particular, in the case $\nu < \mu$ this is {\it not } a PRF according to our definition.
It should be noticed that these results can be obtained, i.e. the case {\bf S2} can be studied, more easily using the approach of the present paper, as we now briefly indicate.
We define, as in \cite{LMP,IHP}, dilation and rotation linear vector fields
$$ D = x \pa_x + y \pa_y \ \ , \ \ R = - y \pa_x + x \pa_y \eqno(\sn.7) $$
and with this compact notation, writing also $r^2 := (x^2 + y^2)$, we define
$$ \Psi_k \ := \ r^{2k} \, D \ \ , \ \ \Phi_k := r^{2k} \, R \ . \eqno(\sn.8)
$$
It is immediate to check that these vector fields satisfy the commutation relations
$$ [ \Psi_k , \Psi_m ] = 2 (m-k) \Psi_{k+m} \ , \ [ \Phi_k , \Phi_m ] = 0 \ ,
\ [ \Psi_k , \Phi_m ] = 2 m \Phi_{k+m} \ . \eqno(\sn.9) $$
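Again these relations can be verified by a direct computation in coordinates; a sympy sketch (the sample indices are ours):

```python
import sympy as sp

x, y = sp.symbols('x y')
r2 = x**2 + y**2

def Psi(k):   # r^{2k} D, with D = x d/dx + y d/dy , as in (10.7)-(10.8)
    return (r2**k * x, r2**k * y)

def Phi(k):   # r^{2k} R, with R = -y d/dx + x d/dy
    return (-r2**k * y, r2**k * x)

def bracket(V, W):
    # Lie bracket of planar vector fields given as component pairs
    return tuple(V[0]*sp.diff(W[i], x) + V[1]*sp.diff(W[i], y)
                 - W[0]*sp.diff(V[i], x) - W[1]*sp.diff(V[i], y) for i in (0, 1))

def eq(V, W):
    return all(sp.simplify(V[i] - W[i]) == 0 for i in (0, 1))

k, m = 1, 2
assert eq(bracket(Psi(k), Psi(m)), tuple(2*(m - k)*c for c in Psi(k + m)))
assert eq(bracket(Phi(k), Phi(m)), (0, 0))
assert eq(bracket(Psi(k), Phi(m)), tuple(2*m*c for c in Phi(k + m)))
```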
That is, we have the same algebraic structure as the one encountered in analyzing previous cases. We can make it identical, including coefficients, see (4.3), by defining $$ X_k = (1/2) \, \Psi_k \ \ , \ \ Y_k = (1/2) \, \Phi_k \ . \eqno(\sn.10) $$
In this way, the explicit computations performed for the case {\bf S3} can immediately be applied to this case as well. We write now the standard NF as $$ W \ = \ \zeta \, Y_0 \ + \ \sum_{k=1}^\infty (a_k X_k + b_k Y_k ) \eqno(\sn.11) $$
where $\zeta = 2$. Formulas for the PRF can be read off the discussion of the {\bf S3} case. In particular, (6) corresponds to (6.22); see also (7.33).
\subsection{PRFs for the case N2}
In the (nonregular) case N2, we have
$$ A \ = \ \pmatrix{0&1\cr0&0\cr} \eqno(\sn.12) $$
and the standard NF is given by
$$ \begin{array}{rl}
{\dot x} \ = & \ y + \sum_{k=1}^\infty b_k x^{k+1} \\
{\dot y} \ = & \sum_{k=1}^\infty (a_k x^{k+1} + b_k x^k y )
\end{array} \eqno(\sn.13) $$
Let $\mu \ge 1$ be the smallest $k$ for which $(a_k^2 + b_k^2) \not= 0$; then
the PRF is given by
$$ \begin{array}{rl}
{\dot x} \ = & \ y + b_\mu x^{\mu+1} + \a x^{\mu+2}
\ + \ \sum_{k=\mu+2}^\infty b_k x^{k+1} \\
{\dot y} \ = & \ a_\mu x^{\mu+1} + b_\mu x^\mu y + \a x^{\mu+1} y \ + \
\sum_{k=\mu+2}^\infty (a_k x^{k+1} + b_k x^k y ) \end{array} \eqno(\sn.14) $$
This represents a very poor simplification of the standard NF; such a poor performance of the algorithm is related to the vanishing of the semisimple part of $A$, i.e. to the fact that the singular point is nonregular.
A detailed proof of (14) is contained
in section 15 of \cite{IHP}. For a discussion of this singularity, see \cite{Tak} and \cite{BaS,KOW}.
\section{Conclusions}
We have studied vector fields in the plane around a singular point by means of normal forms theory, discussing in detail all possible cases in which the singular point is a regular one. In doing this we have assumed that the linearization of the vector field has been preliminarily taken into Jordan normal form.
We have also shown that when the standard normal form (NF) is nontrivial, the Poincar\'e renormalized forms (PRFs) approach makes it possible to simplify substantially the expression of the vector field in normal form.
Thanks to constraints on the structure of the infinite dimensional Lie algebra of two-dimensional vector fields in normal forms \cite{Elp,Wal}, there is a substantial correspondence between different cases where the NF is nontrivial, and computations performed in one case can be mapped into any other one. We have taken advantage of this property, and performed explicit and detailed computations in one case (S3), using them for other cases as well.
Considering the Lie algebra structure of vector fields in normal form also allows one to define a different reduction scheme, designed to take advantage of this structure. The reduced normal form thus obtained, called Lie renormalized form (LRF), is not necessarily a PRF; actually, we have seen that in some of our subcases -- i.e. cases {\bf (b)} and {\bf (db)} for all the semisimple linear parts -- the LRF is finite while the PRF is infinite, and the LRF is not a special instance of PRF. The LRF approach is directly related to Broer's approach \cite{Bth,Bro} and to Baider's work \cite{Bai}.
The local behaviour of vector fields in $\R^2$ around regular singular points is of course very well studied, so that the real interest of our discussion is not in the expressions obtained themselves.
Rather, we have shown that the PRF approach is a viable way to obtain a very explicit description of further reduced normal forms, even with limited computing facilities: the very explicit formulas obtained here required only a few seconds of CPU time on a low-cost processor.
The detailed analysis given here also led us to implement considerations based on the Lie algebraic structure of vector fields in normal form, and to define a $\G$-adapted reduction procedure, the LRF procedure, conjugating Lie-algebraic considerations {\it \`a la Broer} and the PRF algorithmic approach. This is of much wider use than the limited one considered here.
We have also corrected a computational error contained in previous work, and clarified (see also appendix B) some confusion on the issue of PRFs present in the literature.
\vfill\eject
\section*{Appendix A. Changes of coordinates}
In section 7 we have given completely explicit formulas for the generators of Lie-Poincar\'e transformations and for the PRF which can be obtained in this way for case {\bf S3}; these are also applied to other cases, as discussed in section 3 and also in sections 9 and 10.
It should be mentioned that in this simple case, one can describe exactly the change of coordinates generated by the vector field $H_k = \a_k X_k + \b_k Y_k$; we will consider the realization of case {\bf S3} for definiteness.
In this case the evolution under the vector field $H_k$ ($k \ge 1$) is described by
$$ {d x \over d s} = \a_k x^{k+1} \ \ , \ \ {d y \over ds} = \b_k x^k y \eqno(A.1)$$
with initial datum $x(0)= x_0 , y(0)=y_0$.
The first of these is solved by elementary methods to give
$$ x(s) \ = \ { x_0 \over ( 1 - \a_k k s x_0^k )^{1/k} } \ \ . \eqno(A.2)$$
Using this expression for $x(s)$, the second of (A.1) is rewritten
$$ {dy \over y} \ = \ \b_k \ {x_0^k \over ( 1 - \a_k k s x_0^k )} \ ds \ , \eqno(A.3)$$
which gives
$$ y(s) \ = \ y_0 \ { 1 \over ( 1 - \a_k k s x_0^k )^{\b_k / (\a_k k)} } \eqno(A.4) $$
We are interested in the mapping $(x,y) = (x_0,y_0) \to (x(1),y(1) ) := (\=x , \=y )$, and from the above we have that
$$ \=x \ = \ { x \over ( 1 - \a_k k x^k )^{1/k} } \ \ , \ \
\=y \ = \ y \ { 1 \over ( 1 - \a_k k x^k )^{\b_k / (k \a_k)} } \eqno(A.5)$$
with the inverse change of coordinates given by
$$ x \ = \ {\=x \over \( 1 + \a_k k \=x^k \)^{1/k}} \ \ ; \ \
y \ = \ \( 1 + \a_k k \=x^k \)^{- \b_k / (k \a_k)} \ \=y \ . \eqno(A.6)$$
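The solutions (A.2) and (A.4) are easily double-checked symbolically; a sketch in sympy (the sample value $k = 2$ is ours; the check works the same way for any $k \ge 1$):

```python
import sympy as sp

s, x0, y0, ak, bk = sp.symbols('s x0 y0 alpha_k beta_k', positive=True)
k = 2   # sample order

xs = x0 / (1 - ak*k*s*x0**k)**sp.Rational(1, k)    # eq. (A.2)
ys = y0 * (1 - ak*k*s*x0**k)**(-bk/(ak*k))         # eq. (A.4)

# (A.1): dx/ds = alpha_k x^{k+1} ,  dy/ds = beta_k x^k y
assert sp.simplify(sp.diff(xs, s) - ak*xs**(k + 1)) == 0
assert sp.simplify(sp.diff(ys, s) - bk*xs**k*ys) == 0
```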
These allow one to obtain explicitly the changes of coordinates performed in passing from NFs to PRFs in case {\bf S3}, and can be mapped to the other cases as well. The explicit formulas, however, would contain rational powers and be quite involved.
In order to make contact with the explicit formulas in section 7, notice that if we are acting with $H_k$, we are actually considering the change of coordinates from $x^{(k)}$ to $x^{(k+1)}$.
This map is defined only for $x^k < (\a_k k)^{- 1}$; this therefore allows one to compute explicitly the domain of analyticity of the change of coordinates.
Let us, as an example, consider the case dealt with in appendix B below. Here $\b_k = 0$ for all $k$, so that the mappings do not act on $y$. It is easy to obtain (with the help of an algebraic manipulation program) explicit expressions for the changes of coordinates, and thus for the domain of analyticity of the overall transformation up to step $k$.
We write the combined effect of the first $k$ changes of coordinates as $x \to \=x^{(k)} = x / B_k (x)$; the denominators $B_k (x)$ can
be written in recursive terms as
$$ B_k (x) \ = \ \[ \( B_{k-1} (x) \)^k \, - \, (-1)^k \, \gamma_k \, x^k \]^{1/k} \ . \eqno(A.7)$$
The first numbers of the sequence $\gamma_k$ ($k=1,2,...$) are given by 1, 2, 6, 18, 60, 198, 693; obviously $B_0 (x) \equiv 1$. We will omit the derivation of this recursion formula.
This also allows one to determine explicitly the domain of analyticity of the transformations, which can be read from the roots of $B_k (x)$; but the expressions so obtained quickly become extremely involved and of little interest.
Thus I have computed analytically the $x_-^{(k)} , x_+^{(k)} $ such that the overall transformation up to step $k$ (i.e. $x \to x^{(k)}$) is analytic in the strip $x_-^{(k)} < x < x_+^{(k)}$, but report here only their numerical values. These are:
$$ \begin{array}{l}
x_-^{(1)} = -1 \ , \ x_-^{(2)} = -0.333333 \ , \ x_-^{(3)} = -0.270929 \ , \ x_-^{(4)} = -0.244594 \ , \\
x_-^{(5)} = -0.228796 \ , x_-^{(6)} = -0.21915 \ , \ x_-^{(7)} = -0.21224 \ ; \\
x_+^{(1)} = \infty \ , \ x_+^{(2)} = 1. \ , \ x_+^{(3)} = 1. \ , \ x_+^{(4)} = 0.668534 \ , \\
x_+^{(5)} = 0.668534 \ , \ x_+^{(6)} = 0.561419 \ , \ x_+^{(7)} = 0.561419 \ .
\end{array} \eqno(A.8)$$
These numerical data are perhaps of little interest in themselves; but it is worth noticing, in view of applications of the method, that the $x_\pm^{(k)} $ can easily be determined algebraically, and that determining them algebraically and then evaluating them numerically required very little computational effort.
\vfill\eject
\section*{Appendix B. The Bruno system}
As mentioned in remark 2 at the end of the Introduction, in his reviews of my papers \cite{LMP,IHP} for {\it Mathematical Reviews} \cite{Brep}, again in his recent book \cite{Bru2} (section V.22), and in a preprint \cite{Bprep} which he was kind enough to send me, Bruno has claimed that the main result of my works \cite{LMP,IHP} is wrong; he has also given a ``counterexample'' to my result. This falls in subcase {\bf S3(b)} of the classification considered here, and was given in \cite{Brep,Bru2} as
$$ \begin{array}{rl}
{\dot x} \ = \ & x^3 \\
{\dot y} \ = \ & y + xy + x^2 y \ \equiv \ (1+x+x^2) y \ ; \end{array}
\eqno(B.1) $$
according to Bruno \cite{Brep}, the PRF for this
system would be given by
$$ \begin{array}{rl} {\dot x} \ = \ & x^3 \\
{\dot y} \ = \ & y + xy \ \equiv \ (1+ x) y \end{array}
\eqno(B.2') $$
with no higher order terms. In \cite{Bru2}, this is changed to
$$ \begin{array}{rl} {\dot x} \ = \ & a_2 x^3 + \a x^5 \\
{\dot y} \ = \ & y + \b xy \ \equiv \ (1+ \b x) y \ , \end{array}
\eqno(B.2'') $$
again with no higher order terms, where $a_2 , \a , \b$ are some constants (no mention is made of this discrepancy in \cite{Bru2}; in neither case is any computation reported to explain how these are obtained). More recently, Bruno and Petrovich \cite{Bprep} have also considered a slightly generalized form of this ``counterexample''; see below.
The discussion of section 5, and the very explicit computations of section 6, show that (B.2) -- in either one of its versions -- is {\it not } the PRF for system (B.1).
Actually, in his reviews, book, and preprint, Bruno quotes my result in a form which does not correspond to -- and is not equivalent to -- the statements I gave in \cite{LMP,IHP}; thus his sweeping assertion that ``it is easy to see that the statement of Gaeta is wrong'' (see p. 275, \cite{Bru2}) does refer to an incorrectly reported version\footnote{Notice that according to Bruno's definition (but with the notations of the present paper) the PRF would be characterized by the property that $\L_{j}^+ (W_k ) = 0$ for $j < k$ in the first review \cite{Brep}, and for $j = k-1$ in the second of \cite{Brep} and in \cite{Bru2}.} of my result.
The key difference is given by the fact that the role played by the $\M_k$ operators in my construction is, in the version reported by Bruno \cite{Brep,Bru2,Bprep}, taken by the $\L_k$ ones (with no restriction to kernels of $\L_s$ with $s < k$; see section 2). Thus, the result ``quoted'' by Bruno should not be attributed to my papers (incidentally, I agree that the statement given in \cite{Brep,Bru2,Bprep} is wrong in an obvious way).
To avoid any confusion about PRF for the system (B.1), let us
perform the Poincar\'e renormalization algorithm up to terms in $\W_9$. I stick to the proper general PRF procedure as stated in \cite{LMP,IHP}, i.e. I do not consider the alternative ($\G$-adapted, or LRF) scheme given in section 3 (see also section 6); actually, the system (B.1) is already in LRF, as can be checked by comparison with (6.22).
I will freely use the notation introduced in discussing the case {\bf S3}.
We write $W = x^3 \pa_x + y(1+x+x^2) \pa_y $. The system is already in NF, so we write $W^{(1)} = W$; the system is taken into PRF by operating successive
transformations with generators $H_k = \a_k X_k + \b_k Y_k$ (to avoid any possible misunderstandings, let us specify there is no sum on $k$).
It turns out that, choosing $\b_k = 0$ and with
$$ \begin{array}{l}
\a_1 = \
-1 \ , \
\a_2 = \ 1 \ , \
\a_3 = \ -2 \ , \
\a_4 = \ 9/2 \ , \\
\a_5 = \ - 12 \ , \
\a_6 = \ 33 \ , \
\a_7 = \ - 99 \end{array} \eqno(B.3)$$
the system takes successively the forms (where $O(9)$
denotes terms in $\W_9$ and higher)
$$ \begin{array}{rl}
\=W^{(2)} \ = & \ Y_0 + Y_1 + X_2 - X_3 - Y_3 + X_4 + 2 Y_4 + \\
& - X_5 - 3 Y_5 + X_6 + 4 Y_6 - X_7 - 5 Y_7 + X_8 + 6 Y_8 + O(9) \ ; \\
\=W^{(3)} \ = & \ Y_0 + Y_1 + X_2 - X_3 +
X_4 + 2 Y_4 - 2 X_5 - (9/2) Y_5 + \\
& + 3 X_6 + 12 Y_6 - (11/2) X_7 -
25 Y_7 + 9 X_8 + 54 Y_8 + O(9) \ ; \\
\=W^{(4)} \ = & \ Y_0 + Y_1 + X_2 - X_3 + X_4 - (9/2) Y_5 + \\
& + 3 X_6 + 12 Y_6 - (15/2) X_7 -
33 Y_7 + 13 X_8 + 99 Y_8 + O(9) \ ; \\
\=W^{(5)} \ = & \ Y_0 + Y_1 + X_2 - X_3 + X_4 - 6 X_6 + \\
& + 12 Y_6 - 3 X_7 - 33 Y_7
+ 13 X_8 + 99 Y_8 + O(9) \ ; \\
\=W^{(6)} \ = & \ Y_0 + Y_1 + X_2
- X_3 + X_4
- 6 X_6 + 33 X_7 + \\
& - 33 Y_7
- 11 X_8 + 99 Y_8 + O(9) \ ; \\
\=W^{(7)} \ = & \ Y_0 + Y_1 + X_2 - X_3 + X_4
- 6 X_6 + 33 X_7
- 143 X_8 + 99 Y_8 + O(9) \ ; \\
\=W^{(8)} \ = & \ Y_0 + Y_1 + X_2 - X_3 + X_4
- 6 X_6 + 33 X_7 - 143 X_8 + O(9) \ . \end{array} \eqno(B.4)$$
The latter is the PRF, up to terms $O(9)$, for (B.1).
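The first of these renormalization steps can be checked mechanically. The following Python (sympy) sketch is an illustration, not the code used for the computations above: it encodes fields $f(x) \pa_x + y \, g(x) \pa_y$ as pairs $(f,g)$, computes the push-forward of $W^{(1)}$ under the time-one flow of $H_1 = \a_1 X_1$ as the truncated Lie series $\exp ({\rm ad}_{H_1}) W$ (this convention for the push-forward is assumed here), and reproduces the coefficients of $\=W^{(2)}$ in (B.4).

```python
# Hypothetical re-check of the first step of (B.4): push-forward of
# W^(1) = x^3 d/dx + y(1+x+x^2) d/dy under the time-1 flow of H_1 = -X_1.
# A field f(x) d/dx + y g(x) d/dy is encoded as the pair (f, g).
import sympy as sp

x = sp.symbols('x')
NMAX = 8  # keep gradings 0..8, i.e. discard terms in W_9 and higher

def trunc(fg):
    # X_k = x^(k+1) d/dx and Y_k = x^k y d/dy both have grading k
    f, g = (sp.expand(t) for t in fg)
    f = sum(c * x**e for (e,), c in sp.Poly(f, x).terms() if e - 1 <= NMAX)
    g = sum(c * x**e for (e,), c in sp.Poly(g, x).terms() if e <= NMAX)
    return (f, g)

def bracket(V, W):
    (f1, g1), (f2, g2) = V, W
    return trunc((f1 * sp.diff(f2, x) - f2 * sp.diff(f1, x),
                  f1 * sp.diff(g2, x) - f2 * sp.diff(g1, x)))

def push_forward(W, H):
    # truncated Lie series exp(ad_H) W = sum_n (ad_H)^n W / n!
    total, term = W, W
    for n in range(1, NMAX + 2):
        term = tuple(t / n for t in bracket(H, term))
        total = trunc((total[0] + term[0], total[1] + term[1]))
    return total

W1 = (x**3, 1 + x + x**2)            # the system (B.1), already in NF
W2 = push_forward(W1, (-x**2, 0))    # H_1 = alpha_1 X_1 with alpha_1 = -1

# coefficients of W^{(2)} as listed in (B.4):
assert sp.expand(W2[0] - (x**3 - x**4 + x**5 - x**6 + x**7 - x**8 + x**9)) == 0
assert sp.expand(W2[1] - (1 + x - x**3 + 2*x**4 - 3*x**5
                          + 4*x**6 - 5*x**7 + 6*x**8)) == 0
```

The subsequent steps, with generators $(\a_k x^{k+1} , 0)$ for the $\a_k$ of (B.3), can be iterated in the same way.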
In \cite{Bprep}, Bruno and Petrovich consider general systems with linear part corresponding to our case {\bf S3}. The standard normal form for these is, as discussed above, of the form
$$ \begin{array}{rl}
{\dot x} \ = & \ \sum_{k=1}^\infty \, a_k \, x^{k+1} \\
{\dot y} \ = & \ y \, + \, \sum_{k=1}^\infty \, b_k \, x^k y \ ; \end{array}
\eqno(B.5)$$
we denote by $m \ge 1$ the smallest $k$ such that $a_k \not=0$, and by $\ell
\ge 1$ the smallest $k$ such that $b_k \not=0$. That is, we assume $a_k = 0$
for $k < m$, and $b_k = 0 $ for $k < \ell$. Bruno and Petrovich suggest to
consider the case $\ell \le m < \infty$, and give as an example the case $\ell
= 1$, $m=2$.
They claim, see formula (5.2) of \cite{Bprep}, that the PRF in this case is
$$ \begin{array}{rl}
{\dot x} \ = & \ a_m x^{m+1} \, + \, \a x^{2m+1} \\
{\dot y} \ = & \ y \, + \, \b x^\ell y \ ; \end{array}
\eqno(B.6)$$
with $a_m \not= 0$, and $\a,\b$ real coefficients.
Once again this formula does not correspond to the PRF computed in sections 5 and 6, so that their proof that the original system (B.5) cannot be conjugated -- even formally -- to system (B.6) does not concern PRFs.
It should be remarked that if one maps (as discussed in section 3, or simply by passing to polar coordinates) case {\bf S2} to case {\bf S3}, then the wrong result given in \cite{LMP,IHP} for the degenerate case (in the notation of the present paper, ${\bf S2(db)}$) with $\nu < \mu$ would map exactly to the wrong PRF (B.2') given by Bruno in \cite{Brep}. Notice however that there it is claimed that the error lies with the general statement and not with the computations of the example; moreover, (B.2') is not claimed to be derived from the results for {\bf S2} given in \cite{LMP,IHP}, but just to be the PRF according to the definition reported there. Thus such a statement appears somewhat mysterious; however, as discussed above, it does not involve PRFs as defined in \cite{LMP,IHP} and in this paper, so that we do not have to deal with it.
\medskip
I hope the discussion of this note, and of this appendix, clarifies any confusion caused by my regrettable computational mistake in \cite{LMP,IHP} (also reported in \cite{CGs}) and by the discussion by Bruno of an imprecisely reported version of my statements \cite{Brep,Bru2,Bprep}.
\vfill\eject
\section*{Appendix C. A glimpse into three dimensions}
As discussed in section 3, the results of our computations are common to all cases where we have a two dimensional $C(A)$, one (and only one) master resonance, and all ordinary resonances associated to this master resonance.
Thus the Lie algebra of vector fields in normal form can be the same as the one discussed here (i.e. $\G = \X \oplus \Y$) also in higher dimension, provided these conditions are satisfied. In this appendix we want to briefly discuss which (real) three dimensional cases will also be covered by our discussion, based on classification of Jordan normal forms for the linear part $A$. The computations we have performed for the two-dimensional case {\bf S3} will immediately apply -- {\it mutatis mutandis} -- to these three dimensional cases as well.
The constants $\mu,\mu_i$ appearing here will be supposed to be real, possibly zero; and we will use coordinates $(x,y,z)$ in $\R^3$.
For a three-dimensional Jordan block, i.e. for
$$ A \ = \ \pmatrix{\mu & 1 & 0 \cr 0 & \mu & 1 \cr 0 & 0 & \mu \cr} \eqno(C.1)$$ we have no resonance for $\mu \not= 0$, and a non-regular singular point for $\mu = 0$.
For a (2,1) structure of Jordan normal form, i.e. for
$$ A \ = \ \pmatrix{ \mu_1 & 1 & 0 \cr 0 & \mu_1 & 0 \cr 0 & 0 & \mu_2 \cr} \eqno(C.2)$$
we have several possibilities, depending on the vanishing of the $\mu_i$, on their relative sign, and on (the absolute value of) their ratio being rational or not.
In particular, if $\mu_1 \not= 0 $, $\mu_2 = 0$, we have only one master resonance and basic invariant, given by $\Psi = z$, and all resonances are associated to this; the matrices spanning $C (A)$ can be chosen to be $A$ and $B = {\rm diag} (0,0,1)$. We will then have the same structure discussed here, with $X^{(1)} = (\mu_1 x + y) \pa_x + \mu_1 y \pa_y$, $X^{(2)} = z \pa_z$; and
$$ X_k \ = \ z^k X^{(2)} \ \ , \ \ Y_k = z^k X^{(1)} \ . \eqno(C.3)$$
Other subcases do not have the required structure.
For a Jordan normal form of type ($1,1^* ,1$), i.e. $A = {\rm diag} (\mu_1 + i \mu_2 , \mu_1 - i \mu_2 , \mu_3)$, or equivalently (in real form) for
$$ A = \pmatrix{ \mu_1 & - \mu_2 & 0 \cr \mu_2 & \mu_1 & 0 \cr 0 & 0 & \mu_3 \cr} \eqno(C.4)$$
we also have to consider various subcases; only two of them have the required structure of resonances. These correspond to: (i) $\mu_1 = 0 $, $\mu_2 \not= 0 $, $\mu_3 \not= 0$; and (ii) $\mu_1 \not= 0$, $\mu_2 \not= 0$, $\mu_3 = 0$.
In case (i) the basic invariant is $\Psi = x^2 + y^2$, and in case (ii) it is $\Psi = z$. However here $C(A)$ is three dimensional, being spanned e.g. by $A$ and by $B_1 = {\rm diag}(1,1,0)$, $B_2 = {\rm diag}(0,0,1)$ (in both cases). Thus we will have a different Lie algebraic structure for vector fields in normal form (see however below).
Finally, for a (1,1,1) Jordan normal form, i.e. for
$$ A \ = \ \pmatrix{ \mu_1 & 0 & 0 \cr 0 & \mu_2 & 0 \cr 0 & 0 & \mu_3 \cr} \eqno(C.5) $$
there are two cases (up to permutations of the $\mu_i$) which satisfy our requirements.
In one case we have e.g. $\mu_1 \mu_2 < 0$ and $|\mu_1 / \mu_2 | = q/p \in \Q$, with $\mu_3$ not rationally related to $\mu_1 , \mu_2$. Here the basic invariant is $\Psi = x^p y^q$, and $C(A)$ is spanned by $A$ and by the identity matrix. We have $X^{(1)} = x \pa_x + y \pa_y + z \pa_z$, $X^{(2)} = \mu_1 x \pa_x + \mu_2 y \pa_y + \mu_3 z \pa_z$, and $$ X_k = (x^p y^q)^k X^{(2)} \in \W_{k(p+q)} \ \ , \ \ Y_k = (x^p y^q)^k X^{(1)} \in \W_{k(p+q)} \ . \eqno(C.6)$$
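As a small check, one can verify with sympy that $\Psi = x^p y^q$ is annihilated by $X^{(2)}$ whenever $p \mu_1 + q \mu_2 = 0$; in the sketch below the concrete values of $p$, $q$, $\mu_3$ are arbitrary test choices.

```python
# Sketch: Psi = x^p y^q is invariant under X2 = mu1 x dx + mu2 y dy + mu3 z dz
# when p*mu1 + q*mu2 = 0; the values of p, q, mu3 are arbitrary test choices.
import sympy as sp

x, y, z = sp.symbols('x y z')
p, q = 2, 3
mu1, mu2, mu3 = q, -p, sp.sqrt(2)      # mu1*mu2 < 0 and |mu1/mu2| = q/p
Psi = x**p * y**q
X2_Psi = mu1*x*sp.diff(Psi, x) + mu2*y*sp.diff(Psi, y) + mu3*z*sp.diff(Psi, z)
assert sp.expand(X2_Psi) == 0          # Psi is a basic invariant
```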
In the other case we have e.g. $\mu_1 = 0$, $\mu_2 \mu_3 \not= 0$, and $|\mu_2 / \mu_3 | \not\in \Q$. Here $\Psi = x$, and $C(A)$ is spanned by $A$ and by $B = {\rm diag}(1,0,0)$. Here $X^{(1)} = \mu_2 y \pa_y + \mu_3 z \pa_z$, $X^{(2)} = x \pa_x$, and
$$ X_k = x^k X^{(2)} \ \ , \ \ Y_k = x^k X^{(1)} \ . \eqno(C.7)$$
The three dimensional cases we have so identified can be studied by mapping to them our general and explicit computations, as discussed in section 3.
\subsubsection*{A case with three dimensional $C(A)$.}
\def\Z{{\cal Z}}
Let us now show that actually the computations presented here also apply to some of the cases with a three dimensional $G = C(A)$.
Let us consider a three dimensional case with linear part
$$ A \ = \ \pmatrix{0 & -1 & 0 \cr 1 & 0 & 0 \cr 0 & 0 & 1 \cr} \eqno(C.8) $$
leaving it to the reader to extend this to the other cases.
Now $\Psi \equiv \Psi (x,y,z) = (x^2 + y^2 )$, and we consider the chain of vector fields ($k \ge 0$)
$$ X_k := \Psi^k (x \pa_x + y \pa_y ) \ , \ Y_k := \Psi^k (-y \pa_x + x \pa_y ) \ , \ Z_k := \Psi^k (z \pa_z) \ . \eqno(C.9)$$
Notice that $X_A = Y_0 + Z_0$. The general form of vector fields in NF with respect to this $A$ (and having $A$ as linear part) is
$$ W \ = \ (Y_0 + Z_0) \ + \ \sum_{k=1}^\infty a_k X_k + b_k Y_k + c_k Z_k \ . \eqno(C.10)$$
We will denote by $\mu,\nu,\s$ respectively the lowest $k > 0$ such that $a_k , b_k , c_k$ is nonzero.
The vector fields $X_k,Y_k,Z_k$ satisfy the commutation relations
$$ \begin{array}{l}
\[ X_k , X_m \] = 2 (m-k) X_{k+m} \ , \ \[ Y_k , Y_m \] = \[ Z_k , Z_m \] = 0 \ ; \\
\[ X_k , Y_m \] = 2 m Y_{k+m} \ , \ \[ X_k , Z_m \] = 2 m Z_{k+m} \ , \ \[ Y_k , Z_m \] = 0 \ . \end{array} \eqno(C.11)$$
We denote as usual by $\X$ the algebra of the $X_k$, by $\Y$ the algebra of the $Y_k$; and we also denote by $\Z$ the algebra of the $Z_k$. We have $\G = \X \oplus \Y \oplus \Z$; it follows from (C.11) that $\G_1 = \Y \oplus \Z$ is an abelian ideal in $\G$.
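The relations (C.11) can be checked mechanically; the following Python (sympy) sketch verifies them on sample indices (note in particular that $[X_k , Z_m]$ is proportional to $Z_{k+m}$, so that the brackets close in $\G$).

```python
# Checking the commutation relations (C.11) for the fields (C.9) with sympy.
import sympy as sp

x, y, z = sp.symbols('x y z')
Psi = x**2 + y**2                            # the basic invariant

def X(k): return [Psi**k * x, Psi**k * y, 0]       # Psi^k (x dx + y dy)
def Y(k): return [-Psi**k * y, Psi**k * x, 0]      # Psi^k (-y dx + x dy)
def Z(k): return [0, 0, Psi**k * z]                # Psi^k z dz

def bracket(V, W):
    q = (x, y, z)
    return [sp.expand(sum(V[j] * sp.diff(W[i], q[j])
                          - W[j] * sp.diff(V[i], q[j]) for j in range(3)))
            for i in range(3)]

def equal(V, W):
    return all(sp.expand(a - b) == 0 for a, b in zip(V, W))

k, m = 1, 2
assert equal(bracket(X(k), X(m)), [2*(m - k)*c for c in X(k + m)])
assert equal(bracket(X(k), Y(m)), [2*m*c for c in Y(k + m)])
assert equal(bracket(X(k), Z(m)), [2*m*c for c in Z(k + m)])
assert equal(bracket(Y(k), Y(m)), [0, 0, 0])
assert equal(bracket(Y(k), Z(m)), [0, 0, 0])
assert equal(bracket(Z(k), Z(m)), [0, 0, 0])
```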
We can thus first work with generators in $\X$ and aim at further normalizing the $\X$ part of $W$; we can eliminate in this way all terms except the $X_\mu$ and $X_{2 \mu}$ ones (in doing this we will in general change the terms in $\Y$ and in $\Z$). We can then proceed to a further normalization with generators in $\G_1$; due to the abelian nature of $\G_1$ we can deal with each of the $\Y$ and $\Z$ subalgebras as in the two-dimensional case. Thus we will have that for $\mu < \nu$ and $\mu < \s$ all terms in $\G_1$ can be eliminated, while in general we have (sums to be discarded if the lower limit exceeds the upper one)
$$ \^W \ = \ (Y_0 + Z_0 ) \ + \ a_\mu X_\mu \, + \, \^a_{2 \mu} X_{2 \mu} \ + \ \sum_{k=\nu}^\mu \^b_k Y_k \ + \ \sum_{k= \s}^\mu \^c_k Z_k \ . \eqno(C.12)$$
\vfill\eject
\begin{thebibliography}{99}
\bigskip
\parskip=0pt
\bibitem{Arn1} V.I. Arnold, {\it Geometrical methods in the theory of
ordinary differential equations}, Springer 1983, 1988
\bibitem{Arn2} V.I. Arnold and Yu.S. Il'yashenko, {\it Ordinary
differential equations}, in ``Dynamical Systems I'' (D.V. Anosov and
V.I. Arnold eds.), {\it E.M.S.} {\bf 1}, Springer 1988
\bibitem{Bai} A. Baider, ``Unique normal forms for vector fields and
hamiltonians'', {\it J. Diff. Eqs.} {\bf 78} (1989), 33
\bibitem{BaC} A. Baider and R.C. Churchill, ``Uniqueness and non-uniqueness
of normal forms for vector fields'', {\it Proc. R. Soc. Edinburgh A} {\bf 108} (1988), 27
\bibitem{BaS} A. Baider and J. Sanders, ``Further reduction of the
Takens-Bogdanov normal form'', {\it J. Diff. Eqs.} {\bf 99} (1992), 205-244
\bibitem{Bel} G.R. Belitskii, ``Equivalence and normal forms of germs of smooth mappings'', {\it Russ. Math. Surv.} {\bf 33} (1978), 107
\bibitem{BeK} G.R. Belitskii and A.Ya. Kopanskii, ``Equivariant Sternberg theorem'', preprint {\it mp-arc 00-54}
\bibitem{BGG} G. Benettin, L. Galgani and A.
Giorgilli, ``A proof of the Kolmogorov theorem on invariant tori using
canonical transformations defined by the Lie method'', {\it Nuovo Cimento B} {\bf 79} (1984), 201
\bibitem{Bth} H.W. Broer, {\it Bifurcation of singularities in volume-preserving vector fields}, Ph.D. Thesis, Groningen 1979
\bibitem{Bro} H.W. Broer, ``Formal normal form theorems for vector fields and some consequences for bifurcations in the volume preserving case'', in: ``Dynamical systems and turbulence'', D.A. Rand and L.S. Young eds., {\it Lect. Notes Math.} {\bf 898}, Springer, Berlin 1981
\bibitem{BrT} H.W. Broer and F. Takens, ``Formally symmetric normal forms and genericity'', {\it Dynamics Reported} {\bf 2} (1989), 39-59
\bibitem{Brus} A.D. Bruno, ``Local invariants of differential equations'', {\it Math. Notes} {\bf 14} (1973), 844-848
\bibitem{Bru} A.D. Bruno, {\it Local methods in the theory of differential equations}, Springer, Berlin 1989
\bibitem{Brep} A.D. Bruno, reviews 1999a:34111 and 2000h:37071, {\it Mathematical Reviews}
\bibitem{Bru2} A.D. Bruno, {\it Power geometry in algebraic and differential equations}, North-Holland, Amsterdam 2000
\bibitem{Bprep} A.D. Bruno and V.Yu. Petrovich, ``Normal forms of the ODE system'' (in Russian); Preprint 2000-18 of the Keldysh Institute, Moscow 2000
\bibitem{BrW} A.D. Bruno and S. Walcher, ``Symmetries and convergence of normalizing transformations'', {\it J. Math. Anal. Appl.} {\bf 183} (1994), 571-576
\bibitem{Che} K.T. Chen, ``Equivalence and decomposition of vector fields about an elementary critical point'', {\it Am. J. Math.} {\bf 85} (1963), 693-722
\bibitem{CDD} G. Chen and J. Della Dora, ``Further reduction of normal forms for dynamical systems'', {\it J. Diff. Eqs.} {\bf 166} (2000), 79-106
\bibitem{Cic} G. Cicogna, ``Symmetries of dynamical systems and convergent normal forms'', {\it J. Phys. A} {\bf 28} (1995), L179-L182;
``On the convergence of the normalizing transformation in the presence of symmetries'', {\it J. Math. Anal. Appl.} {\bf 199} (1996), 243-255; ``Convergent normal forms of symmetric dynamical systems'', {\it J. Phys. A} {\bf 30} (1997), 6021-6028
\bibitem{CGs} G. Cicogna and G. Gaeta, {\it Symmetry and perturbation
theory in nonlinear dynamics}, Springer (Lecture Notes in Physics, vol. m57), 1999
\bibitem{Dep} A. Deprit, ``Canonical transformations depending on a small parameter'', {\it Cel. Mech.} {\bf 1} (1969), 12-30
\bibitem{Dul} H. Dulac, ``Solution d'un syst\`eme d'\'equations diff\'erentielles dans le voisinage des valeurs singuli\`eres'', {\it Bull. Soc. Math. France} {\bf 40} (1912), 324-383
\bibitem{Elp} C. Elphick, E. Tirapegui, M.E. Brachet, P. Coullet and G. Iooss, ``A simple global characterization for normal forms of singular vector fields'', {\it Physica D} {\bf 29} (1987), 95-127; addendum, {\it Physica D} {\bf 32} (1988), 488
\bibitem{FoM} E. Forest and D. Murray, ``Freedom in minimal normal forms'', {\it Physica D} {\bf 74} (1994), 488
\bibitem{GaK} G. Gaeta, {\it Nonlinear symmetries and nonlinear equations}, Kluwer, Dordrecht 1994
\bibitem{LMP} G. Gaeta, ``Reduction of Poincar\'e normal forms'', {\it
Lett. Math. Phys.} {\bf 42} (1997), 103-114
\bibitem{IHP} G. Gaeta, ``Poincar\'e renormalized forms'', {\it Ann. Inst. H. Poincar\'e (Phys. Theo.)} {\bf 70} (1999), 461-514
\bibitem{GaL} G. Gaeta, ``Algorithmic reduction of Poincar\'e normal forms and Lie algebras'', in preparation (will be available as {\it mp-arc} preprint)
\bibitem{Gle} P. Glendinning,
{\it Stability, instability and chaos: an
introduction to the theory of nonlinear differential equations}, Cambridge
University Press, Cambridge 1994
\bibitem{Ily} Y. Il'yashenko, ``Dulac's memoir 'On limit cycles' and related problems of the local theory of differential equations'', {\it Russ. Math. Surv.} {\bf 40:6} (1985), 1-49
\bibitem{IoA} G. Iooss and M. Adelmeyer, {\it Topics in bifurcation
theory and applications}, World Scientific, Singapore 1992
\bibitem{Kir} A.A. Kirillov, {\it Elements of the theory of
representations}, Springer 1984
\bibitem{KOW} H. Kokubu, H. Oka and D. Wang, ``Linear grading functions and further reduction of normal forms'', {\it J. Diff. Eqs.} {\bf 132} (1996), 293-318
\bibitem{Kum} M. Kummer, ``How to avoid secular terms in classical and
quantum mechanics'', {\it Nuovo Cimento B} {\bf 1}
(1971), 123; ``On resonant nonlinearly coupled oscillators with two equal frequencies'', {\it Comm. Math. Phys.} {\bf 48} (1976), 53
\bibitem{Mar} L.M. Markhashov, ``On the reduction of differential equations to the normal form by an analytic transformation'', {\it J. Appl. Math. Mech.} {\bf 38} (1974), 788-790
\bibitem{vdM} J.C. van der Meer, {\it The hamiltonian Hopf bifurcation}, Springer (Lecture Notes in Mathematics, vol. 1160), Berlin 1985
\bibitem{MiL} Yu.A. Mitropolosky and A.K. Lopatin, {\it Nonlinear mechanics,
groups and symmetry}, Kluwer, Dordrecht 1995
\bibitem{NaS} M.A. Naimark and A.I. Stern, {\it Theory of group representations}, Springer 1982
\bibitem{Olv} P.J. Olver, {\it Applications of Lie groups to differential equations}, Springer, Berlin 1986
\bibitem{Pli} V.A. Pliss, ``On the reduction of an analytic system of
differential equations to linear form'', {\it Differential Equations} {\bf 1} (1965), 153-161
\bibitem{ScW} J. Scheurle and S. Walcher, ``On normal form computations'', preprint 2001
\bibitem{SiM} C.L. Siegel and J.K. Moser, {\it Lectures on Celestial
Mechanics}, Springer, Berlin 1955, 1995
\bibitem{Ste} S. Sternberg, ``On the local structure of local homeomorphisms of euclidean $n$-space'', {\it Amer. J. Math.} {\bf 80} (1958), 623-631
\bibitem{Tak} F. Takens, ``Singularities of vector fields'',
{\it Publ. Math. I.H.E.S.} {\bf 43} (1974), 47-100
\bibitem{Ush} S. Ushiki, ``Normal forms for singularities of vector fields'',
{\it Jap. J. Appl. Math.} {\bf 1} (1984), 1-34
\bibitem{Ver} F. Verhulst,
{\it Nonlinear differential equations and
dynamical systems}, Springer, Berlin 1989, 1996
\bibitem{Wal} S. Walcher, ``On differential equations in normal form'', {\it
Math. Ann.} {\bf 291} (1991), 293-314
\bibitem{Wal3} S. Walcher, ``On transformation into normal form'',
{\it J. Math. Anal. Appl.} {\bf 180} (1993), 617-632
\bibitem{Wal4} S. Walcher, ``On convergent normal form transformations in the presence of symmetries'', {\it J. Math. Anal. Appl.} {\bf 244} (2000), 17-26
\end{thebibliography}
\end{document}