in (Z/p)^* is m
my(F = [m,factor(m)], e = random(m), R, wR, wQ);
R = ellpow(E, Q, e);
wR = ellweilpairing(E,P,R,m);
wQ = ellweilpairing(E,P,Q,m); \\ wR = wQ^e
pareval([()->znlog(wR,wQ,F), ()->elllog(E,R,Q), ()->e])
}
@eprog\noindent Note the use of \kbd{my} to pass ``arguments'' to the
functions we need to evaluate while satisfying the listed requirements:
closures of arity $0$ and no global variables (another possibility would be
to use \kbd{export}). As a result, the final three statements satisfy all
the listed requirements and are run in parallel. (Which is silly for
this computation but illustrates the use of \kbd{pareval}.) The function
\kbd{parfor} is more powerful but harder to use.
The library syntax is \fun{GEN}{pareval}{GEN x}.
\subsec{parfor$(i=a,\{b\},\var{expr1},\{r\},\{\var{expr2}\})$}\kbdsidx{parfor}\label{se:parfor}
Evaluates in parallel the expression \kbd{expr1} in the formal
argument $i$ running from $a$ to $b$.
If $b$ is set to \kbd{+oo}, the loop runs indefinitely.
If $r$ and \kbd{expr2} are present, the expression \kbd{expr2} in the
formal variables $r$ and $i$ is evaluated with $r$ running through all
the different results obtained for \kbd{expr1} and $i$ takes the
corresponding argument.
The computations of \kbd{expr1} are \emph{started} in increasing order
of $i$; in other words, the computation for $i=c$ is started after those
for $i=1, \ldots, c-1$ have been started, but before the computation for
$i=c+1$ is started. Notice that the order of \emph{completion}, that is,
the order in which the different $r$ become available, may be different;
\kbd{expr2} is evaluated sequentially on each $r$ as it appears.
The following example computes the sum of the squares of the integers
from $1$ to $10$ by computing the squares in parallel and is equivalent
to \kbd{parsum (i=1, 10, i\^{}2)}:
\bprog
? s=0;
? parfor (i=1, 10, i^2, r, s=s+r)
? s
%3 = 385
@eprog
More precisely, apart from a potentially different order of evaluation
due to the parallelism, the line containing \kbd{parfor} is equivalent to
\bprog
? my (r); for (i=1, 10, r=i^2; s=s+r)
@eprog
The sequentiality of the evaluation of \kbd{expr2} ensures that the
variable \kbd{s} is not modified concurrently by two different additions,
although the order in which the terms are added is nondeterministic.
It is allowed for \kbd{expr2} to exit the loop using
\kbd{break}/\kbd{next}/\kbd{return}. If that happens for $i=c$,
then the evaluation of \kbd{expr1} and \kbd{expr2} is continued
for all values $i < c$.

If $\fl > 0$ the computation is done with
an accuracy of $\fl$ decimal digits. To get meaningful results in this
case, the parameter $\fl$ should be smaller than the number of correct
decimal digits in the input.
\bprog
? lindep([sqrt(2), sqrt(3), sqrt(2)+sqrt(3)])
%1 = [-1, -1, 1]~
@eprog
If $v$ is $p$-adic, $\fl$ is ignored and the algorithm LLL-reduces a
suitable (dual) lattice.
\bprog
? lindep([1, 2 + 3 + 3^2 + 3^3 + 3^4 + O(3^5)])
%2 = [1, -2]~
@eprog
If $v$ is a matrix (or a vector of column vectors, or a vector of row
vectors), $\fl$ is ignored and the function returns a nontrivial kernel
vector if one exists, else an empty vector.
\bprog
? lindep([1,2,3;4,5,6;7,8,9])
%3 = [1, -2, 1]~
? lindep([[1,0], [2,0]])
%4 = [2, -1]~
? lindep([[1,0], [0,1]])
%5 = []~
@eprog
If $v$ contains polynomials or power series over some base field, finds a
linear relation with coefficients in the field.
\bprog
? lindep([x*y, x^2 + y, x^2*y + x*y^2, 1])
%4 = [y, y, -1, -y^2]~
@eprog\noindent For better control, it is preferable to use \typ{POL} rather
than \typ{SER} in the input, otherwise one gets a linear combination which is
$t$-adically small, but not necessarily $0$. Indeed, power series are first
converted to the minimal absolute accuracy occurring among the entries of $v$
(which can cause some coefficients to be ignored), then truncated to
polynomials:
\bprog
? v = [t^2+O(t^4), 1+O(t^2)]; L=lindep(v)
%1 = [1, 0]~
? v*L
%2 = t^2+O(t^4) \\ small but not 0
@eprog
The library syntax is \fun{GEN}{lindep0}{GEN v, long flag}.
\subsec{matadjoint$(M,\{\fl=0\})$}\kbdsidx{matadjoint}\label{se:matadjoint}
\idx{adjoint matrix} of $M$, i.e.~a matrix $N$
of cofactors of $M$, satisfying $M*N=\det(M)*\Id$. $M$ must be a
(not necessarily invertible) square matrix of dimension $n$.
If $\fl$ is 0 or omitted, we try to use Leverrier-Faddeev's algorithm,
which assumes that $n!$ is invertible. If this fails or if $\fl = 1$,
compute $T = \kbd{charpoly}(M)$ independently first and return
$(-1)^{n-1} (T(x)-T(0))/x$ evaluated at $M$.
\bprog
? a = [1,2,3;3,4,5;6,7,8] * Mod(1,4);
? matadjoint(a)
%2 =
[Mod(1, 4) Mod(1, 4) Mod(2, 4)]
[Mod(2, 4) Mod(2, 4) Mod(0, 4)]
[Mod(1, 4) Mod(1, 4) Mod(2, 4)]
@eprog\noindent
Both algorithms use $O(n^4)$ operations in the base ring. Over a field,
they are usually slower than computing the characteristic polynomial or
the inverse of $M$ directly.
The library syntax is \fun{GEN}{matadjoint0}{GEN M, long flag}.
Also available are
\fun{GEN}{adj}{GEN x} (\fl=0) and
\fun{GEN}{adjsafe}{GEN x} (\fl=1).
\subsec{matcompanion$(x)$}\kbdsidx{matcompanion}\label{se:matcompanion}
The left companion matrix to the nonzero polynomial $x$.
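For instance, the companion matrix of a monic polynomial has that polynomial
as its characteristic polynomial:
\bprog
? C = matcompanion(x^3 - 2)
%1 =
[0 0 2]
[1 0 0]
[0 1 0]
? charpoly(C)
%2 = x^3 - 2
@eprog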
The library syntax is \fun{GEN}{matcompanion}{GEN x}.
\subsec{matconcat$(v)$}\kbdsidx{matconcat}\label{se:matconcat}
Returns a \typ{MAT} built from the entries of $v$, which may
be a \typ{VEC} (concatenate horizontally), a \typ{COL} (concatenate
vertically), or a \typ{MAT} (concatenate vertically each column, and
concatenate vertically the resulting matrices). The entries of $v$ are always
considered as matrices: they can themselves be a \typ{VEC} (seen as a row
matrix), a \typ{COL} (seen as a column matrix), a \typ{MAT}, or a scalar (seen
as a $1 \times 1$ matrix).
\bprog
? A=[1,2;3,4]; B=[5,6]~; C=[7,8]; D=9;
? matconcat([A, B]) \\ horizontal
%1 =
[1 2 5]
[3 4 6]
? matconcat([A, C]~) \\ vertical
%2 =
[1 2]
[3 4]
[7 8]
? matconcat([A, B; C, D]) \\ block matrix
%3 =
[1 2 5]
[3 4 6]
[7 8 9]
@eprog\noindent
If the dimensions of the entries to concatenate do not match up, the above
rules are extended as follows:
\item each entry $v_{i,j}$ of $v$ has a natural length and height: $1 \times
1$ for a scalar, $1 \times n$ for a \typ{VEC} of length $n$, $n \times 1$
for a \typ{COL}, $m \times n$ for an $m\times n$ \typ{MAT}
\item let $H_i$ be the maximum over $j$ of the heights of the $v_{i,j}$,
let $L_j$ be the maximum over $i$ of the lengths of the $v_{i,j}$.
The dimensions of the $(i,j)$-th block in the concatenated matrix are
$H_i \times L_j$.
\item a scalar $s = v_{i,j}$ is considered as $s$ times an identity matrix
of the block dimension $\min (H_i,L_j)$
\item blocks are extended by 0 columns on the right and 0 rows at the
bottom, as needed.
\bprog
? matconcat([1, [2,3]~, [4,5,6]~]) \\ horizontal
%4 =
[1 2 4]
[0 3 5]
[0 0 6]
? matconcat([1, [2,3], [4,5,6]]~) \\ vertical
%5 =
[1 0 0]
[2 3 0]
[4 5 6]
? matconcat([B, C; A, D]) \\ block matrix
%6 =
[5 0 7 8]
[6 0 0 0]
[1 2 9 0]
[3 4 0 9]
? U=[1,2;3,4]; V=[1,2,3;4,5,6;7,8,9];
? matconcat(matdiagonal([U, V])) \\ block diagonal
%7 =
[1 2 0 0 0]
[3 4 0 0 0]
[0 0 1 2 3]
[0 0 4 5 6]
[0 0 7 8 9]
@eprog
The library syntax is \fun{GEN}{matconcat}{GEN v}.
\subsec{matdet$(x,\{\fl=0\})$}\kbdsidx{matdet}\label{se:matdet}
Determinant of the square matrix $x$.
If $\fl=0$, uses an appropriate algorithm depending on the coefficients:
\item integer entries: modular method due to Dixon, Pernet and Stein.
\item real or $p$-adic entries: classical Gaussian elimination using maximal
pivot.
\item intmod entries: classical Gaussian elimination using first nonzero
pivot.
\item other cases: Gauss-Bareiss.
If $\fl=1$, uses classical Gaussian elimination with appropriate pivoting
strategy (maximal pivot for real or $p$-adic coefficients). This is usually
worse than the default.
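For example:
\bprog
? matdet([1,2;3,4])
%1 = -2
? matdet([1,2;3,4] * Mod(1,5))
%2 = Mod(3, 5)
@eprog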
The library syntax is \fun{GEN}{det0}{GEN x, long flag}.
Also available are \fun{GEN}{det}{GEN x} ($\fl=0$),
\fun{GEN}{det2}{GEN x} ($\fl=1$) and \fun{GEN}{ZM_det}{GEN x} for integer
entries.
\subsec{matdetint$(B)$}\kbdsidx{matdetint}\label{se:matdetint}
Let $B$ be an $m\times n$ matrix with integer coefficients. The
\emph{determinant} $D$ of the lattice generated by the columns of $B$ is
the square root of $\det(B^T B)$ if $B$ has maximal rank $n$, and $0$
otherwise.
This function uses the Gauss-Bareiss algorithm to compute a positive
\emph{multiple} of $D$. When $B$ is square, the function actually returns
$D = |\det B|$.
This function is useful in conjunction with \kbd{mathnfmod}, which needs to
know such a multiple. If the rank is maximal but the matrix is nonsquare,
you can obtain $D$ exactly using
\bprog
matdet( mathnfmod(B, matdetint(B)) )
@eprog\noindent
Note that as soon as one of the dimensions gets large ($m$ or $n$ is larger
than 20, say), it will often be much faster to use \kbd{mathnf(B, 1)} or
\kbd{mathnf(B, 4)} directly.
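For example, in the square case the function returns $|\det B|$ exactly:
\bprog
? matdetint([2,1;0,3])
%1 = 6
@eprog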
The library syntax is \fun{GEN}{detint}{GEN B}.
\subsec{matdetmod$(x,d)$}\kbdsidx{matdetmod}\label{se:matdetmod}
Given a matrix $x$ with \typ{INT} entries and $d$ an arbitrary positive
integer, return the determinant of $x$ modulo $d$.
\bprog
? A = [4,2,3; 4,5,6; 7,8,9]
? matdetmod(A,27)
%2 = 9
@eprog Note that using the generic function \kbd{matdet} on a matrix with
\typ{INTMOD} entries uses Gaussian reduction and will fail in general when
the modulus is not prime.
\bprog
? matdet(A * Mod(1,27))
*** at top-level: matdet(A*Mod(1,27))
*** ^------------------
*** matdet: impossible inverse in Fl_inv: Mod(3, 27).
@eprog
The library syntax is \fun{GEN}{matdetmod}{GEN x, GEN d}.
\subsec{matdiagonal$(x)$}\kbdsidx{matdiagonal}\label{se:matdiagonal}
$x$ being a vector, creates the diagonal matrix
whose diagonal entries are those of $x$.
\bprog
? matdiagonal([1,2,3])
%1 =
[1 0 0]
[0 2 0]
[0 0 3]
@eprog\noindent Block diagonal matrices are easily created using
\tet{matconcat}:
\bprog
? U=[1,2;3,4]; V=[1,2,3;4,5,6;7,8,9];
? matconcat(matdiagonal([U, V]))
%1 =
[1 2 0 0 0]
[3 4 0 0 0]
[0 0 1 2 3]
[0 0 4 5 6]
[0 0 7 8 9]
@eprog
The library syntax is \fun{GEN}{diagonal}{GEN x}.
\subsec{mateigen$(x,\{\fl=0\})$}\kbdsidx{mateigen}\label{se:mateigen}
Returns the (complex) eigenvectors of $x$ as columns of a matrix.
If $\fl=1$, return $[L,H]$, where $L$ contains the
eigenvalues and $H$ the corresponding eigenvectors; multiple eigenvalues are
repeated according to the eigenspace dimension (which may be less
than the eigenvalue multiplicity in the characteristic polynomial).
This function first computes the characteristic polynomial of $x$ and
approximates its complex roots $(\lambda_i)$, then tries to compute the
eigenspaces as kernels of the $x - \lambda_i$. This algorithm is
ill-conditioned and is likely to miss kernel vectors if some roots of the
characteristic polynomial are close, in particular if it has multiple roots.
\bprog
? A = [13,2; 10,14]; mateigen(A)
%1 =
[-1/2 2/5]
[ 1 1]
? [L,H] = mateigen(A, 1);
? L
%3 = [9, 18]
? H
%4 =
[-1/2 2/5]
[ 1 1]
? A * H == H * matdiagonal(L)
%5 = 1
@eprog\noindent
For symmetric matrices, use \tet{qfjacobi} instead; for Hermitian matrices,
compute
\bprog
A = real(x);
B = imag(x);
y = matconcat([A, -B; B, A]);
@eprog\noindent and apply \kbd{qfjacobi} to $y$.
The library syntax is \fun{GEN}{mateigen}{GEN x, long flag, long prec}.
Also available is \fun{GEN}{eigen}{GEN x, long prec} ($\fl = 0$)
\subsec{matfrobenius$(M,\{\fl\},\{v='x\})$}\kbdsidx{matfrobenius}\label{se:matfrobenius}
Returns the Frobenius form of
the square matrix \kbd{M}. If $\fl=1$, returns only the elementary divisors as
a vector of polynomials in the variable \kbd{v}. If $\fl=2$, returns a
two-component vector $[F,B]$ where \kbd{F} is the Frobenius form and \kbd{B} is
the basis change so that $M=B^{-1}FB$.
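For instance, taking a nondiagonalizable matrix whose single elementary
divisor is $(x-2)^2$, the Frobenius form is the corresponding companion block:
\bprog
? M = [2,1;0,2];
? matfrobenius(M)
%2 =
[0 -4]
[1  4]
? matfrobenius(M, 1)
%3 = [x^2 - 4*x + 4]
@eprog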
The library syntax is \fun{GEN}{matfrobenius}{GEN M, long flag, long v = -1} where \kbd{v} is a variable number.
\subsec{mathess$(x)$}\kbdsidx{mathess}\label{se:mathess}
Returns a matrix similar to the square matrix $x$, which is in upper Hessenberg
form (zero entries below the first subdiagonal).
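Since the output is similar to $x$, both share the same characteristic
polynomial:
\bprog
? M = [1,2,3;4,5,6;7,8,9]; H = mathess(M);
? charpoly(H) == charpoly(M)
%2 = 1
@eprog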
The library syntax is \fun{GEN}{hess}{GEN x}.
\subsec{mathilbert$(n)$}\kbdsidx{mathilbert}\label{se:mathilbert}
$n$ being a \kbd{long}, creates the
\idx{Hilbert matrix} of order $n$, i.e.~the matrix whose coefficient
$(i,j)$ is $1/(i+j-1)$.
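For example:
\bprog
? mathilbert(3)
%1 =
[  1 1/2 1/3]
[1/2 1/3 1/4]
[1/3 1/4 1/5]
@eprog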
The library syntax is \fun{GEN}{mathilbert}{long n}.
\subsec{mathnf$(M,\{\fl=0\})$}\kbdsidx{mathnf}\label{se:mathnf}
Let $R$ be a Euclidean ring, equal to $\Z$ or to $K[X]$ for some field
$K$. If $M$ is a (not necessarily square) matrix with entries in $R$, this
routine finds the \emph{upper triangular} \idx{Hermite normal form} of $M$.
If the rank of $M$ is equal to its number of rows, this is a square
matrix. In general, the columns of the result form a basis of the $R$-module
spanned by the columns of $M$.
The values of $\fl$ are:
\item 0 (default): only return the Hermite normal form $H$
\item 1 (complete output): return $[H,U]$, where $H$ is the Hermite
normal form of $M$, and $U$ is a transformation matrix such that $MU=[0|H]$.
The matrix $U$ belongs to $\text{GL}(R)$. When $M$ has a large kernel, the
entries of $U$ are in general huge.
\noindent For these two values, we use a naive algorithm, which behaves well
in small dimension only. Larger values correspond to different algorithms,
are restricted to \emph{integer} matrices, and all output the unimodular
matrix $U$. From now on all matrices have integral entries.
\item $\fl=4$, returns $[H,U]$ as in ``complete output'' above, using a
variant of \idx{LLL} reduction along the way. The matrix $U$ is provably
small in the $L_2$ sense, and often close to optimal; but the
reduction is in general slow, although provably polynomial-time.
If $\fl=5$, uses Batut's algorithm and outputs $[H,U,P]$, such that $H$ and
$U$ are as before and $P$ is a permutation of the rows such that $P$ applied
to $MU$ gives $H$. This is in general faster than $\fl=4$ but the matrix $U$
is usually worse; it is heuristically smaller than with the default algorithm.
When the matrix is dense and the dimension is large (bigger than 100, say),
$\fl = 4$ will be fastest. When $M$ has maximal rank, then
\bprog
H = mathnfmod(M, matdetint(M))
@eprog\noindent will be even faster. You can then recover $U$ as $M^{-1}H$.
\bprog
? M = matrix(3,4,i,j,random([-5,5]))
%1 =
[ 0 2 3 0]
[-5 3 -5 -5]
[ 4 3 -5 4]
? [H,U] = mathnf(M, 1);
? U
%3 =
[-1 0 -1 0]
[ 0 5 3 2]
[ 0 3 1 1]
[ 1 0 0 0]
? H
%5 =
[19 9 7]
[ 0 9 1]
[ 0 0 1]
? M*U
%6 =
[0 19 9 7]
[0 0 9 1]
[0 0 0 1]
@eprog
For convenience, $M$ is allowed to be a \typ{VEC}, which is then
automatically converted to a \typ{MAT}, as per the \tet{Mat} function.
For instance, to solve the generalized extended gcd problem, one may use
\bprog
? v = [116085838, 181081878, 314252913,10346840];
? [H,U] = mathnf(v, 1);
? U
%2 =
[ 103 -603 15 -88]
[-146 13 -1208 352]
[ 58 220 678 -167]
[-362 -144 381 -101]
? v*U
%3 = [0, 0, 0, 1]
@eprog\noindent One may also input a matrix as a \typ{VEC} of
\typ{COL}s of the same length (which \kbd{Mat} would concatenate to
the \typ{MAT} having those columns):
\bprog
? v = [[1,0,4]~, [3,3,4]~, [0,-4,-5]~]; mathnf(v)
%1 =
[47 32 12]
[ 0 1 0]
[ 0 0 1]
@eprog
The library syntax is \fun{GEN}{mathnf0}{GEN M, long flag}.
Also available are \fun{GEN}{hnf}{GEN M} ($\fl=0$) and
\fun{GEN}{hnfall}{GEN M} ($\fl=1$). To reduce \emph{huge} relation matrices
(sparse with small entries, say dimension $400$ or more), you can use the
pair \kbd{hnfspec} / \kbd{hnfadd}. Since this is quite technical and the
calling interface may change, they are not documented yet. Look at the code
in \kbd{basemath/hnf\_snf.c}.
\subsec{mathnfmod$(x,d)$}\kbdsidx{mathnfmod}\label{se:mathnfmod}
If $x$ is a (not necessarily square) matrix of
maximal rank with integer entries, and $d$ is a multiple of the (nonzero)
determinant of the lattice spanned by the columns of $x$, finds the
\emph{upper triangular} \idx{Hermite normal form} of $x$.
If the rank of $x$ is equal to its number of rows, the result is a square
matrix. In general, the columns of the result form a basis of the lattice
spanned by the columns of $x$. Even when $d$ is known, this is in general
slower than \kbd{mathnf} but uses much less memory.
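For example, using \kbd{matdetint} to supply a suitable multiple of the
determinant:
\bprog
? x = [4,2;2,4]; matdetint(x)
%1 = 12
? mathnfmod(x, 12)
%2 =
[6 4]
[0 2]
@eprog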
The library syntax is \fun{GEN}{hnfmod}{GEN x, GEN d}.
\subsec{mathnfmodid$(x,d)$}\kbdsidx{mathnfmodid}\label{se:mathnfmodid}
Outputs the (upper triangular)
\idx{Hermite normal form} of $x$ concatenated with the diagonal
matrix with diagonal $d$. Assumes that $x$ has integer entries.
Variant: if $d$ is an integer instead of a vector, concatenate $d$ times the
identity matrix.
\bprog
? m=[0,7;-1,0;-1,-1]
%1 =
[ 0 7]
[-1 0]
[-1 -1]
? mathnfmodid(m, [6,2,2])
%2 =
[2 1 1]
[0 1 0]
[0 0 1]
? mathnfmodid(m, 10)
%3 =
[10 7 3]
[ 0 1 0]
[ 0 0 1]
@eprog
The library syntax is \fun{GEN}{hnfmodid}{GEN x, GEN d}.
\subsec{mathouseholder$(Q,v)$}\kbdsidx{mathouseholder}\label{se:mathouseholder}
\sidx{Householder transform}applies a sequence $Q$ of Householder
transforms, as returned by \kbd{matqr}$(M,1)$, to the vector or matrix $v$.
\bprog
? m = [2,1; 3,2]; \\ some random matrix
? [Q,R] = matqr(m);
? Q
%3 =
[-0.554... -0.832...]
[-0.832... 0.554...]
? R
%4 =
[-3.605... -2.218...]
[0 0.277...]
? v = [1, 2]~; \\ some random vector
? Q * v
%6 = [-2.218..., 0.277...]~
? [q,r] = matqr(m, 1);
? exponent(r - R) \\ r is the same as R
%8 = -128
? q \\ but q has a different structure
%9 = [[0.0494..., [5.605..., 3]]]
? mathouseholder(q, v) \\ applied to v
%10 = [-2.218..., 0.277...]~
@eprog\noindent The point of the Householder structure is that it efficiently
represents the linear operator $v \mapsto Q \* v$ in a more stable way
than expanding the matrix $Q$:
\bprog
? m = mathilbert(20); v = vectorv(20,i,i^2+1);
? [Q,R] = matqr(m);
? [q,r] = matqr(m, 1);
? \p100
? [q2,r2] = matqr(m, 1); \\ recompute at higher accuracy
? exponent(R - r)
%5 = -127
? exponent(R - r2)
%6 = -127
? exponent(mathouseholder(q,v) - mathouseholder(q2,v))
%7 = -119
? exponent(Q*v - mathouseholder(q2,v))
%8 = 9
@eprog\noindent We see that $R$ is OK with or without a flag to \kbd{matqr}
but that multiplying by $Q$ is considerably less precise than applying the
sequence of Householder transforms encoded by $q$.
The library syntax is \fun{GEN}{mathouseholder}{GEN Q, GEN v}.
\subsec{matid$(n)$}\kbdsidx{matid}\label{se:matid}
Creates the $n\times n$ identity matrix.
The library syntax is \fun{GEN}{matid}{long n}.
\subsec{matimage$(x,\{\fl=0\})$}\kbdsidx{matimage}\label{se:matimage}
Gives a basis for the image of the
matrix $x$ as columns of a matrix. A priori the matrix can have entries of
any type. If $\fl=0$, use standard Gauss pivot. If $\fl=1$, use
\kbd{matsupplement} (much slower: keep the default flag!).
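For example:
\bprog
? matimage([1,1;1,1])
%1 =
[1]
[1]
@eprog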
The library syntax is \fun{GEN}{matimage0}{GEN x, long flag}.
Also available is \fun{GEN}{image}{GEN x} ($\fl=0$).
\subsec{matimagecompl$(x)$}\kbdsidx{matimagecompl}\label{se:matimagecompl}
Gives the vector of the column indices which
are not extracted by the function \kbd{matimage}, as a permutation
(\typ{VECSMALL}). Hence the number of
components of \kbd{matimagecompl(x)} plus the number of columns of
\kbd{matimage(x)} is equal to the number of columns of the matrix $x$.
The library syntax is \fun{GEN}{imagecompl}{GEN x}.
\subsec{matimagemod$(x,d,\&U)$}\kbdsidx{matimagemod}\label{se:matimagemod}
Gives a Howell basis (unique representation for submodules of~$(\Z/d\Z)^n$)
for the image of the matrix $x$ modulo $d$ as columns of a matrix $H$. The
matrix $x$ must have \typ{INT} entries, and $d$ can be an arbitrary positive
integer. If $U$ is present, set it to a matrix such that~$xU = H$.
\bprog
? A = [2,1;0,2];
? matimagemod(A,6,&U)
%2 =
[1 0]
[0 2]
? U
%3 =
[5 1]
[3 4]
? (A*U)%6
%4 =
[1 0]
[0 2]
@eprog
\misctitle{Caveat} In general the number of columns of the Howell form is not
the minimal number of generators of the submodule. Example:
\bprog
? matimagemod([1;2],4)
%5 =
[2 1]
[0 2]
@eprog
\misctitle{Caveat 2} In general the matrix $U$ is not invertible, even if~$A$
and~$H$ have the same size. Example:
\bprog
? matimagemod([4,1;0,4],8,&U)
%6 =
[2 1]
[0 4]
? U
%7 =
[0 0]
[2 1]
@eprog
The library syntax is \fun{GEN}{matimagemod}{GEN x, GEN d, GEN *U = NULL}.
\subsec{matindexrank$(M)$}\kbdsidx{matindexrank}\label{se:matindexrank}
$M$ being a matrix of rank $r$, returns a vector with two
\typ{VECSMALL} components $y$ and $z$ of length $r$ giving a list of rows
and columns respectively (starting from 1) such that the extracted matrix
obtained from these two vectors using $\tet{vecextract}(M,y,z)$ is
invertible. The vectors $y$ and $z$ are sorted in increasing order.
The library syntax is \fun{GEN}{indexrank}{GEN M}.
\subsec{matintersect$(x,y)$}\kbdsidx{matintersect}\label{se:matintersect}
$x$ and $y$ being two matrices with the same number of rows, finds a
basis of the vector space equal to the intersection of the spaces spanned by
the columns of $x$ and $y$ respectively. For efficiency, the columns of $x$
(resp.~$y$) should be independent.
The faster function \tet{idealintersect} can be used to intersect
fractional ideals (projective $\Z_K$ modules of rank $1$); the slower but
more general function \tet{nfhnf} can be used to intersect general
$\Z_K$-modules.
The library syntax is \fun{GEN}{intersect}{GEN x, GEN y}.
\subsec{matinverseimage$(x,y)$}\kbdsidx{matinverseimage}\label{se:matinverseimage}
Given a matrix $x$ and
a column vector or matrix $y$, returns a preimage $z$ of $y$ by $x$ if one
exists (i.e.~such that $xz = y$), and an empty vector or matrix otherwise. The
complete inverse image is $z + \text{Ker} x$, where a basis of the kernel of
$x$ may be obtained by \kbd{matker}.
\bprog
? M = [1,2;2,4];
? matinverseimage(M, [1,2]~)
%2 = [1, 0]~
? matinverseimage(M, [3,4]~)
%3 = []~ \\@com no solution
? matinverseimage(M, [1,3,6;2,6,12])
%4 =
[1 3 6]
[0 0 0]
? matinverseimage(M, [1,2;3,4])
%5 = [;] \\@com no solution
? K = matker(M)
%6 =
[-2]
[1]
@eprog
The library syntax is \fun{GEN}{inverseimage}{GEN x, GEN y}.
\subsec{matinvmod$(x,d)$}\kbdsidx{matinvmod}\label{se:matinvmod}
Computes a left inverse of the matrix~$x$ modulo~$d$. The matrix $x$ must
have \typ{INT} entries, and $d$ can be an arbitrary positive integer.
\bprog
? A = [3,1,2;1,2,1;3,1,1];
? U = matinvmod(A,6)
%2 =
[1 1 3]
[2 3 5]
[1 0 5]
? (U*A)%6
%3 =
[1 0 0]
[0 1 0]
[0 0 1]
? matinvmod(A,5)
*** at top-level: matinvmod(A,5)
*** ^--------------
*** matinvmod: impossible inverse in gen_inv: 0.
@eprog
The library syntax is \fun{GEN}{matinvmod}{GEN x, GEN d}.
\subsec{matisdiagonal$(x)$}\kbdsidx{matisdiagonal}\label{se:matisdiagonal}
Returns true (1) if $x$ is a diagonal matrix, false (0) if not.
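For example:
\bprog
? matisdiagonal([1,0;0,2])
%1 = 1
? matisdiagonal([1,1;0,2])
%2 = 0
@eprog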
The library syntax is \fun{GEN}{isdiagonal}{GEN x}.
\subsec{matker$(x,\{\fl=0\})$}\kbdsidx{matker}\label{se:matker}
Gives a basis for the kernel of the matrix $x$ as columns of a matrix.
The matrix can have entries of any type, provided they are compatible with
the generic arithmetic operations ($+$, $\times$ and $/$).
If $x$ is known to have integral entries, set $\fl=1$.
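For example:
\bprog
? M = [1,2;2,4]; K = matker(M)
%1 =
[-2]
[ 1]
? M*K
%2 =
[0]
[0]
@eprog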
The library syntax is \fun{GEN}{matker0}{GEN x, long flag}.
Also available are \fun{GEN}{ker}{GEN x} ($\fl=0$),
\fun{GEN}{ZM_ker}{GEN x} ($\fl=1$).
\subsec{matkerint$(x,\{\fl=0\})$}\kbdsidx{matkerint}\label{se:matkerint}
Gives an \idx{LLL}-reduced $\Z$-basis
for the lattice equal to the kernel of the matrix $x$ with rational entries.
\fl{} is deprecated, kept for backward compatibility. The function
\kbd{matsolvemod} can be used to solve more general linear systems over $\Z$.
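For example (the kernel below is one-dimensional, and the product with the
input matrix vanishes):
\bprog
? M = [1,1,1;1,2,3]; K = matkerint(M);
? M*K
%2 =
[0]
[0]
@eprog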
The library syntax is \fun{GEN}{matkerint0}{GEN x, long flag}.
Use directly \fun{GEN}{kerint}{GEN x} if $x$ is known to have
integer entries, and \tet{Q_primpart} first otherwise.
\subsec{matkermod$(x,d,\&\var{im})$}\kbdsidx{matkermod}\label{se:matkermod}
Gives a Howell basis (unique representation for submodules of~$(\Z/d\Z)^n$,
cf. \kbd{matimagemod}) for the kernel of the matrix $x$ modulo $d$ as columns
of a matrix. The matrix $x$ must have \typ{INT} entries, and $d$ can be an
arbitrary positive integer. If $im$ is present, set it to a basis of the image
of~$x$ (which is computed on the way).
\bprog
? A = [1,2,3;5,1,4]
%1 =
[1 2 3]
[5 1 4]
? K = matkermod(A,6)
%2 =
[2 1]
[2 1]
[0 3]
? (A*K)%6
%3 =
[0 0]
[0 0]
@eprog
The library syntax is \fun{GEN}{matkermod}{GEN x, GEN d, GEN *im = NULL}.
\subsec{matmuldiagonal$(x,d)$}\kbdsidx{matmuldiagonal}\label{se:matmuldiagonal}
Product of the matrix $x$ by the diagonal
matrix whose diagonal entries are those of the vector $d$. Equivalent to,
but much faster than $x*\kbd{matdiagonal}(d)$.
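For example:
\bprog
? matmuldiagonal([1,2;3,4], [10,100])
%1 =
[10 200]
[30 400]
@eprog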
The library syntax is \fun{GEN}{matmuldiagonal}{GEN x, GEN d}.
\subsec{matmultodiagonal$(x,y)$}\kbdsidx{matmultodiagonal}\label{se:matmultodiagonal}
Product of the matrices $x$ and $y$ assuming that the result is a
diagonal matrix. Much faster than $x*y$ in that case. The result is
undefined if $x*y$ is not diagonal.
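For example, with an exact inverse:
\bprog
? x = [1,2;3,4];
? matmultodiagonal(x, x^-1)
%2 =
[1 0]
[0 1]
@eprog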
The library syntax is \fun{GEN}{matmultodiagonal}{GEN x, GEN y}.
\subsec{matpascal$(n,\{q\})$}\kbdsidx{matpascal}\label{se:matpascal}
Creates as a matrix the lower triangular
\idx{Pascal triangle} of order $n+1$ (i.e.~with binomial coefficients
up to $n$). If $q$ is given, compute the $q$-Pascal triangle (i.e.~using
$q$-binomial coefficients).
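For example:
\bprog
? matpascal(2)
%1 =
[1 0 0]
[1 1 0]
[1 2 1]
@eprog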
The library syntax is \fun{GEN}{matqpascal}{long n, GEN q = NULL}.
Also available is \fun{GEN}{matpascal}{GEN x}.
\subsec{matpermanent$(x)$}\kbdsidx{matpermanent}\label{se:matpermanent}
Permanent of the square matrix $x$ using Ryser's formula in Gray code
order.
\bprog
? n = 20; m = matrix(n,n,i,j, i!=j);
? matpermanent(m)
%2 = 895014631192902121
? n! * sum(i=0,n, (-1)^i/i!)
%3 = 895014631192902121
@eprog\noindent This function runs in time $O(2^n n)$ for a matrix of size
$n$ and is not implemented for $n$ large.
The library syntax is \fun{GEN}{matpermanent}{GEN x}.
\subsec{matqr$(M,\{\fl=0\})$}\kbdsidx{matqr}\label{se:matqr}
Returns $[Q,R]$, the \idx{QR-decomposition} of the square invertible
matrix $M$ with real entries: $Q$ is orthogonal and $R$ upper triangular. If
$\fl=1$, the orthogonal matrix is returned as a sequence of Householder
transforms: applying such a sequence is stabler and faster than
multiplication by the corresponding $Q$ matrix.\sidx{Householder transform}
More precisely, if
\bprog
[Q,R] = matqr(M);
[q,r] = matqr(M, 1);
@eprog\noindent then $r = R$ and \kbd{mathouseholder}$(q, M)$ is
(close to) $R$; furthermore
\bprog
mathouseholder(q, matid(#M)) == Q~
@eprog\noindent the inverse of $Q$. This function raises an error if the
precision is too low or $M$ is singular.
The library syntax is \fun{GEN}{matqr}{GEN M, long flag, long prec}.
\subsec{matrank$(x)$}\kbdsidx{matrank}\label{se:matrank}
Rank of the matrix $x$.
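For example:
\bprog
? matrank([1,2;2,4])
%1 = 1
? matrank([1,2;3,4])
%2 = 2
@eprog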
The library syntax is \fun{long}{rank}{GEN x}.
\subsec{matreduce$(m)$}\kbdsidx{matreduce}\label{se:matreduce}
Let $m$ be a factorization matrix, i.e., a 2-column matrix whose
columns contain arbitrary ``generators'' and integer ``exponents''
respectively. Returns the canonical form of $m$: the
first column is sorted with unique elements and the second one contains the
merged ``exponents'' (exponents of identical entries in the first column of
$m$ are added, rows attached to $0$ exponents are deleted). The generators are
sorted with respect to the universal \kbd{cmp} routine; in particular, this
function is the identity on true integer factorization matrices, but not on
other factorizations (in products of polynomials or maximal ideals, say). It
is idempotent.
For convenience, this function also allows a vector $m$, which is handled as a
factorization with all exponents equal to $1$, as in \kbd{factorback}.
\bprog
? A=[x,2;y,4]; B=[x,-2; y,3; 3, 4]; C=matconcat([A,B]~)
%1 =
[x 2]
[y 4]
[x -2]
[y 3]
[3 4]
? matreduce(C)
%2 =
[3 4]
[y 7]
? matreduce([x,x,y,x,z,x,y]) \\ vector argument
%3 =
[x 4]
[y 2]
[z 1]
@eprog
The library syntax is \fun{GEN}{matreduce}{GEN m}.
\subsec{matrix$(m,\{n=m\},\{X\},\{Y\},\{\var{expr}=0\})$}\kbdsidx{matrix}\label{se:matrix}
Creation of the
$m\times n$ matrix whose coefficients are given by the expression
\var{expr}. There are two formal parameters in \var{expr}, the first one
($X$) corresponding to the rows, the second ($Y$) to the columns, and $X$
goes from 1 to $m$, $Y$ goes from 1 to $n$. If one of the last 3 parameters
is omitted, fill the matrix with zeroes. If $n$ is omitted, return a
square $m \times m$ matrix.
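For example:
\bprog
? matrix(2,3, i,j, i+j)
%1 =
[2 3 4]
[3 4 5]
? matrix(2,2) \\ expr omitted: zero matrix
%2 =
[0 0]
[0 0]
@eprog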
%\syn{NO}
\subsec{matrixqz$(A,\{p=0\})$}\kbdsidx{matrixqz}\label{se:matrixqz}
$A$ being an $m\times n$ matrix in $M_{m,n}(\Q)$, let
$\text{Im}_\Q A$ (resp.~$\text{Im}_\Z A$) be the $\Q$-vector space
(resp.~the $\Z$-module) spanned by the columns of $A$. This function has
varying behavior depending on the sign of $p$:
If $p \geq 0$, $A$ is assumed to have maximal rank $n\leq m$. The function
returns a matrix $B\in M_{m,n}(\Z)$, with $\text{Im}_\Q B = \text{Im}_\Q A$,
such that the GCD of all its $n\times n$ minors is coprime to
$p$; in particular, if $p = 0$ (default), this GCD is $1$.
If $p=-1$, returns a basis of the lattice $\Z^n \cap \text{Im}_\Z A$.
If $p=-2$, returns a basis of the lattice $\Z^n \cap \text{Im}_\Q A$.
\misctitle{Caveat} ($p=-1$ or $-2$) For efficiency reasons, we do not compute
the HNF of the resulting basis.
\bprog
? minors(x) = vector(#x[,1], i, matdet(x[^i,]));
? A = [3,1/7; 5,3/7; 7,5/7]; minors(A)
%1 = [4/7, 8/7, 4/7] \\ determinants of all 2x2 minors
? B = matrixqz(A)
%2 =
[3 1]
[5 2]
[7 3]
? minors(%)
%3 = [1, 2, 1] \\ B integral with coprime minors
? matrixqz(A,-1)
%4 =
[3 1]
[5 3]
[7 5]
? matrixqz(A,-2)
%5 =
[3 1]
[5 2]
[7 3]
@eprog
The library syntax is \fun{GEN}{matrixqz0}{GEN A, GEN p = NULL}.
\subsec{matsize$(x)$}\kbdsidx{matsize}\label{se:matsize}
$x$ being a vector or matrix, returns a row vector
with two components, the first being the number of rows (1 for a row vector),
the second the number of columns (1 for a column vector).
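For example:
\bprog
? matsize([1,2;3,4;5,6])
%1 = [3, 2]
? matsize([1,2,3])
%2 = [1, 3]
@eprog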
The library syntax is \fun{GEN}{matsize}{GEN x}.
\subsec{matsnf$(X,\{\fl=0\})$}\kbdsidx{matsnf}\label{se:matsnf}
If $X$ is a (singular or nonsingular) matrix, this outputs the vector of
\idx{elementary divisors} of $X$, i.e.~the diagonal of the
\idx{Smith normal form} of $X$, normalized so that $d_n \mid d_{n-1} \mid
\ldots \mid d_1$. $X$ must have integer or polynomial entries; in the latter
case, $X$ must be a square matrix.
The binary digits of \fl\ mean:
1 (complete output): if set, outputs $[U,V,D]$, where $U$ and $V$ are two
unimodular matrices such that $UXV$ is the diagonal matrix $D$. Otherwise
output only the diagonal of $D$. If $X$ is not a square matrix, then $D$
will be a square diagonal matrix padded with zeros on the left or the top.
4 (cleanup): if set, cleans up the output. This means that elementary
divisors equal to $1$ will be deleted, i.e.~outputs a shortened vector $D'$
instead of $D$. If complete output was required, returns $[U',V',D']$ so
that $U'XV' = D'$ holds. If this flag is set, $X$ is allowed to be of the
form `vector of elementary divisors' or $[U,V,D]$ as would normally be
output with the cleanup flag unset.
If $v$ is an output from \kbd{matsnf} and $p$ is a power of an irreducible
element, then \kbd{snfrank(v, p)} returns the $p$-rank of the attached
module.
\bprog
? X = [27,0; 0,3; 1,1; 0,0]; matsnf(X)
%1 = [0, 0, 3, 1]
? [U,V,D] = v = matsnf(X, 1); U*X*V == D
%2 = 1
? U
%3 =
[0 0 0 1]
[1 9 -27 0]
[0 1 0 0]
[0 0 1 0]
? V
%4 =
[-1 1]
[ 1 0]
? snfrank(v, 3)
%5 = 3
@eprog\noindent Continuing the same example after cleanup:
\bprog
? [U,V,D] = v = matsnf(X, 1+4); U*X*V == D
%6 = 1
? D
%7 =
[0]
[0]
[3]
? snfrank(v, 3)
%8 = 3
? snfrank(v, 2)
%9 = 2
@eprog
The library syntax is \fun{GEN}{matsnf0}{GEN X, long flag}.
\subsec{matsolve$(M,B)$}\kbdsidx{matsolve}\label{se:matsolve}
Let $M$ be a left-invertible matrix and $B$ a column vector
such that there exists a solution $X$ to the system of linear equations
$MX = B$; return the (unique) solution $X$. This has the same effect as, but
is faster than, $M^{-1}*B$. Uses Dixon's $p$-adic lifting method if $M$ and
$B$ are integral and Gaussian elimination otherwise. When there is no
solution, the function returns an $X$ such that $MX - B$ is nonzero
although it has at least $\#M$ zero entries:
\bprog
? M = [1,2;3,4;5,6];
? B = [4,6,8]~; X = matsolve(M, B)
%2 = [-2, 3]~
? M*X == B
%3 = 1
? B = [1,2,4]~; X = matsolve(M, [1,2,4]~)
%4 = [0, 1/2]~
? M*X - B
%5 = [0, 0, -1]~
@eprog\noindent Raises an exception if $M$ is not left-invertible, even if
there is a solution:
\bprog
? M = [1,1;1,1]; matsolve(M, [1,1]~)
*** at top-level: matsolve(M,[1,1]~)
*** ^------------------
*** matsolve: impossible inverse in gauss: [1, 1; 1, 1].
@eprog\noindent The function also works when $B$ is a matrix and we return
the unique matrix solution $X$ provided it exists. Again, if there is no
solution, the function returns an $X$ such that $MX - B$ is nonzero
although it has at least $\#M$ zero rows.
The library syntax is \fun{GEN}{gauss}{GEN M, GEN B}.
\subsec{matsolvemod$(M,D,B,\{\fl=0\})$}\kbdsidx{matsolvemod}\label{se:matsolvemod}
$M$ being any integral matrix,
$D$ a column vector of nonnegative integer moduli, and $B$ an integral
column vector, gives an integer solution to the system of congruences
$\sum_i m_{i,j}x_j\equiv b_i\pmod{d_i}$ if one exists, otherwise returns
zero. Note that we explicitly allow $d_i = 0$ corresponding to an equality
in $\Z$. Shorthand notation: $B$ (resp.~$D$) can be given as a single integer,
in which case all the $b_i$ (resp.~$d_i$) above are taken to be equal to $B$
(resp.~$D$). Again, $D = 0$ solves the linear system of equations over $\Z$.
\bprog
? M = [1,2;3,4];
? matsolvemod(M, [3,4]~, [1,2]~)
%2 = [10, 0]~
? matsolvemod(M, 3, 1) \\ M X = [1,1]~ over F_3
%3 = [2, 1]~
? matsolvemod(M, [3,0]~, [1,2]~) \\ x + 2y = 1 (mod 3), 3x + 4y = 2 (in Z)
%4 = [6, -4]~
? matsolvemod(M, 0, [1,2]~) \\ no solution in Z for x + 2y = 1, 3x + 4y = 2
%5 = 0
@eprog
If $\fl=1$, all solutions are returned in the form of a two-component row
vector $[x,u]$, where $x$ is an integer solution to the system of
congruences and $u$ is a matrix whose columns give a basis of the homogeneous
system (so that all solutions can be obtained by adding $x$ to any linear
combination of columns of $u$). If no solution exists, returns zero.
The library syntax is \fun{GEN}{matsolvemod}{GEN M, GEN D, GEN B, long flag}.
Also available are \fun{GEN}{gaussmodulo}{GEN M, GEN D, GEN B}
($\fl=0$) and \fun{GEN}{gaussmodulo2}{GEN M, GEN D, GEN B} ($\fl=1$).
\subsec{matsupplement$(x)$}\kbdsidx{matsupplement}\label{se:matsupplement}
Assuming that the columns of the matrix $x$
are linearly independent (if they are not, an error message is issued), finds
a square invertible matrix whose first columns are the columns of $x$,
i.e.~supplements the columns of $x$ to a basis of the whole space.
\bprog
? matsupplement([1;2])
%1 =
[1 0]
[2 1]
@eprog
Raises an error if $x$ has 0 columns, since (due to a long-standing design
bug), the dimension of the ambient space (the number of rows) is unknown in
this case:
\bprog
? matsupplement(matrix(2,0))
*** at top-level: matsupplement(matrix
*** ^--------------------
*** matsupplement: sorry, suppl [empty matrix] is not yet implemented.
@eprog
The library syntax is \fun{GEN}{suppl}{GEN x}.
\subsec{mattranspose$(x)$}\kbdsidx{mattranspose}\label{se:mattranspose}
Transpose of $x$ (also $x\til$).
This has an effect only on vectors and matrices.
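For instance:
\bprog
? mattranspose([1,2;3,4])
%1 =
[1 3]
[2 4]
? [1,2,3]~  \\ transpose of a row vector is a column vector
%2 = [1, 2, 3]~
@eprog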
The library syntax is \fun{GEN}{gtrans}{GEN x}.
\subsec{minpoly$(A,\{v='x\})$}\kbdsidx{minpoly}\label{se:minpoly}
\idx{minimal polynomial}
of $A$ with respect to the variable $v$, i.e.~the monic polynomial $P$
of minimal degree (in the variable $v$) such that $P(A) = 0$.
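The minimal polynomial may have strictly smaller degree than the
characteristic polynomial:
\bprog
? minpoly([1,1; 0,1])
%1 = x^2 - 2*x + 1
? minpoly(matid(3)) \\ charpoly is (x-1)^3
%2 = x - 1
@eprog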
The library syntax is \fun{GEN}{minpoly}{GEN A, long v = -1} where \kbd{v} is a variable number.
\subsec{norml2$(x)$}\kbdsidx{norml2}\label{se:norml2}
Square of the $L^2$-norm of $x$. More precisely,
if $x$ is a scalar, $\kbd{norml2}(x)$ is defined to be the square
of the complex modulus of $x$ (real \typ{QUAD}s are not supported).
If $x$ is a polynomial, a (row or column) vector or a matrix, \kbd{norml2($x$)} is
defined recursively as $\sum_i \kbd{norml2}(x_i)$, where $(x_i)$ run through
the components of $x$. In particular, this yields the usual $\sum |x_i|^2$
(resp.~$\sum |x_{i,j}|^2$) if $x$ is a polynomial or vector (resp.~matrix) with
complex components.
\bprog
? norml2( [ 1, 2, 3 ] ) \\ vector
%1 = 14
? norml2( [ 1, 2; 3, 4] ) \\ matrix
%2 = 30
? norml2( 2*I + x )
%3 = 5
? norml2( [ [1,2], [3,4], 5, 6 ] ) \\ recursively defined
%4 = 91
@eprog
The library syntax is \fun{GEN}{gnorml2}{GEN x}.
\subsec{normlp$(x,\{p=\var{oo}\})$}\kbdsidx{normlp}\label{se:normlp}
$L^p$-norm of $x$; sup norm if $p$ is omitted or \kbd{+oo}. More precisely,
if $x$ is a scalar, \kbd{normlp}$(x, p)$ is defined to be \kbd{abs}$(x)$.
If $x$ is a polynomial, a (row or column) vector or a matrix:
\item if $p$ is omitted or \kbd{+oo}, then \kbd{normlp($x$)} is defined
recursively as $\max_i \kbd{normlp}(x_i)$, where $(x_i)$ run through the
components of~$x$. In particular, this yields the usual sup norm if $x$ is a
polynomial or vector with complex components.
\item otherwise, \kbd{normlp($x$, $p$)} is defined recursively as $(\sum_i
\kbd{normlp}^p(x_i,p))^{1/p}$. In particular, this yields the usual $(\sum
|x_i|^p)^{1/p}$ if $x$ is a polynomial or vector with complex components.
\bprog
? v = [1,-2,3]; normlp(v) \\ vector
%1 = 3
? normlp(v, +oo) \\ same, more explicit
%2 = 3
? M = [1,-2;-3,4]; normlp(M) \\ matrix
%3 = 4
? T = (1+I) + I*x^2; normlp(T)
%4 = 1.4142135623730950488016887242096980786
? normlp([[1,2], [3,4], 5, 6]) \\ recursively defined
%5 = 6
? normlp(v, 1)
%6 = 6
? normlp(M, 1)
%7 = 10
? normlp(T, 1)
%8 = 2.4142135623730950488016887242096980786
@eprog
The library syntax is \fun{GEN}{gnormlp}{GEN x, GEN p = NULL, long prec}.
\subsec{powers$(x,n,\{\var{x0}\})$}\kbdsidx{powers}\label{se:powers}
For nonnegative $n$, return the vector with $n+1$ components
$[1,x,\dots,x^n]$ if \kbd{x0} is omitted, and $[x_0, x_0*x, ..., x_0*x^n]$
otherwise.
\bprog
? powers(Mod(3,17), 4)
%1 = [Mod(1, 17), Mod(3, 17), Mod(9, 17), Mod(10, 17), Mod(13, 17)]
? powers(Mat([1,2;3,4]), 3)
%2 = [[1, 0; 0, 1], [1, 2; 3, 4], [7, 10; 15, 22], [37, 54; 81, 118]]
? powers(3, 5, 2)
%3 = [2, 6, 18, 54, 162, 486]
@eprog\noindent When $n < 0$, the function returns the empty vector \kbd{[]}.
The library syntax is \fun{GEN}{gpowers0}{GEN x, long n, GEN x0 = NULL}.
Also available is
\fun{GEN}{gpowers}{GEN x, long n} when \kbd{x0} is \kbd{NULL}.
\subsec{qfauto$(G,\{\var{fl}\})$}\kbdsidx{qfauto}\label{se:qfauto}
$G$ being a square and symmetric matrix with integer entries representing a
positive definite quadratic form, outputs the automorphism group of the
associated lattice.
Since this requires computing the minimal vectors, the computations can
become very lengthy as the dimension grows. $G$ can also be given by a
\kbd{qfisominit} structure.
See \kbd{qfisominit} for the meaning of \var{fl}.
The output is a two-component vector $[o,g]$ where $o$ is the group order
and $g$ is the list of generators (as a vector). For each generator $H$,
the equality $G={^t}H\*G\*H$ holds.
The interface of this function is experimental and will likely change in the
future.
This function implements an algorithm of Plesken and Souvignier, following
Souvignier's implementation.
\bprog
? K = matkerint(Mat(concat([vector(23,i,2*i+1), 51, 145])));
? M = matdiagonal(vector(25,i,if(i==25,-1,1)));
? L24 = K~ * M * K; \\ the Leech lattice
? [o,g] = qfauto(L24); o
%4 = 8315553613086720000
? #g
%5 = 2
@eprog
The library syntax is \fun{GEN}{qfauto0}{GEN G, GEN fl = NULL}.
The function \fun{GEN}{qfauto}{GEN G, GEN fl} is also available
where $G$ is a vector of \kbd{zm} matrices.
\subsec{qfautoexport$(\var{qfa},\{\fl\})$}\kbdsidx{qfautoexport}\label{se:qfautoexport}
\var{qfa} being an automorphism group as output by
\tet{qfauto}, export the underlying matrix group as a string suitable
for (no flags or $\fl=0$) GAP or ($\fl=1$) Magma. The following example
computes the size of the matrix group using GAP:
\bprog
? G = qfauto([2,1;1,2])
%1 = [12, [[-1, 0; 0, -1], [0, -1; 1, 1], [1, 1; 0, -1]]]
? s = qfautoexport(G)
%2 = "Group([[-1, 0], [0, -1]], [[0, -1], [1, 1]], [[1, 1], [0, -1]])"
? extern("echo \"Order("s");\" | gap -q")
%3 = 12
@eprog
The library syntax is \fun{GEN}{qfautoexport}{GEN qfa, long flag}.
\subsec{qfbil$(x,y,\{q\})$}\kbdsidx{qfbil}\label{se:qfbil}
This function is obsolete, use \kbd{qfeval}.
The library syntax is \fun{GEN}{qfbil}{GEN x, GEN y, GEN q = NULL}.
\subsec{qfeval$(\{q\},x,\{y\})$}\kbdsidx{qfeval}\label{se:qfeval}
Evaluate the quadratic form $q$ (given by a symmetric matrix)
at the vector $x$; if $y$ is present, evaluate the polar form at $(x,y)$;
if $q$ omitted, use the standard Euclidean scalar product, corresponding to
the identity matrix.
Roughly equivalent to \kbd{x\til * q * y}, but a little faster and
more convenient (does not distinguish between column and row vectors):
\bprog
? x = [1,2,3]~; y = [-1,3,1]~; q = [1,2,3;2,2,-1;3,-1,9];
? qfeval(q,x,y)
%2 = 23
? for(i=1,10^6, qfeval(q,x,y))
time = 661ms
? for(i=1,10^6, x~*q*y)
time = 697ms
@eprog\noindent The speedup is noticeable for the quadratic form,
compared to \kbd{x\til * q * x}, since we save almost half the
operations:
\bprog
? for(i=1,10^6, qfeval(q,x))
time = 487ms
@eprog\noindent The special case $q = \text{Id}$ is handled faster if we
omit $q$ altogether:
\bprog
? qfeval(,x,y)
%6 = 8
? q = matid(#x);
? for(i=1,10^6, qfeval(q,x,y))
time = 529 ms.
? for(i=1,10^6, qfeval(,x,y))
time = 228 ms.
? for(i=1,10^6, x~*y)
time = 274 ms.
@eprog
We also allow \typ{MAT}s of compatible dimensions for $x$,
and return \kbd{x\til * q * x} in this case as well:
\bprog
? M = [1,2,3;4,5,6;7,8,9]; qfeval(,M) \\ Gram matrix
%5 =
[66 78 90]
[78 93 108]
[90 108 126]
? q = [1,2,3;2,2,-1;3,-1,9];
? for(i=1,10^6, qfeval(q,M))
time = 2,008 ms.
? for(i=1,10^6, M~*q*M)
time = 2,368 ms.
? for(i=1,10^6, qfeval(,M))
time = 1,053 ms.
? for(i=1,10^6, M~*M)
time = 1,171 ms.
@eprog
If $q$ is a \typ{QFB}, it is implicitly converted to the
attached symmetric \typ{MAT}. This is done more
efficiently than by direct conversion, since we avoid introducing a
denominator $2$ and rational arithmetic:
\bprog
? q = Qfb(2,3,4); x = [2,3];
? qfeval(q, x)
%2 = 62
? Q = Mat(q)
%3 =
[ 2 3/2]
[3/2 4]
? qfeval(Q, x)
%4 = 62
? for (i=1, 10^6, qfeval(q,x))
time = 758 ms.
? for (i=1, 10^6, qfeval(Q,x))
time = 1,110 ms.
@eprog
Finally, when $x$ is a \typ{MAT} with \emph{integral} coefficients, we allow
a \typ{QFB} for $q$ and return the binary
quadratic form $q \circ x$. Again, the conversion to \typ{MAT} is less
efficient in this case:
\bprog
? q = Qfb(2,3,4); Q = Mat(q); x = [1,2;3,4];
? qfeval(q, x)
%2 = Qfb(47, 134, 96)
? qfeval(Q,x)
%3 =
[47 67]
[67 96]
? for (i=1, 10^6, qfeval(q,x))
time = 701 ms.
? for (i=1, 10^6, qfeval(Q,x))
time = 1,639 ms.
@eprog
The library syntax is \fun{GEN}{qfeval0}{GEN q = NULL, GEN x, GEN y = NULL}.
\subsec{qfgaussred$(q)$}\kbdsidx{qfgaussred}\label{se:qfgaussred}
\idx{decomposition into squares} of the
quadratic form represented by the symmetric matrix $q$. The result is a
matrix whose diagonal entries are the coefficients of the squares, and the
off-diagonal entries on each line represent the bilinear forms. More
precisely, if $(a_{ij})$ denotes the output, one has
$$ q(x) = \sum_i a_{ii} (x_i + \sum_{j \neq i} a_{ij} x_j)^2 $$
\bprog
? qfgaussred([0,1;1,0])
%1 =
[1/2 1]
[-1 -1/2]
@eprog\noindent This means that $2xy = (1/2)(x+y)^2 - (1/2)(x-y)^2$.
Singular matrices are supported, in which case some diagonal coefficients
will vanish:
\bprog
? qfgaussred([1,1;1,1])
%1 =
[1 1]
[1 0]
@eprog\noindent This means that $x^2 + 2xy + y^2 = (x+y)^2$.
The library syntax is \fun{GEN}{qfgaussred}{GEN q}.
\fun{GEN}{qfgaussred_positive}{GEN q} assumes that $q$ is
positive definite and is a little faster; returns \kbd{NULL} if a vector
with negative norm occurs (non positive matrix or too many rounding errors).
\subsec{qfisom$(G,H,\{\var{fl}\},\{\var{grp}\})$}\kbdsidx{qfisom}\label{se:qfisom}
$G$, $H$ being square and symmetric matrices with integer entries representing
positive definite quadratic forms, return an invertible matrix $S$ such that
$G={^t}S\*H\*S$. This defines an isomorphism between the corresponding lattices.
Since this requires computing the minimal vectors, the computations can
become very lengthy as the dimension grows.
See \kbd{qfisominit} for the meaning of \var{fl}.
If \var{grp} is given it must be the automorphism group of $H$. It will be used
to speed up the computation.
$G$ can also be given by a \kbd{qfisominit} structure, which is preferable if
several forms $H$ need to be compared to $G$.
This function implements an algorithm of Plesken and Souvignier, following
Souvignier's implementation.
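In the following example, the returned isomorphism is not unique, so we
only check the defining relation:
\bprog
? G = [2,1; 1,2]; H = [2,-1; -1,2];
? S = qfisom(G, H);
? G == S~*H*S
%3 = 1
@eprog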
The library syntax is \fun{GEN}{qfisom0}{GEN G, GEN H, GEN fl = NULL, GEN grp = NULL}.
Also available is \fun{GEN}{qfisom}{GEN G, GEN H, GEN fl, GEN grp}
where $G$ is a vector of \kbd{zm}, and $H$ is a \kbd{zm}, and $grp$ is
either \kbd{NULL} or a vector of \kbd{zm}.
\subsec{qfisominit$(G,\{\var{fl}\},\{m\})$}\kbdsidx{qfisominit}\label{se:qfisominit}
$G$ being a square and symmetric matrix with integer entries representing a
positive definite quadratic form, return an \kbd{isom} structure allowing one
to compute isomorphisms between $G$ and other quadratic forms faster.
The interface of this function is experimental and will likely change in a
future release.
If present, the optional parameter \var{fl} must be a \typ{VEC} with two
components. It allows specifying the invariants used, which can make the
computation faster or slower. The components are
\item \kbd{fl[1]} Depth of scalar product combination to use.
\item \kbd{fl[2]} Maximum level of Bacher polynomials to use.
If present, $m$ must be the set of vectors of norm up to the maximum of the
diagonal entries of $G$, either as a matrix or as given by \kbd{qfminim}.
Otherwise this function computes the minimal vectors, so it may become very
lengthy as the dimension of $G$ grows.
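For instance, to compare several forms to the same $G$ (as in \kbd{qfisom},
the returned isomorphism is not unique, so we only check the defining
relation):
\bprog
? G = [2,1; 1,2]; qi = qfisominit(G);
? H = [2,-1; -1,2]; S = qfisom(qi, H);
? G == S~*H*S
%3 = 1
@eprog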
The library syntax is \fun{GEN}{qfisominit0}{GEN G, GEN fl = NULL, GEN m = NULL}.
Also available is
\fun{GEN}{qfisominit}{GEN F, GEN fl}
where $F$ is a vector of \kbd{zm}.
\subsec{qfjacobi$(A)$}\kbdsidx{qfjacobi}\label{se:qfjacobi}
Apply Jacobi's eigenvalue algorithm to the real symmetric matrix $A$.
This returns $[L, V]$, where
\item $L$ is the vector of (real) eigenvalues of $A$, sorted in increasing
order,
\item $V$ is the corresponding orthogonal matrix of eigenvectors of $A$.
\bprog
? \p19
? A = [1,2;2,1]; mateigen(A)
%1 =
[-1 1]
[ 1 1]
? [L, H] = qfjacobi(A);
? L
%3 = [-1.000000000000000000, 3.000000000000000000]~
? H
%4 =
[ 0.7071067811865475245 0.7071067811865475244]
[-0.7071067811865475244 0.7071067811865475245]
? norml2( (A-L[1])*H[,1] ) \\ approximate eigenvector
%5 = 9.403954806578300064 E-38
? norml2(H*H~ - 1)
%6 = 2.350988701644575016 E-38 \\ close to orthogonal
@eprog
The library syntax is \fun{GEN}{jacobi}{GEN A, long prec}.
\subsec{qflll$(x,\{\fl=0\})$}\kbdsidx{qflll}\label{se:qflll}
\idx{LLL} algorithm applied to the
\emph{columns} of the matrix $x$. The columns of $x$ may be linearly
dependent. The result is by default a unimodular transformation matrix $T$
such that $x \cdot T$ is an LLL-reduced basis of the lattice generated by
the column vectors of $x$. Note that if $x$ is not of maximal rank $T$ will
not be square. The LLL parameters are $(0.51,0.99)$, meaning that the
Gram-Schmidt coefficients for the final basis satisfy $|\mu_{i,j}| \leq
0.51$, and Lov\'{a}sz's constant is $0.99$.
If $\fl=0$ (default), assume that $x$ has either exact (integral or
rational) or real floating point entries. The matrix is rescaled, converted
to integers and the behavior is then as in $\fl = 1$.
If $\fl=1$, assume that $x$ is integral. Computations involving Gram-Schmidt
vectors are approximate, with precision varying as needed (Lehmer's trick,
as generalized by Schnorr). Adapted from Nguyen and Stehl\'e's algorithm
and Stehl\'e's code (\kbd{fplll-1.3}).
If $\fl=2$, $x$ should be an integer matrix whose columns are linearly
independent. Returns a partially reduced basis for $x$, using an unpublished
algorithm by Peter Montgomery: a basis is said to be \emph{partially reduced}
if $|v_i \pm v_j| \geq |v_i|$ for any two distinct basis vectors $v_i, \,
v_j$. This is faster than $\fl=1$, esp. when one row is huge compared
to the other rows (knapsack-style), and should quickly produce relatively
short vectors. The resulting basis is \emph{not} LLL-reduced in general.
If LLL reduction is eventually desired, avoid this partial reduction:
applying LLL to the partially reduced matrix is significantly \emph{slower}
than starting from a knapsack-type lattice.
If $\fl=3$, as $\fl=1$, but the reduction is performed in place: the
routine returns $x \cdot T$. This is usually faster for knapsack-type
lattices.
If $\fl=4$, as $\fl=1$, returning a vector $[K, T]$ of matrices: the
columns of $K$ represent a basis of the integer kernel of $x$
(not LLL-reduced in general) and $T$ is the transformation
matrix such that $x\cdot T$ is an LLL-reduced $\Z$-basis of the image
of the matrix $x$.
If $\fl=5$, as case $4$, but $x$ may have polynomial coefficients.
If $\fl=8$, same as case $0$, but $x$ may have polynomial coefficients.
\bprog
? \p500
realprecision = 500 significant digits
? a = 2*cos(2*Pi/97);
? C = 10^450;
? v = powers(a,48); b = round(matconcat([matid(48),C*v]~));
? p = b * qflll(b)[,1]; \\ tiny linear combination of powers of 'a'
time = 4,470 ms.
? exponent(v * p / C)
%5 = -1418
? p3 = qflll(b,3)[,1]; \\ compute in place, faster
time = 3,790 ms.
? p3 == p \\ same result
%7 = 1
? p2 = b * qflll(b,2)[,1]; \\ partial reduction: faster, not as good
time = 343 ms.
? exponent(v * p2 / C)
%9 = -1190
@eprog
The library syntax is \fun{GEN}{qflll0}{GEN x, long flag}.
Also available are \fun{GEN}{lll}{GEN x} ($\fl=0$),
\fun{GEN}{lllint}{GEN x} ($\fl=1$), and \fun{GEN}{lllkerim}{GEN x} ($\fl=4$).
\subsec{qflllgram$(G,\{\fl=0\})$}\kbdsidx{qflllgram}\label{se:qflllgram}
Same as \kbd{qflll}, except that the
matrix $G = \kbd{x\til * x}$ is the Gram matrix of some lattice vectors $x$,
and not the coordinates of the vectors themselves. In particular, $G$ must
now be a square symmetric real matrix, corresponding to a positive
quadratic form (not necessarily definite: $x$ need not have maximal rank).
The result is a unimodular
transformation matrix $T$ such that $x \cdot T$ is an LLL-reduced basis of
the lattice generated by the column vectors of $x$. See \tet{qflll} for
further details about the LLL implementation.
If $\fl=0$ (default), assume that $G$ has either exact (integral or
rational) or real floating point entries. The matrix is rescaled, converted
to integers and the behavior is then as in $\fl = 1$.
If $\fl=1$, assume that $G$ is integral. Computations involving Gram-Schmidt
vectors are approximate, with precision varying as needed (Lehmer's trick,
as generalized by Schnorr). Adapted from Nguyen and Stehl\'e's algorithm
and Stehl\'e's code (\kbd{fplll-1.3}).
$\fl=4$: $G$ has integer entries, gives the kernel and reduced image of $x$.
$\fl=5$: same as $4$, but $G$ may have polynomial coefficients.
The library syntax is \fun{GEN}{qflllgram0}{GEN G, long flag}.
Also available are \fun{GEN}{lllgram}{GEN G} ($\fl=0$),
\fun{GEN}{lllgramint}{GEN G} ($\fl=1$), and \fun{GEN}{lllgramkerim}{GEN G}
($\fl=4$).
\subsec{qfminim$(x,\{B\},\{m\},\{\fl=0\})$}\kbdsidx{qfminim}\label{se:qfminim}
$x$ being a square and symmetric matrix of dimension $d$ representing
a positive definite quadratic form, this function deals with the vectors of
$x$ whose norm is less than or equal to $B$, enumerated using the
Fincke-Pohst algorithm, storing at most $m$ pairs of vectors: only one
vector is given for each pair $\pm v$. There is no limit if $m$ is omitted:
beware that this may be a huge vector! The vectors are returned in no
particular order.
The function searches for the minimal nonzero vectors if $B$ is omitted.
The behavior is undefined if $x$ is not positive definite (a ``precision too
low'' error is most likely, although more precise error messages are
possible). The precise behavior depends on $\fl$.
\item If $\fl=0$ (default), return $[N, M, V]$, where $N$ is the number of
vectors enumerated (an even number, possibly larger than $2m$), $M \leq B$
is the maximum norm found, and $V$ is a matrix whose columns are found
vectors.
\item If $\fl=1$, ignore $m$ and return $[M,v]$, where $v$ is a nonzero
vector of length $M \leq B$. If no nonzero vector has length $\leq B$,
return $[]$. If no explicit $B$ is provided, return a vector of smallish
norm, namely the vector of smallest length (usually the first one but not
always) in an LLL-reduced basis for $x$.
In these two cases, $x$ must have integral \emph{small} entries: more
precisely, we definitely must have $d\cdot \|x\|_\infty^2 < 2^{53}$ but
even that may not be enough. The implementation uses low precision floating
point computations for maximal speed and gives incorrect results when $x$
has large entries. That condition is checked in the code and the routine
raises an error if large rounding errors occur. A more robust, but much
slower, implementation is chosen if the following flag is used:
\item If $\fl=2$, $x$ can have nonintegral real entries, but this is also
useful when $x$ has large integral entries. Return $[N, M, V]$ as in case
$\fl = 0$, where $M$ is returned as a floating point number. If $x$ is
inexact and $B$ is omitted, the ``minimal'' vectors in $V$ only have
approximately the same norm (up to the internal working accuracy).
This version is very robust but still offers no hard and fast guarantee
about the result: it involves floating point operations performed at a high
floating point precision depending on your input, but done without rigorous
tracking of roundoff errors (as would be provided by interval arithmetic for
instance). No example is known where the input is exact but the function
returns a wrong result.
\bprog
? x = matid(2);
? qfminim(x) \\@com 4 minimal vectors of norm 1: $\pm[0,1]$, $\pm[1,0]$
%2 = [4, 1, [0, 1; 1, 0]]
? { x = \\ The Leech lattice
[4, 2, 0, 0, 0,-2, 0, 0, 0, 0, 0, 0, 1,-1, 0, 0, 0, 1, 0,-1, 0, 0, 0,-2;
2, 4,-2,-2, 0,-2, 0, 0, 0, 0, 0, 0, 0,-1, 0, 0, 0, 0, 0,-1, 0, 1,-1,-1;
0,-2, 4, 0,-2, 0, 0, 0, 0, 0, 0, 0,-1, 1, 0, 0, 1, 0, 0, 1,-1,-1, 0, 0;
0,-2, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 1,-1, 0, 0, 0, 1,-1, 0, 1,-1, 1, 0;
0, 0,-2, 0, 4, 0, 0, 0, 1,-1, 0, 0, 1, 0, 0, 0,-2, 0, 0,-1, 1, 1, 0, 0;
-2, -2,0, 0, 0, 4,-2, 0,-1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0,-1, 1, 1;
0, 0, 0, 0, 0,-2, 4,-2, 0, 0, 0, 0, 0, 1, 0, 0, 0,-1, 0, 0, 0, 1,-1, 0;
0, 0, 0, 0, 0, 0,-2, 4, 0, 0, 0, 0,-1, 0, 0, 0, 0, 0,-1,-1,-1, 0, 1, 0;
0, 0, 0, 0, 1,-1, 0, 0, 4, 0,-2, 0, 1, 1, 0,-1, 0, 1, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0,-1, 0, 0, 0, 0, 4, 0, 0, 1, 1,-1, 1, 0, 0, 0, 1, 0, 0, 1, 0;
0, 0, 0, 0, 0, 0, 0, 0,-2, 0, 4,-2, 0,-1, 0, 0, 0,-1, 0,-1, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,-2, 4,-1, 1, 0, 0,-1, 1, 0, 1, 1, 1,-1, 0;
1, 0,-1, 1, 1, 0, 0,-1, 1, 1, 0,-1, 4, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1,-1;
-1,-1, 1,-1, 0, 0, 1, 0, 1, 1,-1, 1, 0, 4, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1;
0, 0, 0, 0, 0, 0, 0, 0, 0,-1, 0, 0, 0, 1, 4, 0, 0, 0, 1, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0, 0,-1, 1, 0, 0, 1, 1, 0, 4, 0, 0, 0, 0, 1, 1, 0, 0;
0, 0, 1, 0,-2, 0, 0, 0, 0, 0, 0,-1, 0, 0, 0, 0, 4, 1, 1, 1, 0, 0, 1, 1;
1, 0, 0, 1, 0, 0,-1, 0, 1, 0,-1, 1, 1, 0, 0, 0, 1, 4, 0, 1, 1, 0, 1, 0;
0, 0, 0,-1, 0, 1, 0,-1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 4, 0, 1, 1, 0, 1;
-1, -1,1, 0,-1, 1, 0,-1, 0, 1,-1, 1, 0, 1, 0, 0, 1, 1, 0, 4, 0, 0, 1, 1;
0, 0,-1, 1, 1, 0, 0,-1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 4, 1, 0, 1;
0, 1,-1,-1, 1,-1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 4, 0, 1;
0,-1, 0, 1, 0, 1,-1, 1, 0, 1, 0,-1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 4, 1;
-2,-1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,-1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 4]; }
? qfminim(x,,0) \\ 0: don't store minimal vectors
time = 121 ms.
%4 = [196560, 4, [;]] \\ 196560 minimal vectors of norm 4
? qfminim(x) \\ store all minimal vectors !
time = 821 ms.
? qfminim(x,,0,2); \\ safe algorithm. Slower and unnecessary here.
time = 5,540 ms.
%6 = [196560, 4.000061035156250000, [;]]
? qfminim(x,,,2); \\ safe algorithm; store all minimal vectors
time = 6,602 ms.
@eprog\noindent\sidx{Leech lattice}\sidx{minimal vector}
In this example, storing 0 vectors limits memory use; storing all of them
requires a \kbd{parisize} about 50MB. All minimal vectors are nevertheless
enumerated in both cases of course, which means the speedup is likely to be
marginal.
The library syntax is \fun{GEN}{qfminim0}{GEN x, GEN B = NULL, GEN m = NULL, long flag, long prec}.
Also available are
\fun{GEN}{minim}{GEN x, GEN B = NULL, GEN m = NULL} ($\fl=0$),
\fun{GEN}{minim2}{GEN x, GEN B = NULL, GEN m = NULL} ($\fl=1$).
\fun{GEN}{minim_raw}{GEN x, GEN B = NULL, GEN m = NULL} (do not perform LLL
reduction on x and return \kbd{NULL} on accuracy error).
\fun{GEN}{minim_zm}{GEN x, GEN B = NULL, GEN m = NULL} ($\fl=0$, return vectors as
\typ{VECSMALL} to save memory)
\subsec{qfminimize$(G)$}\kbdsidx{qfminimize}\label{se:qfminimize}
Given a square symmetric matrix $G$ with rational coefficients and nonzero
determinant, of dimension $n \geq 1$, return \kbd{[H,U]} such that
\kbd{H = c*U\til*G*U} for some rational $c$, and $H$ integral with minimal
determinant. The coefficients of $U$ are usually nonintegral.
\bprog
? G = matdiagonal([650, -104329, -104329]);
? [H,U]=qfminimize(G); H
%2 = [-1,0,0;0,-1,0;0,0,1]
? U
%3 = [0,0,1/5;5/323,-1/323,0;-1/323,-5/323,0]
? U~*G*U
%4 = [-26,0,0;0,-26,0;0,0,26]
@eprog
Hence $c = 1/26$ in this example.
The library syntax is \fun{GEN}{qfminimize}{GEN G}.
\subsec{qfnorm$(x,\{q\})$}\kbdsidx{qfnorm}\label{se:qfnorm}
This function is obsolete, use \kbd{qfeval}.
The library syntax is \fun{GEN}{qfnorm}{GEN x, GEN q = NULL}.
\subsec{qforbits$(G,V)$}\kbdsidx{qforbits}\label{se:qforbits}
Return the orbits of $V$ under the action of the group
of linear transformation generated by the set $G$.
It is assumed that $G$ contains minus identity, and only one vector
in $\{v, -v\}$ should be given.
If $G$ does not stabilize $V$, the function returns $0$.
In the example below, we compute representatives and lengths of the orbits of
the vectors of norm $\leq 3$ under the automorphisms of the lattice $\Z^6$.
\bprog
? Q=matid(6); G=qfauto(Q); V=qfminim(Q,3);
? apply(x->[x[1],#x],qforbits(G,V))
%2 = [[[0,0,0,0,0,1]~,6],[[0,0,0,0,1,-1]~,30],[[0,0,0,1,-1,-1]~,80]]
@eprog
The library syntax is \fun{GEN}{qforbits}{GEN G, GEN V}.
\subsec{qfparam$(G, \var{sol}, \{\fl = 0\})$}\kbdsidx{qfparam}\label{se:qfparam}
Coefficients of binary quadratic forms that parametrize the
solutions of the ternary quadratic form $G$, using the particular
solution~\var{sol}.
\fl{} is optional and can be 1, 2, or 3, in which case the \fl-th form is
reduced. The default is \fl=0 (no reduction).
\bprog
? G = [1,0,0;0,1,0;0,0,-34];
? M = qfparam(G, qfsolve(G))
%2 =
[ 3 -10 -3]
[-5 -6 5]
[ 1 0 1]
@eprog
Indeed, the solutions can be parametrized as
$$(3x^2 - 10xy - 3y^2)^2 + (-5x^2 - 6xy + 5y^2)^2 -34(x^2 + y^2)^2 = 0.$$
\bprog
? v = y^2 * M*[(x/y)^2, x/y, 1]~
%3 = [3*x^2 - 10*y*x - 3*y^2, -5*x^2 - 6*y*x + 5*y^2, x^2 + y^2]~
? v~*G*v
%4 = 0
@eprog
The library syntax is \fun{GEN}{qfparam}{GEN G, GEN sol, long flag}.
\subsec{qfperfection$(G)$}\kbdsidx{qfperfection}\label{se:qfperfection}
$G$ being a square and symmetric matrix with integer entries
representing a positive definite quadratic form, outputs the perfection rank
of the form, that is, the rank of the family of the symmetric
matrices $vv^t$, where $v$ runs through the minimal vectors.
A form is perfect if and only if its perfection rank is $d(d+1)/2$ where
$d$ is the dimension of $G$.
The algorithm computes the minimal vectors and its runtime is exponential
in $d$.
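For instance, in dimension $2$, where a perfect form has perfection rank
$3$:
\bprog
? qfperfection(matid(2)) \\ minimal vectors +-[1,0], +-[0,1]: not perfect
%1 = 2
? qfperfection([2,1; 1,2]) \\ hexagonal lattice: perfect
%2 = 3
@eprog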
The library syntax is \fun{GEN}{qfperfection}{GEN G}.
\subsec{qfrep$(q,B,\{\fl=0\})$}\kbdsidx{qfrep}\label{se:qfrep}
$q$ being a square and symmetric matrix with integer entries representing a
positive definite quadratic form, count the vectors representing successive
integers.
\item If $\fl = 0$, count all vectors. Outputs the vector whose $i$-th
entry, $1 \leq i \leq B$, is half the number of vectors $v$ such that $q(v)=i$.
\item If $\fl = 1$, count vectors of even norm. Outputs the vector
whose $i$-th entry, $1 \leq i \leq B$, is half the number of vectors such
that $q(v) = 2i$.
\bprog
? q = [2, 1; 1, 3];
? qfrep(q, 5)
%2 = Vecsmall([0, 1, 2, 0, 0]) \\ 1 vector of norm 2, 2 of norm 3, etc.
? qfrep(q, 5, 1)
%3 = Vecsmall([1, 0, 0, 1, 0]) \\ 1 vector of norm 2, 0 of norm 4, etc.
@eprog\noindent
This routine uses a naive algorithm based on \tet{qfminim}, and
will fail if any entry becomes larger than $2^{31}$ (or $2^{63}$).
The library syntax is \fun{GEN}{qfrep0}{GEN q, GEN B, long flag}.
\subsec{qfsign$(x)$}\kbdsidx{qfsign}\label{se:qfsign}
Returns $[p,m]$ the signature of the quadratic form represented by the
symmetric matrix $x$. Namely, $p$ (resp.~$m$) is the number of positive
(resp.~negative) eigenvalues of $x$. The result is computed using Gaussian
reduction.
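For instance:
\bprog
? qfsign([1,0; 0,-1])
%1 = [1, 1]
? qfsign(matid(3))
%2 = [3, 0]
@eprog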
The library syntax is \fun{GEN}{qfsign}{GEN x}.
\subsec{qfsolve$(G)$}\kbdsidx{qfsolve}\label{se:qfsolve}
Given a square symmetric matrix $G$ of dimension $n \geq 1$, solve over
$\Q$ the quadratic equation $X^tGX = 0$. The matrix $G$ must have rational
coefficients. The solution might be a single nonzero column vector
(\typ{COL}) or a matrix (whose columns generate a totally isotropic
subspace).
If no solution exists, returns an integer, that can be a prime $p$ such that
there is no local solution at $p$, or $-1$ if there is no real solution,
or $-2$ if $n = 2$ and $-\det G$ is not a square (which implies there is a
real solution, but no local solution at some $p$ dividing $\det G$).
\bprog
? G = [1,0,0;0,1,0;0,0,-34];
? qfsolve(G)
%1 = [-3, -5, 1]~
? qfsolve([1,0; 0,2])
%2 = -1 \\ no real solution
? qfsolve([1,0,0;0,3,0; 0,0,-2])
%3 = 3 \\ no solution in Q_3
? qfsolve([1,0; 0,-2])
%4 = -2 \\ no solution, n = 2
@eprog
The library syntax is \fun{GEN}{qfsolve}{GEN G}.
\subsec{setbinop$(f,X,\{Y\})$}\kbdsidx{setbinop}\label{se:setbinop}
The set whose elements are the $f(x,y)$, where $x$, $y$ run through $X$, $Y$
respectively. If $Y$ is omitted, assume that $X = Y$ and that $f$ is symmetric:
$f(x,y) = f(y,x)$ for all $x,y$ in $X$.
\bprog
? X = [1,2,3]; Y = [2,3,4];
? setbinop((x,y)->x+y, X,Y) \\ set X + Y
%2 = [3, 4, 5, 6, 7]
? setbinop((x,y)->x-y, X,Y) \\ set X - Y
%3 = [-3, -2, -1, 0, 1]
? setbinop((x,y)->x+y, X) \\ set 2X = X + X
%4 = [2, 3, 4, 5, 6]
@eprog
The library syntax is \fun{GEN}{setbinop}{GEN f, GEN X, GEN Y = NULL}.
\subsec{setdelta$(x,y)$}\kbdsidx{setdelta}\label{se:setdelta}
Symmetric difference of the two sets $x$ and $y$ (see \kbd{setisset}).
If $x$ or $y$ is not a set, the result is undefined.
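For instance:
\bprog
? setdelta(Set([1,2,3]), Set([2,3,4]))
%1 = [1, 4]
@eprog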
The library syntax is \fun{GEN}{setdelta}{GEN x, GEN y}.
\subsec{setintersect$(x,y)$}\kbdsidx{setintersect}\label{se:setintersect}
Intersection of the two sets $x$ and $y$ (see \kbd{setisset}).
If $x$ or $y$ is not a set, the result is undefined.
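For instance:
\bprog
? setintersect(Set([1,2,3]), Set([2,3,4]))
%1 = [2, 3]
@eprog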
The library syntax is \fun{GEN}{setintersect}{GEN x, GEN y}.
\subsec{setisset$(x)$}\kbdsidx{setisset}\label{se:setisset}
Returns true (1) if $x$ is a set, false (0) if
not. In PARI, a set is a row vector whose entries are strictly
increasing with respect to a (somewhat arbitrary) universal comparison
function. To convert any object into a set (this is most useful for
vectors, of course), use the function \kbd{Set}.
\bprog
? a = [3, 1, 1, 2];
? setisset(a)
%2 = 0
? Set(a)
%3 = [1, 2, 3]
@eprog
The library syntax is \fun{long}{setisset}{GEN x}.
\subsec{setminus$(x,y)$}\kbdsidx{setminus}\label{se:setminus}
Difference of the two sets $x$ and $y$ (see \kbd{setisset}),
i.e.~set of elements of $x$ which do not belong to $y$.
If $x$ or $y$ is not a set, the result is undefined.
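For instance:
\bprog
? setminus(Set([1,2,3]), Set([2,3,4]))
%1 = [1]
@eprog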
The library syntax is \fun{GEN}{setminus}{GEN x, GEN y}.
\subsec{setsearch$(S,x,\{\fl=0\})$}\kbdsidx{setsearch}\label{se:setsearch}
Determines whether $x$ belongs to the set $S$ (see \kbd{setisset}).
We first describe the default behavior, when $\fl$ is zero or omitted. If $x$
belongs to the set $S$, returns the index $j$ such that $S[j]=x$, otherwise
returns 0.
\bprog
? T = [7,2,3,5]; S = Set(T);
? setsearch(S, 2)
%2 = 1
? setsearch(S, 4) \\ not found
%3 = 0
? setsearch(T, 7) \\ search in an unsorted vector
%4 = 0 \\ WRONG !
@eprog\noindent
If $S$ is not a set, we also allow sorted lists with
respect to the \tet{cmp} sorting function, without repeated entries,
as per \tet{listsort}$(L,1)$; otherwise the result is undefined.
\bprog
? L = List([1,4,2,3,2]); setsearch(L, 4)
%1 = 0 \\ WRONG !
? listsort(L, 1); L \\ sort L first
%2 = List([1, 2, 3, 4])
? setsearch(L, 4)
%3 = 4 \\ now correct
@eprog\noindent
If $\fl$ is nonzero, this function returns the index $j$ where $x$ should be
inserted, and $0$ if it already belongs to $S$. This is meant to be used for
dynamically growing (sorted) lists, in conjunction with \kbd{listinsert}.
\bprog
? L = List([1,5,2,3,2]); listsort(L,1); L
%1 = List([1,2,3,5])
? j = setsearch(L, 4, 1) \\ 4 should have been inserted at index j
%2 = 4
? listinsert(L, 4, j); L
%3 = List([1, 2, 3, 4, 5])
@eprog
The library syntax is \fun{long}{setsearch}{GEN S, GEN x, long flag}.
\subsec{setunion$(x,y)$}\kbdsidx{setunion}\label{se:setunion}
Union of the two sets $x$ and $y$ (see \kbd{setisset}).
If $x$ or $y$ is not a set, the result is undefined.
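For example:
\bprog
? setunion([1,2,3], [2,3,4])
%1 = [1, 2, 3, 4]
@eprog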
The library syntax is \fun{GEN}{setunion}{GEN x, GEN y}.
\subsec{snfrank$(D,q)$}\kbdsidx{snfrank}\label{se:snfrank}
Assuming that $D$ is a Smith normal form
(i.e. vector of elementary divisors) for some module and $q$ a power of an
irreducible element or $0$, return the minimal number of generators for
$D/qD$. For instance, if $q=p^n$ where $p$ is a prime number, this is the
dimension of $(p^{n-1}D)/p^nD$ as an $\F_p$-vector space.
\bprog
? snfrank([4,4,2], 2)
%1 = 3
? snfrank([4,4,2], 4)
%2 = 2
? snfrank([4,4,2], 8)
%3 = 0
? snfrank([4,4,2], 0)
%4 = 3
@eprog\noindent The function also works for $K[x]$-modules:
\bprog
? D=matsnf([-x-5,-1,-1,0; 0,x^2+10*x+26,-1,-x-5; 1,-x-5,-x-5,1; -1,0,0,1]);
? snfrank(D, x^2 + 10*x + 27)
%6 = 2
? A=matdiagonal([x-1,x^2+1,x-1,(x^2+1)^2,x,(x-1)^2]); D=matsnf(A);
? snfrank(D,x-1)
%8 = 3
? snfrank(D,(x-1)^2)
%9 = 1
? snfrank(D,(x-1)^3)
%9 = 0
? snfrank(D,x^2+1)
%10 = 2
@eprog\noindent Finally this function supports any output from \kbd{matsnf}
(e.g., with transformation matrices included, with or without cleanup).
The library syntax is \fun{long}{snfrank}{GEN D, GEN q}.
\subsec{trace$(x)$}\kbdsidx{trace}\label{se:trace}
This applies to quite general $x$. If $x$ is not a
matrix, the trace is the sum of $x$ and its conjugate, except for polmods,
where it is the trace of $x$ as an algebraic number.
For $x$ a square matrix, it is the ordinary trace. If $x$ is a
nonsquare matrix (but not a vector), an error occurs.
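For example:
\bprog
? trace(1 + 2*I)
%1 = 2
? trace(Mod(x, x^2 - x + 1))  \\ trace of an algebraic number
%2 = 1
? trace([1,2; 3,4])
%3 = 5
@eprog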
The library syntax is \fun{GEN}{gtrace}{GEN x}.
\subsec{vecextract$(x,y,\{z\})$}\kbdsidx{vecextract}\label{se:vecextract}
Extraction of components of the vector or matrix $x$ according to $y$.
In case $x$ is a matrix, its components are the \emph{columns} of $x$. The
parameter $y$ is a component specifier, which is either an integer, a string
describing a range, or a vector.
If $y$ is an integer, it is considered as a mask: the binary bits of $y$ are
read from right to left, but correspond to taking the components from left to
right. For example, if $y=13=(1101)_2$ then the components 1,3 and 4 are
extracted.
If $y$ is a vector (\typ{VEC}, \typ{COL} or \typ{VECSMALL}), which must have
integer entries, these entries correspond to the component numbers to be
extracted, in the order specified.
If $y$ is a string, it can be
\item a single (nonzero) index giving a component number (a negative
index means we start counting from the end).
\item a range of the form \kbd{"$a$..$b$"}, where $a$ and $b$ are
indexes as above. Any of $a$ and $b$ can be omitted; in this case, we take
as default values $a = 1$ and $b = -1$, i.e.~ the first and last components
respectively. We then extract all components in the interval $[a,b]$, in
reverse order if $b < a$.
In addition, if the first character in the string is \kbd{\pow}, the
complement of the given set of indices is taken.
If $z$ is not omitted, $x$ must be a matrix. $y$ is then the \emph{row}
specifier, and $z$ the \emph{column} specifier, where the component specifier
is as explained above.
\bprog
? v = [a, b, c, d, e];
? vecextract(v, 5) \\@com mask
%1 = [a, c]
? vecextract(v, [4, 2, 1]) \\@com component list
%2 = [d, b, a]
? vecextract(v, "2..4") \\@com interval
%3 = [b, c, d]
? vecextract(v, "-1..-3") \\@com interval + reverse order
%4 = [e, d, c]
? vecextract(v, "^2") \\@com complement
%5 = [a, c, d, e]
? vecextract(matid(3), "2..", "..")
%6 =
[0 1 0]
[0 0 1]
@eprog
The range notations \kbd{v[i..j]} and \kbd{v[\pow i]} (for \typ{VEC} or
\typ{COL}) and \kbd{M[i..j, k..l]} and friends (for \typ{MAT}) implement a
subset of the above, in a simpler and \emph{faster} way, hence should be
preferred in most common situations. The following features are not
implemented in the range notation:
\item reverse order,
\item omitting either $a$ or $b$ in \kbd{$a$..$b$}.
The library syntax is \fun{GEN}{extract0}{GEN x, GEN y, GEN z = NULL}.
\subsec{vecprod$(v)$}\kbdsidx{vecprod}\label{se:vecprod}
Return the product of the components of the vector $v$. Return $1$ on an
empty vector.
\bprog
? vecprod([1,2,3])
%1 = 6
? vecprod([])
%2 = 1
@eprog
The library syntax is \fun{GEN}{vecprod}{GEN v}.
\subsec{vecsearch$(v,x,\{\var{cmpf}\})$}\kbdsidx{vecsearch}\label{se:vecsearch}
Determines whether $x$ belongs to the sorted vector or list $v$: return
the (positive) index where $x$ was found, or $0$ if it does not belong to
$v$.
If the comparison function cmpf is omitted, we assume that $v$ is sorted in
increasing order, according to the standard comparison function \kbd{lex},
thereby restricting the possible types for $x$ and the elements of $v$
(integers, fractions, reals, and vectors of such). We also transparently
allow a \typ{VECSMALL} $x$ in this case, for the natural ordering of the
integers.
If \kbd{cmpf} is present, it is understood as a comparison function and we
assume that $v$ is sorted according to it, see \tet{vecsort} for how to
encode comparison functions.
\bprog
? v = [1,3,4,5,7];
? vecsearch(v, 3)
%2 = 2
? vecsearch(v, 6)
%3 = 0 \\ not in the list
? vecsearch([7,6,5], 5) \\ unsorted vector: result undefined
%4 = 0
@eprog\noindent Note that if we are searching with respect to a key
which is expensive to compute (e.g. a discriminant), it is more efficient
to precompute all keys, sort the vector of keys, then search in that
vector, than to search in the original vector with respect to a comparison
function.
By abuse of notation, $x$ is also allowed to be a matrix, seen as a vector
of its columns; again by abuse of notation, a \typ{VEC} is considered
as part of the matrix, if its transpose is one of the matrix columns.
\bprog
? v = vecsort([3,0,2; 1,0,2]) \\ sort matrix columns according to lex order
%1 =
[0 2 3]
[0 2 1]
? vecsearch(v, [3,1]~)
%2 = 3
? vecsearch(v, [3,1]) \\ can search for x or x~
%3 = 3
? vecsearch(v, [1,2])
%4 = 0 \\ not in the list
@eprog\noindent
The library syntax is \fun{long}{vecsearch}{GEN v, GEN x, GEN cmpf = NULL}.
\subsec{vecsort$(x,\{\var{cmpf}\},\{\fl=0\})$}\kbdsidx{vecsort}\label{se:vecsort}
Sorts the vector $x$ in ascending order, using a mergesort method.
$x$ must be a list, vector or matrix (seen as a vector of its columns).
Note that mergesort is stable, hence the initial ordering of ``equal''
entries (with respect to the sorting criterion) is not changed.
If \kbd{cmpf} is omitted, we use the standard comparison function
\kbd{lex}, thereby restricting the possible types for the elements of $x$
(integers, fractions or reals and vectors of those). We also transparently
allow a \typ{VECSMALL} $x$ in this case, for the standard ordering on the
integers.
If \kbd{cmpf} is present, it is understood as a comparison function and we
sort according to it. The following possibilities exist:
\item an integer $k$: sort according to the value of the $k$-th
subcomponents of the components of~$x$.
\item a vector: sort lexicographically according to the components listed in
the vector. For example, if $\kbd{cmpf}=\kbd{[2,1,3]}$, sort with respect to
the second component, and when these are equal, with respect to the first,
and when these are equal, with respect to the third.
\item a comparison function: \typ{CLOSURE} with two arguments $x$ and $y$,
and returning a real number which is $<0$, $>0$ or $=0$ if $x<y$, $x>y$ or
$x=y$ respectively.
\item a key: \typ{CLOSURE} with one argument $x$ and returning
the value $f(x)$ with respect to which we sort.
\bprog
? vecsort([3,0,2; 1,0,2]) \\ sort columns according to lex order
%1 =
[0 2 3]
[0 2 1]
? vecsort(v, (x,y)->y-x) \\@com reverse sort
? vecsort(v, (x,y)->abs(x)-abs(y)) \\@com sort by increasing absolute value
? vecsort(v, abs) \\@com sort by increasing absolute value, using key
? cmpf(x,y) = my(dx = poldisc(x), dy = poldisc(y)); abs(dx) - abs(dy);
? v = [x^2+1, x^3-2, x^4+5*x+1];
? vecsort(v, cmpf) \\@com comparison function
? vecsort(v, x->abs(poldisc(x))) \\@com key
@eprog\noindent
The \kbd{abs} and \kbd{cmpf} examples show how to use a named function
instead of an anonymous function. It is preferable to use a \var{key}
whenever possible rather than include it in the comparison function as above
since the key is evaluated $O(n)$ times instead of $O(n\log n)$,
where $n$ is the number of entries.
A direct approach is also possible and equivalent to using a sorting key:
\bprog
? T = [abs(poldisc(x)) | x<-v];
? perm = vecsort(T,,1); \\@com indirect sort
? vecextract(v, perm)
@eprog\noindent This also provides the vector $T$ of all keys, which is
interesting for instance in later \tet{vecsearch} calls: it is more
efficient to sort $T$ (\kbd{T = vecextract(T, perm)}) then search for a key
in $T$ rather than to search in $v$ using a comparison function or a key.
Note also that \tet{mapisdefined} is often easier to use and faster than
\kbd{vecsearch}.
\noindent The binary digits of \fl\ mean:
\item 1: indirect sorting of the vector $x$, i.e.~if $x$ is an
$n$-component vector, returns a permutation of $[1,2,\dots,n]$ which
applied to the components of $x$ sorts $x$ in increasing order.
For example, \kbd{vecextract(x, vecsort(x,,1))} is equivalent to
\kbd{vecsort(x)}.
\item 4: use descending instead of ascending order.
\item 8: remove ``duplicate'' entries with respect to the sorting function
(keep the first occurring entry). For example:
\bprog
? vecsort([Pi,Mod(1,2),z], (x,y)->0, 8) \\@com make everything compare equal
%1 = [3.141592653589793238462643383]
? vecsort([[2,3],[0,1],[0,3]], 2, 8)
%2 = [[0, 1], [2, 3]]
@eprog
The library syntax is \fun{GEN}{vecsort0}{GEN x, GEN cmpf = NULL, long flag}.
\subsec{vecsum$(v)$}\kbdsidx{vecsum}\label{se:vecsum}
Return the sum of the components of the vector $v$. Return $0$ on an
empty vector.
\bprog
? vecsum([1,2,3])
%1 = 6
? vecsum([])
%2 = 0
@eprog
The library syntax is \fun{GEN}{vecsum}{GEN v}.
\subsec{vector$(n,\{X\},\{\var{expr}=0\})$}\kbdsidx{vector}\label{se:vector}
Creates a row vector (type
\typ{VEC}) with $n$ components whose components are the expression
\var{expr} evaluated at the integer points between 1 and $n$. If the last
two arguments are omitted, fills the vector with zeroes.
\bprog
? vector(3,i, 5*i)
%1 = [5, 10, 15]
? vector(3)
%2 = [0, 0, 0]
@eprog
The variable $X$ is lexically scoped to each evaluation of \var{expr}. Any
change to $X$ within \var{expr} does not affect subsequent evaluations: the
index still runs from $1$ to $n$. A local change allows for example different
indexing:
\bprog
vector(10, i, i=i-1; f(i)) \\ i = 0, ..., 9
vector(10, i, i=2*i; f(i)) \\ i = 2, 4, ..., 20
@eprog\noindent
This per-element scope for $X$ differs from \kbd{for} loop evaluations,
as the following example shows:
\bprog
n = 3
v = vector(n); vector(n, i, i++) ----> [2, 3, 4]
v = vector(n); for (i = 1, n, v[i] = i++) ----> [2, 0, 4]
@eprog\noindent
%\syn{NO}
\subsec{vectorsmall$(n,\{X\},\{\var{expr}=0\})$}\kbdsidx{vectorsmall}\label{se:vectorsmall}
Creates a row vector of small integers (type \typ{VECSMALL}) with $n$
components whose components are the expression \var{expr} evaluated at the
integer points between 1 and $n$.
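If the last two arguments are omitted, the vector is filled with zeroes. For
example:
\bprog
? vectorsmall(3, i, 2*i)
%1 = Vecsmall([2, 4, 6])
? vectorsmall(3)
%2 = Vecsmall([0, 0, 0])
@eprog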
%\syn{NO}
\subsec{vectorv$(n,\{X\},\{\var{expr}=0\})$}\kbdsidx{vectorv}\label{se:vectorv}
As \tet{vector}, but returns a column vector (type \typ{COL}).
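For example:
\bprog
? vectorv(3, i, i^2)
%1 = [1, 4, 9]~
@eprog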
%\syn{NO}
\section{Transcendental functions}\label{se:trans}
Since the values of transcendental functions cannot be exactly represented,
these functions will always return an inexact object: a real number,
a complex number, a $p$-adic number or a power series. All these objects
have a certain finite precision.
As a general rule, which of course in some cases may have exceptions,
transcendental functions operate in the following way:
\item If the argument is either a real number or an inexact complex number
(like \kbd{1.0 + I} or \kbd{Pi*I} but not \kbd{2 - 3*I}), then the
computation is done with the precision of the argument.
In the example below, we see that changing the precision to $50$ digits does
not matter, because $x$ only had a precision of $19$ digits.
\bprog
? \p 15
realprecision = 19 significant digits (15 digits displayed)
? x = Pi/4
%1 = 0.785398163397448
? \p 50
realprecision = 57 significant digits (50 digits displayed)
? sin(x)
%2 = 0.7071067811865475244
@eprog
Note that even if the argument is real, the result may be complex
(e.g.~$\text{acos}(2.0)$ or $\text{acosh}(0.0)$). See each individual
function help for the definition of the branch cuts and choice of principal
value.
\item If the argument is either an integer, a rational, an exact complex
number or a quadratic number, it is first converted to a real
or complex number using the current \idx{precision}, which can be
viewed and manipulated using the defaults \tet{realprecision} (in decimal
digits) or \tet{realbitprecision} (in bits). This precision can be changed
equivalently
\item in decimal digits: use \b{p} or \kbd{default(realprecision,...)}.
\item in bits: use \b{pb} or \kbd{default(realbitprecision,...)}.
After this conversion, the computation proceeds as above for real or complex
arguments.
In library mode, the \kbd{realprecision} does not matter; instead the
precision is taken from the \kbd{prec} parameter which every transcendental
function has. As in \kbd{gp}, this \kbd{prec} is not used when the argument
to a function is already inexact. Note that the argument \var{prec} stands
for the length in words of a real number, including codewords. Hence we must
have $\var{prec} \geq 3$. (Some functions allow a \kbd{bitprec} argument
instead, which allows finer granularity.)
Some accuracies attainable on 32-bit machines cannot be attained
on 64-bit machines for parity reasons. For example the default \kbd{gp} accuracy
is 28 decimal digits on 32-bit machines, corresponding to \var{prec} having
the value 5, but this cannot be attained on 64-bit machines.
\item If the argument is a polmod (representing an algebraic number),
then the function is evaluated for every possible complex embedding of that
algebraic number. A column vector of results is returned, with one component
for each complex embedding. Therefore, the number of components equals
the degree of the \typ{POLMOD} modulus.
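As a minimal illustration (the ordering of the embeddings follows that of the
roots of the modulus):
\bprog
? exp(Mod(x, x^2 - 2))  \\ evaluated at the two embeddings x -> -sqrt(2), sqrt(2)
@eprog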
\item If the argument is an intmod or a $p$-adic, at present only a
few functions like \kbd{sqrt} (square root), \kbd{sqr} (square), \kbd{log},
\kbd{exp}, powering, \kbd{teichmuller} (Teichm\"uller character) and
\kbd{agm} (arithmetic-geometric mean) are implemented.
Note that in the case of a $2$-adic number, $\kbd{sqr}(x)$ may not be
identical to $x*x$: for example if $x = 1+O(2^5)$ and $y = 1+O(2^5)$ then
$x*y = 1+O(2^5)$ while $\kbd{sqr}(x) = 1+O(2^6)$. Here, $x * x$ yields the
same result as $\kbd{sqr}(x)$ since the two operands are known to be
\emph{identical}. The same statement holds true for $p$-adics raised to the
power $n$, where $v_p(n) > 0$.
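This behavior can be observed directly:
\bprog
? x = 1 + O(2^5); y = 1 + O(2^5);
? x * y   \\ two independent operands: no precision gain
%2 = 1 + O(2^5)
? sqr(x)  \\ operands known to be identical: one extra bit
%3 = 1 + O(2^6)
@eprog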
\misctitle{Remark} If we wanted to be strictly consistent with
the PARI philosophy, we should have $x*y = (4 \mod 8)$ and $\kbd{sqr}(x) =
(4 \mod 32)$ when both $x$ and $y$ are congruent to $2$ modulo $4$.
However, since intmod is an exact object, PARI assumes that the modulus
must not change, and the result is hence $(0\, \mod\, 4)$ in both cases. On
the other hand, $p$-adics are not exact objects, hence are treated
differently.
\item If the argument is a polynomial, a power series or a rational function,
it is, if necessary, first converted to a power series using the current
series precision, held in the default \tet{seriesprecision}. This precision
(the number of significant terms) can be changed using \b{ps} or
\kbd{default(seriesprecision,...)}. Then the Taylor series expansion of the
function around $X=0$ (where $X$ is the main variable) is computed to a
number of terms depending on the number of terms of the argument and the
function being computed.
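For instance, under \kbd{gp}:
\bprog
? default(seriesprecision, 5);
? exp(x)  \\ the polynomial x is first converted to a power series
%2 = 1 + x + 1/2*x^2 + 1/6*x^3 + 1/24*x^4 + O(x^5)
@eprog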
Under \kbd{gp} this again is transparent to the user. When programming in
library mode, however, it is \emph{strongly} advised to perform an explicit
conversion to a power series first, as in
\bprog
x = gtoser(x, gvar(x), seriesprec)
@eprog\noindent
where the number of significant terms \kbd{seriesprec} can be specified
explicitly. If you do not do this, a global variable \kbd{precdl} is used
instead, to convert polynomials and rational functions to a power series with
a reasonable number of terms; tampering with the value of this global
variable is \emph{deprecated} and strongly discouraged.
\item If the argument is a vector or a matrix, the result is the
componentwise evaluation of the function. In particular, transcendental
functions on square matrices, which are not implemented in the present
version \vers, will have a different name if they are implemented some day.
\subsec{Catalan}\kbdsidx{Catalan}\label{se:Catalan}
Catalan's constant $G = \sum_{n\geq 0}\dfrac{(-1)^n}{(2n+1)^2}=0.91596\cdots$.
Note that \kbd{Catalan} is one of the few reserved names which cannot be
used for user variables.
The library syntax is \fun{GEN}{mpcatalan}{long prec}.
\subsec{Euler}\kbdsidx{Euler}\label{se:Euler}
Euler's constant $\gamma=0.57721\cdots$. Note that
\kbd{Euler} is one of the few reserved names which cannot be used for
user variables.
The library syntax is \fun{GEN}{mpeuler}{long prec}.
\subsec{I}\kbdsidx{I}\label{se:I}
The complex number $\sqrt{-1}$.
The library syntax is \fun{GEN}{gen_I}{}.
\subsec{Pi}\kbdsidx{Pi}\label{se:Pi}
The constant $\pi$ ($3.14159\cdots$). Note that \kbd{Pi} is one of the few
reserved names which cannot be used for user variables.
The library syntax is \fun{GEN}{mppi}{long prec}.
\subsec{abs$(x)$}\kbdsidx{abs}\label{se:abs}
Absolute value of $x$ (modulus if $x$ is complex).
Rational functions are not allowed. Contrary to most transcendental
functions, an exact argument is \emph{not} converted to a real number before
applying \kbd{abs} and an exact result is returned if possible.
\bprog
? abs(-1)
%1 = 1
? abs(3/7 + 4/7*I)
%2 = 5/7
? abs(1 + I)
%3 = 1.414213562373095048801688724
@eprog\noindent
If $x$ is a polynomial, returns $-x$ if the leading coefficient is
real and negative else returns $x$. For a power series, the constant
coefficient is considered instead.
The library syntax is \fun{GEN}{gabs}{GEN x, long prec}.
\subsec{acos$(x)$}\kbdsidx{acos}\label{se:acos}
Principal branch of $\cos^{-1}(x) = -i \log (x + i\sqrt{1-x^2})$.
In particular, $\Re(\text{acos}(x))\in [0,\pi]$ and if $x\in \R$ and $|x|>1$,
then $\text{acos}(x)$ is complex. The branch cut is in two pieces:
$]-\infty,-1]$, continuous with quadrant II, and $[1,+\infty[$, continuous
with quadrant IV. We have $\text{acos}(x) = \pi/2 - \text{asin}(x)$ for all
$x$.
The library syntax is \fun{GEN}{gacos}{GEN x, long prec}.
\subsec{acosh$(x)$}\kbdsidx{acosh}\label{se:acosh}
Principal branch of $\cosh^{-1}(x) = 2
\log(\sqrt{(x+1)/2} + \sqrt{(x-1)/2})$. In particular,
$\Re(\text{acosh}(x))\geq 0$ and
$\Im(\text{acosh}(x))\in ]-\pi,\pi]$; if $x\in \R$ and $x<1$, then
$\text{acosh}(x)$ is complex.
The library syntax is \fun{GEN}{gacosh}{GEN x, long prec}.
\subsec{agm$(x,y)$}\kbdsidx{agm}\label{se:agm}
Arithmetic-geometric mean of $x$ and $y$. In the
case of complex or negative numbers, the optimal AGM is returned
(the largest in absolute value over all choices of the signs of the square
roots). $p$-adic or power series arguments are also allowed. Note that
a $p$-adic agm exists only if $x/y$ is congruent to 1 modulo $p$ (modulo
16 for $p=2$). $x$ and $y$ cannot both be vectors or matrices.
The library syntax is \fun{GEN}{agm}{GEN x, GEN y, long prec}.
\subsec{airy$(z)$}\kbdsidx{airy}\label{se:airy}
Airy $[Ai,Bi]$ functions of argument $z$.
\bprog
? [A,B] = airy(1);
? A
%2 = 0.13529241631288141552414742351546630617
? B
%3 = 1.2074235949528712594363788170282869954
@eprog\noindent
The library syntax is \fun{GEN}{airy}{GEN z, long prec}.
\subsec{arg$(x)$}\kbdsidx{arg}\label{se:arg}
Argument of the complex number $x$, such that $-\pi < \arg(x) \le \pi$.
The library syntax is \fun{GEN}{garg}{GEN x, long prec}.
\subsec{asin$(x)$}\kbdsidx{asin}\label{se:asin}
Principal branch of $\sin^{-1}(x) = -i \log(ix + \sqrt{1 - x^2})$.
In particular, $\Re(\text{asin}(x))\in [-\pi/2,\pi/2]$ and if $x\in \R$ and
$|x|>1$ then $\text{asin}(x)$ is complex. The branch cut is in two pieces:
$]-\infty,-1]$, continuous with quadrant II, and $[1,+\infty[$ continuous
with quadrant IV. The function satisfies $i \text{asin}(x) =
\text{asinh}(ix)$.
The library syntax is \fun{GEN}{gasin}{GEN x, long prec}.
\subsec{asinh$(x)$}\kbdsidx{asinh}\label{se:asinh}
Principal branch of $\sinh^{-1}(x) = \log(x + \sqrt{1+x^2})$. In
particular $\Im(\text{asinh}(x))\in [-\pi/2,\pi/2]$.
The branch cut is in two pieces: $]-i \infty ,-i]$, continuous with quadrant
III and $[+i,+i \infty[$, continuous with quadrant I.
The library syntax is \fun{GEN}{gasinh}{GEN x, long prec}.
\subsec{atan$(x)$}\kbdsidx{atan}\label{se:atan}
Principal branch of $\text{tan}^{-1}(x) = \log ((1+ix)/(1-ix)) /
2i$. In particular the real part of $\text{atan}(x)$ belongs to
$]-\pi/2,\pi/2[$.
The branch cut is in two pieces:
$]-i\infty,-i[$, continuous with quadrant IV, and $]i,+i \infty[$ continuous
with quadrant II. The function satisfies $\text{atan}(x) =
-i\text{atanh}(ix)$ for all $x\neq \pm i$.
The library syntax is \fun{GEN}{gatan}{GEN x, long prec}.
\subsec{atanh$(x)$}\kbdsidx{atanh}\label{se:atanh}
Principal branch of $\text{tanh}^{-1}(x) = \log ((1+x)/(1-x)) / 2$. In
particular the imaginary part of $\text{atanh}(x)$ belongs to
$[-\pi/2,\pi/2]$; if $x\in \R$ and $|x|>1$ then $\text{atanh}(x)$ is complex.
The library syntax is \fun{GEN}{gatanh}{GEN x, long prec}.
\subsec{besselh1$(\var{nu},x)$}\kbdsidx{besselh1}\label{se:besselh1}
$H^1$-Bessel function of index \var{nu} and argument $x$.
The library syntax is \fun{GEN}{hbessel1}{GEN nu, GEN x, long prec}.
\subsec{besselh2$(\var{nu},x)$}\kbdsidx{besselh2}\label{se:besselh2}
$H^2$-Bessel function of index \var{nu} and argument $x$.
The library syntax is \fun{GEN}{hbessel2}{GEN nu, GEN x, long prec}.
\subsec{besseli$(\var{nu},x)$}\kbdsidx{besseli}\label{se:besseli}
$I$-Bessel function of index \var{nu} and
argument $x$. If $x$ converts to a power series, the initial factor
$(x/2)^\nu/\Gamma(\nu+1)$ is omitted (since it cannot be represented in PARI
when $\nu$ is not integral).
The library syntax is \fun{GEN}{ibessel}{GEN nu, GEN x, long prec}.
\subsec{besselj$(\var{nu},x)$}\kbdsidx{besselj}\label{se:besselj}
$J$-Bessel function of index \var{nu} and
argument $x$. If $x$ converts to a power series, the initial factor
$(x/2)^\nu/\Gamma(\nu+1)$ is omitted (since it cannot be represented in PARI
when $\nu$ is not integral).
The library syntax is \fun{GEN}{jbessel}{GEN nu, GEN x, long prec}.
\subsec{besseljh$(n,x)$}\kbdsidx{besseljh}\label{se:besseljh}
$J$-Bessel function of half integral index.
More precisely, $\kbd{besseljh}(n,x)$ computes $J_{n+1/2}(x)$ where $n$
must be of type integer, and $x$ is any element of $\C$. In the
present version \vers, this function is not very accurate when $x$ is small.
The library syntax is \fun{GEN}{jbesselh}{GEN n, GEN x, long prec}.
\subsec{besseljzero$(\var{nu},\{k=1\})$}\kbdsidx{besseljzero}\label{se:besseljzero}
$k$-th zero of the $J$-Bessel function of index \var{nu}, close
to $\pi(\nu/2 + k - 1/4)$, usually noted $j_{\nu,k}$.
\bprog
? besseljzero(0) \\ @com{first zero of $J_0$}
%1 = 2.4048255576957727686216318793264546431
? besselj(0, %)
%2 = 7.1951595399463653939930598011247182898 E-41
? besseljzero(0, 2) \\ @com{second zero}
%3 = 5.5200781102863106495966041128130274252
? besseljzero(I) \\ @com{also works for complex orders, here $J_i$}
%4 = 2.5377... + 1.4753...*I
@eprog\noindent The function uses a Newton iteration due to Temme.
If $\nu$ is real and nonnegative, the result is guaranteed and we really
return the $k$-th positive zero of $J_\nu$. For general $\nu$, the result
is not well defined, we just use Newton iteration with $\pi(\nu/2 + k - 1/4)$
as a starting value. (N.B. Using this method for large real $\nu$ would give
completely different results than the $j_{\nu,k}$ unless $k$ is large enough.)
The library syntax is \fun{GEN}{besseljzero}{GEN nu, long k, long bitprec}.
\subsec{besselk$(\var{nu},x)$}\kbdsidx{besselk}\label{se:besselk}
$K$-Bessel function of index \var{nu} and argument $x$.
The library syntax is \fun{GEN}{kbessel}{GEN nu, GEN x, long prec}.
\subsec{besseln$(\var{nu},x)$}\kbdsidx{besseln}\label{se:besseln}
Deprecated alias for \kbd{bessely}.
The library syntax is \fun{GEN}{ybessel}{GEN nu, GEN x, long prec}.
\subsec{bessely$(\var{nu},x)$}\kbdsidx{bessely}\label{se:bessely}
$Y$-Bessel function of index \var{nu} and argument $x$.
The library syntax is \fun{GEN}{ybessel}{GEN nu, GEN x, long prec}.
\subsec{besselyzero$(\var{nu},\{k=1\})$}\kbdsidx{besselyzero}\label{se:besselyzero}
$k$-th zero of the $Y$-Bessel function of index \var{nu}, close
to $\pi(\nu/2 + k - 3/4)$, usually noted $y_{\nu,k}$.
\bprog
? besselyzero(0) \\ @com{first zero of $Y_0$}
%1 = 0.89357696627916752158488710205833824123
? bessely(0, %)
%2 = 1.8708573650996561952 E-39
? besselyzero(0, 2) \\ @com{second zero}
%3 = 3.9576784193148578683756771869174012814
? besselyzero(I) \\ @com{also works for complex orders, here $Y_i$}
%4 = 1.03930... + 1.3266...*I
@eprog\noindent The function uses a Newton iteration due to Temme.
If $\nu$ is real and nonnegative, the result is guaranteed and we really
return the $k$-th positive zero of $Y_\nu$. For general $\nu$, the result
is not well defined, we just use Newton iteration with $\pi(\nu/2 + k - 3/4)$
as a starting value. (N.B. Using this method for large real $\nu$ would give
completely different results than the $y_{\nu,k}$ unless $k$ is large enough.)
The library syntax is \fun{GEN}{besselyzero}{GEN nu, long k, long bitprec}.
\subsec{cos$(x)$}\kbdsidx{cos}\label{se:cos}
Cosine of $x$.
Note that, for real $x$, cosine and sine can be obtained simultaneously as
\bprog
cs(x) = my(z = exp(I*x)); [real(z), imag(z)];
@eprog and for general complex $x$ as
\bprog
cs2(x) = my(z = exp(I*x), u = 1/z); [(z+u)/2, (z-u)/(2*I)];
@eprog Note that the latter function suffers from catastrophic cancellation
when $z^2 \approx \pm1$.
The library syntax is \fun{GEN}{gcos}{GEN x, long prec}.
\subsec{cosh$(x)$}\kbdsidx{cosh}\label{se:cosh}
Hyperbolic cosine of $x$.
The library syntax is \fun{GEN}{gcosh}{GEN x, long prec}.
\subsec{cotan$(x)$}\kbdsidx{cotan}\label{se:cotan}
Cotangent of $x$.
The library syntax is \fun{GEN}{gcotan}{GEN x, long prec}.
\subsec{cotanh$(x)$}\kbdsidx{cotanh}\label{se:cotanh}
Hyperbolic cotangent of $x$.
The library syntax is \fun{GEN}{gcotanh}{GEN x, long prec}.
\subsec{dilog$(x)$}\kbdsidx{dilog}\label{se:dilog}
Principal branch of the dilogarithm of $x$,
i.e.~analytic continuation of the power series
$\text{Li}_2(x)=\sum_{n\ge1}x^n/n^2$.
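For example, Landen's identity $\text{Li}_2(1/2) = \pi^2/12 - (\log 2)^2/2$
can be checked numerically:
\bprog
? dilog(1/2) - (Pi^2/12 - log(2)^2/2)  \\ 0 up to working precision
@eprog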
The library syntax is \fun{GEN}{dilog}{GEN x, long prec}.
\subsec{eint1$(x,\{n\})$}\kbdsidx{eint1}\label{se:eint1}
Exponential integral $\int_x^\infty \dfrac{e^{-t}}{t}\,dt =
\kbd{incgam}(0, x)$, where the latter expression extends the function
definition from real $x > 0$ to all complex $x \neq 0$.
If $n$ is present, we must have $x > 0$; the function returns the
$n$-dimensional vector $[\kbd{eint1}(x),\dots,\kbd{eint1}(nx)]$. Contrary to
other transcendental functions, and to the default case ($n$ omitted), the
values are correct up to a bounded \emph{absolute}, rather than relative,
error $10^{-d}$, where $d$ is \kbd{precision}$(x)$ if $x$ is a \typ{REAL}
and defaults to \kbd{realprecision} otherwise. (In the most important
application, to the computation of $L$-functions via approximate functional
equations, those values appear as weights in long sums and small individual
relative errors are less useful than controlling the absolute error.) This is
faster than repeatedly calling \kbd{eint1($i$ * x)}, but less precise.
The library syntax is \fun{GEN}{veceint1}{GEN x, GEN n = NULL, long prec}.
Also available is \fun{GEN}{eint1}{GEN x, long prec}.
\subsec{ellE$(k)$}\kbdsidx{ellE}\label{se:ellE}
Complete elliptic integral of the second kind
$$E(k)=\int_0^{\pi/2}(1-k^2\sin(t)^2)^{1/2}\,dt$$ for the
complex parameter $k$ using the agm.
In particular, the perimeter of an ellipse of semi-major and semi-minor axes
$a$ and $b$ is given by
\bprog
e = sqrt(1 - (b/a)^2); \\ eccentricity
4 * a * ellE(e) \\ perimeter
@eprog
The library syntax is \fun{GEN}{ellE}{GEN k, long prec}.
\subsec{ellK$(k)$}\kbdsidx{ellK}\label{se:ellK}
Complete elliptic integral of the first kind
$$K(k)=\int_0^{\pi/2}(1-k^2\sin(t)^2)^{-1/2}\,dt$$ for the
complex parameter $k$ using the agm.
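Since the agm is used, one can check the classical identity
$K(k) = \pi / (2\,\text{agm}(1,\sqrt{1-k^2}))$:
\bprog
? k = 1/2;
? ellK(k) - Pi / (2*agm(1, sqrt(1 - k^2)))  \\ 0 up to working precision
@eprog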
The library syntax is \fun{GEN}{ellK}{GEN k, long prec}.
\subsec{erfc$(x)$}\kbdsidx{erfc}\label{se:erfc}
Complementary error function, analytic continuation of
$(2/\sqrt\pi)\int_x^\infty e^{-t^2}\,dt
= \text{sign}(x)\kbd{incgam}(1/2,x^2)/\sqrt\pi$ for real $x \neq 0$.
The latter expression extends the function definition from real $x$ to
complex $x$ with positive real part (or zero real part and positive
imaginary part). This is extended to the whole complex plane by
the functional equation $\kbd{erfc}(-x) = 2 - \kbd{erfc}(x)$.
\bprog
? erfc(0)
%1 = 1.0000000000000000000000000000000000000
? erfc(1)
%2 = 0.15729920705028513065877936491739074071
? erfc(1+I)
%3 = -0.31615128169794764488027108024367036903
- 0.19045346923783468628410886196916244244*I
@eprog
The library syntax is \fun{GEN}{gerfc}{GEN x, long prec}.
\subsec{eta$(z,\{\fl=0\})$}\kbdsidx{eta}\label{se:eta}
Variants of \idx{Dedekind}'s $\eta$ function.
If $\fl = 0$, return $\prod_{n=1}^\infty(1-q^n)$, where $q$ depends on $z$
in the following way:
\item $q = e^{2i\pi z}$ if $z$ is a \emph{complex number} (which must then
have positive imaginary part); notice that the factor $q^{1/24}$ is
missing!
\item $q = z$ if $z$ is a \typ{PADIC}, or can be converted to a
\emph{power series} (which must then have positive valuation).
If $\fl$ is nonzero, $z$ is converted to a complex number and we return the
true $\eta$ function, $q^{1/24}\prod_{n=1}^\infty(1-q^n)$,
where $q = e^{2i\pi z}$.
The library syntax is \fun{GEN}{eta0}{GEN z, long flag, long prec}.
Also available is \fun{GEN}{trueeta}{GEN x, long prec} ($\fl=1$).
\subsec{exp$(x)$}\kbdsidx{exp}\label{se:exp}
Exponential of $x$.
$p$-adic arguments with positive valuation are accepted.
The library syntax is \fun{GEN}{gexp}{GEN x, long prec}.
For a \typ{PADIC} $x$, the function
\fun{GEN}{Qp_exp}{GEN x} is also available.
\subsec{expm1$(x)$}\kbdsidx{expm1}\label{se:expm1}
Return $\exp(x)-1$, computed in a way that is also accurate
when the real part of $x$ is near $0$.
A naive direct computation would suffer from catastrophic cancellation;
PARI's direct computation of $\exp(x)$ alleviates this well-known problem at
the expense of computing $\exp(x)$ to a higher accuracy when $x$ is small.
Using \kbd{expm1} is recommended instead:
\bprog
? default(realprecision, 10000); x = 1e-100;
? a = expm1(x);
time = 4 ms.
? b = exp(x)-1;
time = 4 ms.
? default(realprecision, 10040); x = 1e-100;
? c = expm1(x); \\ reference point
? abs(a-c)/c \\ relative error in expm1(x)
%7 = 1.4027986153764843997 E-10019
? abs(b-c)/c \\ relative error in exp(x)-1
%8 = 1.7907031188259675794 E-9919
@eprog\noindent As the example above shows, when $x$ is near $0$,
\kbd{expm1} is more accurate than \kbd{exp(x)-1}.
The library syntax is \fun{GEN}{gexpm1}{GEN x, long prec}.
\subsec{gamma$(s)$}\kbdsidx{gamma}\label{se:gamma}
For $s$ a complex number, evaluates Euler's gamma
function\sidx{gamma-function}, which is the analytic continuation of
$$\Gamma(s)=\int_0^\infty t^{s-1}\exp(-t)\,dt,\quad \Re(s) > 0.$$
Error if $s$ is a nonpositive integer, where $\Gamma$ has a (simple) pole.
\bprog
? gamma(5) \\ @com $\Gamma(n) = (n-1)!$ for a positive integer $n$
%1 = 24.000000000000000000000000000000000000
? gamma(0)
*** at top-level: gamma(0)
*** ^--------
*** gamma: domain error in gamma: argument = nonpositive integer
? gamma(x + O(x^3))
%2 = x^-1 - 0.57721566490153286060651209008240243104 + O(x)
@eprog
For $s$ a \typ{PADIC}, evaluates the Morita gamma function at $s$, that
is the unique continuous $p$-adic function on the $p$-adic integers
extending $\Gamma_p(k)=(-1)^k \prod_{j<k,\,(j,p)=1} j$.