# Algebraic Complexity and duality


## Algorithms and complexity

### Asymptotic complexity

We shall measure the complexity of the algorithms that appear in this document using the classical O (big-Oh) notation. Given two functions f, g : ℕ → ℕ, by f ∈ O(g) we mean that there are an x_0 and a constant M such that

f(x) < M g(x) for any x > x_0. (2.1)
Similarly, we shall use the Ω and Θ notations to state lower bounds and tight bounds: the definition of f ∈ Ω(g) is like Eq. (2.1), but with the inequality turned the other way. The definition of Θ is: f ∈ Θ(g) if and only if f ∈ O(g) and f ∈ Ω(g).
We shall also make use of the notation Õ (soft-Oh) that forgets polylogarithmic factors in the variable x, thus O(xy log x log y) ⊂ Õ_x(xy log y) ⊂ Õ_{x,y}(xy). We simply write Õ when the variables are clear from the context.
Many algorithms below rely on fast multiplication; thus, we let M_R : ℕ → ℕ be a multiplication function, such that polynomials in R[X] of degree less than n can be multiplied in M_R(n) arithmetic operations. We drop the index R when the ring is clear from the context. To simplify expressions, following [vzGG99, §8.3], we shall assume that M(n) is
• superlinear: M(n)/n ≥ M(m)/m whenever n ≥ m,
• at most quadratic: M(mn) ≤ m²M(n).
We shall see soon that these assumptions are reasonable ones.
The cost of modular composition (that is, computing f ∘ g mod h, for f, g, h ∈ R[X] of degrees at most n, and h monic) will be written C(n). We shall make the assumption that C(n) ≥ M(n) for any n; the next section will review algorithms for modular composition.

### Fundamental algorithms

In this section we review some fundamental algorithms that we will repeatedly use in the rest of the document. Most of the algorithms we present are taken from [vzGG99]; another source of inspiration is [BCG+10].

#### Polynomial multiplication

Multiplication of polynomials with coefficients in a ring is a fundamental underpinning to which most of the algorithms in computer algebra reduce.
In the previous section we introduced the notation M(n) to denote the number of operations in R required to multiply two polynomials of degree at most n in R[X]. Using the school-book algorithm, we have M(n) ∈ O(n²). The first major step forward in the complexity of multiplication was made by Karatsuba [KO63]. He observed that, writing

f = f_1 X^n + f_2,  g = g_1 X^n + g_2,

the formula

fg = f_1 g_1 X^{2n} + ((f_1 + f_2)(g_1 + g_2) − f_1 g_1 − f_2 g_2) X^n + f_2 g_2

lets the product be computed using only 3 recursive calls. It follows that M(n) ∈ O(n^{log_2 3}).
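As an illustration, here is a minimal Python sketch of Karatsuba's scheme on coefficient lists (lowest degree first); the function names are ours, not taken from any library.

```python
from itertools import zip_longest

def add(a, b):
    """Coefficient-wise sum of two polynomials of possibly different lengths."""
    return [x + y for x, y in zip_longest(a, b, fillvalue=0)]

def karatsuba(f, g):
    """Multiply coefficient lists with 3 recursive calls instead of 4."""
    if not f or not g:
        return []
    if len(f) == 1:
        return [f[0] * c for c in g]
    if len(g) == 1:
        return [g[0] * c for c in f]
    m = max(len(f), len(g)) // 2        # split point: f = f_lo + f_hi * X^m
    f_lo, f_hi = f[:m], f[m:]
    g_lo, g_hi = g[:m], g[m:]
    lo = karatsuba(f_lo, g_lo)                          # f_lo * g_lo
    hi = karatsuba(f_hi, g_hi)                          # f_hi * g_hi
    mid = karatsuba(add(f_lo, f_hi), add(g_lo, g_hi))   # (f_lo+f_hi)(g_lo+g_hi)
    mid = [c - a - b for c, a, b in zip_longest(mid, lo, hi, fillvalue=0)]
    out = [0] * (len(f) + len(g) - 1)
    for i, c in enumerate(lo):
        out[i] += c
    for i, c in enumerate(mid):
        out[i + m] += c
    for i, c in enumerate(hi):
        out[i + 2 * m] += c
    return out

# (1 + 2X + 3X^2)(4 + 5X + 6X^2) = 4 + 13X + 28X^2 + 27X^3 + 18X^4
assert karatsuba([1, 2, 3], [4, 5, 6]) == [4, 13, 28, 27, 18]
```

The recursion replaces the 4 half-size products of the school-book split by 3, which is exactly where the exponent log_2 3 comes from.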
When the base ring R is a field containing a primitive n-th root of unity ω, polynomials can be multiplied by evaluating at the powers of ω, multiplying the evaluations pointwise, and interpolating back. The map that sends a polynomial of degree n to its evaluations at the n-th roots of unity is called the discrete Fourier transform; there are many algorithms of complexity O(n log n) to compute it, and they all go under the generic name of fast Fourier transform (FFT).
Thus, multiplication in certain fields can be carried out in O(n log n) operations. Schönhage and Strassen’s method [SS71], along with its generalizations [Sch77, CK91], adjoins enough roots of unity to any ring R by taking an extension of it; this yields an algorithm of complexity O(n log n loglog n) to multiply polynomials of degree n in R[X].
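To make the evaluate–multiply–interpolate scheme concrete, the sketch below multiplies polynomials over the field F_17, which contains the primitive 8-th root of unity ω = 9; the field, the root, and the sizes are illustrative choices of ours, not canonical ones.

```python
def ntt(a, omega, p):
    """Recursive radix-2 discrete Fourier transform over F_p (Cooley–Tukey)."""
    n = len(a)
    if n == 1:
        return a
    even = ntt(a[0::2], omega * omega % p, p)
    odd = ntt(a[1::2], omega * omega % p, p)
    out = [0] * n
    w = 1
    for i in range(n // 2):
        t = w * odd[i] % p
        out[i] = (even[i] + t) % p               # A(w^i)   = E + w^i * O
        out[i + n // 2] = (even[i] - t) % p      # A(-w^i)  = E - w^i * O
        w = w * omega % p
    return out

def ntt_multiply(f, g, p, omega, n):
    """Multiply f, g by evaluation at the n-th roots of unity and interpolation."""
    fa = ntt(f + [0] * (n - len(f)), omega, p)
    ga = ntt(g + [0] * (n - len(g)), omega, p)
    prod = [x * y % p for x, y in zip(fa, ga)]       # pointwise products
    coef = ntt(prod, pow(omega, -1, p), p)           # inverse transform ...
    n_inv = pow(n, -1, p)
    return [c * n_inv % p for c in coef][:len(f) + len(g) - 1]

# (1+2X+3X^2+4X^3)(5+6X+7X^2+8X^3) over F_17, via a length-8 transform
assert ntt_multiply([1, 2, 3, 4], [5, 6, 7, 8], 17, 9, 8) == [5, 16, 0, 9, 10, 1, 15]
```

The transform length must be a power of 2 dividing p − 1 so that a suitable ω exists; Schönhage–Strassen's contribution is precisely to manufacture such roots of unity when the ring has none.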

#### Formal power series

We denote by R[[X]] the ring of formal power series over R. Its elements are the sequences (f_i)_{i≥0} of elements of R; they are denoted by

f(X) = Σ_{i≥0} f_i X^i. (2.2)
Multiplication and evaluation are defined in the obvious way. An element f ∈ R[[X]] is invertible if and only if f_0 is a unit of R.
Since formal power series are infinite objects, to be used in a discrete algorithm they must be approximated. We denote by f mod X^n the polynomial

f mod X^n = Σ_{0≤i<n} f_i X^i. (2.3)
We write f = g + O(X^n), where g is a polynomial or a power series, whenever f mod X^n = g mod X^n, and we say that g approximates f to precision n.
Using polynomial multiplication, the product of two series known up to precision n can be computed in M(n) operations.
Derivative, integral. We define the derivative of a power series as

f′(X) = Σ_{i≥0} (i + 1) f_{i+1} X^i; (2.4)

if R contains ℚ, we also define the integral as

∫f(X) = Σ_{i≥0} (f_i / (i + 1)) X^{i+1}. (2.5)
Derivatives and integrals to precision n can be computed in O(n) operations directly from these definitions.
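Both operations are simple coefficient shifts; a minimal Python check (exact rationals, helper names ours):

```python
from fractions import Fraction

def deriv(f):
    """Coefficients of f': shift down and scale by the exponent."""
    return [(i + 1) * f[i + 1] for i in range(len(f) - 1)]

def integ(f):
    """Coefficients of the integral: shift up and divide (constant term 0)."""
    return [Fraction(0)] + [Fraction(f[i], i + 1) for i in range(len(f))]

f = [1, 1, 1, 1]                          # 1 + X + X^2 + X^3
assert deriv(f) == [1, 2, 3]
assert integ(deriv(f)) == [0, 1, 1, 1]    # recovers f up to the constant term
```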
Logarithm, exponential. In what follows, we suppose that R contains ℚ. The logarithm of a power series f such that f(0) = 1 is defined as

log f = ∫ f′/f. (2.6)
The exponential of a power series f such that f(0) = 0 is defined as

exp(f) = 1 + f/1! + f²/2! + ··· (2.7)
First order linear differential equations. All the usual identities involving multiplication, derivatives, integrals, logarithms and exponentials hold for power series. An immediate consequence is a formula, due to Brent and Kung [BK78], for solving first order linear differential equations.
Let f, g ∈ R[[X]]; the equation

y′ = f(X)y + g(X) (2.8)

with initial condition y(0) = a has solution

y(X) = j(X)^{-1} (a + ∫ g(X)j(X)), (2.9)

where j = exp(−∫f); the verification is immediate.
In the next subsection we shall see that multiplicative inverses, logarithms, exponentials and powers to precision n can all be computed in O(M(n)) operations; thus formula (2.9) can also be applied at the same cost.
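Formula (2.9) can be checked directly by implementing it with naive truncated-series arithmetic; the sketch below favors simplicity (exact rationals, quadratic-time products) over the O(M(n)) bounds discussed here, and all names are ours.

```python
from fractions import Fraction
from math import factorial

def mul(f, g, n):
    """Truncated product of coefficient lists, to precision n."""
    h = [Fraction(0)] * n
    for i, fi in enumerate(f[:n]):
        for j, gj in enumerate(g[:n - i]):
            h[i + j] += fi * gj
    return h

def integ(f, n):
    """Integral to precision n (constant term 0)."""
    return [Fraction(0)] + [Fraction(f[i]) / (i + 1) for i in range(n - 1)]

def deriv(f):
    return [(i + 1) * f[i + 1] for i in range(len(f) - 1)]

def inv(f, n):
    """1/f to precision n by the classical coefficient recurrence."""
    y = [Fraction(1) / f[0]]
    for k in range(1, n):
        s = sum(f[i] * y[k - i] for i in range(1, min(k, len(f) - 1) + 1))
        y.append(-s / f[0])
    return y

def exp_series(f, n):
    """exp(f) for f(0) = 0, summing f^k / k! -- naive but exact."""
    out = [Fraction(0)] * n; out[0] = Fraction(1)
    term = list(out)
    for k in range(1, n):
        term = mul(term, f, n)
        for i in range(n):
            out[i] += term[i] / factorial(k)
    return out

def solve_ode(f, g, a, n):
    """y' = f y + g, y(0) = a: y = j^(-1) (a + integral of g j), j = exp(-integral of f)."""
    j = exp_series([-c for c in integ(f, n)], n)
    s = integ(mul(g, j, n - 1), n)
    s[0] += a
    return mul(inv(j, n), s, n)

n = 8
one = [Fraction(1)] + [Fraction(0)] * (n - 1)
zero = [Fraction(0)] * n
y = solve_ode(one, zero, Fraction(1), n)        # y' = y, y(0) = 1, i.e. y = exp(X)
assert all(y[i] == Fraction(1, factorial(i)) for i in range(n))

f2 = [Fraction(0), Fraction(1)] + [Fraction(0)] * (n - 2)   # f = X
y2 = solve_ode(f2, one, Fraction(0), n)                     # y' = X y + 1, y(0) = 0
assert deriv(y2) == [u + v for u, v in zip(mul(f2, y2, n - 1), one[:n - 1])]
```

The second assertion verifies the differential equation itself to precision n − 1, which is all a degree-n truncation can guarantee.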
Note. If R does not contain ℚ but has characteristic 0, it is easy to use the previous definitions by working in R[2^{-1}, 3^{-1}, . . .] and taking the result back to R when needed. In positive characteristic, these definitions no longer make sense, because Eq. (2.5) introduces a division by 0. However, when 2, 3, . . . , n are invertible in R, we can still do computations on power series truncated to order n.

#### Newton’s iteration

Let φ : R → R be a C¹ function; Newton’s iteration is a classical method to approximate a root x of φ. Start from an approximation x_0, and linearize to compute

x_1 = x_0 − φ(x_0)/φ′(x_0), (2.10)

then iterate this step until the desired precision is obtained. When x_0 is taken close enough to a root, and when the derivative at this root is non-zero, Newton’s iteration converges quadratically to the solution, meaning that at each iteration the distance to the solution is squared.
In computer algebra, Newton’s iteration is applied to operators N : R[[X]] → R[[X]] on formal power series; in this context, quadratic convergence means that the number of correct terms doubles at each iteration. Many fast algorithms for fundamental operations on power series and polynomials are obtained by this method; here we summarize the most important ones.
Inversion. If f ∈ R[[X]] is invertible, the operator N(y) = 1/y − f applied to y_0 = 1/f(0) converges quadratically to the inverse of f. Since the iteration associated to N is

y_{i+1} = y_i (2 − y_i f), (2.11)

the cost of inverting a power series to precision n is O(M(n)). From Eq. (2.6) we deduce that computing the logarithm of a power series has the same cost.
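Iteration (2.11) is easy to try out; here is a minimal sketch over exact rationals (naive quadratic-time products, illustrative names of ours).

```python
from fractions import Fraction

def mul(f, g, n):
    """Truncated product of coefficient lists, to precision n."""
    h = [Fraction(0)] * n
    for i, fi in enumerate(f[:n]):
        for j, gj in enumerate(g[:n - i]):
            h[i + j] += fi * gj
    return h

def newton_inverse(f, n):
    """Invert f (f[0] a unit) to precision n via y <- y(2 - y f), doubling precision."""
    y = [Fraction(1) / f[0]]
    prec = 1
    while prec < n:
        prec = min(2 * prec, n)
        e = mul(f, y, prec)                  # y f = 1 + error of high valuation
        y = mul(y, [2 - e[0]] + [-c for c in e[1:]], prec)
    return y

f = [Fraction(c) for c in range(1, 9)]       # 1 + 2X + ... + 8X^7
y = newton_inverse(f, 8)
assert mul(f, y, 8) == [Fraction(1)] + [Fraction(0)] * 7
```

Since the number of correct terms doubles per step, the total cost is dominated by the last multiplication, whence O(M(n)).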
Another important consequence of this algorithm is that the Euclidean division of polynomials of degree at most n can also be performed in O(M(n)) operations.
Exponential. If f is such that f(0) = 0, we compute its exponential using the operator N(y) = f − log y, which gives the iteration

y_{i+1} = y_i (1 + f − log y_i). (2.12)
Thus, the cost of computing an exponential is O(M(n)) too. Using the formula

f^α = exp(α log f), (2.13)

we deduce that, in characteristic 0, computing arbitrary rational powers of power series costs O(M(n)) as well.
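Iteration (2.12) can be prototyped the same way; the sketch below builds log from Eq. (2.6) on top of naive series arithmetic (our names, quadratic-time products rather than the O(M(n)) routines discussed here).

```python
from fractions import Fraction
from math import factorial

def mul(f, g, n):
    h = [Fraction(0)] * n
    for i, fi in enumerate(f[:n]):
        for j, gj in enumerate(g[:n - i]):
            h[i + j] += fi * gj
    return h

def deriv(f):
    return [(i + 1) * f[i + 1] for i in range(len(f) - 1)]

def integ(f, n):
    return [Fraction(0)] + [Fraction(f[i]) / (i + 1) for i in range(n - 1)]

def newton_inverse(f, n):
    y = [Fraction(1) / f[0]]
    prec = 1
    while prec < n:
        prec = min(2 * prec, n)
        e = mul(f, y, prec)
        y = mul(y, [2 - e[0]] + [-c for c in e[1:]], prec)
    return y

def log_series(f, n):
    """log f for f(0) = 1, as the integral of f'/f, to precision n."""
    return integ(mul(deriv(f), newton_inverse(f, n - 1), n - 1), n)

def newton_exp(f, n):
    """exp(f) for f(0) = 0 via y <- y(1 + f - log y), doubling precision."""
    y = [Fraction(1)]
    prec = 1
    while prec < n:
        prec = min(2 * prec, n)
        l = log_series(y + [Fraction(0)] * (prec - len(y)), prec)
        y = mul(y, [1 + f[0] - l[0]] + [f[i] - l[i] for i in range(1, prec)], prec)
    return y

n = 8
f = [Fraction(0), Fraction(1)] + [Fraction(0)] * (n - 2)    # f = X
y = newton_exp(f, n)
assert y == [Fraction(1, factorial(i)) for i in range(n)]   # exp(X) = sum X^i / i!
```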


#### Modular composition

Given polynomials f, g, h ∈ R[X] of degree at most n with h monic, modular composition is the problem of computing

f(g(X)) mod h(X); (2.14)

the special case where h = X^n permits us to compute the composition of power series truncated to order n.
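By way of illustration, here is the naive O(n M(n)) approach over a prime field F_p, evaluating f at g by Horner's rule with all products reduced modulo h; the names and toy parameters are ours.

```python
def polymulmod(a, b, h, p):
    """(a*b) mod h over F_p, h monic; coefficient lists, lowest degree first."""
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    d = len(h) - 1
    for k in range(len(r) - 1, d - 1, -1):      # cancel high-degree terms
        c = r[k]
        if c:
            for j in range(d + 1):
                r[k - d + j] = (r[k - d + j] - c * h[j]) % p
    return (r + [0] * d)[:d]

def mod_compose(f, g, h, p):
    """f(g) mod h by Horner: one modular multiplication per coefficient of f."""
    res = [f[-1] % p]
    for coeff in reversed(f[:-1]):
        res = polymulmod(res, g, h, p)
        res[0] = (res[0] + coeff) % p
    return res

# toy check: f = 2 + 3X + X^2, g = X + X^2, h = 1 + X^3 over F_7
assert mod_compose([2, 3, 1], [0, 1, 1], [1, 0, 0, 1], 7) == [0, 2, 4]
```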
Modular composition is a fundamental algorithm with many applications, the most relevant being polynomial factorization [vzGS92, KS98] and the computation of minimal polynomials (see Remark 5.23).
Since many algorithms in this document make use of modular composition, we introduced the notation C(n) for its complexity. A naive algorithm yields C(n) ∈ O(n M(n)). The first improvement to this bound was given by Brent and Kung in [BK78]: they devised a baby step-giant step algorithm of complexity O(√n M(n) + n^{(ω+1)/2}); in the same paper they also gave an algorithm of complexity O(√(n log n) M(n)) for the composition of power series. Bernstein [Ber98] found the bound O(M(n) log n) for the composition of power series in case the characteristic of the base ring is small; however, for a long time Brent and Kung’s algorithm and its variants [HP98, KS98] have stood as the only generic algorithms for modular composition. A major breakthrough has recently been achieved by Kedlaya and Umans [Uma08, KU08], who give an algorithm for modular composition over a finite field F_q of binary complexity n^{1+o(1)} log^{1+o(1)} q, using a reduction to multivariate multipoint evaluation.

Algorithm 2.1: Iterated Frobenius
Input: 0 < i < d, a ∈ F_q[X]/f(X), ξ_1(X) = X^q mod f(X).
Output: φ_q^i(a).
1: let i = Σ_j b_j 2^j be the binary expansion of i;
2: k ← 1;
3: for j = ⌊log_2 i⌋ − 1 down to 0 do
4:   if b_j = 0 then
5:     ξ_{2k} ← ξ_k ∘ ξ_k mod f;
6:     k ← 2k;
7:   else
8:     ξ_{2k} ← ξ_k ∘ ξ_k mod f;
9:     ξ_{2k+1} ← ξ_{2k} ∘ ξ_1 mod f;
10:    k ← 2k + 1;
11: return a ∘ ξ_i mod f.
Computing iterated Frobenius and pseudotrace. Fast modular composition can be used to compute Frobenius automorphisms and pseudotraces in finite fields. This algorithm is due to von zur Gathen and Shoup [vzGS92], who applied it to polynomial factorization. We will repeatedly use it in Chapters 6 and 8.
Consider the field extension F_{q^d}/F_q; its Galois group is generated by the Frobenius automorphism

φ_q : F_{q^d} → F_{q^d}, x ↦ x^q. (2.15)

For any n < d, we also define the n-th pseudotrace¹ as

T_n : F_{q^d} → F_{q^d}, x ↦ Σ_{i=0}^{n-1} x^{q^i}. (2.16)
Notice that, when n = d, the pseudotrace coincides with the trace Tr_{F_{q^d}/F_q}; in this case one can use much faster algorithms.
We suppose that elements of F_{q^d} are represented as residue classes in F_q[X]/f(X) for some irreducible polynomial f; then the Frobenius automorphism can be computed with O(log q) multiplications in F_{q^d} plus one modular composition as

ξ_1(X) = X^q mod f(X), (2.17)
φ_q(a) = a^q mod f = a ∘ X^q mod f = a ∘ ξ_1 mod f. (2.18)
1 In [vzGS92], this map goes under the name of trace map.
Algorithm 2.2: Pseudotrace
Input: 0 < n < d, a ∈ F_q[X]/f(X), ξ_1(X) = X^q mod f(X).
Output: T_n(a).
1: let n = Σ_j b_j 2^j be the binary expansion of n;
2: τ_0 ← 0, τ_1 ← a;
3: k ← b_0;
4: for j = 1 to ⌊log_2 n⌋ do
5:   ξ_{2^j} ← ξ_{2^{j-1}} ∘ ξ_{2^{j-1}} mod f;
6:   τ_{2^j} ← τ_{2^{j-1}} + τ_{2^{j-1}} ∘ ξ_{2^{j-1}} mod f;
7:   if b_j = 1 then
8:     τ_{2^j+k} ← τ_{2^j} + τ_k ∘ ξ_{2^j} mod f;
9:     k ← 2^j + k;
10: return τ_n.
Iterating φ_q i times can be done with only O(log i) modular compositions via square-and-multiply, as shown in Algorithm 2.1. Thus the cost of computing the i-th iterated Frobenius is

O(C(d) log i) (2.19)

operations in F_q, plus a precomputation costing O(M(d) log q).
In Algorithm 2.2 we apply the same idea to compute the n-th pseudotrace in O(C(d) log n) operations; note that we use a dynamic programming technique to keep the complexity within this bound. The key equation is

T_{n+m}(a) = T_n(a) + φ_q^n(T_m(a)). (2.20)
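Both ideas are easy to exercise on a toy example. The sketch below works over F_5[X]/f for an arbitrary monic f (the identity a^{q^i} mod f = a ∘ ξ_i mod f needs only f monic, so no irreducibility test is required for the check); the parameters and names are ours.

```python
def polymulmod(a, b, h, p):
    """(a*b) mod h over F_p, h monic; lists of coefficients, lowest degree first."""
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    d = len(h) - 1
    for k in range(len(r) - 1, d - 1, -1):
        c = r[k]
        if c:
            for j in range(d + 1):
                r[k - d + j] = (r[k - d + j] - c * h[j]) % p
    return (r + [0] * d)[:d]

def compose_mod(f, g, h, p):
    """f(g) mod h by Horner's rule."""
    res = [f[-1] % p]
    for coeff in reversed(f[:-1]):
        res = polymulmod(res, g, h, p)
        res[0] = (res[0] + coeff) % p
    return res

def polypow_mod(a, e, h, p):
    """a^e mod h by square-and-multiply (used only to verify the result)."""
    r = [1]
    while e:
        if e & 1:
            r = polymulmod(r, a, h, p)
        a = polymulmod(a, a, h, p)
        e >>= 1
    return r

def iterated_frobenius(i, a, xi1, h, p):
    """Algorithm 2.1 in spirit: build xi_i by square-and-multiply on composition."""
    xi = xi1
    for bit in bin(i)[3:]:                    # bits of i after the leading 1
        xi = compose_mod(xi, xi, h, p)        # xi_{2k} = xi_k o xi_k
        if bit == '1':
            xi = compose_mod(xi, xi1, h, p)   # xi_{2k+1} = xi_{2k} o xi_1
    return compose_mod(a, xi, h, p)           # Frobenius iterate: a o xi_i

p = q = 5
h = [2, 1, 0, 0, 1]                           # f = 2 + X + X^4, monic
xi1 = polypow_mod([0, 1], q, h, p)            # xi_1 = X^q mod f
a = [1, 1, 0, 0]                              # a = 1 + X

# the composed Frobenius agrees with the direct power a^(q^3) mod f
assert iterated_frobenius(3, a, xi1, h, p) == polypow_mod(a, q**3, h, p)

# pseudotraces and the key relation T_{1+2}(a) = T_1(a) + phi_q(T_2(a))
def add_mod(u, v, p):
    return [(x + y) % p for x, y in zip(u, v)]

T2 = add_mod(a, iterated_frobenius(1, a, xi1, h, p), p)
T3 = add_mod(T2, iterated_frobenius(2, a, xi1, h, p), p)
assert T3 == add_mod(a, iterated_frobenius(1, T2, xi1, h, p), p)
```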

#### Interpolation and Chinese remainder algorithm

Let K be a field, and let x_1, . . . , x_n ∈ K be distinct points. Let f = ∏_i (X − x_i) be the monic polynomial that vanishes on x_1, . . . , x_n; by the Chinese remainder theorem we know that

K[X]/f(X) ≅ ∏_i K[X]/(X − x_i). (2.21)
If a is an element of the left-hand side of the isomorphism (i.e. a polynomial modulo f), moving it to the right-hand side amounts to evaluating it at the points x_1, . . . , x_n. The inverse operation is interpolation.
We now give two algorithms to perform this change of representation efficiently. We suppose for simplicity that n = 2^k. The first step is to construct the subproduct tree.
Computing the subproduct tree takes O(M(n) log n) operations and requires an equivalent amount of storage. Having precomputed it, it is straightforward to evaluate a polynomial at the points x_1, . . . , x_n: if a has degree at most n, the multipoint evaluation also takes O(M(n) log n) operations, using a fast Newton iteration for Euclidean division.
For the inverse operation, we use the Lagrange interpolants

s_i = ∏_{j≠i} (X − x_j)/(x_i − x_j). (2.22)
Algorithm 2.3: Subproduct tree
Input: n = 2^k, x_1, . . . , x_n ∈ K.
Output: The subproduct tree.
1: let f_i^(k) = (X − x_i) for 0 < i ≤ 2^k;
2: for j = k − 1 down to 0 do
3:   for all i ∈ [1, . . . , 2^j] do
4:     f_i^(j) ← f_{2i-1}^(j+1) f_{2i}^(j+1).

Algorithm 2.4: Multipoint evaluation
Input: The subproduct tree, a ∈ K[X]/f(X).
Output: a(x_1), . . . , a(x_n).
1: let a_1^(0) = a;
2: for j = 1 to k do
3:   for all i ∈ [1, . . . , 2^{j-1}] do
4:     a_{2i-1}^(j) ← a_i^(j-1) mod f_{2i-1}^(j);
5:     a_{2i}^(j) ← a_i^(j-1) mod f_{2i}^(j);
6: return a_1^(k), . . . , a_n^(k).
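The tree-based evaluation is short to prototype. The following sketch (exact rationals, naive polynomial arithmetic, names ours) builds the tree bottom-up as in Algorithm 2.3 and then pushes remainders down as in Algorithm 2.4.

```python
from fractions import Fraction

def polymul(a, b):
    r = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return r

def polymod(a, b):
    """Remainder of a divided by b (coefficient lists, lowest degree first)."""
    r = list(a)
    while len(r) >= len(b):
        c = r[-1] / b[-1]
        for j in range(len(b)):
            r[len(r) - len(b) + j] -= c * b[j]
        r.pop()                      # leading term is now zero
    return r

def subproduct_tree(xs):
    """Leaves (X - x_i), then pairwise products up to the root, the product of all."""
    level = [[-x, Fraction(1)] for x in xs]
    tree = [level]
    while len(level) > 1:
        level = [polymul(level[2 * i], level[2 * i + 1])
                 for i in range(len(level) // 2)]
        tree.append(level)
    return tree

def multipoint_eval(a, tree):
    """Push a down the tree by successive remainders; the leaves hold a(x_i)."""
    rems = [polymod(a, tree[-1][0])]
    for level in reversed(tree[:-1]):
        rems = [polymod(rems[i // 2], level[i]) for i in range(len(level))]
    return [r[0] if r else Fraction(0) for r in rems]

xs = [Fraction(x) for x in [1, 2, 3, 4]]
tree = subproduct_tree(xs)
a = [Fraction(1), Fraction(0), Fraction(0), Fraction(1)]      # a = 1 + X^3
assert multipoint_eval(a, tree) == [Fraction(v) for v in [2, 9, 28, 65]]
```

Interpolation runs the same tree in the opposite direction using the interpolants (2.22), within the same O(M(n) log n) bound.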
