

State of the art

Computational strategy for Gröbner bases

We now give more details about how Gröbner bases can be used for solving polynomial systems, and why a reduction strategy taking into account the degree of the polynomials is useful in this workflow.
The definition of a Gröbner basis depends on the choice of a monomial ordering, in order to give a formal meaning to the reduction of polynomials. Gröbner bases for different monomial orderings will yield different kinds of information, and be more or less easy to compute. In terms of actually solving the system, two monomial orderings are particularly useful:
- elimination orderings: a Gröbner basis for an elimination ordering contains generators of the elimination ideal, which geometrically corresponds to the projection of the solution set onto a subset of the variables;
- the lexicographical ordering: up to a random change of coordinates, the lexicographical Gröbner basis of a zero-dimensional system is a parameterized representation of all solutions: X_n is given as a zero of a univariate polynomial, and all other unknowns are expressed as polynomials in X_n.
On the other hand, computing bases for elimination orderings or the lexicographical ordering is usually more difficult than for other orderings such as the GRevLex ordering. A common strategy for solving a polynomial system is thus to first compute a GRevLex basis using algorithm F4 or F5, and then perform a change of ordering to compute an elimination or lexicographical basis from the GRevLex basis. This change of ordering can be done using another run of algorithm F4 or F5, or using a dedicated algorithm such as FGLM [Fau+93] in the zero-dimensional case (Figure 3), or the Gröbner walk [CKM97] in the general case.
This makes computing a Gröbner basis for the GRevLex ordering an important step in most cases. The GRevLex ordering sorts monomials by total degree first, which is why algorithms usually use the normal strategy by default when computing a GRevLex basis: it is a way to consider the smallest monomials first.
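As a concrete illustration of this two-step workflow, here is a minimal sketch in Python using SymPy (not the software used in this thesis); the zero-dimensional system F is an arbitrary example chosen for the demonstration:

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')
# An arbitrary zero-dimensional square system.
F = [x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1]

# Step 1: compute a GRevLex basis (usually the cheapest ordering).
G_grevlex = groebner(F, x, y, z, order='grevlex')

# Step 2: change of ordering with FGLM (zero-dimensional case only);
# the last basis element is univariate in z, as expected for lex.
G_lex = G_grevlex.fglm('lex')
print(G_lex)
```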

Complexity studies for homogeneous systems

Homogeneous systems are maybe the most widely studied structure for Gröbner basis computations: indeed, from the very start, strategies such as the normal strategy or the sugar strategy were designed in order to take into account the degree of homogeneous polynomials, or properties of their highest degree components when the systems were inhomogeneous.

It has been observed that the normal strategy frequently gives good results in practice. This does not necessarily mean that the algorithms are lightning-fast, but monitoring their progress does not show any “irregular” behavior of the kind seen in our earlier examples: the degree of the polynomials considered grows steadily from start to end.
The study of this behavior led to algebraic characterizations of systems exhibiting this regular behavior. It turned out that these properties are generic, which is a formal way of expressing that “most” systems will satisfy them, and explains why these strategies appeared efficient in practice.
An example of a generic property ensuring regular behavior in Gröbner basis algorithms is being a regular sequence. This notion has several equivalent definitions, but for now we will restrict to systems with n equations and n unknowns (so-called square systems). A square system is a regular sequence if and only if its set of solutions is finite.
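In practice, this finiteness criterion can be tested by checking that the ideal is zero-dimensional; a minimal SymPy sketch on an arbitrary square system:

```python
from sympy import symbols, groebner

x, y = symbols('x y')
# A square system with finitely many solutions,
# hence a regular sequence by the criterion above.
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='grevlex')
print(G.is_zero_dimensional)   # True
```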
This property ensures that algorithm F5 has a regular behavior on homogeneous systems. For inhomogeneous systems, the “good” property is regularity in the affine sense, and it requires that the highest degree components of the system form a regular sequence.
Under these genericity hypotheses, the regular behavior of the algorithms makes it possible to obtain complexity bounds. For example, for generic square systems, algorithm F5 runs in time $O\left(\binom{n + d_{\mathrm{reg}}}{n}^{\omega}\right)$ and algorithm FGLM in time $O(n \cdot \deg(I)^{3})$, where $d_{\mathrm{reg}}$ is the degree of regularity of the system, that is, the largest degree we need to reach in a run of F5. It is bounded by Macaulay’s bound [BFS14, Cor. 13]: $d_{\mathrm{reg}} \le \sum_{i=1}^{n} (d_i - 1) + 1$, where $d_i$ is the degree of the $i$-th equation; for example, a generic square system of 3 quadrics has $d_{\mathrm{reg}} \le 3 \cdot (2-1) + 1 = 4$.

Other structures

The problem of computing Gröbner bases for structured polynomial systems is not new, and many structures have been successfully exploited from this point of view in the past few years: examples include multi-homogeneous systems [FSS11], systems invariant under a group action [FS12], determinantal systems [FSS13]…
The structure of these ideals has been thoroughly studied, both from an algebraic standpoint [BV88] and from a computational standpoint [FSS13]. In particular, it is known that generically, the singular locus of $D_r$ is the determinantal variety $D_{r-1}$.
As for computations, the idea of modelling the determinantal variety as the projection of an incidence variety is far from new: for example, the classic method of using Lagrange multipliers in optimization relies on modelling rank defects of the Jacobian matrix of a system as an incidence variety, and this technique was central in the work of [Nal15] on real optimization for determinantal varieties. This strategy has been analyzed from a computational point of view in the zero-dimensional case [FSS13].

Contributions

In this thesis, we worked on two classes of structured polynomial systems: weighted homogeneous systems, and determinantal systems for a root classification problem.
Weighted homogeneity is the structure used in the DLP example on page 9. It generalizes homogeneity, by giving more choice over how the degree is computed. This structure appears naturally in a wide range of applications, for example whenever one applies a polynomial change of variables to a homogeneous system. Algorithmic strategies for weighted homogeneous systems were known, but there was no complexity analysis. Furthermore, the FGLM step of the classic strategy became a bottleneck in the solving process. We identified genericity properties which allowed us to obtain complexity bounds for the algorithms, and described a strategy allowing algorithm FGLM to take advantage of the structure. Under genericity hypotheses, we proved that the algorithms have a completely regular behavior in this strategy.
On the other hand, for determinantal systems, we started with a specific problem from an application to contrast optimization in medical imaging. While solving this problem, we identified that the underlying problem was an instance of the more general problem of real root classification, specialized to singularities of determinantal varieties. However, while implemented algorithms exist for this general problem, they were unable to give a solution to the system from contrast optimization. We proposed a strategy taking advantage of the determinantal structure of the system, in order to spread the computations over several smaller problems within the reach of existing algorithms. Using incidence varieties to model the rank defects improves the regularity of the Gröbner basis algorithms used for computing eliminations.

Weighted-homogeneous systems

This morphism gives a natural strategy for computing Gröbner bases for a weighted homogeneous system F: we can run the usual algorithms on homW(F). Some algorithms, such as F4 or F5, appear to become faster with this strategy, but algorithm FGLM becomes a bottleneck.
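For intuition, here is a small sketch of this morphism in Python/SymPy, assuming homW denotes the substitution X_i ↦ X_i^{w_i}, which sends a W-homogeneous polynomial of W-degree d to a homogeneous polynomial of total degree d:

```python
from sympy import symbols

x, y = symbols('x y')
W = {x: 2, y: 3}   # weights: wdeg(x) = 2, wdeg(y) = 3

def hom_W(f):
    """Substitute X_i -> X_i**w_i."""
    return f.subs({v: v**w for v, w in W.items()}, simultaneous=True)

f = x**3 + y**2      # W-homogeneous of W-degree 6
print(hom_W(f))      # x**6 + y**6: homogeneous of total degree 6
```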
This strategy is also interesting from a theoretical point of view. Homogeneity is certainly the most studied structure with respect to Gröbner basis computations: this includes the complexity studies given above, but also detailed characterizations of the Hilbert series of a homogeneous ideal [Mor96], thin-grained complexity bounds under stronger genericity hypotheses [BFS14], and complexity analyses for overdetermined systems under a property conjectured to be generic [BFS04].
The question raised by all these results for homogeneous systems is how far these bounds and results can be transposed to the weighted homogeneous case.
It turns out that for a generic W -homogeneous system F , homW (F ) is “sufficiently generic” as a homogeneous system, and a run of Algorithm Matrix-F5 on homW (F ) computes a Gröbner basis without any reduction to zero. In this case, as in the homogeneous case, its complexity can be determined in terms of the size of the computed matrices and the degree of regularity of the system.
In applications, systems are rarely weighted homogeneous. In this situation, as in the homogeneous case, the behavior of the algorithms is tied to reductions to zero of the highest W -degree components of the systems, and so, to whether these components satisfy regularity properties. If this condition is not satisfied, the behavior of algorithm F5 is expected to be irregular, with reductions to zero and degree falls, and generic complexity analyses no longer apply: the algorithms perform worse in practice, and theory fails to quantify exactly how much worse.
This also explains why transforming a weighted homogeneous system into a homogeneous one yields significant improvements in practice. Indeed, given a generic weighted homogeneous system, its highest degree components are unlikely to be large, and thus unlikely to satisfy regularity properties: informally, the smaller a class of polynomial systems is, the less room there is for interesting generic properties to be satisfied.
This explains why algorithm FGLM becomes a bottleneck when computing a lexicographical Gröbner basis through homW: the transformation artificially multiplies the number of solutions, leading the algorithms to build and reduce unnecessarily large matrices.
On the other hand, it turns out that applying homW⁻¹ to a reduced Gröbner basis of homW(F) yields a Gröbner basis for a weighted equivalent of the GRevLex ordering. This basis can also be computed directly from the original system by using the weighted degree instead of the total degree in algorithm F4 or F5, as sketched below.
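A sketch of this round trip in Python/SymPy, on an arbitrary W-homogeneous example; the divisibility of exponents needed by the inverse map is guaranteed by the result just stated:

```python
from sympy import symbols, groebner, Poly

x, y = symbols('x y')
W = [2, 3]
F = [x*y, x**3 + y**2]    # W-homogeneous, of W-degrees (5, 6)

def hom_W(f):
    return f.subs({x: x**W[0], y: y**W[1]}, simultaneous=True)

def hom_W_inv(f):
    # Divide each exponent by the matching weight; divisibility holds
    # for a reduced GRevLex basis of hom_W(F) by the result above.
    p = Poly(f, x, y)
    return Poly({tuple(e // w for e, w in zip(m, W)): c
                 for m, c in p.terms()}, x, y).as_expr()

G = groebner([hom_W(f) for f in F], x, y, order='grevlex')
print([hom_W_inv(g) for g in G.exprs])   # weighted-GRevLex basis of <F>
```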
This new basis can be used as input for algorithm FGLM, and it generates the same ideal as the original system F, hence it has the correct degree. Algorithm FGLM can compute a lexicographical basis of the ideal from this basis, with the same complexity bounds as above in terms of the degree of the ideal. The complete computational strategy is summed up in Figure 4.
Regularity properties and complexity bounds The definition of regular sequences does not involve the gradation, so it extends to a weighted setting. Regarding inhomogeneous systems, as in the total degree case, we consider whether the highest W-degree components form a regular sequence.
These properties are still generic amongst sequences of polynomials of given W-degrees, as long as there exists at least one such regular sequence. This is proved straightforwardly by transposing the genericity proofs from the homogeneous case. However, the existence hypothesis is a necessary one, conditioning the system of weights: take for example W = (2, 3) and D = (5, 5); the only monomial of W-degree 5 is XY, so for any weighted homogeneous system with respective W-degrees D, the set of solutions is the union of {X = 0} and {Y = 0}, which is not a finite set.
We shall give hypotheses on the system of degrees ensuring that this hypothesis is satisfied. For example, if all degrees are divisible by the least common multiple of the weights, or if each degree d_i is divisible by the corresponding weight w_i, then regular sequences exist, and thus are generic. But these hypotheses are not necessary: take for example W = (2, 3) and D = (5, 6); the sequence (XY, X³ + Y²) is regular.
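The regularity of this last example can be verified with the finiteness criterion for square systems (a SymPy sketch):

```python
from sympy import symbols, groebner

x, y = symbols('x y')
# W = (2, 3), D = (5, 6): wdeg(x*y) = 5, wdeg(x**3 + y**2) = 6.
G = groebner([x*y, x**3 + y**2], x, y, order='grevlex')
print(G.is_zero_dimensional)   # True: finitely many solutions,
                               # so the sequence is regular
```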

The degree of a zero-dimensional ideal defined by a W-homogeneous regular sequence is given by the weighted Bézout bound $\prod_{i=1}^{n} d_i / \prod_{i=1}^{n} w_i$, which gives us immediately the complexity of algorithm FGLM in the strategy described above, for a square system defined by a regular sequence: $O\!\left(n \left(\prod_{i=1}^{n} d_i / \prod_{i=1}^{n} w_i\right)^{3}\right)$.
For a run of Algorithm Matrix-F5 on homW(F), the usual complexity bounds from the homogeneous case can be applied, but they can actually be improved. The main asymptotic improvement comes from the work of combinatorialists on the Sylvester denumerant of d, that is, the number of monomials with a given W-degree d. In particular, it is known [FS09] that asymptotically, this number of monomials is the number of monomials with total degree d, divided by the product of the weights. As such, the complexity bound for Algorithm Matrix-F5 run on homW(F) is divided by $\left(\prod_{i=1}^{n} w_i\right)^{3}$ when compared to the original bound for homogeneous systems, given on page 13.
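The Sylvester denumerant itself is easy to compute exactly with a classic coin-counting dynamic program; a small sketch (the function name is ours):

```python
def denumerant(d, W):
    """Number of monomials of W-degree d in len(W) variables,
    i.e. solutions of w_1*a_1 + ... + w_n*a_n = d with a_i >= 0."""
    counts = [1] + [0] * d
    for w in W:
        for t in range(w, d + 1):
            counts[t] += counts[t - w]
    return counts[d]

print(denumerant(5, [2, 3]))   # 1: only X*Y has W-degree 5
print(denumerant(6, [2, 3]))   # 2: X**3 and Y**2
```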
In particular, when applicable, this improvement implies that for the best complexity, one should order the variables such that the smallest variable (for the monomial ordering) is the one with the smallest weight.
This bound is still not sharp in full generality. We conjecture that, with no hypothesis on the weights and W-degrees beyond the existence of regular sequences, the following bound (1) is true and optimal:
Figure 5: Contrast optimization: both pictures were taken using the same setup of two tubes, the inner one containing deoxygenated blood and the outer one containing oxygenated blood. The left-hand picture is the reference picture, with no contrast optimization; the right-hand picture is contrast-optimized by saturating the inner sample.
Examples This structure appears in many applications; we already gave the example of the work of the authors of [Fau+13] on the DLP for Edwards elliptic curves.
In Section 3.6, we shall present benchmarks computing the relations between fundamental invariants of the cyclic group or the dihedral group, between monomials, and between minors of matrices. We obtained diverse speed-ups for the first step of the computational strategy (computing a GRevLex basis) for these systems, from a modest 1.5 to 4 for matrix minors, up to 10^5 for monomials.

Real root classification for determinantal systems – Application to contrast optimization

Context The second class of structured systems that we studied arose in an application in medical imaging. Nuclear Magnetic Resonance (NMR) is a technique relying on applying a magnetic field to a body, and measuring the response of different substances. Contrast optimization is the problem of ensuring that two biological substances that we wish to distinguish on a picture, for example blood and water, stand apart: ideally, one should be completely black, and the other as bright as possible.
The saturation method is a contrast optimization technique, consisting in making the magnetic field vary over time in order to progressively reach the optimal contrast. In order to find the optimal “path” towards the saturated state, control theory has been brought in [Lap+12].
This strategy involves alternating so-called bang arcs and singular arcs, and estimating the complexity of the path requires estimating the number of switching points, in terms of the substances that we want to study. These switching points are characterized as the singularities of the determinant of a matrix whose entries are polynomials in the parameters and some variables, together with some polynomial inequalities [Bon+13].
This problem has two forms: in its most general form, both substances are free, each being described by two parameters. The structure of the system makes it possible to eliminate one parameter, and we need to identify parts of R³ where the number of singularities does not change. A useful easier case is the one where one of the substances is water, leaving only 2 parameters.
This is a particular case of a more general problem: given a matrix M(X, G) whose entries are polynomial in some variables X and some parameters G, and a target rank r, let V_r be the locus at which M has rank at most r. We wish to identify a dense subset U of the parameter space, together with a covering of U by open subsets, such that on each open subset, the number of singularities of V_r in the fiber over the parameters is constant.
In the case of the application, the physics of the system brings an additional constraint on the solutions: they have to lie in the Bloch ball, which is described by inequalities. Thus the problem is a real root classification problem on a semi-algebraic set.
Algorithmic strategy The problem is a real root classification problem for a semi-algebraic set described by a parameterized determinantal system and some inequalities.
The Cylindrical Algebraic Decomposition (CAD) method can be used to solve this problem, but none of its implementations can attack the application directly. On the other hand, it is known that adding equational constraints can dramatically improve the complexity of CAD computations. In our context, these equations should be the equations of the borders of the subdivision areas in the parameter space. This is the crux of real root classification algorithms such as those presented in [LR07; YHX01]. However, none of the existing implementations of these algorithms were able to tackle the general case of the imaging problem.
We propose a strategy taking advantage of the determinantal structure of the varieties at hand, to guide existing algorithms towards a solution.
Under some hypotheses satisfied by the application, these strategies classify the cardinality of the fibers by identifying critical points of the projections, that is, points where several single roots merge into one multiple root, and points where the variety meets the boundary of the relevant domain. In other words, we need to compute the singular locus of a determinantal variety, and then the singularities of this singular locus, the critical points of a projection restricted to it, and its intersection with the boundary of the domain.
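On a toy example (ours, not the application's system), the boundary equations in the parameter space can be obtained by eliminating the variables with a lexicographical Gröbner basis: points where two roots merge are cut out by the discriminant locus.

```python
from sympy import symbols, groebner

x, g1, g2 = symbols('x g1 g2')   # x: variable; g1, g2: parameters
# Fibers of x**2 + g1*x + g2 = 0 change cardinality where a double
# root appears, i.e. where the derivative also vanishes.
F = [x**2 + g1*x + g2, 2*x + g1]
# Lex with x > g1 > g2 eliminates x: the basis elements free of x
# define the boundary hypersurface in the parameter plane.
G = groebner(F, x, g1, g2, order='lex')
print([g for g in G.exprs if not g.has(x)])
# -> the discriminant g1**2 - 4*g2 (up to normalization)
```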
The classical tool for computing critical and singular points is the Jacobian criterion, which characterizes singularities in terms of minors of the Jacobian matrix of the system defining the variety. In other words, a singular locus is again a determinantal variety.
Furthermore, the singular locus of a determinantal ideal is closely related to smaller minors of the same matrix. In particular, points where the rank of the matrix is r − 1 are always singular points of the locus where the rank is at most r.
This is the first property that we use to steer the computations: we compute a rank stratification of the solutions, by separating the singularities where the matrix has rank exactly r (which we can compute using the Jacobian criterion) from the points where the matrix has rank at most r − 1, which are automatically singularities of the ideal.
Furthermore, recall that the classification strategy aims at computing a set of hypersurfaces of the parameter space R^t, splitting it into open sets over which the number of singularities is constant. So if we know that some component C of the singular locus of the determinantal variety has dimension at most t − 1, we only need to compute the equation of one hypersurface containing its image under the projection onto the parameter space: this hypersurface will necessarily contain the projection of the singular points of C, the critical values of the projection restricted to C, and the intersection of C with the boundaries of the domain.
Under some genericity hypotheses, the dimension of the rank strata is determined by the target rank defect and the number of variables and parameters. It turns out that the high-rank stratum necessarily has dimension less than t, thus reducing the problem to a single elimination for that stratum.
The last regularization step is the use of incidence varieties to model singularities of the determinants where needed. Algorithmically, we add k(k − r) variables to the system, build a matrix N with k rows and k − r columns, each entry being one of the new variables, and we build the system with the entries of M · N. In order to ensure that the resulting kernel vectors are linearly independent, we pick at random a matrix U with k − r rows and k columns, and we add to the system the entries of U · N − Id_{k−r}.
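A sketch of this construction in Python/SymPy (the helper name and the random sampling of U are ours):

```python
from sympy import Matrix, symbols, eye, randMatrix

def incidence_system(M, r):
    """Equations expressing rank(M) <= r via an incidence variety:
    N is a k x (k - r) matrix of new variables with M*N = 0, and
    U*N = Id forces the columns of N to be linearly independent."""
    k = M.rows                      # M assumed square of size k
    c = k - r
    new_vars = symbols(f'n0:{k * c}')        # the k*(k - r) new variables
    N = Matrix(k, c, new_vars)
    U = randMatrix(c, k, min=-10, max=10)    # random (k - r) x k matrix
    eqs = list(M * N) + list(U * N - eye(c))
    return eqs, new_vars

# Usage sketch: a 2 x 2 matrix with polynomial entries, target rank r = 1.
x, y = symbols('x y')
M = Matrix([[x, y], [y, x]])
eqs, new_vars = incidence_system(M, 1)
print(eqs)
```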
Results for the application This strategy was used for answering questions related to the contrast imaging problem.
In particular, an open question for the case of water was whether the number of singularities was a global invariant, not depending on the second substance: for all experimental samples, there was only one such singularity.
It turns out that the answer is no: we identified 3 areas of the parameter space where the number of singularities is 1, 2 and 3 respectively. See Figure 6: the red area is out of the physically relevant domain; for parameters in the areas containing blue diamonds, there are two singularities in the fiber; for parameters in the areas containing green circles, there are three singularities in the fiber; and for all other areas, there is only one singularity.
These results can be obtained in about 10 s using the new strategy, while a direct approach using Gröbner bases requires 100 s. Furthermore, we were able to use this strategy to obtain the boundaries of the classification in the general case. This computation takes less than 2 h, while it is intractable with the general strategies.


Perspectives

In this section, we list some questions left open by this thesis, as well as potential extensions and generalizations that we would find interesting.
Open questions and conjectures for W-homogeneous systems First, there is the problem of W-compatibility: for what systems of degrees and weights do W-homogeneous regular sequences exist, and are thus generic? We were able to give sufficient conditions, but counter-examples show that they are not necessary.
Figure 6: Decomposition of the parameter space in the case of water: the parameters are the relaxation times for the second substance, γ2 and Γ2, and in each area delimited by the black curves, the cardinality of the fiber of the singularities of V_r is constant and written as a circled number; the red area corresponds to physically irrelevant parameters.
Genericity results are useful because testing whether a given system is a regular sequence is not easy in general: if one knows that most sequences similar to the system at hand are regular, it makes sense to run the algorithms assuming that the hypothesis is satisfied.
As of now, for systems of degrees and weights falling outside of the sufficient conditions for regular sequences to exist, this assumption is riskier. Obtaining a set of necessary and sufficient conditions would complete the characterization of the genericity of weighted homogeneous regular sequences and make this procedure more likely to succeed.
Second, we conjectured formula (1) as a sharp bound on the degree of regularity of a sequence in simultaneous Noether position, without any hypothesis on the weights. So far, we have not been able to prove it.
Last, it would be interesting to be able to characterize semi-regular W-homogeneous sequences in terms of their Hilbert series, once again without any hypothesis on the weights.
Several systems of weights Weighted homogeneous systems have been studied considering only one system of weights. However, some systems may be W-homogeneous for several systems of weights W. Consider for example a polynomial that is (1, 1, 1)-homogeneous with degree 6 and (1, 2, 3)-homogeneous with degree 10 (for instance, the monomial X³Y²Z satisfies both).
This new constraint gives extra structure to the Macaulay matrix of the system. Exploiting this structure should yield further improvements on the complexity of Matrix-F5 for these systems.
Furthermore, polynomials that are W_i-homogeneous for all systems of weights W_i in some family are also W-homogeneous for any W obtained as a linear combination of the W_i's: for example, the above polynomial is also (2, 3, 4)-homogeneous with degree 16. It may be interesting to see whether this property can be used to find systems of weights leading to easier computations.
Non-positive weights An extension of the previous point could be to consider systems of weights including weights set to zero or to a negative integer. With only one system of weights, this leads to problems because homogeneous components do not necessarily have finite dimension. However, with several systems of weights, it is possible to ensure that this does not happen, and it would be useful to know what algorithmic strategies we can use in this situation.
This class of systems is very wide: it would for example include multi-homogeneous systems (systems where the variables are split into groups, and polynomials are homogeneous in total degree for each group). For example, consider a polynomial f in K[X1, X2, Y1, Y2], and assume that it is bilinear in the variables (X1, X2) and (Y1, Y2). In terms of weights, consider the systems of weights W1 = (1, 1, 0, 0) and W2 = (0, 0, 1, 1); f being bilinear means that f is W1-homogeneous with W1-degree 1 and W2-homogeneous with W2-degree 1.
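A quick SymPy sketch of this bilinearity-as-weights viewpoint (the helper is ours):

```python
from sympy import Poly, symbols

x1, x2, y1, y2 = symbols('x1 x2 y1 y2')
f = x1*y1 + x2*y2          # bilinear in (x1, x2) and (y1, y2)

def is_W_homogeneous(f, gens, W):
    """Check that all monomials of f have the same W-degree."""
    degs = {sum(w * e for w, e in zip(W, mon))
            for mon in Poly(f, *gens).monoms()}
    return len(degs) == 1

print(is_W_homogeneous(f, (x1, x2, y1, y2), (1, 1, 0, 0)))  # True: W1-degree 1
print(is_W_homogeneous(f, (x1, x2, y1, y2), (0, 0, 1, 1)))  # True: W2-degree 1
```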
Weight search Returning to the case of only one system of weights: until now, we have only worked with systems for which the user was able to guess a good system of weights (either because it was natural in the application, or through trial and error). It would be interesting to describe a strategy, or at least a good set of heuristics, giving good candidate systems of weights for an arbitrary inhomogeneous system.
Testing that a system is a regular sequence is not computationally easy, so using that as a criterion is impractical. On the other hand, we could look for systems of weights making the highest W-degree component large enough for it to be generically regular.
Experimentally, we also observed systems which behaved better with a system of weights such that the highest W-degree component is small, but with a very large W-homogeneous component at smaller W-degree. Characterizing these situations could lead to further improvements on a strategy for picking good systems of weights.
Application to imaging Our work on real root classification for the imaging problem also left several open questions and possible extensions.
First, we only described algorithms, and showed experimental data confirming that they are more efficient. It would be interesting to complete the analysis with complexity bounds for the whole process.
Furthermore, the algorithmic strategy described for the root classification problem for the contrast imaging problem builds a stratification by rank. To the best of our knowledge, the question of whether this stratification is physically meaningful has never been raised.
As for the extensions, the root classification problem is only one of the many computational problems posed by specialists in control theory working on contrast optimization for NMR. For example, they would be interested in a rational parametrization of the roots, or in a classification of the parameter space according to the number of connected components in the fibers for higher-dimensional varieties. All these questions offer new computational challenges.

Table of contents :

Introduction 
1. Motivation
2. State of the art
3. Contributions
4. Perspectives
I. Preliminaries
1. Algebra and geometry 
1.1. Commutative algebra
1.1.1. Ideals
1.1.2. Graded rings
1.1.3. Polynomial algebras
1.1.4. Combinatorics of monomials
1.1.5. Hilbert series
1.2. Regularity properties
1.2.1. Regular sequences
1.2.2. Description of the Hilbert series of a homogeneous regular sequence
1.2.3. Noether position
1.2.4. Semi-regular sequences
1.3. Algebraic geometry
1.3.1. Nullstellensatz
1.3.2. Zariski topology
1.3.3. Morphisms and rational maps
1.3.4. Dimension and degree
1.3.5. Geometric interpretation of the regularity properties
1.3.6. Singularities
1.3.7. Real semi-algebraic geometry
1.4. Genericity
1.4.1. Definition
1.4.2. Generic properties of homogeneous systems
1.4.3. Generic changes of coordinates
1.5. Determinantal varieties
1.5.1. Definition
1.5.2. Cramer’s formula and Schur’s complement
1.5.3. Incidence varieties
2. Gröbner bases
2.1. Monomial orderings
2.1.1. Definition
2.1.2. Lexicographical ordering
2.1.3. Graded reverse-lexicographical ordering
2.1.4. Elimination orders
2.2. Gröbner bases: definition
2.3. Applications
2.3.1. Zero-dimensional systems
2.3.2. Positive-dimensional systems and eliminations
2.4. Algorithms
2.4.1. A pairs algorithm: Buchberger’s algorithm
2.4.2. A matrix algorithm: Matrix-F5
2.4.3. Matrix algorithms in the affine case
2.4.4. Change of order: FGLM
2.5. Complexity results
2.5.1. Complexity model and notations
2.5.2. Complexity of Matrix-F5 and degree of regularity
2.5.3. Thin-grained complexity of Matrix-F5
2.5.4. Complexity of FGLM
II. Contributions 
3. Weighted homogeneous systems
3.1. Introduction
3.2. Properties
3.2.1. Defnitions and properties
3.2.2. Degree and Bézout’s bound
3.2.3. Changes of variables and reverse chain-divisible systems of weights
3.2.4. Genericity of regularity properties and W-compatibility
3.2.5. Characterization of the Hilbert series
3.3. W-Degree of regularity of regular sequences
3.3.1. Macaulay’s bound in dimension zero
3.3.2. Sharper bound on the degree of regularity
3.3.3. Conjectured exact formula
3.3.4. Positive-dimensional regular sequences
3.4. Overdetermined systems
3.4.1. Definition of semi-regularity
3.4.2. Characterization with the Hilbert series
3.4.3. W-degree of regularity of semi-regular sequences
3.4.4. Fröberg’s conjecture
3.5. Algorithms and complexity
3.5.1. Critical pairs algorithms
3.5.2. Adapting the Matrix-F5 algorithm
3.5.3. Thin-grained complexity of Matrix-F5
3.5.4. FGLM and computational strategy for zero-dimensional systems
3.5.5. Other algorithms
3.6. Experiments
3.6.1. Generic systems
3.6.2. Discrete Logarithm Problem
3.6.3. Polynomial inversion
4. Real roots classification for determinants – Application to contrast optimization
4.1. Introduction
4.2. Modeling the dynamics of the contrast optimization
4.3. Algorithm
4.3.1. Classification strategy
4.3.2. The determinantal problem
4.3.3. Incidence varieties
4.3.4. Locus of rank exactly r0
4.3.5. Singularities
4.3.6. Boundary
4.4. The contrast problem
4.4.1. The case of water
4.4.2. The general case
III. Appendices 
A. Source code
A.1. Magma source code for Matrix-F5
A.1.1. Algorithm Matrix-F5
A.1.2. Weighted homogeneous systems
A.2. Maple source code for determinantal varieties (Chapter 4)
A.2.1. Algorithms
A.2.2. Case of water
A.2.3. General case
