# Research

> By any objective standard, the theory of computational complexity ranks as one of the greatest intellectual achievements of humankind — along with fire, the wheel, and computability theory.
>
> *Scott Aaronson*

## Interests

My research interests lie in the field of theoretical computer science, and I am particularly interested in (the relations between) complexity theory and symbolic computation. My Erdős number is 3.

I am a member of the French Computer Science and Mathematics learned societies, and of GDR IM (in particular its computer algebra and complexity and algorithms sections).

## Students

• PhD: Armelle Perret du Cray (2019 – 2023).
• Master 2: Alexis Galan (2023), Jean-Baptiste Brun (2020), Armelle Perret du Cray (2019), Léonard de Haro (2017).
• Master 1: Nicolas Martin (2023), Lucas Ottow (2021).
• Undergraduates: Timothée Defourné (2020), Emmanuel Memmi (2018), Yannis Gaziello (2016), Pierre van Iseghem (2016).

## Publications

• J.-G. Dumas, B. Grenet.
Manuscript, 2023.
We consider the fast in-place computation of the Euclidean polynomial modular remainder R(X) ≡ A(X) mod B(X) with A and B of respective degrees n and m ≤ n. If the multiplication of two polynomials of degree k can be performed with M(k) operations and O(k) extra space, then standard algorithms for the remainder require O(n/m M(m)) arithmetic operations and, apart from the storage of A and B, at least O(n − m) extra memory. This extra space is usually used to store the whole quotient Q(X) such that A = BQ + R with deg R < deg B. We avoid the storage of the whole of this quotient, and propose an algorithm still using O(n/m M(m)) arithmetic operations but only O(m) extra space. When the divisor B is sparse with a constant number of non-zero terms, the arithmetic complexity bound reduces to O(n). When it is allowed to use the input space of A or B for intermediate computations, but putting A and B back to their initial states after the completion of the remainder computation, we further propose an in-place algorithm (that is, with its extra required space reduced to O(1) only) using at most O(n/m M(m) log(m)) arithmetic operations. To achieve this, we develop techniques for Toeplitz matrix operations whose output is also part of the input. In-place accumulated versions are obtained for the latter and for polynomial remaindering via reductions to accumulated polynomial multiplication, for which a recent fast in-place algorithm has been developed.
```@misc{DuGre23rem,
title = {{In-place fast polynomial modular remainder}},
author = {Dumas, Jean-Guillaume and Grenet, Bruno},
year = 2023,
howpublished = {Manuscript},
arxiv = {2302.13600},
hal = {hal-03979016},
}```
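As a point of comparison for the space issue, here is a minimal schoolbook sketch (in Python, over the rationals; this is not the paper's fast algorithm): each quotient coefficient is consumed immediately and discarded, so nothing beyond the buffer holding A is needed. The function name and conventions are illustrative only.

```python
from fractions import Fraction

def mod_inplace(a, b):
    """Compute R = A mod B over the rationals, schoolbook style.

    a, b: coefficient lists, a[i] = coefficient of X^i.
    The buffer holding A is overwritten; each quotient
    coefficient is used once and discarded, so no O(n - m)
    quotient storage is ever allocated.
    """
    a = [Fraction(c) for c in a]
    m = len(b) - 1  # deg B
    for i in range(len(a) - 1, m - 1, -1):
        q = a[i] / b[m]          # next quotient coefficient
        for j in range(m + 1):   # subtract q * X^(i-m) * B
            a[i - m + j] -= q * b[j]
    return a[:m]                 # remainder, of degree < m

# (X^3 + 1) mod (X^2 + 1) = 1 - X
print(mod_inplace([1, 0, 0, 1], [1, 0, 1]))
```

The quadratic cost of this sketch is exactly what the fast algorithms above avoid while keeping the O(m) (or O(1)) space profile.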
• J.-G. Dumas, B. Grenet.
Manuscript, 2023.
Bilinear operations are ubiquitous in computer science, and in particular in computer algebra and symbolic computation. One of the most fundamental arithmetic operations is multiplication, and when applied to, e.g., polynomials or matrices, its result is a bilinear function of its inputs. In terms of arithmetic operations, many sub-quadratic (resp. sub-cubic) algorithms have been developed for these tasks. But these fast algorithms come at the expense of (potentially large) extra temporary space to perform the computation. On the contrary, classical quadratic (resp. cubic) algorithms, when computed sequentially, quite often require very few (constant) extra registers. Further work then proposed simultaneously "fast" and "in-place" algorithms for both matrix and polynomial operations. We here propose algorithms extending the latter line of work to accumulated algorithms arising from a bilinear formula. Indeed, one of the main ingredients of that line of work is to use the (free) space of the output as intermediate storage. When the result has to be accumulated, i.e., when the output is also part of the input, this free space does not even exist. To be able to design accumulated in-place algorithms, we thus relax the in-place model to allow algorithms to also modify their input, and therefore to use it as intermediate storage for instance, provided that it is restored to its initial state after completion of the procedure. This is in fact a natural possibility in many programming environments. Furthermore, this restoration allows for recursive combinations of such procedures, as the (non-concurrent) recursive calls will not mess up the state of their callers. We propose here a generic technique transforming any bilinear algorithm into an in-place algorithm under this model. This then directly applies to polynomial and matrix multiplication algorithms, including fast ones.
```@misc{DuGre23bil,
title = {{Fast in-place accumulated bilinear formulae}},
author = {Dumas, Jean-Guillaume and Grenet, Bruno},
year = 2023,
howpublished = {Manuscript},
arxiv = {2307.12712},
hal = {hal-04167499},
}```
• P. Giorgi, B. Grenet, A. Perret du Cray.
J. Symb. Comput., 2023.
Polynomial multiplication is known to have quasi-linear complexity in both the dense and the sparse cases. Yet no truly linear algorithm has been given in either case, and it is not clear whether this is even possible. This leaves room for a better algorithm for the simpler problem of verifying a polynomial product. While finding deterministic methods seems out of reach, there exist probabilistic algorithms for the problem that are optimal in the number of algebraic operations.
We study the generalization of the problem to the verification of a polynomial product modulo a sparse divisor. We investigate its bit complexity for both dense and sparse multiplicands. In particular, we show that verification is cheaper than modular multiplication when the divisor has constant sparsity and a second highest-degree monomial that is not too large. We use these results to obtain new bounds on the bit complexity of the standard polynomial multiplication verification. In particular, we provide optimal algorithms in the bit complexity model in the dense case, improving a result of Kaminski, and develop the first quasi-optimal algorithm for verifying a sparse polynomial product.
```@article{GGPdC23,
title = {{Polynomial modular product verification and its implications}},
author = {Giorgi, Pascal and Grenet, Bruno and Perret du Cray, Armelle},
year = 2023,
journal = {J. Symb. Comput.},
volume = 116,
pages = {98{--}129},
doi = {10.1016/j.jsc.2022.08.011},
arxiv = {2101.02142},
hal = {hal-03102121},
}```
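The textbook starting point for such probabilistic verification is random evaluation: if A·B ≠ C over 𝔽p, a random evaluation point exposes the mismatch with high probability. A minimal Python sketch follows (the helper names are illustrative, and this is only the classical idea, not the bit-complexity-optimal algorithms of the paper).

```python
import random

def poly_eval(f, x, p):
    """Horner evaluation of f (coefficient list, f[i] = coeff of X^i) at x mod p."""
    r = 0
    for coef in reversed(f):
        r = (r * x + coef) % p
    return r

def verify_product(a, b, c, p, trials=20):
    """Check whether A*B = C over F_p by evaluating at random points.

    If A*B != C, each trial detects the mismatch with probability
    at least 1 - deg(C)/p, since a nonzero polynomial of degree d
    has at most d roots; errors survive all trials only with
    negligible probability when p is large.
    """
    for _ in range(trials):
        x = random.randrange(p)
        if poly_eval(a, x, p) * poly_eval(b, x, p) % p != poly_eval(c, x, p):
            return False
    return True
```

For instance, with p = 10^9 + 7, `verify_product([1, 1], [1, p - 1], [1, 0, p - 1], p)` accepts the true identity (1 + X)(1 − X) = 1 − X², while a wrong right-hand side is rejected with overwhelming probability.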
• P. Giorgi, B. Grenet, A. Perret du Cray, D. S. Roche.
Manuscript, 2022.
We describe a straightforward method to generate a random prime q such that the multiplicative group 𝔽q* also has a random large prime-order subgroup. The described algorithm also yields this order p as well as a primitive p-th root of unity ω. The methods here are asymptotically efficient but, due to large constants, may not be very useful in practical settings.
```@misc{GGPR22c,
title = {{Random primes in arithmetic progressions}},
author = {Giorgi, Pascal and Grenet, Bruno and Perret du Cray, Armelle and Roche, Daniel S.},
year = 2022,
howpublished = {Manuscript},
arxiv = {2202.05955},
}```
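For scale, a naive baseline for this task can be sketched with Miller-Rabin primality testing: search for primes p and q = kp + 1, so that 𝔽q* has an order-p subgroup generated by ω = g^k. The names below are illustrative, and the sketch makes none of the efficiency claims discussed above.

```python
import random

def is_probable_prime(n, k=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def prime_with_prime_subgroup(bits):
    """Naive search for primes p and q = k*p + 1, plus omega = g^k,
    an element of order exactly p in F_q* (since omega^p = g^(q-1) = 1
    and p is prime, omega != 1 forces order p)."""
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if not is_probable_prime(p):
            continue
        for k in range(2, 1000, 2):
            q = k * p + 1
            if is_probable_prime(q):
                g = random.randrange(2, q - 1)
                omega = pow(g, k, q)
                if omega != 1:
                    return p, q, omega
```

Each primality test costs a number of modular exponentiations, which is exactly the quasi-cubic bottleneck that motivates the faster approaches in this line of work.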
• P. Giorgi, B. Grenet, A. Perret du Cray, D. S. Roche.
ISSAC, 2022.
Numerous algorithms call for computation over the integers modulo a randomly-chosen large prime. In some cases, the quasi-cubic complexity of selecting a random prime can dominate the total running time. We propose a new variant of dynamic evaluation, applied to a randomly-chosen (composite) integer. The transformation we propose can apply to any algorithm in the algebraic RAM model, even allowing randomization. The resulting transformed algorithm avoids any primality tests and will, with constant positive probability, have the same result as the original computation modulo a randomly-chosen prime. As an application, we demonstrate how to compute the exact number of nonzero terms in an unknown integer polynomial in quasi-linear time. We also show how the same algorithmic transformation technique can be used for computing modulo random irreducible polynomials over a finite field.
```@inProceedings{GGPR22b,
title = {{Random Primes without Primality Testing}},
author = {Giorgi, Pascal and Grenet, Bruno and Perret du Cray, Armelle and Roche, Daniel S.},
year = 2022,
booktitle = {Proceedings of ISSAC},
pages = {207{--}215},
doi = {10.1145/3476446.3536191},
arxiv = {2202.12073},
}```
• P. Giorgi, B. Grenet, A. Perret du Cray, D. S. Roche.
ISSAC, 2022.
Given a way to evaluate an unknown polynomial with integer coefficients, we present new algorithms to recover its nonzero coefficients and corresponding exponents. As an application, we adapt this interpolation algorithm to the problem of computing the exact quotient of two given polynomials. These methods are efficient in terms of the bit-length of the sparse representation, that is, the number of nonzero terms, the size of coefficients, the number of variables, and the logarithm of the degree. At the core of our results is a new Monte Carlo randomized algorithm to recover a polynomial f(x) with integer coefficients given a way to evaluate f(θ) mod m for any chosen integers θ and m. This algorithm has nearly-optimal bit complexity, meaning that the total bit-length of the probes, as well as the computational running time, is softly linear (ignoring logarithmic factors) in the bit-length of the resulting sparse polynomial. To our knowledge, this is the first sparse interpolation algorithm with soft-linear bit complexity in the total output size. For integer polynomials, the best previously known results have at least a cubic dependency on the bit-length of the exponents.
```@inProceedings{GGPR22a,
title = {{Sparse Polynomial Interpolation and Division in Soft-linear Time}},
author = {Giorgi, Pascal and Grenet, Bruno and Perret du Cray, Armelle and Roche, Daniel S.},
year = 2022,
booktitle = {Proceedings of ISSAC},
pages = {459-468},
doi = {10.1145/3476446.3536173},
publisher = {ACM},
arxiv = {2202.08106},
note = {Distinguished Paper Award},
}```
• P. Giorgi, B. Grenet, A. Perret du Cray.
ISSAC, 2021.
No polynomial-time algorithm is known to test whether a sparse polynomial G divides another sparse polynomial F. While computing the quotient Q = F quo G can be done in polynomial time with respect to the sparsities of F, G and Q, this is not yet sufficient to get a polynomial-time divisibility test in general. Indeed, the sparsity of the quotient Q can be exponentially larger than those of F and G. In the favorable case where the sparsity #Q of the quotient is polynomial, the best known algorithm to compute Q has a non-linear factor #G#Q in its complexity, which is not optimal.
In this work, we are interested in the two aspects of this problem. First, we propose a new randomized algorithm that computes the quotient of two sparse polynomials when the division is exact. Its complexity is quasi-linear in the sparsities of F, G and Q. Our approach relies on sparse interpolation and it works over any finite field or the ring of integers. Then, as a step toward faster divisibility testing, we provide a new polynomial-time algorithm when the divisor has a specific shape. More precisely, we reduce the problem to finding a polynomial S such that QS is sparse and testing divisibility by S can be done in polynomial time. We identify some structure patterns in the divisor G for which we can efficiently compute such a polynomial S.
```@inProceedings{GGPdC21,
title = {{On exact division and divisibility testing for sparse polynomials}},
author = {Giorgi, Pascal and Grenet, Bruno and Perret du Cray, Armelle},
year = 2021,
booktitle = {Proceedings of ISSAC},
pages = {163{--}170},
doi = {10.1145/3452143.3465539},
publisher = {ACM},
arxiv = {2102.04826},
hal = {hal-03136945},
}```
• A. Chattopadhyay, B. Grenet, P. Koiran, N. Portier, Y. Strozecki.
J. Symb. Comput., 2021.
Implementation in the package Lacunaryx of Mathemagix.
We present a deterministic polynomial-time algorithm which computes the multilinear factors of multivariate lacunary polynomials over number fields. It is based on a new Gap theorem which allows one to test whether P(X) = Σj aj Xαj (vX+t)βj (uX+w)γj is identically zero in polynomial time. Previous algorithms for this task were based on Gap theorems expressed in terms of the height of the coefficients. Our Gap theorem is based on the valuation of the polynomial and is valid for any field of characteristic zero. As a consequence we obtain a faster and more elementary algorithm. Furthermore, we can partially extend the algorithm to other situations, such as absolute and approximate factorizations. We also give a version of our Gap theorem valid for fields of large characteristic, and deduce a randomized polynomial-time algorithm to compute multilinear factors with at least three monomials of multivariate lacunary polynomials over finite fields of large characteristic. We provide NP-hardness results to explain our inability to compute binomial factors.
```@article{CGKPS21,
title = {{Computing the multilinear factors of lacunary polynomials without heights}},
author = {Chattopadhyay, Arkadev and Grenet, Bruno and Koiran, Pascal and Portier, Natacha and Strozecki, Yann},
year = 2021,
journal = {J. Symb. Comput.},
volume = 104,
pages = {183{--}206},
doi = {10.1016/j.jsc.2020.04.013},
arxiv = {1311.5694},
hal = {hal-00936318},
}```
• P. Giorgi, B. Grenet, A. Perret du Cray.
ISSAC, 2020.
We present a probabilistic algorithm to compute the product of two univariate sparse polynomials over a field with a number of bit operations that is quasi-linear in the size of the input and the output. Our algorithm works for any field of characteristic zero or larger than the degree. We mainly rely on sparse interpolation and on a new algorithm for verifying a sparse product that also has quasi-linear time complexity. Using Kronecker substitution techniques, we extend our result to the multivariate case.
```@inProceedings{GGPdC20,
title = {{Essentially optimal sparse polynomial multiplication}},
author = {Giorgi, Pascal and Grenet, Bruno and Perret du Cray, Armelle},
year = 2020,
booktitle = {Proceedings of ISSAC},
pages = {202{--}209},
doi = {10.1145/3373207.3404026},
publisher = {ACM},
arxiv = {2001.11959},
hal = {hal-02476609},
}```
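The Kronecker substitution mentioned in the abstract, in its simplest form (a univariate polynomial packed into a single integer, rather than a multivariate polynomial into a univariate one), can be sketched as follows; this assumes nonnegative integer coefficients, and the function name is illustrative.

```python
def mul_kronecker(a, b):
    """Multiply polynomials with nonnegative integer coefficients
    via Kronecker substitution: evaluate both at a large power of
    two, multiply the resulting integers, and read the product's
    coefficients back off in base B."""
    # B must exceed every coefficient of the product.
    bound = max(a) * max(b) * min(len(a), len(b))
    B = 1 << (bound.bit_length() + 1)
    ia = sum(c * B**i for i, c in enumerate(a))
    ib = sum(c * B**i for i, c in enumerate(b))
    ic = ia * ib
    out = []
    while ic:
        ic, r = divmod(ic, B)
        out.append(r)
    return out

print(mul_kronecker([1, 2], [3, 4]))  # (1 + 2X)(3 + 4X) = 3 + 10X + 8X^2
```

Choosing B as a power of two makes both the packing and the base-B unpacking bit shifts, so the whole cost reduces to one integer multiplication.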
• P. Giorgi, B. Grenet, D. S. Roche.
ISSAC, 2020.
We consider space-saving versions of several important operations on univariate polynomials, namely power series inversion and division, division with remainder, multi-point evaluation, and interpolation. Now-classical results show that such problems can be solved in (nearly) the same asymptotic time as fast polynomial multiplication. However, these reductions, even when applied to an in-place variant of fast polynomial multiplication, yield algorithms which require at least a linear amount of extra space for intermediate results. We demonstrate new in-place algorithms for the aforementioned polynomial computations which require only constant extra space and achieve the same asymptotic running time as their out-of-place counterparts. We also provide a precise complexity analysis so that all constants are made explicit, parameterized by the space usage of the underlying multiplication algorithms.
```@inProceedings{GGR20,
title = {{Fast in-place algorithms for polynomial operations: division, evaluation, interpolation}},
author = {Giorgi, Pascal and Grenet, Bruno and Roche, Daniel S.},
year = 2020,
booktitle = {Proceedings of ISSAC},
pages = {210{--}217},
doi = {10.1145/3373207.3404061},
publisher = {ACM},
arxiv = {2002.10304},
hal = {lirmm-02493066},
}```
• B. Grenet, I. Volkovich.
SOSA, 2020.
We give a new, simple and short ("one-line") analysis of the runtime of the well-known Euclidean algorithm. While very short and simple, the obtained upper bound is optimal.
```@inProceedings{GV20,
title = {{One (more) line on the most Ancient Algorithm in History}},
author = {Grenet, Bruno and Volkovich, Ilya},
year = 2020,
booktitle = {Proceedings of SOSA},
pages = {15{--}17},
doi = {10.1137/1.9781611976014.3},
publisher = {SIAM},
arxiv = {1808.07957},
hal = {lirmm-02335368},
}```
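For reference, the algorithm in question, instrumented to count division steps; consecutive Fibonacci numbers attain the classical worst case (Lamé's theorem: the step count is proportional to the logarithm of the smaller input). This sketch is only a reminder of the algorithm, not of the paper's analysis.

```python
def gcd_steps(a, b):
    """Euclid's algorithm, also returning the number of
    division steps performed."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

# Consecutive Fibonacci numbers (here 13 and 8) force one
# quotient of 1 at every step, the slowest possible progress.
print(gcd_steps(13, 8))
```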
• P. Giorgi, B. Grenet, D. S. Roche.
ISSAC, 2019.
The polynomial multiplication problem has attracted considerable attention since the early days of computer algebra, and several algorithms have been designed to achieve the best possible time complexity. More recently, efforts have been made to improve the space complexity, developing modified versions of a few specific algorithms to use no extra space while keeping the same asymptotic running time.
In this work, we broaden the scope in two regards. First, we ask whether an arbitrary multiplication algorithm can be performed in-place generically. Second, we consider two important variants which produce only part of the result (and hence have less space to work with), the so-called middle and short products, and ask whether these operations can also be performed in-place.
To answer both questions in (mostly) the affirmative, we provide a series of reductions starting with any linear-space multiplication algorithm. For full and short product algorithms these reductions yield in-place versions with the same asymptotic time complexity as the out-of-place version. For the middle product, the reduction incurs an extra logarithmic factor in the time complexity only when the algorithm is quasi-linear.
```@inProceedings{GGR19,
title = {{Generic reductions for in-place polynomial multiplication}},
author = {Giorgi, Pascal and Grenet, Bruno and Roche, Daniel S.},
year = 2019,
booktitle = {Proceedings of ISSAC},
pages = {187{--}194},
doi = {10.1145/3326229.3326249},
publisher = {ACM},
arxiv = {1902.02967},
hal = {lirmm-02003089},
}```
• B. Grenet, J. van der Hoeven, G. Lecerf.
Appl. Alg. Eng. Comm. Comput., 2016.
We design new deterministic algorithms, based on Graeffe transforms, to compute all the roots of a polynomial which splits over a finite field 𝔽q. Our algorithms were designed to be particularly efficient in the case when the cardinality q–1 of the multiplicative group of 𝔽q is smooth. Such fields are often used in practice because they support fast discrete Fourier transforms. We also present a new nearly optimal algorithm for computing characteristic polynomials of multiplication endomorphisms in finite field extensions. This algorithm allows for the efficient computation of Graeffe transforms of arbitrary orders.
```@article{GHL16,
title = {{Deterministic root finding over finite fields using Graeffe transforms}},
author = {Grenet, Bruno and van der Hoeven, Joris and Lecerf, Grégoire},
year = 2016,
journal = {Appl. Alg. Eng. Comm. Comput.},
volume = 27,
number = 3,
pages = {237{--}257},
doi = {10.1007/s00200-015-0280-5},
hal = {hal-01104251},
}```
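A single Graeffe step, which maps a polynomial to one whose roots are the squares of the original roots, can be sketched over a prime field with naive quadratic multiplication (the algorithms above of course rely on much faster routines):

```python
def graeffe(f, p):
    """One Graeffe step over F_p: return g such that
    g(X^2) = ±f(X) * f(-X), so the roots of g are the
    squares of the roots of f. Schoolbook multiplication."""
    fm = [(-1) ** i * c % p for i, c in enumerate(f)]  # f(-X)
    prod = [0] * (2 * len(f) - 1)
    for i, x in enumerate(f):                # naive product
        for j, y in enumerate(fm):
            prod[i + j] = (prod[i + j] + x * y) % p
    return prod[0::2]  # f(X) * f(-X) is even in X

# f = (X - 2)(X - 3) mod 101; one step gives (X - 4)(X - 9)
print(graeffe([6, 96, 1], 101))
```

Iterating this step raises the roots to high powers of two, which is the mechanism the root-finding algorithms exploit when q − 1 is smooth.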
• B. Grenet.
J. Symb. Comput., 2016. ISSAC 2014 special issue
In this paper, we present a new method for computing bounded-degree factors of lacunary multivariate polynomials. In particular for polynomials over number fields, we give a new algorithm that takes as input a multivariate polynomial f in lacunary representation and a degree bound d and computes the irreducible factors of degree at most d of f in time polynomial in the lacunary size of f and in d. Our algorithm consists of a reduction of the problem to the univariate case on the one hand and to the irreducible factorization of multivariate low-degree polynomials on the other hand, which is valid for any field of characteristic zero. The reduction algorithms we propose are elementary in that they only manipulate the exponent vectors of the input polynomial. The proof of correctness and the complexity bounds rely on the valuations of certain formal power series with rational exponents, called Puiseux series, and on considerations on the Newton polytope of the polynomial.
```@article{Gre16,
title = {{Bounded-degree factors of lacunary multivariate polynomials}},
author = {Grenet, Bruno},
year = 2016,
journal = {J. Symb. Comput.},
volume = 75,
pages = {171{--}192},
doi = {10.1016/j.jsc.2015.11.013},
arxiv = {1412.3570},
note = {ISSAC 2014 special issue},
}```
• B. Grenet.
ACM Comm. Comput. Algebra, 2015.
In this paper, we report on an implementation in the free software Mathemagix of lacunary factorization algorithms, distributed as a library called Lacunaryx.
```@article{Gre15,
title = {{Lacunaryx: Computing bounded-degree factors of lacunary polynomials}},
author = {Grenet, Bruno},
year = 2015,
journal = {ACM Comm. Comput. Algebra},
volume = 49,
number = 4,
pages = {121{--}124},
doi = {10.1145/2893803.2893807},
arxiv = {1506.03726},
note = {Best software presentation award},
}```
• B. Grenet, J. van der Hoeven, G. Lecerf.
ISSAC, 2015.
Consider a finite field 𝔽q whose multiplicative group has smooth cardinality. We study the problem of computing all roots of a polynomial that splits over 𝔽q, which was one of the bottlenecks for fast sparse interpolation in practice. We revisit and slightly improve existing algorithms and then present new randomized ones based on the Graeffe transform. We report on our implementation in the Mathemagix computer algebra system, confirming that our ideas yield an order-of-magnitude gain in practice.
```@inProceedings{GHL15,
title = {{Randomized Root Finding over Finite FFT-fields Using Tangent Graeffe Transforms}},
author = {Grenet, Bruno and van der Hoeven, Joris and Lecerf, Grégoire},
year = 2015,
booktitle = {Proceedings of ISSAC},
pages = {197{--}204},
doi = {10.1145/2755996.2756647},
publisher = {ACM},
hal = {hal-01104279},
}```
• B. Grenet.
ISSAC, 2014.
See also the journal version. Implementation in the package Lacunaryx of Mathemagix.
We present a new algorithm for the computation of the irreducible factors of degree at most d, with multiplicity, of multivariate lacunary polynomials over fields of characteristic zero. The algorithm reduces this computation to the computation of irreducible factors of degree at most d of univariate lacunary polynomials and to the factorization of low-degree multivariate polynomials. The reduction runs in time polynomial in the size of the input polynomial and in d. As a result, we obtain a new polynomial-time algorithm for the computation of low-degree factors, with multiplicity, of multivariate lacunary polynomials over number fields, but our method also gives partial results for other fields, such as the fields of p-adic numbers or for absolute or approximate factorization for instance. The core of our reduction uses the Newton polygon of the input polynomial, and its validity is based on the Newton-Puiseux expansion of roots of bivariate polynomials. In particular, we bound the valuation of f(X,φ) where f is a lacunary polynomial and φ a Puiseux series whose vanishing polynomial has low degree.
```@inProceedings{Gre14,
title = {{Computing low-degree factors of lacunary polynomials: a Newton-Puiseux approach}},
author = {Grenet, Bruno},
year = 2014,
booktitle = {Proceedings of ISSAC},
pages = {224{--}231},
doi = {10.1145/2608628.2608649},
publisher = {ACM},
arxiv = {1401.4720},
}```
• A. Chattopadhyay, B. Grenet, P. Koiran, N. Portier, Y. Strozecki.
ISSAC, 2013.
See also the journal version. Implementation in the package Lacunaryx of Mathemagix.
The lacunary, or supersparse, representation of a multivariate polynomial P is a list of pairs (c,e) where c is a coefficient of P and e is a vector of exponents. Each pair defines a term of the polynomial, and P equals the sum of all these terms. The factorization of lacunary polynomials has been investigated in a series of papers by Cucker, Koiran and Smale (J. Symb. Comput., 1999), Lenstra (Number Theory in Progress, 1999), and Kaltofen and Koiran (ISSAC 2005 & 2006). In this paper, we are interested in more elementary proofs for some of these results. We focus on Kaltofen and Koiran’s results dealing with linear factors of bivariate lacunary polynomials. In particular, we give a polynomial-time algorithm to find linear factors of bivariate polynomials that is not based on heights of algebraic numbers. This simplification allows us to give a similar result for some fields of positive characteristic. Our main technical result is an upper bound on the valuation of polynomials of the form P(X,1+X) where P is a bivariate lacunary polynomial, and can be viewed as a generalization of a result of Hajós.
```@inProceedings{CGKPS13,
title = {{Factoring bivariate lacunary polynomials without heights}},
author = {Chattopadhyay, Arkadev and Grenet, Bruno and Koiran, Pascal and Portier, Natacha and Strozecki, Yann},
year = 2013,
booktitle = {Proceedings of ISSAC},
pages = {141{--}158},
doi = {10.1145/2465506.2465932},
publisher = {ACM},
arxiv = {1206.4224},
}```
• B. Grenet, P. Koiran, N. Portier.
J. Complexity, 2013.
The multivariate resultant is a fundamental tool of computational algebraic geometry. It can in particular be used to decide whether a system of n homogeneous equations in n variables is satisfiable (the resultant is a polynomial in the system's coefficients which vanishes if and only if the system is satisfiable). In this paper, we investigate the complexity of computing the multivariate resultant. First, we study the complexity of testing the multivariate resultant for zero. Our main result is that this problem is NP-hard under deterministic reductions in any characteristic, for systems of low-degree polynomials with coefficients in the ground field (rather than in an extension). In characteristic zero, we observe that this problem is in the Arthur-Merlin class AM if the generalized Riemann hypothesis holds true, while the best known upper bound in positive characteristic remains PSPACE. Second, we study the classical algorithms to compute the resultant. They usually rely on the computation of the determinant of an exponential-size matrix, known as Macaulay matrix. We show that this matrix belongs to a class of succinctly representable matrices, for which testing the determinant for zero is proved PSPACE-complete. This means that improving Canny's PSPACE upper bound requires either to look at the fine structure of the Macaulay matrix to find an ad hoc algorithm for computing its determinant, or to use altogether different techniques.
```@article{GKP13,
title = {{On the complexity of the multivariate resultant}},
author = {Grenet, Bruno and Koiran, Pascal and Portier, Natacha},
year = 2013,
journal = {J. Complexity},
volume = 29,
number = 2,
pages = {142{--}157},
doi = {10.1016/j.jco.2012.10.001},
arxiv = {1210.1451},
}```
• B. Grenet, Th. Monteil, S. Thomassé.
Linear Alg. Appl., 2013.
This paper studies Symmetric Determinantal Representations (SDR) in characteristic 2, that is, the representation of a polynomial P by a symmetric matrix M such that P = det M, and where each entry of M is either a constant or a variable. We first give a necessary condition for a polynomial to have an SDR. It implies that some polynomials have no SDR, answering a question of Grenet et al. A large part of the paper is then devoted to the case of multilinear polynomials. We prove that the existence of an SDR for a multilinear polynomial is equivalent to the existence of a factorization of the polynomial in certain quotient rings. We develop some algorithms to test the factorizability in these rings and use them to find SDRs when they exist. Altogether, this gives us polynomial-time algorithms to factorize the polynomials in the quotient rings and to build SDRs.
```@article{GMT13,
title = {{Symmetric determinantal representations in characteristic{~}2}},
author = {Grenet, Bruno and Monteil, Thierry and Thomassé, Stéphan},
year = 2013,
journal = {Linear Alg. Appl.},
volume = 439,
number = 5,
pages = {1364{--}1381},
doi = {10.1016/j.laa.2013.04.022},
arxiv = {1210.5879},
}```
• B. Grenet, P. Koiran, N. Portier, Y. Strozecki.
FSTTCS, 2011.
Polynomial identity testing and arithmetic circuit lower bounds are two central questions in algebraic complexity theory. It is an intriguing fact that these questions are actually related. One of the authors of the present paper has recently proposed a "real τ-conjecture" which is inspired by this connection. The real τ-conjecture states that the number of real roots of a sum of products of sparse univariate polynomials should be polynomially bounded. It implies a superpolynomial lower bound on the size of arithmetic circuits computing the permanent polynomial. In this paper we show that the real τ-conjecture holds true for a restricted class of sums of products of sparse polynomials. This result yields lower bounds for a restricted class of depth-4 circuits: we show that polynomial size circuits from this class cannot compute the permanent, and we also give a deterministic polynomial identity testing algorithm for the same class of circuits.
```@inProceedings{GKPS11,
title = {{The Limited Power of Powering: Polynomial Identity Testing and a Depth-four Lower Bound for the Permanent}},
author = {Grenet, Bruno and Koiran, Pascal and Portier, Natacha and Strozecki, Yann},
year = 2011,
booktitle = {Proceedings of FSTTCS},
number = 13,
pages = {127{--}139},
doi = {10.4230/LIPIcs.FSTTCS.2011.127},
series = {LIPIcs},
publisher = {Dagstuhl},
arxiv = {1107.1434},
}```
• B. Grenet, E. L. Kaltofen, P. Koiran, N. Portier.
STACS, 2011.
See also the journal version and Chapters 3 & 4 of my PhD Thesis for simplified proofs of these results.
We deploy algebraic complexity theoretic techniques for constructing symmetric determinantal representations of weakly-skew circuits, which include formulas. Our representations produce matrices of much smaller dimensions than those given in the convex geometry literature when applied to polynomials having a concise representation (as a sum of monomials, or more generally as an arithmetic formula or a weakly-skew circuit). These representations are valid in any field of characteristic different from 2. In characteristic 2 we are led to an almost complete solution to a question of Bürgisser on the VNP-completeness of the partial permanent. In particular, we show that the partial permanent cannot be VNP-complete in a finite field of characteristic 2 unless the polynomial hierarchy collapses.
```@inProceedings{GKKP11c,
title = {{Symmetric Determinantal Representation of Weakly-Skew Circuits}},
author = {Grenet, Bruno and Kaltofen, Erich L. and Koiran, Pascal and Portier, Natacha},
year = 2011,
booktitle = {Proceedings of STACS},
number = 9,
pages = {543{--}554},
doi = {10.4230/LIPIcs.STACS.2011.543},
series = {LIPIcs},
publisher = {Dagstuhl},
hal = {hal-00573631},
}```
• B. Grenet, E. L. Kaltofen, P. Koiran, N. Portier.
Contemp. Math., 2011.
See also Chapters 3 & 4 of my PhD Thesis for simplified proofs of these results.
We deploy algebraic complexity theoretic techniques to construct symmetric determinantal representations of formulas and weakly skew circuits. Our representations produce matrices of much smaller dimensions than those given in the convex geometry literature when applied to polynomials having a concise representation (as a sum of monomials, or more generally as an arithmetic formula or a weakly skew circuit). These representations are valid in any field of characteristic different from 2. In characteristic 2 we are led to an almost complete solution to a question of Bürgisser on the VNP-completeness of the partial permanent. In particular, we show that the partial permanent cannot be VNP-complete in a finite field of characteristic 2 unless the polynomial hierarchy collapses.
```@inCollection{GKKP11j,
title = {{Symmetric Determinantal Representation of Formulas and Weakly Skew Circuits}},
author = {Grenet, Bruno and Kaltofen, Erich L. and Koiran, Pascal and Portier, Natacha},
year = 2011,
booktitle = {Randomization, Relaxation, and Complexity in Polynomial Equation Solving},
number = 556,
pages = {61{--}96},
doi = {10.1090/conm/556},
isbn = {978-0821852286},
editor = {Leonid Gurvits and Philippe P{\'e}bay and J. Maurice Rojas and David C. Thompson},
series = {Contemporary Mathematics},
publisher = {AMS},
arxiv = {1001.3804},
}```
• B. Grenet.
Manuscript, 2011.
The Permanent versus Determinant problem is the following: given an n×n matrix X of indeterminates over a field of characteristic different from two, find the smallest matrix M whose coefficients are linear functions in the indeterminates such that the permanent of X equals the determinant of M. We prove that the dimensions of M are at most 2^n − 1.
```@misc{Gre11,
title = {{An Upper Bound for the Permanent versus Determinant Problem}},
author = {Grenet, Bruno},
year = 2011,
howpublished = {Manuscript},
}```
• B. Grenet.
Compl. Syst., 2010.
In 1930, Gödel presented in Königsberg his famous Incompleteness Theorem, stating that some true mathematical statements are unprovable. Yet this result tells us nothing about those independent (that is, true and unprovable) statements: how frequent they are, why they are unprovable, and so on. In 2005, Calude and Jürgensen proved Chaitin's "heuristic principle" for an appropriate measure: the theorems of a finitely-specified theory cannot be significantly more complex than the theory itself. In this work, we investigate the existence of other measures, different from the original one, which satisfy this "heuristic principle". To this end, we introduce the definition of an acceptable complexity measure of theorems.
```@article{Gre10,
title = {{Acceptable Complexity Measures of Theorems}},
author = {Grenet, Bruno},
year = 2010,
journal = {Compl. Syst.},
volume = 18,
number = 4,
pages = {403{--}425},
doi = {10.25088/ComplexSystems.18.4.403},
arxiv = {0910.0045},
}```
• B. Grenet, P. Koiran, N. Portier.
MFCS, 2010.
The multivariate resultant is a fundamental tool of computational algebraic geometry. It can in particular be used to decide whether a system of n homogeneous equations in n variables is satisfiable (the resultant is a polynomial in the system’s coefficients which vanishes if and only if the system is satisfiable). In this paper we present several NP-hardness results for testing whether a multivariate resultant vanishes, or equivalently for deciding whether a square system of homogeneous equations is satisfiable. Our main result is that testing the resultant for zero is NP-hard under deterministic reductions in any characteristic, for systems of low-degree polynomials with coefficients in the ground field (rather than in an extension). We also observe that in characteristic zero, this problem is in the Arthur-Merlin class AM if the generalized Riemann hypothesis holds true. In positive characteristic, the best upper bound remains PSPACE.
```@inProceedings{GKP10,
title = {{The Multivariate Resultant is NP-hard in Any Characteristic}},
author = {Grenet, Bruno and Koiran, Pascal and Portier, Natacha},
year = 2010,
booktitle = {Proceedings of MFCS},
number = 6281,
pages = {477{--}488},
doi = {10.1007/978-3-642-15155-2_42},
series = {LNCS},
publisher = {Springer},
arxiv = {0912.2607},
}```
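For intuition about the univariate shadow of this problem (my own illustration, using sympy's Sylvester resultant rather than the multivariate resultant studied in the paper): the resultant of two univariate polynomials vanishes exactly when they share a root, and the paper shows that testing the multivariate analogue for zero is NP-hard.

```python
import sympy as sp

x = sp.symbols("x")

f = x**2 - 1   # roots 1 and -1
g = x - 1      # shares the root 1 with f
h = x - 2      # shares no root with f

# The Sylvester resultant vanishes iff the two polynomials have a
# common root.
assert sp.resultant(f, g) == 0
assert sp.resultant(f, h) != 0
```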

### Theses

• B. Grenet.
PhD thesis, École Normale Supérieure de Lyon, 2012.
Abstract: Computational complexity is the study of the resources — time, memory, … — needed to algorithmically solve a problem. Within these settings, algebraic complexity theory is the study of the computational complexity of problems of algebraic nature, concerning polynomials.

In this thesis, we study several aspects of algebraic complexity. On the one hand, we are interested in the expressiveness of determinants of matrices as representations of polynomials in Valiant's model of complexity. We show that symmetric matrices have the same expressiveness as ordinary matrices as soon as the characteristic of the underlying field is different from two, but that this is no longer the case in characteristic two. We also build the smallest known representation of the permanent by a determinant.

On the other hand, we study the computational complexity of algebraic problems. We show that the detection of roots in a system of n homogeneous polynomials in n variables is NP-hard. In line with the "VP = VNP?" question, which is the algebraic version of "P = NP?", we obtain a lower bound for the computation of the permanent of a matrix by an arithmetic circuit, and we point out the links between this problem and the polynomial identity testing problem. Finally, we give efficient algorithms for the factorization of lacunary bivariate polynomials.


```@phdthesis{Gre12,
title = {{Repr{\'e}sentation des polyn{\^o}mes, algorithmes et bornes inf{\'e}rieures}},
author = {Grenet, Bruno},
institution = {{\'E}cole Normale Sup{\'e}rieure de Lyon},
year = 2012,
url = {http://www.theses.fr/en/2012ENSL0769},
}```
• B. Grenet.
Master's thesis, École Normale Supérieure de Lyon, 2009.
The resultant is a polynomial that determines whether several given polynomials have a common root. Canny gave a PSPACE algorithm computing the resultant via determinant computations, but asked what its exact complexity is. Here we aim at a finer estimate of this complexity. On the one hand, we show that the resultant is in AM and that it is NP-hard under probabilistic reductions. On the other hand, since the matrices involved can be described by circuits of reasonable size, we show that computing the determinant of such matrices is PSPACE-complete.
```@mastersthesis{Gre09,
title = {{Complexit{\'e} du r{\'e}sultant et des grands d{\'e}terminants}},
author = {Grenet, Bruno},
school = {{\'E}cole Normale Sup{\'e}rieure de Lyon},
year = 2009,
hal = {ensl-00431714},
}```

## Software

• Mathemagix is a free computer algebra and analysis system.

I am the author of the package Lacunaryx, which provides factorization-related algorithms for lacunary polynomials.
Recipient of the ISSAC 2015 Best Software Presentation Award. See the paper for details.

• SageMath (a.k.a. Sage) is a free open-source mathematics software system, built on top of many existing open-source packages: NumPy, SciPy, matplotlib, SymPy, Maxima, GAP, FLINT, R and many more.

My contributions mainly concern polynomials and power series.

• LinBox ecosystem: three free libraries for arithmetic and algebraic computations (Givaro), dense linear algebra over finite fields (FFLAS-FFPACK), and exact, high-performance linear algebra (LinBox).

I have contributed basic arithmetic operations to Givaro, as well as fast parallel FFT implementations to LinBox.

## Talks

2023
Jul. 10.
Sparse polynomial interpolation. Algebraic complexity workshop, ICALP, Paderborn
2022
Feb. 11.
Sparse polynomial arithmetic. Séminaire de l'équipe CAS³C³, Grenoble
2020
Mar. 2-6.
In-place polynomial arithmetic. JNCF 2020, Marseille
2019
Apr. 11.
Memory-efficient polynomial arithmetic. Séminaire de l'équipe AriC, LIP, Lyon
Jul. 15-18.
Generic reductions for in-place polynomial multiplication. ISSAC 2019, Beijing
2018
Nov. 26.
Memory-efficient polynomial arithmetic. Séminaire du LACL, Créteil
2017
Mar. 23.
Root finding over finite fields using Graeffe transforms. Séminaire de l'équipe CASYS du LJK, Grenoble
2016
June 28-30.
Root finding over finite fields using Graeffe transforms. RAIM 2016, Banyuls
May 12.
Factorization of lacunary polynomials. Séminaires Verimag, Grenoble
Apr. 7.
Factorization of lacunary polynomials. Séminaire DALI, Perpignan
Feb. 7-12.
Bounded-degree factorization of lacunary polynomials (Bonus: PIT algorithms). WACT 2016, Tel Aviv
2015
Nov. 2-6.
Root finding over finite fields using Graeffe transforms. JNCF 2015, Cluny
Sept. 21.
Lacunaryx: Computing bounded-degree factors of lacunary polynomials. Mathemagix days, LIX, Palaiseau
July 6-9.
Lacunaryx: Computing bounded-degree factors of lacunary polynomials. Software demonstration at ISSAC 2015, Bath
Apr. 8.
Root finding over finite fields. Groupe de travail MC2, LIP, Lyon
Feb. 11.
Root finding over finite fields. Groupe de travail ECO/Escape, LIRMM, Montpellier
2014
Nov. 3.
Computing low-degree factors of lacunary polynomials: a Newton-Puiseux approach. JNCF 2014, Marseille
July 23.
Computing low-degree factors of lacunary polynomials: a Newton-Puiseux approach. ISSAC 2014, Kobe
June 30 - July 3.
On the complexity of polynomial system solving. XXVèmes rencontres arithmétiques de Caen, Calcul formel et Méthodes effectives en Géométrie algébrique et arithmétique, Île de Tatihou
June 18.
Computing low-degree factors of lacunary polynomials: a Newton-Puiseux approach. Groupe de travail MC2, ÉNS Lyon
May 26-30.
Computing low-degree factors of lacunary polynomials: a Newton-Puiseux approach. MAP 2014, IHP, Paris
Jan. 24.
Computing low-degree factors of lacunary polynomials: a Newton-Puiseux approach. Séminaire de l'équipe MAGMAT du PRiSM, Versailles
2013
Nov. 29-30.
Factoring lacunary polynomials: the easy way. 2èmes journées du GT CoA, Paris
Oct. 28.
Around Sparse Polynomials. Séminaire commun Max-SpecFun du LIX, Palaiseau
Oct. 7.
Determinantal Representations of Polynomials. Séminaire commun Max-SpecFun du LIX, Palaiseau
Sept. 20.
Complexity of the resultant. Polsys Seminar, U. Paris 6
Sept. 19.
Complexity of the resultant. Séminaire commun Max-SpecFun du LIX, Palaiseau
June 26-29.
Factoring bivariate lacunary polynomials without heights. ISSAC 2013, Boston
June 18.
Representations of polynomials, algorithms and lower bounds. Seminar at Århus Universitet, Århus
May 12-17.
Elementary algorithms for the factorization of bivariate lacunary polynomials. JNCF, Marseille
Apr. 18.
Factorization of lacunary polynomials. Séminaire Pampers de l'IRMAR, Rennes
Apr. 8-12.
Factorization of lacunary polynomials. EJC IM 2013, Perpignan
Mar. 14.
Representations of polynomials, algorithms and lower bounds. Seminar at Institut Fourier, Grenoble
Feb. 25-26.
Representations of polynomials, algorithms and lower bounds. Séminaire ECO du LIRMM, Montpellier
Feb. 1.
Factoring bivariate lacunary polynomials without heights. Séminaire de théorie des nombres du LMNO, Caen
Jan. 14-18.
Factoring bivariate lacunary polynomials without heights. Computational Counting Seminar, Dagstuhl
2012
Nov. 29.
Representations of polynomials, algorithms and lower bounds. PhD defense, Lyon
Nov. 21-22.
The real τ-conjecture & lower bounds for the permanent. First meeting of GT CoA, Paris
July 2-4.
Factoring bivariate lacunary polynomials without heights. Meeting of GT CMF, during the conference How Turing's machine changed the world?, Lyon
Jan. 21-27.
Permanent versus Déterminant. Semaine Ski-Études des L3 de l'ENS Lyon, Le Pleynet
2011
Dec. 12-14.
The Limited Power of Powering: Polynomial Identity Testing and a Depth-Four Lower Bound for the Permanent. FSTTCS 2011, IIT Bombay, India
Nov. 14-18.
The Limited Power of Powering: Polynomial Identity Testing and a Depth-Four Lower Bound for the Permanent. JNCF 2011, Marseille
Nov. 14-18.
Symmetric Determinantal Representations of Polynomials in Characteristic 2. JNCF 2011, Marseille
Oct. 6-9.
Symmetric Determinantal Representations of Polynomials
Mar. 28 - Apr. 1.
Permanent versus Déterminant. EJC IM 2011, Amiens
Mar. 10-12.
Symmetric Determinantal Representations of Weakly-Skew Circuits. STACS 2011, Dortmund
2010
Nov. 30.
Symmetric Determinantal Representations of Polynomials. Computational Counting Seminar, Dagstuhl
Nov. 26.
Symmetric Determinantal Representations of Polynomials. Séminaire de Calcul formel et Complexité de l'IRMAR, Rennes
Nov. 16.
Symmetric Determinantal Representations of Polynomials. Séminaire "Complexité, Logique et Informatique" de l'Équipe de Logique Mathématique de Jussieu, Paris
Sept. 30.
Symmetric Determinantal Representations of Polynomials. AlGCO Seminar at the LIRMM, Montpellier
Aug. 23-27.
The Multivariate Resultant is NP-hard in Any Characteristic. MFCS 2010, Brno
Mar. 29 - Apr. 2.
The Multivariate Resultant is NP-hard in Any Characteristic. EJC IM 2010, Chambéry
2009
Sept. 30.
Hardness of the Resultant. Visitors Seminar of the Thematic Program FoCM, Fields Institute, Toronto
June 22-24.
Difficulté du résultant et des grands déterminants. Meeting of GT CMF, ÉNS Cachan
2008
Nov. 12.
Acceptable Complexity Measures of Theorems. LIF seminar, Marseille
Oct. 21 - Nov. 2.
Acceptable Complexity Measures of Theorems. NKS Midwest Conference '08, Bloomington

## My coauthors

Last modification: July 29, 2023