-
J.-G. Dumas, A. Galan, B. Grenet, A. Maignan, D. S. Roche.
We consider the private set union (PSU) problem, where two parties each hold a private set of elements, and they want one of the parties (the receiver) to learn the union of the two sets and nothing else. Our protocols target the unbalanced case where the receiver's set size is larger than the sender's set size, with the goal of minimizing the costs for the sender both in terms of communication volume and local computation time. This setting is motivated by applications where the receiver has significantly more data (input set size) and computational resources than the sender, which might be a small, low-power device. Asymptotically, we achieve communication cost linear in the sender's (smaller) set size, and computation costs for sender and receiver which are nearly-linear in their respective set sizes. To our knowledge, ours is the first algorithm to achieve nearly-linear communication and computation for PSU in this unbalanced setting. Our protocols utilize fully homomorphic encryption (FHE) and, optionally, linearly homomorphic encryption (LHE) to perform the necessary computations while preserving privacy. The underlying computations are based on univariate polynomial arithmetic realized within homomorphic encryption, namely fast multiplication, modular reduction, and multi-point evaluation. These asymptotically fast HE polynomial arithmetic algorithms may be of independent interest.
@inProceedings{DGGMR24,
title = {{Communication Optimal Unbalanced Private Set Union}},
author = {Dumas, Jean-Guillaume and Galan, Alexis and Grenet, Bruno and Maignan, Aude and Roche, Daniel S.},
year = 2025,
booktitle = {Proceedings of ACNS},
arxiv = {2402.16393},
hal = {hal-04475604},
note = {accepted},
}
-
P. Giorgi, B. Grenet, A. Perret du Cray, D. S. Roche.
We consider the classical problems of interpolating a polynomial given a black box for evaluation, and of multiplying two polynomials, in the setting where the bit-lengths of the coefficients may vary widely, so-called unbalanced polynomials. Let f ∈ ℤ[x] be an unknown polynomial and let s, D be bounds on its total bit-length and degree; our new interpolation algorithm returns f with high probability using Õ(s log D) bit operations and O(s log D log s) black box evaluations. For polynomial multiplication, assuming the bit-length s of the product is not given in advance, our algorithm has an expected running time of Õ(s log D), whereas previous methods for (resp.) dense or sparse arithmetic have at least Õ(sD) or Õ(s²) bit complexity.
@inProceedings{GGPR24,
title = {{Fast interpolation and multiplication of unbalanced polynomials}},
author = {Giorgi, Pascal and Grenet, Bruno and Perret du Cray, Armelle and Roche, Daniel S.},
year = 2024,
booktitle = {Proceedings of ISSAC},
pages = {437{--}446},
doi = {10.1145/3666000.3669717},
arxiv = {2402.10139},
}
-
We consider the simultaneously fast and in-place computation of the Euclidean polynomial modular remainder R(X) ≡ A(X) mod B(X), with A and B of respective degrees n and m ≤ n. Fast algorithms for this usually come at the expense of a linear amount of extra temporary space. In particular, they require first computing and storing the whole quotient Q(X) such that A = BQ+R.
We here propose an in-place algorithm (that is, with only O(1) extra space) to compute the sole remainder, using the input space of A and B as intermediate storage but ultimately restoring them to their initial state. If M(k) denotes the complexity of a (not-in-place) algorithm to multiply two degree-k polynomials, our algorithm uses at most O((n/m) M(m) log(m)) arithmetic operations, or O((n/m) M(m)) if M(n) = Θ(n^(1+ε)) for some ε > 0. We also propose variants that compute – still in-place and with the same kind of complexity bounds – the over-place remainder A(X) ≡ A(X) mod B(X), the accumulated remainder R(X) += A(X) mod B(X), and the accumulated modular multiplication R(X) += A(X)C(X) mod B(X).
To achieve this, we develop techniques for Toeplitz matrix operations whose output is also part of the input, and for convolutions. We reduce these tasks to accumulated polynomial multiplication, for which fast in-place algorithms have recently been developed.
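As a point of reference for the over-place semantics (not the paper's fast algorithm), naive schoolbook division can already overwrite A with A mod B using O(1) extra space, discarding each quotient coefficient as soon as it is used. A quadratic-time sketch with illustrative names:

```python
from fractions import Fraction

def remainder_over_place(A, B):
    """Overwrite A with A mod B using O(1) extra space.
    A, B: coefficient lists over a field (A[i] is the coefficient of X^i),
    with B[-1] != 0.  Quadratic-time schoolbook sketch illustrating the
    over-place semantics only, not the paper's fast
    O((n/m) M(m) log m) algorithm."""
    n, m = len(A) - 1, len(B) - 1
    inv_lead = Fraction(1, 1) / B[m]
    for i in range(n, m - 1, -1):
        c = A[i] * inv_lead  # quotient coefficient: used once, then discarded
        for j in range(m + 1):
            A[i - m + j] -= c * B[j]
    del A[m:]  # the remainder has degree < m
    return A
```

For instance `remainder_over_place([0, 0, 0, 1], [-1, 1])` returns `[1]`, i.e. X³ mod (X−1) = 1.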
@inProceedings{DuGre24rem,
title = {{In-place fast polynomial modular remainder}},
author = {Dumas, Jean-Guillaume and Grenet, Bruno},
year = 2024,
booktitle = {Proceedings of ISSAC},
pages = {26{--}35},
doi = {10.1145/3666000.3669672},
arxiv = {2302.13600},
hal = {hal-03979016},
}
-
This paper deals with simultaneously fast and in-place algorithms for formulae where the result has to be linearly accumulated: some output variables are also input variables, linked by a linear dependency. Fundamental examples include the in-place accumulated multiplication of polynomials or matrices, C += AB. The difficulty is to combine in-place computations with fast algorithms: those usually come at the expense of (potentially large) extra temporary space, but with accumulation the output variables are not even available to store intermediate values. We first propose a novel automatic design of fast and in-place accumulating algorithms for any bilinear formulae (and thus for polynomial and matrix multiplication) and then extend it to any linear accumulation of a collection of functions. For this, we relax the in-place model to any algorithm allowed to modify its inputs, provided that those are restored to their initial state afterwards. This allows us, in fine, to derive unprecedented in-place accumulating algorithms for fast polynomial multiplications and for Strassen-like matrix multiplications.
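For contrast with the fast formulae the paper targets, the naive accumulated product C += AB is trivially in-place, since every product term is added straight into the output; the difficulty addressed by the paper is achieving the same for Karatsuba- or Strassen-like schemes, whose intermediate sums normally need extra buffers. A minimal polynomial sketch with illustrative names:

```python
def mul_acc(C, A, B):
    """Accumulated polynomial product C += A*B, computed in place:
    each term is added directly into C, so only O(1) extra space is
    used -- but in quadratic time, unlike the fast in-place
    accumulating formulae of the paper."""
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            C[i + j] += a * b
    return C
```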
@inProceedings{DuGre24bil,
title = {{In-place accumulation of fast multiplication formulae}},
author = {Dumas, Jean-Guillaume and Grenet, Bruno},
year = 2024,
booktitle = {Proceedings of ISSAC},
pages = {16{--}25},
doi = {10.1145/3666000.3669671},
arxiv = {2307.12712},
hal = {hal-04167499},
}
-
P. Giorgi, B. Grenet, A. Perret du Cray.
Polynomial multiplication is known to have quasi-linear complexity in both the dense and the sparse cases. Yet no truly linear algorithm has been given in either case, and it is not clear whether one is even possible. This leaves room for a better algorithm for the simpler problem of verifying a polynomial product. While finding deterministic methods seems out of reach, there exist probabilistic algorithms for the problem that are optimal in the number of algebraic operations.
We study the generalization of the problem to the verification of a polynomial product modulo a sparse divisor. We investigate its bit complexity for both dense and sparse multiplicands. In particular, we are able to show the primacy of verification over modular multiplication when the divisor has constant sparsity and a second highest-degree monomial that is not too large. We use these results to obtain new bounds on the bit complexity of the standard polynomial multiplication verification. In particular, we provide optimal algorithms in the bit complexity model in the dense case by improving a result of Kaminski, and we develop the first quasi-optimal algorithm for verifying sparse polynomial products.
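The classical starting point for such verifications is evaluation at a random point: if fg ≠ h, then f(r)g(r) = h(r) for at most deg(h) choices of r in 𝔽_p. A minimal sketch of this textbook check, with hypothetical helper names (the paper's contribution is refining this to optimal bit complexity and to the modular setting, not this baseline):

```python
import random

def verify_product(f, g, h, p):
    """Monte Carlo check that f*g == h over F_p.
    Polynomials are coefficient lists (poly[i] is the coefficient
    of x^i); if f*g != h, the check wrongly accepts with
    probability at most deg(h)/p."""
    r = random.randrange(p)
    ev = lambda poly: sum(c * pow(r, i, p) for i, c in enumerate(poly)) % p
    return (ev(f) * ev(g)) % p == ev(h)
```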
@article{GGPdC23,
title = {{Polynomial modular product verification and its implications}},
author = {Giorgi, Pascal and Grenet, Bruno and Perret du Cray, Armelle},
year = 2023,
journal = {J. Symb. Comput.},
volume = 116,
pages = {98{--}129},
doi = {10.1016/j.jsc.2022.08.011},
arxiv = {2101.02142},
hal = {hal-03102121},
}
-
P. Giorgi, B. Grenet, A. Perret du Cray, D. S. Roche.
Manuscript, 2022.
We describe a straightforward method to generate a random prime q
such that the multiplicative group 𝔽q* also has a random
large prime-order subgroup. The described algorithm also yields this
order p as well as a primitive p-th root of unity ω. The methods
here are efficient asymptotically, but due to large constants may not
be very useful in practical settings.
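The object being generated can be sketched by simple rejection sampling: pick random multipliers a until q = a·p + 1 is prime, so that 𝔽_q* has a subgroup of order p. This toy version (illustrative names, Miller-Rabin for primality) conveys the goal but not the paper's asymptotically efficient method:

```python
import random

def is_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def prime_in_progression(p, bound):
    """Random prime q == 1 (mod p): then F_q* has a subgroup of
    order p, with generator w = g**((q - 1) // p) for any g of full
    order.  Toy rejection-sampling sketch, not the paper's method."""
    while True:
        a = random.randrange(2, bound, 2)  # even a keeps q odd
        q = a * p + 1
        if is_prime(q):
            return q
```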
@misc{GGPR22c,
title = {{Random primes in arithmetic progressions}},
author = {Giorgi, Pascal and Grenet, Bruno and Perret du Cray, Armelle and Roche, Daniel S.},
year = 2022,
howpublished = {Manuscript},
arxiv = {2202.05955},
}
-
P. Giorgi, B. Grenet, A. Perret du Cray, D. S. Roche.
Numerous algorithms call for computation over the integers modulo a
randomly-chosen large prime. In some cases, the quasi-cubic complexity
of selecting a random prime can dominate the total running time. We
propose a new variant of dynamic evaluation, applied to a
randomly-chosen (composite) integer. The transformation we propose can
apply to any algorithm in the algebraic RAM model, even allowing
randomization. The resulting transformed algorithm avoids any primality
tests and will, with constant positive probability, have the same
result as the original computation modulo a randomly-chosen prime. As
an application, we demonstrate how to compute the exact number of
nonzero terms in an unknown integer polynomial in quasi-linear time. We
also show how the same algorithmic transformation technique can be used
for computing modulo random irreducible polynomials over a finite
field.
@inProceedings{GGPR22b,
title = {{Random Primes without Primality Testing}},
author = {Giorgi, Pascal and Grenet, Bruno and Perret du Cray, Armelle and Roche, Daniel S.},
year = 2022,
booktitle = {Proceedings of ISSAC},
pages = {207{--}215},
doi = {10.1145/3476446.3536191},
arxiv = {2202.12073},
hal = {lirmm-03784821},
}
-
P. Giorgi, B. Grenet, A. Perret du Cray, D. S. Roche.
Given a way to evaluate an unknown polynomial with integer
coefficients, we present new algorithms to recover its nonzero
coefficients and corresponding exponents. As an application, we adapt
this interpolation algorithm to the problem of computing the exact
quotient of two given polynomials. These methods are efficient in terms
of the bit-length of the sparse representation, that is, the number of
nonzero terms, the size of coefficients, the number of variables, and
the logarithm of the degree. At the core of our results is a new Monte
Carlo randomized algorithm to recover a polynomial f(x) with integer
coefficients given a way to evaluate f(θ) mod m for any chosen
integers θ and m. This algorithm has nearly-optimal bit complexity,
meaning that the total bit-length of the probes, as well as the
computational running time, is softly linear (ignoring logarithmic
factors) in the bit-length of the resulting sparse polynomial. To our
knowledge, this is the first sparse interpolation algorithm with
soft-linear bit complexity in the total output size. For integer
polynomials, the best previously known results have at least a cubic
dependency on the bit-length of the exponents.
@inProceedings{GGPR22a,
title = {{Sparse Polynomial Interpolation and Division in Soft-linear Time}},
author = {Giorgi, Pascal and Grenet, Bruno and Perret du Cray, Armelle and Roche, Daniel S.},
year = 2022,
booktitle = {Proceedings of ISSAC},
pages = {459{--}468},
doi = {10.1145/3476446.3536173},
publisher = {ACM},
arxiv = {2202.08106},
hal = {lirmm-03784815},
note = {Distinguished Paper Award},
}
-
P. Giorgi, B. Grenet, A. Perret du Cray.
No polynomial-time algorithm is known to test whether a sparse polynomial G divides another sparse polynomial F. While computing the quotient Q = F quo G can be done in polynomial time with respect to the sparsities of F, G and Q, this is not yet sufficient to get a polynomial-time divisibility test in general. Indeed, the sparsity of the quotient Q can be exponentially larger than the ones of F and G. In the favorable case where the sparsity #Q of the quotient is polynomial, the best known algorithm to compute Q has a non-linear factor #G·#Q in its complexity, which is not optimal.
In this work, we are interested in both aspects of this problem. First, we propose a new randomized algorithm that computes the quotient of two sparse polynomials when the division is exact. Its complexity is quasi-linear in the sparsities of F, G and Q. Our approach relies on sparse interpolation and it works over any finite field or the ring of integers. Then, as a step toward faster divisibility testing, we provide a new polynomial-time algorithm when the divisor has a specific shape. More precisely, we reduce the problem to finding a polynomial S such that QS is sparse and testing divisibility by S can be done in polynomial time. We identify some structure patterns in the divisor G for which we can efficiently compute such a polynomial S.
@inProceedings{GGPdC21,
title = {{On exact division and divisibility testing for sparse polynomials}},
author = {Giorgi, Pascal and Grenet, Bruno and Perret du Cray, Armelle},
year = 2021,
booktitle = {Proceedings of ISSAC},
pages = {163{--}170},
doi = {10.1145/3452143.3465539},
publisher = {ACM},
arxiv = {2102.04826},
hal = {hal-03136945},
}
-
A. Chattopadhyay, B. Grenet, P. Koiran, N. Portier, Y. Strozecki.
Implementation in the Lacunaryx package of Mathemagix.
We present a deterministic polynomial-time algorithm which computes the multilinear factors of multivariate lacunary polynomials over number fields. It is based on a new Gap theorem which allows testing whether P(X) = Σ_j a_j X^(α_j) (vX+t)^(β_j) (uX+w)^(γ_j) is identically zero in polynomial time. Previous algorithms for this task were based on Gap Theorems expressed in terms of the height of the coefficients. Our Gap Theorem is based on the valuation of the polynomial and is valid for any field of characteristic zero. As a consequence we obtain a faster and more elementary algorithm. Furthermore, we can partially extend the algorithm to other situations, such as absolute and approximate factorizations.
We also give a version of our Gap Theorem valid for fields of large characteristic, and deduce a randomized polynomial-time algorithm to compute multilinear factors with at least three monomials of multivariate lacunary polynomials over finite fields of large characteristic. We provide NP-hardness results to explain our inability to compute binomial factors.
@article{CGKPS21,
title = {{Computing the multilinear factors of lacunary polynomials without heights}},
author = {Chattopadhyay, Arkadev and Grenet, Bruno and Koiran, Pascal and Portier, Natacha and Strozecki, Yann},
year = 2021,
journal = {J. Symb. Comput.},
volume = 104,
pages = {183{--}206},
doi = {10.1016/j.jsc.2020.04.013},
arxiv = {1311.5694},
hal = {hal-00936318},
}
-
P. Giorgi, B. Grenet, A. Perret du Cray.
We present a probabilistic algorithm to compute the product of two univariate sparse polynomials over a field with a number of bit operations that is quasi-linear in the size of the input and the output. Our algorithm works for any field of characteristic zero or larger than the degree. We mainly rely on sparse interpolation and on a new algorithm for verifying a sparse product that has also a quasi-linear time complexity. Using Kronecker substitution techniques we extend our result to the multivariate case.
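The Kronecker substitution mentioned at the end packs a multivariate polynomial into a univariate one via x_i ↦ x^(D^i): when every partial degree is below D, distinct terms map to distinct exponents, so a univariate sparse product can be mapped back unambiguously. A small sketch on sparse term lists (illustrative names):

```python
def kronecker(terms, D):
    """Kronecker substitution x_i -> x^(D^i) on a sparse polynomial
    given as a list of (coefficient, exponent-vector) pairs.
    Requires every exponent < D so that the packing is injective;
    returns a dict mapping univariate exponents to coefficients."""
    uni = {}
    for c, exps in terms:
        e = sum(ei * D**i for i, ei in enumerate(exps))
        uni[e] = uni.get(e, 0) + c
    return uni
```

For 3·x₀x₁ + 2 with D = 4, the two terms map to exponents 1 + 4 = 5 and 0.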
@inProceedings{GGPdC20,
title = {{Essentially optimal sparse polynomial multiplication}},
author = {Giorgi, Pascal and Grenet, Bruno and Perret du Cray, Armelle},
year = 2020,
booktitle = {Proceedings of ISSAC},
pages = {202{--}209},
doi = {10.1145/3373207.3404026},
publisher = {ACM},
arxiv = {2001.11959},
hal = {hal-02476609},
}
-
P. Giorgi, B. Grenet, D. S. Roche.
We consider space-saving versions of several important operations on univariate polynomials, namely power series inversion and division, division with remainder, multi-point evaluation, and interpolation. Now-classical results show that such problems can be solved in (nearly) the same asymptotic time as fast polynomial multiplication. However, these reductions, even when applied to an in-place variant of fast polynomial multiplication, yield algorithms which require at least a linear amount of extra space for intermediate results. We demonstrate new in-place algorithms for the aforementioned polynomial computations which require only constant extra space and achieve the same asymptotic running time as their out-of-place counterparts. We also provide a precise complexity analysis so that all constants are made explicit, parameterized by the space usage of the underlying multiplication algorithms.
@inProceedings{GGR20,
title = {{Fast in-place algorithms for polynomial operations: division, evaluation, interpolation}},
author = {Giorgi, Pascal and Grenet, Bruno and Roche, Daniel S.},
year = 2020,
booktitle = {Proceedings of ISSAC},
pages = {210{--}217},
doi = {10.1145/3373207.3404061},
publisher = {ACM},
arxiv = {2002.10304},
hal = {lirmm-02493066},
}
-
We give a new, simple and short ("one-line") analysis of the runtime of the well-known Euclidean Algorithm. While very short and simple, the obtained upper bound is optimal.
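For reference, the algorithm being analyzed is the classical remainder loop; one way to see a logarithmic bound is that a mod b < a/2 whenever 0 < b ≤ a, so the arguments shrink geometrically (this illustrative remark is not necessarily the paper's one-line argument):

```python
def gcd(a, b):
    """Classical Euclidean Algorithm.  Each step replaces (a, b) by
    (b, a % b), and a % b < a/2 whenever 0 < b <= a, so the second
    argument at least halves every step after the first, giving
    O(log min(a, b)) iterations."""
    while b:
        a, b = b, a % b
    return a
```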
@inProceedings{GV20,
title = {{One (more) line on the most Ancient Algorithm in History}},
author = {Grenet, Bruno and Volkovich, Ilya},
year = 2020,
booktitle = {Proceedings of SOSA},
pages = {15{--}17},
doi = {10.1137/1.9781611976014.3},
publisher = {SIAM},
arxiv = {1808.07957},
hal = {lirmm-02335368},
}
-
P. Giorgi, B. Grenet, D. S. Roche.
The polynomial multiplication problem has attracted considerable attention since the early days of computer algebra, and several algorithms have been designed to achieve the best possible time complexity. More recently, efforts have been made to improve the space complexity, developing modified versions of a few specific algorithms to use no extra space while keeping the same asymptotic running time.
In this work, we broaden the scope in two regards. First, we ask whether an arbitrary multiplication algorithm can be performed in-place generically. Second, we consider two important variants which produce only part of the result (and hence have less space to work with), the so-called middle and short products, and ask whether these operations can also be performed in-place.
To answer both questions in (mostly) the affirmative, we provide a series of reductions starting with any linear-space multiplication algorithm. For full and short product algorithms these reductions yield in-place versions with the same asymptotic time complexity as the out-of-place version. For the middle product, the reduction incurs an extra logarithmic factor in the time complexity only when the algorithm is quasi-linear.
@inProceedings{GGR19,
title = {{Generic reductions for in-place polynomial multiplication}},
author = {Giorgi, Pascal and Grenet, Bruno and Roche, Daniel S.},
year = 2019,
booktitle = {Proceedings of ISSAC},
pages = {187{--}194},
doi = {10.1145/3326229.3326249},
publisher = {ACM},
arxiv = {1902.02967},
hal = {lirmm-02003089},
}
-
B. Grenet, J. van der Hoeven, G. Lecerf.
Appl. Alg. Eng. Comm. Comput., 2016.
See also our related paper [GHL15] for randomized algorithms and fast implementations.
We design new deterministic algorithms, based on Graeffe transforms, to compute all the roots of a polynomial which splits over a finite field 𝔽q. Our algorithms were designed to be particularly efficient in the case when the cardinality q–1 of the multiplicative group of 𝔽q is smooth. Such fields are often used in practice because they support fast discrete Fourier transforms. We also present a new nearly optimal algorithm for computing characteristic polynomials of multiplication endomorphisms in finite field extensions. This algorithm allows for the efficient computation of Graeffe transforms of arbitrary orders.
@article{GHL16,
title = {{Deterministic root finding over finite fields using Graeffe transforms}},
author = {Grenet, Bruno and van der Hoeven, Joris and Lecerf, Grégoire},
year = 2016,
journal = {Appl. Alg. Eng. Comm. Comput.},
volume = 27,
number = 3,
pages = {237{--}257},
doi = {10.1007/s00200-015-0280-5},
hal = {hal-01104251},
}
-
B. Grenet.
J. Symb. Comput., 2016.
ISSAC 2014 special issue
In this paper, we present a new method for computing bounded-degree factors of lacunary multivariate polynomials. In particular for polynomials over number fields, we give a new algorithm that takes as input a multivariate polynomial f in lacunary representation and a degree bound d and computes the irreducible factors of degree at most d of f in time polynomial in the lacunary size of f and in d. Our algorithm consists in a reduction of the problem to the univariate case on the one hand and to the irreducible factorization of multivariate low-degree polynomials on the other hand, which is valid for any field of zero characteristic.
The reduction algorithms we propose are elementary in that they only manipulate the exponent vectors of the input polynomial. The proof of correctness and the complexity bounds rely on the valuations of certain formal power series with rational exponents, called Puiseux series, and on considerations on the Newton polytope of the polynomial.
@article{Gre16,
title = {{Bounded-degree factors of lacunary multivariate polynomials}},
author = {Grenet, Bruno},
year = 2016,
journal = {J. Symb. Comput.},
volume = 75,
pages = {171{--}192},
doi = {10.1016/j.jsc.2015.11.013},
arxiv = {1412.3570},
hal = {hal-01094727},
note = {ISSAC 2014 special issue},
}
-
In this paper, we report on an implementation in the free software Mathemagix of lacunary factorization algorithms, distributed as a library called
Lacunaryx.
@article{Gre15,
title = {{Lacunaryx: Computing bounded-degree factors of lacunary polynomials}},
author = {Grenet, Bruno},
year = 2015,
journal = {ACM Comm. Comput. Algebra},
volume = 49,
number = 4,
pages = {121{--}124},
doi = {10.1145/2893803.2893807},
arxiv = {1506.03726},
hal = {lirmm-01163085},
note = {Best software presentation award},
}
-
B. Grenet, J. van der Hoeven, G. Lecerf.
Consider a finite field 𝔽q whose multiplicative group has smooth cardinality. We study the problem of computing all roots of a polynomial that splits over 𝔽q, which was one of the bottlenecks for fast sparse interpolation in practice. We revisit and slightly improve existing algorithms and then present new randomized ones based on the Graeffe transform. We report on our implementation in the Mathemagix computer algebra system, confirming that our ideas yield an order-of-magnitude speedup in practice.
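The Graeffe transform at the heart of these methods replaces f by a polynomial whose roots are the squares of the roots of f, via g(x²) = (−1)^(deg f) · f(x)f(−x). A naive coefficient-level sketch (quadratic time; the paper works with FFT-based and "tangent" variants, not this baseline):

```python
def graeffe(f):
    """One Graeffe step: return g with g(x^2) = (-1)^n * f(x) * f(-x),
    where n = deg f, so the roots of g are the squares of the roots
    of f.  f is a coefficient list (f[i] is the coefficient of x^i)."""
    n = len(f) - 1
    f_neg = [c if i % 2 == 0 else -c for i, c in enumerate(f)]  # f(-x)
    prod = [0] * (2 * n + 1)
    for i, a in enumerate(f):
        for j, b in enumerate(f_neg):
            prod[i + j] += a * b
    sign = -1 if n % 2 else 1
    # f(x)*f(-x) is an even function, so the odd coefficients vanish
    return [sign * prod[2 * k] for k in range(n + 1)]
```

For f = (x−2)(x−3), one step returns the coefficients of (y−4)(y−9).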
@inProceedings{GHL15,
title = {{Randomized Root Finding over Finite FFT-fields Using Tangent Graeffe Transforms}},
author = {Grenet, Bruno and van der Hoeven, Joris and Lecerf, Grégoire},
year = 2015,
booktitle = {Proceedings of ISSAC},
pages = {197{--}204},
doi = {10.1145/2755996.2756647},
publisher = {ACM},
hal = {hal-01104279},
}
-
We present a new algorithm for the computation of the irreducible factors of degree at most d, with multiplicity, of multivariate lacunary polynomials over fields of characteristic zero. The algorithm reduces this computation to the computation of irreducible factors of degree at most d of univariate lacunary polynomials and to the factorization of low-degree multivariate polynomials. The reduction runs in time polynomial in the size of the input polynomial and in d. As a result, we obtain a new polynomial-time algorithm for the computation of low-degree factors, with multiplicity, of multivariate lacunary polynomials over number fields, but our method also gives partial results for other fields, such as the fields of p-adic numbers, or for absolute or approximate factorization.
The core of our reduction uses the Newton polygon of the input polynomial, and its validity is based on the Newton-Puiseux expansion of roots of bivariate polynomials. In particular, we bound the valuation of f(X,φ) where f is a lacunary polynomial and φ a Puiseux series whose vanishing polynomial has low degree.
@inProceedings{Gre14,
title = {{Computing low-degree factors of lacunary polynomials: a Newton-Puiseux approach}},
author = {Grenet, Bruno},
year = 2014,
booktitle = {Proceedings of ISSAC},
pages = {224{--}231},
doi = {10.1145/2608628.2608649},
publisher = {ACM},
arxiv = {1401.4720},
hal = {hal-00936319},
}
-
A. Chattopadhyay, B. Grenet, P. Koiran, N. Portier, Y. Strozecki.
The lacunary, or supersparse, representation of a multivariate polynomial P is a list of pairs (c,e) where c is a coefficient of P and e is a vector of exponents. Each pair defines a term of the polynomial, and P equals the sum of all these terms. The factorization of lacunary polynomials has been investigated in a series of papers by Cucker, Koiran and Smale (J. Symb. Comput., 1999), Lenstra (Number Theory in Progress, 1999), and Kaltofen and Koiran (ISSAC 2005 & 2006). In this paper, we are interested in more elementary proofs for some of these results. We focus on Kaltofen and Koiran’s results dealing with linear factors of bivariate lacunary polynomials. In particular, we give a polynomial-time algorithm to find linear factors of bivariate polynomials that is not based on heights of algebraic numbers. This simplification allows us to give a similar result for some fields of positive characteristic. Our main technical result is an upper bound on the valuation of polynomials of the form P(X,1+X) where P is a bivariate lacunary polynomial, and can be viewed as a generalization of a result of Hajós.
@inProceedings{CGKPS13,
title = {{Factoring bivariate lacunary polynomials without heights}},
author = {Chattopadhyay, Arkadev and Grenet, Bruno and Koiran, Pascal and Portier, Natacha and Strozecki, Yann},
year = 2013,
booktitle = {Proceedings of ISSAC},
pages = {141{--}158},
doi = {10.1145/2465506.2465932},
publisher = {ACM},
arxiv = {1206.4224},
hal = {ensl-00738542},
}
-
B. Grenet, P. Koiran, N. Portier.
The multivariate resultant is a fundamental tool of computational algebraic geometry. It can in particular be used to decide whether a system of n homogeneous equations in n variables is satisfiable (the resultant is a polynomial in the system's coefficients which vanishes if and only if the system is satisfiable). In this paper, we investigate the complexity of computing the multivariate resultant.
First, we study the complexity of testing the multivariate resultant for zero. Our main result is that this problem is NP-hard under deterministic reductions in any characteristic, for systems of low-degree polynomials with coefficients in the ground field (rather than in an extension). In characteristic zero, we observe that this problem is in the Arthur-Merlin class AM if the generalized Riemann hypothesis holds true, while the best known upper bound in positive characteristic remains PSPACE.
Second, we study the classical algorithms to compute the resultant. They usually rely on the computation of the determinant of an exponential-size matrix, known as the Macaulay matrix. We show that this matrix belongs to a class of succinctly representable matrices, for which testing the determinant for zero is proved PSPACE-complete. This means that improving Canny's PSPACE upper bound requires either looking at the fine structure of the Macaulay matrix to find an ad hoc algorithm for computing its determinant, or using altogether different techniques.
@article{GKP13,
title = {{On the complexity of the multivariate resultant}},
author = {Grenet, Bruno and Koiran, Pascal and Portier, Natacha},
year = 2013,
journal = {J. Complexity},
volume = 29,
number = 2,
pages = {142{--}157},
doi = {10.1016/j.jco.2012.10.001},
arxiv = {1210.1451},
hal = {ensl-00744385},
}
-
B. Grenet, Th. Monteil, S. Thomassé.
This paper studies Symmetric Determinantal Representations (SDR) in characteristic 2, that is, the representation of a polynomial P by a symmetric matrix M such that P = det M, and where each entry of M is either a constant or a variable.
We first give a necessary condition for a polynomial to have an SDR. It implies that some polynomials have no SDR, answering a question of Grenet et al.
A large part of the paper is then devoted to the case of multilinear polynomials. We prove that the existence of an SDR for a multilinear polynomial is equivalent to the existence of a factorization of the polynomial in certain quotient rings. We develop some algorithms to test the factorizability in these rings and use them to find SDRs when they exist. Altogether, this gives us polynomial-time algorithms to factorize the polynomials in the quotient rings and to build SDRs.
@article{GMT13,
title = {{Symmetric determinantal representations in characteristic{~}2}},
author = {Grenet, Bruno and Monteil, Thierry and Thomassé, Stéphan},
year = 2013,
journal = {Linear Alg. Appl.},
volume = 439,
number = 5,
pages = {1364{--}1381},
doi = {10.1016/j.laa.2013.04.022},
arxiv = {1210.5879},
hal = {ensl-00830871},
}
-
B. Grenet, P. Koiran, N. Portier, Y. Strozecki.
Polynomial identity testing and arithmetic circuit lower bounds are two central questions in algebraic complexity theory. It is an intriguing fact that these questions are actually related. One of the authors of the present paper has recently proposed a "real τ-conjecture" which is inspired by this connection. The real τ-conjecture states that the number of real roots of a sum of products of sparse univariate polynomials should be polynomially bounded. It implies a superpolynomial lower bound on the size of arithmetic circuits computing the permanent polynomial.
In this paper we show that the real τ-conjecture holds true for a restricted class of sums of products of sparse polynomials. This result yields lower bounds for a restricted class of depth-4 circuits: we show that polynomial size circuits from this class cannot compute the permanent, and we also give a deterministic polynomial identity testing algorithm for the same class of circuits.
@inProceedings{GKPS11,
title = {{The Limited Power of Powering: Polynomial Identity Testing and a Depth-four Lower Bound for the Permanent}},
author = {Grenet, Bruno and Koiran, Pascal and Portier, Natacha and Strozecki, Yann},
year = 2011,
booktitle = {Proceedings of FSTTCS},
number = 13,
pages = {127{--}139},
doi = {10.4230/LIPIcs.FSTTCS.2011.127},
series = {LIPIcs},
publisher = {Dagstuhl},
arxiv = {1107.1434},
hal = {ensl-00607154},
}
-
B. Grenet, E. L. Kaltofen, P. Koiran, N. Portier.
We deploy algebraic complexity theoretic techniques for constructing symmetric determinantal representations of weakly-skew circuits, which include formulas. Our representations produce matrices of much smaller dimensions than those given in the convex geometry literature when applied to polynomials having a concise representation (as a sum of monomials, or more generally as an arithmetic formula or a weakly-skew circuit). These representations are valid in any field of characteristic different from 2. In characteristic 2 we are led to an almost complete solution to a question of Bürgisser on the VNP-completeness of the partial permanent. In particular, we show that the partial permanent cannot be VNP-complete in a finite field of characteristic 2 unless the polynomial hierarchy collapses.
@inProceedings{GKKP11c,
title = {{Symmetric Determinantal Representation of Weakly-Skew Circuits}},
author = {Grenet, Bruno and Kaltofen, Erich L. and Koiran, Pascal and Portier, Natacha},
year = 2011,
booktitle = {Proceedings of STACS},
number = 9,
pages = {543{--}554},
doi = {10.4230/LIPIcs.STACS.2011.543},
series = {LIPIcs},
publisher = {Dagstuhl},
hal = {hal-00573631},
}
-
B. Grenet, E. L. Kaltofen, P. Koiran, N. Portier.
See also Chapters 3 & 4 of my
PhD Thesis for simplified proofs of these results.
We deploy algebraic complexity theoretic techniques to construct symmetric determinantal representations of formulas and weakly skew circuits. Our representations produce matrices of much smaller dimensions than those given in the convex geometry literature when applied to polynomials having a concise representation (as a sum of monomials, or more generally as an arithmetic formula or a weakly skew circuit). These representations are valid in any field of characteristic different from 2. In characteristic 2 we are led to an almost complete solution to a question of Bürgisser on the VNP-completeness of the partial permanent. In particular, we show that the partial permanent cannot be VNP-complete in a finite field of characteristic 2 unless the polynomial hierarchy collapses.
@inCollection{GKKP11j,
title = {{Symmetric Determinantal Representation of Formulas and Weakly Skew Circuits}},
author = {Grenet, Bruno and Kaltofen, Erich L. and Koiran, Pascal and Portier, Natacha},
year = 2011,
booktitle = {Randomization, Relaxation, and Complexity in Polynomial Equation Solving},
number = 556,
pages = {61{--}96},
doi = {10.1090/conm/556},
isbn = {978-0821852286},
editor = {Leonid Gurvits and Philippe P{\'e}bay and J. Maurice Rojas and David C. Thompson},
series = {Contemporary Mathematics},
publisher = {AMS},
arxiv = {1001.3804},
hal = {ensl-00504925},
}
-
B. Grenet.
Manuscript, 2011.
The Permanent versus Determinant problem is the following: Given an n×n matrix X of indeterminates over a field of characteristic different from two, find the smallest matrix M whose coefficients are linear functions in the indeterminates such that the permanent of X equals the determinant of M. We prove that the dimensions of M are at most 2ⁿ–1.
@misc{Gre11,
title = {{An Upper Bound for the Permanent versus Determinant Problem}},
author = {Grenet, Bruno},
year = 2011,
howpublished = {Manuscript},
}
-
In 1930, Gödel presented in Königsberg his famous Incompleteness Theorem, stating that some true mathematical statements are unprovable. Yet this result gives us no idea about those independent (that is, true and unprovable) statements: how frequent they are, why they are unprovable, and so on. In 2005, Calude and Jürgensen proved Chaitin's "heuristic principle" for an appropriate measure: the theorems of a finitely-specified theory cannot be significantly more complex than the theory itself. In this work, we investigate the existence of other measures, different from the original one, which satisfy this "heuristic principle". To this end, we introduce the definition of an acceptable complexity measure of theorems.
@article{Gre10,
title = {{Acceptable Complexity Measures of Theorems}},
author = {Grenet, Bruno},
year = 2010,
journal = {Compl. Syst.},
volume = 18,
number = 4,
pages = {403{--}425},
doi = {10.25088/ComplexSystems.18.4.403},
arxiv = {0910.0045},
}
-
B. Grenet, P. Koiran, N. Portier.
The multivariate resultant is a fundamental tool of computational algebraic geometry. It can in particular be used to decide whether a system of n homogeneous equations in n variables is satisfiable (the resultant is a polynomial in the system’s coefficients which vanishes if and only if the system is satisfiable). In this paper we present several NP-hardness results for testing whether a multivariate resultant vanishes, or equivalently for deciding whether a square system of homogeneous equations is satisfiable. Our main result is that testing the resultant for zero is NP-hard under deterministic reductions in any characteristic, for systems of low-degree polynomials with coefficients in the ground field (rather than in an extension). We also observe that in characteristic zero, this problem is in the Arthur-Merlin class AM if the generalized Riemann hypothesis holds true. In positive characteristic, the best upper bound remains PSPACE.
@inProceedings{GKP10,
title = {{The Multivariate Resultant is NP-hard in Any Characteristic}},
author = {Grenet, Bruno and Koiran, Pascal and Portier, Natacha},
year = 2010,
booktitle = {Proceedings of MFCS},
number = 6281,
pages = {477{--}488},
doi = {10.1007/978-3-642-15155-2_42},
series = {LNCS},
publisher = {Springer},
arxiv = {0912.2607},
hal = {ensl-00440842},
}