# Performance of PolynomialRing evaluation

Hi,

I am having a performance issue with the evaluation of a PolynomialRing which I find suspicious. I would like to understand if I am doing things correctly here.

The polynomial is calculated as a function of a set of matrices, whose logic is not relevant here.

In particular, the polynomial I am benchmarking has 16 variables and about 15,000 terms, which means (on average) 120,000 multiplications and 15,000 additions, for a total of about 135,000 operations. On my laptop a single evaluation takes 4 seconds, in other words one operation every ~30 µs.
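
As a sanity check, the per-operation figure follows from simple arithmetic (plain Python, using the counts quoted above as assumptions rather than measured data):

```python
# Back-of-envelope estimate of the cost per ring operation.
terms = 15_000
mults = 120_000          # ~8 multiplications per term on average
adds = 15_000            # one addition per term
total_ops = mults + adds
seconds = 4.0            # observed single-evaluation time
us_per_op = seconds / total_ops * 1e6
print(total_ops, round(us_per_op, 1))  # 135000 operations, ~29.6 µs each
```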

This seems surprising given the speed of modern computers, and especially compared to the time needed to construct the polynomial in the first place (representativeSetPoly in the code below). I was expecting that part to be slower than the verification.

Is there a way to evaluate the polynomial more efficiently (the verification part in the code)? I am attaching a code snippet to reproduce the issue:

import time
import numpy as np
from sage.all import *
from tqdm import tqdm

PRIME_NUM = 109211909
WORKLOAD_DIM = 16   # 16 variables, as described above
NUM_ROWS = 100      # placeholder; the real workload comes from the matrix logic

variables = ["x" + str(counter) for counter in range(1, WORKLOAD_DIM + 1)]
P = PolynomialRing(GF(PRIME_NUM), variables, order='lex')

# Placeholder workload; the original matrix logic is not relevant here.
np.random.seed(0)
workload = np.random.randint(0, PRIME_NUM, size=(NUM_ROWS, WORKLOAD_DIM))

def calculateRowPoly(rowP):
    poly = 1
    for index, elem in enumerate(rowP):
        # Exponent mask selecting the variable at position `index`
        rowMask = [1 if i == index else 0 for i in range(WORKLOAD_DIM)]
        poly = poly * (1 - P.monomial(*rowMask) * int(elem))
    return poly

#=== Start finding solution (the polynomial)
representativeSetPoly = 0
t0solve = time.perf_counter()
for row in tqdm(workload):
    representativeSetPoly = representativeSetPoly + calculateRowPoly(row)
t1solve = time.perf_counter()
print(f"== It took {t1solve - t0solve} seconds to find a solution!")
#=== End finding solution

print(type(representativeSetPoly))

#=== Start verification
vec = [int(v) for v in workload[0]]  # evaluation point
numOrthoVecVerifier = 0
t0verify = time.perf_counter()
numOrthoVecVerifier += representativeSetPoly(*vec)
t1verify = time.perf_counter()
print(f"== It took {t1verify - t0verify} seconds to verify the solution")
#=== End verification



Thank you for reporting this problem. I can reproduce the issue, but I am not sure what is causing it. There is an open ticket (#27261) about memory leaks with polynomial evaluation, which might be connected to this problem.

For now, there are at least two faster workarounds.

Define the evaluation homomorphism at the point vec and evaluate it at a polynomial f:

def ev1(f, vec):
    P = f.parent()
    phi = P.hom(list(vec), P.base_ring())
    return phi(f)


Or, compute the residue of f modulo the point ideal J corresponding to the vector. As the result is a constant polynomial, we can convert it to the base ring.

def ev2(f, vec):
    P = f.parent()
    J = P.ideal([x - v for x, v in zip(P.gens(), vec)])
    return P.base_ring()(J.reduce(f))
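
For intuition (outside Sage): reducing f modulo the point ideal generated by the x_i - v_i amounts to substituting v_i for each x_i, i.e. it computes f(v). A minimal plain-Python sketch of this evaluation, using a hypothetical dict representation (exponent tuple → coefficient) over GF(p):

```python
PRIME = 109211909

def eval_poly(poly, vec, p=PRIME):
    """Evaluate a multivariate polynomial over GF(p).

    poly: dict mapping exponent tuples to coefficients,
    e.g. {(1, 0): 3, (0, 2): 5} represents 3*x1 + 5*x2^2.
    """
    total = 0
    for exps, coeff in poly.items():
        term = coeff
        for v, e in zip(vec, exps):
            term = term * pow(v, e, p) % p
        total = (total + term) % p
    return total
```

The representation and helper name are illustrative only; Sage stores polynomials differently internally.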


### Timings:

For reproducibility, I ran

np.random.seed(0)


sage: vec = workload[0][0]
sage: %time representativeSetPoly(*vec)
CPU times: user 14.6 s, sys: 20.2 ms, total: 14.6 s
Wall time: 14.6 s
0
sage: %timeit ev1(representativeSetPoly, vec)
213 ms ± 1.04 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
sage: %timeit ev2(representativeSetPoly, vec)
11 ms ± 156 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)


This shows that ev2 is by far the fastest. Maybe this is not unexpected: homomorphisms and evaluation/substitution must work in more general contexts than reduction modulo an ideal, so they carry more overhead.


With #27261 (where `__call__` goes via generic Python evaluation) I got:

sage: vec = workload[0][0]
sage: %timeit representativeSetPoly(*vec)
113 ms ± 1.9 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
sage: %timeit ev1(representativeSetPoly, vec)
108 ms ± 1.37 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
sage: %timeit ev2(representativeSetPoly, vec)
7.9 ms ± 131 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
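
One reason a generic term-by-term evaluator can be fast is that it can precompute the needed powers of each coordinate once and reuse them across all terms, instead of recomputing them per term. A rough plain-Python sketch of that idea (a hypothetical helper, not Sage's actual implementation):

```python
PRIME = 109211909

def eval_with_cached_powers(poly, vec, p=PRIME):
    """Evaluate a dict-represented polynomial (exponent tuple -> coeff)
    over GF(p), caching powers of each coordinate."""
    nvars = len(vec)
    # Find the maximum degree of each variable across all terms.
    max_deg = [0] * nvars
    for exps in poly:
        for i, e in enumerate(exps):
            max_deg[i] = max(max_deg[i], e)
    # Precompute powers[i][e] = vec[i]**e mod p, once per variable.
    powers = [[1] * (d + 1) for d in max_deg]
    for i, v in enumerate(vec):
        for e in range(1, max_deg[i] + 1):
            powers[i][e] = powers[i][e - 1] * v % p
    # Each term now costs only table lookups and multiplications.
    total = 0
    for exps, coeff in poly.items():
        term = coeff
        for i, e in enumerate(exps):
            term = term * powers[i][e] % p
        total = (total + term) % p
    return total
```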

(2021-05-27 20:41:49 +0200)

Thanks for confirming. It seems that the generic evaluation is not such a bad workaround for #27261 after all.

(2021-05-27 21:08:55 +0200)