Why is PolynomialRing over SR much slower than SR?

asked 2021-03-05 14:23:18 +0200 by step

updated 2023-05-19 14:27:15 +0200 by FrédéricC

In my code, I was trying to invert a matrix whose entries are symbolic expressions that depend on a single variable (called "tau").

Since the dependency on tau in my code is polynomial, at some point I realized that, in principle, it would be better to use the FractionField of the PolynomialRing in one variable (tau) over the symbolic ring (SR), instead of working in the symbolic ring with a new variable defined as var("tau"). I expected this to improve performance, because SageMath would then know that no log(tau), exp(tau), or similar terms can appear.

Instead, everything became hundreds of times slower. In particular, the solve_right method on a 4 x 4 matrix takes more than half an hour instead of 0.1 seconds. The simplify method also seems to be much slower.

Do you know why this happens? I tried to find out which algorithm SageMath uses to solve a linear system, but I did not find anything. Does it use a different algorithm for polynomials and for symbolic expressions? I suspect that for polynomials over a "standard" ring (such as QQ) SageMath uses Singular, while for a matrix over the symbolic ring it uses Maxima. So maybe the problem is that, when I use a polynomial ring over SR, Sage must fall back on an internal implementation. Do you know if this is the case? And why does this also affect the simplify method?
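One thing I tried in order to guess at this (just a sketch; the printed class names are only a hint about which code path is taken, and my reading of them may be wrong):

R.<tau> = PolynomialRing(SR)
F = FractionField(R)
print(type(matrix(F, 2, 2)))   # a generic dense matrix class?
print(type(matrix(SR, 2, 2)))  # a specialized symbolic matrix class?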

The following snippet generates two matrices with the same content (but over different rings):

def build_matrix(poly_ring=False):
    if poly_ring:
        # Entries live in Frac(SR[tau]), the fraction field of the
        # univariate polynomial ring in tau over the symbolic ring.
        tau_ring = PolynomialRing(SR, 'tau')
        tau_field = FractionField(tau_ring)
        tau = tau_ring.gen(0)

        M = matrix(tau_field, 2, 2)
        f = vector(tau_field, [1, 1])
    else:
        # Entries live in SR, with tau as a symbolic variable.
        tau = var('tau')
        M = matrix(SR, 2, 2)
        f = vector(SR, [1, 1])

    M[0, 0] = (e^(2/3) + 1) * tau + e^2
    M[0, 1] = e^-1 * tau + 3
    M[1, 0] = tau + 2
    M[1, 1] = 3 * tau + e^(1/2) + e^4

    return M, f
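
To double-check that the two constructions really encode the same matrix, I compared the entries after converting back to SR (a sketch; I am assuming that SR() accepts the polynomial numerators and denominators of the fraction-field entries):

M1, f1 = build_matrix(True)
M2, f2 = build_matrix(False)
same = all(bool(SR(M1[i, j].numerator()) / SR(M1[i, j].denominator()) == M2[i, j])
           for i in range(2) for j in range(2))
print(same)  # should print True if the two matrices agree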

And here are the benchmarks that I see:

m1, f1 = build_matrix(True)
m2, f2 = build_matrix(False)

%timeit m1.inverse()
>> 345 ms ± 14.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit m2.inverse()
>> 2.57 ms ± 52.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%timeit m1.solve_right(f1)
>> 370 ms ± 72.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit m2.solve_right(f2)
>> 2.5 ms ± 21.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
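
In case it is useful, the slow call can be profiled with IPython's %prun magic; this is just how I would start investigating, and I have not managed to draw any conclusion from the output myself:

%prun m1.solve_right(f1)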

Thank you!


Comments


You should provide the whole code that allows us to reproduce the issue, so that we can give it a try.

tmonteil ( 2021-03-06 10:30:23 +0200 )

Ideally, provide the simplest example you can of a matrix that illustrates this slowdown.

slelievre ( 2021-03-06 18:54:29 +0200 )

Sorry, I should have added a snippet to show the behaviour. Now I have added it to my question.

step ( 2021-03-06 23:30:49 +0200 )