Hello!

I am dealing with fairly complicated polynomials that need to be recalculated many times (~5000). I am looking for a way to optimize my code, as the whole procedure took 11 hours for one of the cases, while Mathematica did the same job in 20 minutes.

I would like to use parallel computing, but I am not sure how to implement it. So far, letting SageMath run with many CPUs did not change the run time at all.

```
sage.parallel.ncpus.ncpus()
Parallelism().set('tensor', nproc=16)
Parallelism().set(nproc=16)
Parallelism()
import numpy as np

# Parameters
e1 = 1/3
e2 = 1/3
l1 = 1/4
l2 = 1
fi = pi/2
z = var('z')
x,y = var('x,y',domain='complex')
xs,ys = var('x,y',domain='real')
mu = var('mu',domain='complex')
zeta = x + I*y
zeta_bar = x - I*y
z_bar(z) = zeta_bar + e1/(z-l1*exp(I*fi)) + e2/(z+l1*exp(I*fi)) + (1-e1-e2)/(z-l2)
L(z) = zeta -z + e1/(z_bar(z)-l1*exp(-I*fi)) + e2/(z_bar(z)+l1*exp(-I*fi)) + (1-e1-e2)/(z_bar(z)-l2)
exp1 = L(z).numerator(normalize=True)
derivative_z(z) = e1/(z_bar(z)-l1*exp(-I*fi))^2 + e2/(z_bar(z)+l1*exp(-I*fi))^2 + (1-e1-e2)/(z_bar(z)-l2)^2
derivative_z_bar(z) = e1/(z-l1*exp(I*fi))^2 + e2/(z+l1*exp(I*fi))^2 + (1-e1-e2)/(z-l2)^2
magnification(z) = mu*(1-derivative_z(z)*derivative_z_bar(z))-1
exp2 = magnification(z).numerator(normalize=True)
def resultant(xs, ys):
    # Substitute the grid point and eliminate z via the resultant in mu
    temp1 = exp1.subs({x: xs, y: ys})
    temp2 = exp2.subs({x: xs, y: ys})
    QQI.<I> = QQ[i]
    R.<mu> = QQI[]
    S.<z> = R[]
    poly1 = temp1.polynomial(None, ring=S)
    poly2 = temp2.polynomial(None, ring=S)
    return poly1.resultant(poly2)

def magn(xs, ys):
    roots = resultant(xs, ys).roots(ring=CDF)
    mag = sum(abs(r[0].n(50)) for r in roots)
    return log(mag)
# Grid of 71 rational points from -1 to 1 in steps of 1/35
x_list = [u/35 for u in range(-35, 36)]
y_list = x_list

A = np.zeros([71, 71])
for u in range(71):
    for v in range(71):
        A[u, v] = magn(x_list[u], y_list[v])
```

The last double loop alone took 11.5 hours. Is there any way to run it in parallel? Or is there another way to speed up the code?
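To make the question concrete, this is the kind of parallel map I have in mind for that loop, sketched in plain Python with a toy `eval_point` standing in for `magn`. I used the thread-backed `multiprocessing.dummy.Pool` only so the snippet is self-contained; I assume the real, CPU-bound version would need the process-based `multiprocessing.Pool` or Sage's `@parallel` decorator instead — that is exactly the part I don't know how to set up.

```python
# Sketch: turning the 71x71 double loop into a single parallel map.
# eval_point is a cheap placeholder for the expensive magn(xs, ys).
from multiprocessing.dummy import Pool  # thread-backed Pool, same API as multiprocessing.Pool

N = 71
grid = [u / 35 for u in range(-35, 36)]   # same 71-point grid as above

def eval_point(point):
    u, v = point
    # placeholder for magn(grid[u], grid[v])
    return grid[u] ** 2 + grid[v] ** 2

# flatten the two loops into one list of tasks, map them in parallel
points = [(u, v) for u in range(N) for v in range(N)]
with Pool(4) as pool:
    flat = pool.map(eval_point, points)

# reshape the flat result back into the 71x71 array
A = [flat[u * N:(u + 1) * N] for u in range(N)]
```

The idea is that each grid point is independent, so the order of evaluation should not matter and the results can be reassembled afterwards.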

Thank you for any insights!