Why is Numpy slower inside of a Sage notebook?

asked 2020-09-01 00:21:03 +0200

updated 2020-09-02 07:20:31 +0200

The Issue

I'm running the following numpy benchmark inside Sage and comparing it with the results I get from plain Python.

import numpy as np
import time

n = int(10000)  # int() so Sage's preparser does not substitute a Sage Integer
# Two n x n matrices of standard-normal doubles.
A = np.random.randn(n, n).astype('float64')
B = np.random.randn(n, n).astype('float64')

# Time only the matrix product and the norm of the result.
start_time = time.time()
nrm = np.linalg.norm(np.matmul(A, B))
print(" took {} seconds ".format(time.time() - start_time))
print(" norm = ", nrm)

Output from my Python3 Jupyter notebook:

 took 3.2952768802642822 seconds 
 norm =  999954.1727829538

Output from my SageMath Jupyter notebook:

 took 35.73347020149231 seconds 
 norm =  999976.601519372

Wow. So Numpy runs roughly 10x faster under a standard Python kernel than under the Sage kernel, even when both run in the same Jupyter notebook server.

The Cause

I checked the installed numpy version, and in particular its BLAS configuration, and got identical results in both the Sage and standard Python notebook environments. I then watched htop during the benchmark and found the issue: the Python notebook saturates all 32 hardware threads of my AMD CPU, whereas the Sage notebook uses only a single core. That alone is enough to explain the speed difference on my machine.
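For reference, this is roughly how I compared the two environments; np.show_config() prints the BLAS/LAPACK build details and os.cpu_count() reports the logical core count, all standard numpy/stdlib calls run identically in each kernel:

import os
import numpy as np

print(np.__version__)  # same numpy version reported in both kernels
np.show_config()       # BLAS/LAPACK build info -- identical in both
print(os.cpu_count())  # logical cores visible to the process (32 here)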

Is there a way to make the Sage kernel under Jupyter give Numpy access to all of the CPU cores? I tried setting the SAGE_NUM_THREADS environment variable to a value greater than one, but when I launch the Jupyter service and open a Sage kernel, SAGE_NUM_THREADS is somehow reset to 1. This does not happen when I launch a standard Python kernel from the same Jupyter service.
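One workaround I have not yet verified (an assumption, not a confirmed fix): OpenBLAS and OpenMP builds read their thread count from environment variables such as OPENBLAS_NUM_THREADS and OMP_NUM_THREADS when the library is loaded, so overriding them in the notebook before numpy is imported might restore multithreading even if Sage resets SAGE_NUM_THREADS:

import os

# Assumed workaround: force the BLAS thread count before numpy is loaded.
# These variable names are standard for OpenBLAS/OpenMP builds; whether the
# Sage kernel honors them at this point is an assumption I have not verified.
os.environ["OPENBLAS_NUM_THREADS"] = "32"
os.environ["OMP_NUM_THREADS"] = "32"

import numpy as np  # import only after the variables are set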

Related:

https://ask.sagemath.org/question/445...

Note: this slowdown does not occur on my Intel machine, where Numpy is linked against MKL and runs at the same speed regardless of which environment calls it.
