Workaround: use the sharedmem and multiprocessing packages (sharedmem is available via pip). I tried the following code in a separate .py file:

import sharedmem
import multiprocessing as mp
import numpy as np

N = 1024 * 2
K = 256

# The lock must live at module level so the workers inherit it when the
# pool forks; passing it as an argument to f() via pool.map() does not work.
_lock = mp.Lock()


def f(_shared_buffer):
    partial_result = np.ones((N, N), dtype=np.complex128)
    # Serialize the in-place accumulation so concurrent workers
    # do not interleave their updates.
    with _lock:
        _shared_buffer += partial_result


def test():
    # sharedmem.empty allocates the array in shared memory (an anonymous
    # mmap, whose fresh pages the OS zero-fills), so all workers
    # accumulate into the same buffer.
    shared_buffer = sharedmem.empty((N, N), dtype=np.complex128)
    pool = mp.Pool(4)
    pool.map(f, [shared_buffer for _ in range(K)])
    pool.close()
    pool.join()
    return shared_buffer
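Since the lock has to be inherited at import time, the module is meant to be imported and then executed, roughly like this (assuming the code above is saved as shared_sum.py; that file name is just for illustration):

import numpy as np
import shared_sum

result = shared_sum.test()       # a sharedmem.sharedmem.anonymousmemmap
print(result[0, 0])              # each entry should equal K = 256
assert np.allclose(result, shared_sum.K)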

Advantages of this workaround:

  • Runs about as fast as the original code without parallelism (about 5 seconds on my PC)

Disadvantages:

  • Requires the additional package sharedmem (although it is easily installed via pip)
  • I was not able to pass the _lock object to f() from within Sage's notebook, so the code has to live in a separate file, which is then imported and only then executed (as in the usage example above)
  • The buffer is no longer an np.ndarray but a sharedmem.sharedmem.anonymousmemmap (not sure whether that is a real disadvantage; it can be converted back to numpy's type pretty fast, as the snippet below shows)
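
For the last point, a minimal sketch of the conversion using only standard numpy (np.asarray returns a base-class ndarray view without copying, np.array makes an independent copy):

import numpy as np

result = test()
view = np.asarray(result)   # zero-copy view, type is plain np.ndarray
copy = np.array(result)     # independent copy, detached from the shared mmap
print(type(view), type(copy))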