1 | initial version |
I did not inspect the deep causes (like a pipe being full), but a quick look at https://docs.python.org/2/library/multiprocessing.html indicates that doing queue.get() before p.join() seems to fix the issue. Just replace
for p in processes:
    p.join()
results = [queue.get() for p in processes]
with
results = [queue.get() for p in processes]
for p in processes:
    p.join()
2 | No.2 Revision |
I did not inspect the deep causes (like a pipe being full), but a quick look at https://docs.python.org/2/library/multiprocessing.html indicates that queue.get() should be done before p.join(), and it seems to fix the issue. Just replace
for p in processes:
    p.join()
results = [queue.get() for p in processes]
with
results = [queue.get() for p in processes]
for p in processes:
    p.join()