
RAM problem in script execution

asked 2013-08-22 12:19:50 +0200 by mresimulator

Hi experts!

I wrote the following code. Within 1-2 hours of execution time the RAM of my laptop (8 GB) fills up and the system crashes:

from scipy.stats import uniform
import numpy as np
import time

# np.savetxt(...): here 'cant_de_cadenas' is saved as a .csv file in the
# folder 'directory' (pseudocode: the post only describes this step)

for N in cant_de_cadenas:

    for u in srange(100):
        array_1 = uniform.rvs(loc=-2, scale=4, size=N)
        array_2 = uniform.rvs(loc=-2, scale=4, size=N)
        array_3 = uniform.rvs(loc=-1, scale=7, size=N)
        array_4 = 1/np.tan(array_3)
        array_5 = uniform.rvs(loc=-1, scale=1, size=N)
        array_6 = function(array_5)
        array_7 = function(array_6, array_5, array_4)
        array_8 = function(array_1, array_7)
        array_9 = function(array_2, array_7)

        for j in srange(N+4):
            if j > 0:
                # two arrays (C and D) with len = j-1 are created

                for k in srange(j):
                    if C[k] <= 0 and D[k] <= 0:
                        # ...

            if j+1 < N+4:
                # two arrays (C and D) with len = (N+4)-j-1 are created

                for k in srange((N+4)-j-1):
                    if C[k] <= 0 and D[k] <= 0:
                        # ...

        # An algorithm with matrix M is executed and values 'b1' and 'b2'
        # are generated

        # Some values in M_hor are changed, an algorithm with matrix M_hor
        # is executed, and values 'b3' and 'b4' are generated

        # Some values in M_hor are changed again, the algorithm with M_hor
        # is re-run, and values 'b5' and 'b6' are generated



As you can see, len(A1) = ... = len(A6) = len(cant_de_cadenas), because each An holds the average values over the 100 repetitions stored in Bn.

Although many arrays are created (array_1, array_2, etc., each of length N), in each of the 100 cycles of 'for u in srange(100)' these arrays are overwritten (rebound to the same names). The same applies to the arrays B1, ..., B6: in each of the len(cant_de_cadenas) cycles of 'for N in cant_de_cadenas' these arrays are overwritten (with the same names).
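Rebinding a name does free the old numpy array immediately in CPython (reference counting), which can be checked with a weakref (a minimal sketch; none of the question's helper functions are needed for this):

```python
import weakref
import numpy as np

a = np.zeros(1_000_000)   # a large array bound to the name 'a'
old = weakref.ref(a)      # watch the old object without keeping it alive
a = np.zeros(1_000_000)   # "overwrite": rebinds 'a', dropping the old array
assert old() is None      # the old array was freed immediately
```

So plain overwriting is not a leak by itself; memory only accumulates if something else (a list you append to, a global, or e.g. an interactive session's output history) still holds a reference to the old arrays.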

I tried gc and gc.collect() and nothing worked! In addition, I can't use the memory_profiler module in Sage.

What am I doing wrong? Why does the memory fill up while the script runs (it starts at 10% of RAM used and within 1-2 hours it is completely full)?

Please help me, I'm totally stuck!

Thanks a lot!


2 Answers


answered 2013-08-22 13:14:54 +0200 by ppurka, updated 2013-08-22 13:27:30 +0200

  1. You need to determine how much memory your matrices are taking. I am not sure how much memory each entry takes, but let us assume that each entry of the matrix is a 32-bit integer, i.e. 4 bytes. Then a 600x600 matrix takes 4x600x600 bytes, which is more than 1 MB. You are running 100 iterations with at least 3 such matrices in each iteration, which already gives 300 MB of memory. And then you have 4 different values of N, which gives 1.2 GB. And this is a lower estimate, assuming that none of the other objects take up significant memory, that no more matrices are created in your code, and that numpy/Python is not doing any garbage collection.
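That lower-bound estimate can be checked with a few lines of arithmetic (a sketch; the 600x600 size and the 4-byte entries are the assumptions stated above):

```python
entry_bytes = 4                        # assume 32-bit integer entries
per_matrix = entry_bytes * 600 * 600   # one 600x600 matrix
assert per_matrix > 1_000_000          # "more than 1M" per matrix

per_N = per_matrix * 3 * 100           # 3 matrices per iteration, 100 iterations
assert per_N > 300_000_000             # "already gives 300M"

total = per_N * 4                      # 4 different values of N
assert total > 1_200_000_000           # "which gives 1.2G" as a lower bound
```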

  2. You need to rethink how you want to run your simulation. Are all the entries of your matrix getting allocated some value in the for loop? If so, then do not reinitialize a new zero matrix every time.

  3. Can you redo your computations without copying the matrices? If not, maybe you want to just copy the values of the matrix M manually into the other two matrices so that you are not allocating new memory every time. The program will be slower but you will not run out of memory.
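A minimal sketch of the idea in point 3 with numpy (the names M and M_hor come from the question; the 3x3 size is just for illustration): allocate the working copy once, then copy values into it in place, so no new memory is allocated on any iteration.

```python
import numpy as np

M = np.arange(9.0).reshape(3, 3)
M_hor = np.empty_like(M)        # allocate the working copy once, before the loop
buffer_id = id(M_hor)

for _ in range(3):              # stands in for the simulation loop
    M_hor[:] = M                # copy values in place -- no new allocation
    M_hor[0, 0] = 42.0          # "some values in M_hor are changed"

assert id(M_hor) == buffer_id   # still the same buffer after every iteration
```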

Eventually, you will have to think about your simulation and decide how you want to do it, what simplifications you can perform, etc.

Also, you may want to run Sage under ulimit, like this: ulimit -v <for example half of your total RAM, in kilobytes>, in order to prevent system crashes. If you want a shell command, here it is:

$ ulimit -v $(( $( free | sed -n -e '/^Mem:/{s/^Mem:[ ]*\([0-9]\{1,\}\) .*$/\1/p}' )/2 ))
$ /path/to/sage

answered 2013-08-22 16:31:00 +0200 by mresimulator

Hi experts!

Two questions about the answer:

  1. About item 1 of the answer: let's take an array M of size 300 MB. When it is overwritten (for example, in each iteration), do we then have 600 MB occupied in RAM, or just 300 MB? If just 300 MB is occupied there should be no problem, so why do I have this RAM issue? If the RAM does accumulate, how can I free the memory occupied by the 'old' array?

  2. How can I specify the RAM limit (for example 6 GB)? What happens when the 6 GB limit is reached?

Waiting for your answers.

Thanks for the help!!



1. The whole point of my earlier answer was computing the case where the matrices are *not* "overwritten". Look at point 3 of my earlier answer. 2. Specify the RAM (actually, the virtual memory) in kilobytes. You could give the command `ulimit -v 6000000` to get a limit of approximately 6 GB. Search Google for more examples.

ppurka ( 2013-08-23 02:54:41 +0200 )
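The same cap can also be set from inside a Python/Sage process with the standard-library `resource` module (a sketch, assuming a Unix-like system). This also answers question 2: once the limit is reached, further allocations fail with MemoryError inside the process instead of filling the whole machine's RAM.

```python
import resource

# Cap this process's virtual memory (address space) at roughly 6 GB.
# Note: RLIMIT_AS takes bytes here, whereas `ulimit -v` takes kilobytes.
limit = 6 * 1024**3
resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

soft, hard = resource.getrlimit(resource.RLIMIT_AS)
assert soft == limit  # the new soft limit is in effect
```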

Asked: 2013-08-22 12:19:50 +0200

Seen: 410 times

Last updated: Aug 22 '13