Memory blowup with MILP

Hello, I have been running some code on our Sage server, and it ends up using a huge amount of memory, up to around 50 GB, before it eventually stops running. No error message is generated; the green bar just disappears and nothing else happens. The code below should reproduce the problem. When I ran it, it got through around 300,000 graphs and then stopped; it should run through all 12 million graphs on 10 vertices without stopping.

count = 0

# run through all graphs on 10 vertices
for g in graphs.nauty_geng("10"):
    count = count + 1
    if count % 50000 == 0:
        print count
    # chromatic number of the complement, computed with the MILP backend
    vcc = int(round(g.complement().chromatic_number(algorithm="MILP")))
    if vcc == 9:
        print g.graph6_string()

I later talked to Jason Grout, and he mentioned that if I change it to algorithm="CP", not only does the memory problem not occur, it is also faster on smaller graphs. Still, the problem with the MILP algorithm should presumably be addressed at some point.
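For reference, here is the same loop with only the algorithm argument changed to the CP backend; nothing else differs from the code above:

count = 0

for g in graphs.nauty_geng("10"):
    count = count + 1
    if count % 50000 == 0:
        print count
    # same computation, but with the constraint-programming backend
    vcc = int(round(g.complement().chromatic_number(algorithm="CP")))
    if vcc == 9:
        print g.graph6_string()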

Thanks

UPDATE: I tried gc.collect(), and it doesn't seem to do anything.

import gc  # gc may not be in the worksheet's namespace by default

g = graphs.PetersenGraph()
for count in [1..100000]:
    if count % 5000 == 0:
        print count
        # memory usage before and after forcing a collection
        print get_memory_usage()
        gc.collect()
        print get_memory_usage()
    test = g.independent_set()

The value reported by get_memory_usage() keeps climbing as this runs. The only way I got it to reset was to save and quit the worksheet.
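For anyone wanting to quantify the growth rather than eyeball it, a minimal sketch along the same lines, printing the memory gained per batch of 5,000 calls (it uses only the functions already shown above):

g = graphs.PetersenGraph()
last = get_memory_usage()
for count in [1..100000]:
    test = g.independent_set()
    if count % 5000 == 0:
        now = get_memory_usage()
        # memory gained over the last 5000 independent_set() calls
        print count, now - last
        last = now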

Not a huge deal, since it appears a better fix is coming, but I wanted to point out that gc.collect() doesn't seem to do anything as far as I can tell.

UPDATE 2: The patch didn't work; there is still a memory leak. Here is the exact code I am running this time, after the patch was installed on our Sage server.

count = 0

for g in graphs.nauty_geng("10"):
    if count % 50000 == 0:
        print count
        print get_memory_usage()
    count = count + 1

    vcc = int(round(g.chromatic_number("MILP")))

The memory usage at count 0 was 808. At 50000, it was 3145. At 100000, it was 8106. At 200000, we're up to 14369.
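Assuming those figures are megabytes, which is what get_memory_usage() reports, that works out to roughly (14369 - 808) / 200000 ≈ 0.07 MB, i.e. around 70 KB, leaked per chromatic_number call, with no sign of slowing down.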