
imnvsh's profile - activity

2023-09-28 07:54:06 +0200 received badge  Famous Question (source)
2022-07-06 10:59:36 +0200 received badge  Notable Question (source)
2022-04-26 21:25:09 +0200 received badge  Popular Question (source)
2022-03-04 23:00:40 +0200 received badge  Notable Question (source)
2022-01-20 22:47:49 +0200 received badge  Popular Question (source)
2022-01-13 19:02:08 +0200 received badge  Notable Question (source)
2021-08-11 13:02:35 +0200 received badge  Popular Question (source)
2021-07-26 01:56:33 +0200 received badge  Notable Question (source)
2020-12-02 18:30:51 +0200 received badge  Popular Question (source)
2020-06-04 03:23:32 +0200 received badge  Popular Question (source)
2019-06-06 20:36:08 +0200 commented question Explicitly clean all memory usage

I think my SageMath 8.4 is still running Python 2.7.

2019-06-05 17:45:28 +0200 asked a question Explicitly clean all memory usage

class State():
    def __init__(self):
        self.value = []

state0 = State()
state0.value = range(10^8)

At this point, a huge amount of memory is allocated. After processing state0.value, I set it back to empty so I can continue with another process.

state0.value = []

However, the memory is not fully released. Therefore, I cannot continue with another process, due to the limited memory on my computer, and I have to close SageMath 8.4 to get the memory back. It would be better to iterate instead of using memory like this; however, I hope that an explicit memory clean-up exists in SageMath. Also, I use range(10^8) only to illustrate my actual 10^8 items of data (which are of type set) and to ease understanding, so please do not focus on reformulating this use of range.
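One direction might be the following (a sketch on my side, not a confirmed SageMath feature; gc.collect() is standard Python, but whether freed memory is returned to the operating system is not guaranteed):

import gc

# Drop every reference to the big list, then force a collection pass;
# gc.collect() mainly helps when reference cycles keep objects alive.
state0.value = None
gc.collect()

Even after this, CPython's allocator may keep the freed memory for reuse inside the process instead of handing it back to the OS, which could be what I am seeing.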

2019-03-06 13:34:16 +0200 commented question Save/load huge dictionary

Thank you @vdelecroix.

Side information: my dictionary's keys are tuples (int, int) and its values are lists of the form [int]*2110.

2019-03-04 18:42:32 +0200 commented question Save/load huge dictionary

@vdelecroix Thank you for your response. Could you give me an example? Or something best matched to SageMath, based on your experience? I see this one supported in SageMath, but I am not really sure about its quality: http://doc.sagemath.org/html/en/refer...

2019-03-04 17:08:10 +0200 asked a question Save/load huge dictionary

I have a huge dictionary of about 50 GB. After generating this dictionary, I do not have any memory left. I still run SageMath's standard save(mydict, 'myfile'); however, the save runs almost forever.

What should I do? Storing it in multiple files is also fine with me. I really need to be able to load it again in the future.

Maybe another approach would help. Besides the above dictionary, I have another huge, redundant dictionary mydict2. I tried using del mydict2 to get some extra memory for the SageMath save above; however, the memory usage stays the same as before calling del mydict2. I guess its keys are still stored in memory. I do not need the keys from mydict2, but its values are used in mydict.
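One idea I am considering (a sketch, untested at this scale; it assumes every value pickles cleanly) is to move the dictionary into an on-disk store entry by entry with Python's shelve module, freeing RAM as it goes:

import shelve

db = shelve.open('mydict_on_disk')
for key in mydict.keys():            # keys() returns a copy in Python 2
    db[str(key)] = mydict.pop(key)   # shelve keys must be strings; pop frees the RAM entry
db.close()

Later, entries could be looked up individually, e.g. db[str((5, 7))] for a hypothetical key (5, 7), without ever loading the whole 50 GB at once.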

2019-03-03 01:25:24 +0200 asked a question Divide Combinations(n,k) into multiple parts (n > 1110)

I asked a similar question here: https://ask.sagemath.org/question/453.... I closed it because the answer there matched the question. However, now I have an extra question related to it:

C = Combinations(2110, 3)
C_cardinality = binomial(2110, 3)  # 1,563,429,820
N = C_cardinality / 95             # chunk size; 95 is the number of parts
R = range(0, C_cardinality, N)

Because C_cardinality is too big, I would like to divide C into multiple parts, which are then processed separately on multiple computers. To be sure that these parts are pairwise disjoint but cover the whole of C, I use C's list:

c0 = C.list()[R[0]:R[0]+N]
c1 = C.list()[R[1]:R[1]+N]
c2 = C.list()[R[2]:R[2]+N]
...

However, my computer's RAM (128 GB) is not enough to store C's list. Therefore, I iterate over C and store the chunks in CSV files instead:

import csv

def storeCSV(data, fo, wtype='a'):
    with open(fo, wtype) as wFile:
        write_file = csv.writer(wFile)
        write_file.writerow([data])

index = 0
count = 0
for c in C:
    storeCSV(c, "part_%s.csv" % index)
    count += 1
    if count == N:
        count = 0
        index += 1
        print index

Unfortunately, this CSV writing is too slow, i.e. ~6,000 writes/min, which means I would have to wait roughly 4,300 hours for the whole of C to finish.
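An alternative I am considering (a sketch; the from_rank reference is my assumption about sage.combinat.combination, and I have not verified any of this at full scale) is to stream each chunk lazily instead of writing everything out:

from itertools import islice

# Chunk k of C as a lazy iterator: nothing is materialized up front.
# islice still steps over the first k*N elements, so higher k pays a
# longer start-up cost; if direct indexing is available (I believe
# sage.combinat.combination.from_rank(r, n, k) does this), chunk k
# could instead start at rank k*N without the skip.
def chunk(k):
    return islice(iter(C), int(k * N), int((k + 1) * N))

for c in chunk(0):    # machine k would run chunk(k)
    pass              # the rank filter shown below goes here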

You might wonder why I need to divide C into multiple parts. After having these parts, I will run the following code on different computers, in order to collect information about sets of 3 numbers:

v = MatrixSpace(GF(2), 3, 9)
g3 = (c for c in c0 if block_matrix(3, 1, [v[c[0]], v[c[1]], v[c[2]]]).rank() >= 6)  # or c1, c2, ..., cN
rel = dict()
for g in list(map(tuple, g3)):
    # e.g. g = (1, 2, 3)
    rel.setdefault(g[::2], []).append(g[1])  # key (1, 3), value 2
    rel.setdefault(g[:2], []).append(g[-1])  # key (1, 2), value 3
    rel.setdefault(g[1:], []).append(g[0])   # key (2, 3), value 1

Specifically, I get all combinations of 3 numbers within c0, c1, ..., cN. Then I filter what I need and store it in g3. For each element of g3, of the form (A, B, C), I collect (A, B), (B, C) and (A, C) as keys, with C, A and B as the respective values. The most important reason I must collect this in a Python dictionary is that I need to retrieve any rel[(X, Y)] later on.

I hope the problem is described clearly and someone can help. Thanks a lot!!!

2019-03-01 12:38:22 +0200 commented question problem loading large file

Is there any update on this matter? I am having the same problem here (the generator below was suggested by @slelievre for further processing):

v = MatrixSpace(GF(2), 3, 6)
rank = 6
im = identity_matrix(3)
g3 = (c for c in Combinations(len(v), 3) if block_matrix(3, 2, [[im, v[c[0]]], [im, v[c[1]]], [im, v[c[2]]]]).rank() >= rank)
g3_ = []
for g in g3:
    g3_.append(g)
save(g3_, 'g3_.sobj')

After 7 days, I got a 1.84 GB file (renamed and uploaded by me, no virus, only a SageMath-format file): https://ufile.io/kakhi

After downloading/having the file, we load the file:

g3 = load('g3_.sobj')

However, it is too large for my RAM. Is there any solution? We might have to split it before calling load. Also, a better way to save the data would be appreciated.
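One workaround I can think of (a sketch; the chunk size and filenames are hypothetical) is to never build the single huge list, and instead save the generator's output in fixed-size parts that can be loaded one at a time:

CHUNK = 10**6                # hypothetical number of entries per part
buf, idx = [], 0
for g in g3:
    buf.append(g)
    if len(buf) == CHUNK:
        save(buf, 'g3_part_%04d.sobj' % idx)
        buf, idx = [], idx + 1
if buf:                      # flush the final, possibly shorter part
    save(buf, 'g3_part_%04d.sobj' % idx)

Later, each part could be processed with its own load('g3_part_%04d.sobj' % k) call, so only one part is in RAM at any time.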

2019-02-25 03:15:10 +0200 asked a question Count number of ones in a set of matrices

I am having the following code:

v = MatrixSpace(GF(2), 2, 2)
c = [0] * 2
for vv in v.list():
    t = sum(vv)
    c = [x + y for x, y in zip(c, t)]
    print c

My goal is to count how many 1s there are in each column across the set of matrices. For example, with the first 4 matrices in the set:

[0 0]  [1 0]  [0 1]  [0 0] 
[0 0], [0 0], [0 0], [1 0]

I expect to receive [2, 1] as the result, i.e. two 1s appear in the 1st column and one 1 appears in the 2nd column. However, I get [0, 1], because the sums are reduced modulo 2 in GF(2).
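One direction that might fix this (a sketch I have not verified in this exact Sage version; it assumes change_ring(ZZ) lifts the GF(2) entries to plain integers) is to move each matrix to ZZ before summing:

v = MatrixSpace(GF(2), 2, 2)
c = [0] * 2
for vv in v.list():
    t = sum(vv.change_ring(ZZ))        # row sums as an integer vector
    c = [x + y for x, y in zip(c, t)]  # accumulate ordinary integers
print c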

Thank you for reading my problem, and thanks for your support :)

2019-02-13 08:44:34 +0200 commented question Mapping 2-dimensional matrix index to a list

Are the individual parts always blocks without gaps in them? No, the parts could be either blocks with gaps or blocks without gaps.

More examples: (E1) The 1st and 3rd rows could be chosen as parts. (E2) 3 parts: the 1st column, the top-right cell, and the bottom-right cell. The rest is not chosen. Notes: a) The chosen parts do not have to cover the whole matrix, as in example (E2). b) The parts' shapes are only squares or rectangles, e.g. no L-shape, /-shape, \-shape, #-shape, or +-shape and so on.

2019-02-13 01:54:13 +0200 asked a question Mapping 2-dimensional matrix index to a list

Hi all,

I have a following matrix:

mat = matrix([[1,5,7],[3,10,12],[0,5,3]])
[ 1  5  7]
[ 3 10 12]
[ 0  5  3]

I take 3 parts from the matrix:

A = mat[[0,1,2],[0]]
[1]
[3]
[0]
B = mat[[1,2],[1,2]]
[10 12]
[ 5  3]
C = mat[[0],[1,2]]
[5 7]

To verify that these 3 parts do not overlap each other, my idea is to map the original matrix to a 1-dimensional array of IDs, one for each cell:

tt = copy(mat)
row = mat.nrows()
col = mat.ncols()
for x in range(row):
    for y in range(col):
        tt[x,y] = x*col+y
sage: tt
[0 1 2]
[3 4 5]
[6 7 8]

Then I go over all the coordinates that I took for A, B and C to collect the mapped IDs. For A:

cells_A = []
for i in range(0, 3):
    for j in range(0, 1):
        cells_A.append(tt[i, j])
sage: cells_A
[0, 3, 6]

For B:

cells_B = []
for i in range(1, 3):
    for j in range(1, 3):
        cells_B.append(tt[i, j])
sage: cells_B
[4, 5, 7, 8]

For C, similarly, we have:

sage: cells_C = [1,2]

If the size of the union of these 3 cell sets is equal to the total size of A, B, and C, then I conclude that there is no overlap among them.

area = sum([A.nrows()*A.ncols(), B.nrows()*B.ncols(), C.nrows()*C.ncols()])
if len(set.union(*[set(cells_A), set(cells_B), set(cells_C)])) == area:
    print("No overlapping parts")

However, this approach requires a lot of work and is SLOW. Is there an existing SageMath Matrix feature supporting some of the steps above, especially a way to improve or avoid the index mapping?
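For comparison, here is the shortcut I have sketched for myself (my own workaround, not a built-in Matrix feature as far as I know): build each part's cell set directly from its row/column index lists, so the ID matrix is never needed:

from itertools import product

# Each part is fully described by its row list and column list,
# so its cells are just the Cartesian product of the two.
def cells(rows, cols):
    return set(product(rows, cols))

parts = [cells([0, 1, 2], [0]),   # A
         cells([1, 2], [1, 2]),   # B
         cells([0], [1, 2])]      # C

# Disjoint exactly when the union is as large as the sum of the sizes.
if len(set.union(*parts)) == sum(len(p) for p in parts):
    print("No overlapping parts")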

2019-02-12 00:14:29 +0200 received badge  Scholar (source)
2019-02-09 09:48:34 +0200 received badge  Supporter (source)
2019-02-08 23:51:52 +0200 commented question Sagemath heap size limit

I edited my post to add v:

v = MatrixSpace(GF(2), 2, 6)

@dan_fulea, I would like to share the further processing with you. Example:

g3 = [(1, 2, 3), (1, 2, 4), (1, 2, 5), (2, 3, 5), (1, 3, 5)]

1) g3 is a list of all c involved in good combinations, i.e. those satisfying the condition m.rank() >= 4 (the result is still a huge list, which we can see by running @slelievre's suggestion sum(1 for g in g3), which runs almost non-stop).

2) I process each list in g3 to learn the relationship of any pair, e.g.:

relatives =
{(1, 2): [3, 4, 5],
 (1, 3): [2, 5],
 (1, 4): [2],
 (1, 5): [2, 3],
 (2, 3): [1, 5],
 (2, 4): [1],
 (2, 5): [1, 3],
 (3, 5): [2, 1]}

3) Consider (1, 2, 3): we know (1, 2) can go with 4 or 5 besides 3. If all (1 ... (more)

2019-02-08 23:09:19 +0200 received badge  Editor (source)
2019-02-07 21:26:55 +0200 received badge  Student (source)
2019-02-07 16:02:42 +0200 asked a question Sagemath heap size limit

Hi all, I am not new to Python, but new to Sagemath.

My code:

v = MatrixSpace(GF(2), 2, 6)
size = binomial(4096, 3)
g3 = []
for c in Combinations(range(4096), 3):
    m = block_matrix(3, 1, [v[c[0]], v[c[1]], v[c[2]]])
    if m.rank() >= 4:
        g3.append(c)

The variable g3 grows to about 20 GB in total. SageMath hits a MemoryError at some point; therefore, I divide g3 into 1920 different parts and save them. Now I need to process g3 further, i.e. I need to assign the g3 parts to variables again in order to use them. The first solution I can think of is to create 1920 different variables to hold the whole of g3 in my code; however, this is rather inconvenient.

Is there any better solution?

For example, increasing the limit on the size of a list (the Python list type []), which might let me store up to 11,444,858,880 items in the list g3 (about 11 billion, the value of binomial(4096,3)). I have a computer with 128 GB of RAM, and it would be very nice if I could utilize its full strength.

There is an old topic on this: trac.sagemath.org/ticket/6772. However, I do not really get the idea there.

I would love some support :-) Thank you a lot!!!
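For completeness, this is the direction I am currently leaning towards (a sketch; the filenames and process() are hypothetical): keep the 1920 parts in files and loop over them, reusing a single variable instead of creating 1920 of them:

for k in range(1920):
    part = load('g3_part_%04d.sobj' % k)   # load one saved part
    process(part)                          # hypothetical per-part work
    del part                               # release it before the next part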