
large groebner basis calculations

asked 2010-09-29 21:25:07 +0200 by Bill Page

updated 2015-01-14 14:05:30 +0200 by FrédéricC

I know there are a number of different packages in Sage capable of computing Groebner bases, but I don't have much experience with these tools. I am wondering which option, and specifically which routine, might be most suitable for very large (50,000 generators) but relatively sparse systems (polynomials of degree less than 4, each with 4 or fewer terms, in 64 variables). I am interested in Groebner bases and primary decomposition in relation to classifying low-dimensional Frobenius algebras. I am currently using groebner and the primdec library from Singular in Sage, but I am up against significant performance limits.

Also, does anyone have any recommended methods for pre-processing sparse sets of generators prior to calling groebner? And are there any packages capable of exploiting multi-processor architectures?
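For concreteness, here is a toy sketch of the kind of calls I mean; the ring and generators below are hypothetical stand-ins, not my actual system:

# Toy stand-in for the real system (64 variables, ~50,000 generators)
R.<x0,x1,x2,x3> = PolynomialRing(QQ)
gens = [x0*x1 - x2, x1*x3 - x0, x2*x3 - x1]
I = R.ideal(gens)
gb = I.groebner_basis()            # dispatches to Singular by default
pd = I.primary_decomposition()     # backed by Singular's primdec library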


2 Answers


answered 2010-10-01 18:50:48 +0200 by mhampton

I think Singular is your best bet right now in Sage. There are a lot of options you can try beyond the default behavior, but it's hard to know what will work best for a given system. For some things I've worked with, the facstd command in Singular was very helpful; you can do something like:

# myring and generators must be defined first, e.g. myring.<x,y> = PolynomialRing(QQ)
myideal = myring.ideal(generators)
fstd = singular(myideal).facstd()    # factorizing Groebner: returns a list of bases
bases = [[myring(f) for f in fs] for fs in fstd]    # convert each basis back from Singular to Sage

where of course you would have to define the generators and the ring. Also, you probably already know this, but using different term orders can have an enormous effect on the running time.
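For instance (a sketch; gens_in is a hypothetical helper that rewrites your generators in the given ring), the term order is fixed when you construct the ring, so comparing orders means rebuilding the ring:

# The term order is a property of the ring, not of the ideal.
R1 = PolynomialRing(QQ, 'x', 64, order='degrevlex')   # often the fastest choice
R2 = PolynomialRing(QQ, 'x', 64, order='lex')         # often dramatically slower
# gb1 = R1.ideal(gens_in(R1)).groebner_basis()
# gb2 = R2.ideal(gens_in(R2)).groebner_basis()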

I don't think there is anything parallelizable in Sage for Groebner bases at the moment.

Another thing I sometimes find helpful with big systems is to compute Groebner bases of subsets of the generators. This is trivial to parallelize, and sometimes merging those subsets is faster than doing them all at once. It can also give you some insight into the structure of the system.
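A sketch of that idea using Sage's @parallel decorator (the chunk count and the interleaved split are arbitrary choices; R and generators are assumed to be defined):

# Compute Groebner bases of generator subsets in parallel, then merge.
@parallel(ncpus=4)
def gb_chunk(gens):
    return R.ideal(gens).groebner_basis()

chunks = [generators[i::4] for i in range(4)]        # 4 interleaved subsets
partial = [gb for (args, gb) in gb_chunk(chunks)]    # @parallel yields (input, output) pairs
merged_gens = [f for gb in partial for f in gb]
merged = R.ideal(merged_gens).groebner_basis()       # the merging step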


answered 2010-11-22 04:52:00 +0200

Hi!

Sorry, I did not notice the message before.

There is no 'best variant' for large systems.

The ugly answer is that you have to try several variants and the most important options, at least redTail... You can also try several orderings. Usually the best ordering is dp (degree reverse lexicographic), but sometimes you can arrange the variables into a smart block ordering, which makes things much easier.
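A sketch of what trying those knobs can look like from Sage (the option name is Singular's redTail; the 32/32 block split is an arbitrary illustration):

# Switch on tail reduction in the Singular interpreter behind Sage.
singular.eval('option(redTail)')
# A block ordering: two degrevlex blocks of 32 variables each.
T = TermOrder('degrevlex', 32) + TermOrder('degrevlex', 32)
R = PolynomialRing(QQ, 'x', 64, order=T)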

Maybe you can also try slimgb, which has had good results on large systems in the past, but systems with this many variables are really hard. Having 50,000 equations, however, actually helps and improves the chances of computing the GB.

The core algorithms std and slimgb both implement their own preprocessing.
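Both are reachable from Sage through the algorithm keyword of groebner_basis (a sketch; I is your ideal):

gb_std = I.groebner_basis(algorithm='singular:std')         # Buchberger's algorithm in Singular
gb_slimgb = I.groebner_basis(algorithm='singular:slimgb')   # slim variant, often leaner on memory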

Regarding primary decomposition of such big systems: this is more difficult than computing Groebner bases, so you can essentially forget about it unless there are some special conditions. In fact, when your ideal is this overdetermined, you might only have a few solutions with multiplicity 1, and then the primary decomposition is equivalent to solving.
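In that situation you would solve directly instead; a sketch, assuming the ideal really turns out to be zero-dimensional:

# If the ideal is zero-dimensional, enumerate its finitely many solutions.
if I.dimension() == 0:
    sols = I.variety()    # solutions over the base field (pass e.g. QQbar for all of them)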

I think you did not mention the characteristic.

Cheers, Michael

