How to Give Up on a GAP Computation After 3 Seconds (Without Segfaulting)?
I'm trying to write some code that checks whether "random" groups with trivial abelianization are trivial. Obviously, checking whether a group is trivial is undecidable in general, so, at the risk of throwing off my statistics, I would like to bail on the computation (and try another random presentation) if checking triviality takes longer than 3 seconds.
Unfortunately, I know only two tricks for timing out a computation:
1. use some version of alarm(3) and catch the error, as described in https://ask.sagemath.org/question/50331/ (and others);
2. use the @fork decorator, as described in https://ask.sagemath.org/question/61797/;
and both seem to cause GAP to segfault at random, as also described here: https://discuss.python.org/t/segfault... (though the solutions there are variants of (1) and don't work for me :/).
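For reference, the alarm trick from the first link boils down to the standard POSIX SIGALRM pattern below, sketched in plain Python with a made-up `Timeout` exception and `run_with_timeout` helper (Sage's own `alarm()` raises `AlarmInterrupt` instead). Because the signal arrives asynchronously, it can land while GAP is executing its own C code, which is presumably where the segfaults come from:

```python
import signal

class Timeout(Exception):
    """Raised when the watchdog alarm fires."""

def _on_alarm(signum, frame):
    raise Timeout

def run_with_timeout(fn, seconds, *args):
    """Run fn(*args); return None if it takes longer than `seconds`.

    SIGALRM is delivered asynchronously, which is exactly why this can
    race with a C extension (such as libgap) that installs its own
    signal handlers.
    """
    old_handler = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)              # arm the watchdog
    try:
        return fn(*args)
    except Timeout:
        return None                    # gave up; try the next presentation
    finally:
        signal.alarm(0)                # disarm the pending alarm
        signal.signal(signal.SIGALRM, old_handler)
```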
I've also tried to make GAP give up on the computation quickly by setting the optional limit argument to G.cardinality(), which is supposed to bound the "maximal number of cosets before the computation is aborted". But when I do this, I get messages saying "#I Coset table calculation failed -- trying with bigger table limit", which makes me think GAP is increasing the very limit I intentionally made small.
I suspect this problem comes from the way GAP uses interrupts internally (which I don't understand), as described here: https://doc.sagemath.org/html/en/refe.... My impression is that both the "alarm" method and the "@fork" method use interrupts to escape from the computation, and it makes sense that this could cause race-condition-y problems with GAP's own signal handling.
The question, then, is what should I do? My code is so short that hopefully it counts as a minimal example without much change. I'm including it as a pastebin here: https://pastebin.com/m40UdC0S
Thanks in advance for your help!
Why do you care about GAP segfaulting (especially in a forked session)?
Sorry, I was clearly tired when I wrote this. Using @fork is fine in the sense that the segfault is contained and the code keeps running, but it's much, much slower, probably because of the overhead of starting a new process for each call. Each call is independent, though, so more actively using parallelism might speed things up quite a bit since I'm forking anyway... Would you like me to open a new question to ask about that?
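Since the calls are independent, one way the idea might look is a pool of forked workers where each worker arms its own SIGALRM: the alarm then only ever interrupts that worker, never the parent mid-GAP-call, and the processes are reused instead of forked per call. A minimal sketch in plain Python, where `check_presentation` is a hypothetical stand-in for the real GAP triviality test:

```python
import signal
import time
import multiprocessing as mp

class Timeout(Exception):
    """Raised inside a worker when its per-task alarm fires."""

def _on_alarm(signum, frame):
    raise Timeout

def check_presentation(seed):
    """Hypothetical stand-in for the real check: the actual code would
    build a random presentation from `seed` and ask GAP whether the
    group is trivial.  Seeds divisible by 5 simulate a hopeless coset
    enumeration by sleeping."""
    if seed % 5 == 0:
        time.sleep(10)
    return seed % 2 == 0

def checked_with_timeout(seed, seconds=3):
    """Arm SIGALRM in the current (worker) process, so a stuck check
    only interrupts that worker and the parent never sees the signal."""
    signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)
    try:
        return seed, check_presentation(seed)
    except Timeout:
        return seed, None          # gave up on this presentation
    finally:
        signal.alarm(0)            # disarm before the next task

def survey(seeds, workers=4):
    """Farm the independent checks out to a pool of forked workers."""
    ctx = mp.get_context("fork")   # explicit fork: workers inherit state
    with ctx.Pool(workers) as pool:
        return dict(pool.imap_unordered(checked_with_timeout, seeds))
```

Reusing long-lived workers amortizes the process-startup cost that makes per-call @fork slow, at the price that a worker which segfaults takes its in-flight task down with it.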
Actually, I think I figured it out. It's still slower than I'd like, but I think it's fast enough to work with if I leave it running overnight. Thanks! ^_^