Using SageMath in a Unix pipe without creating files

I would like to run Sage as the middle of a Unix pipe, so that some program generates Sage input, pipes it to Sage, and another program reads the output.

    $ MyProgram1 | sage | MyProgram2 > final_output

In principle, MyProgram1 will run for days, weeks, or months, producing hundreds of millions of inputs for Sage to process, while MyProgram2 will watch for the rare successful inputs. I know that if I have a small amount of input, I can create a file, say "input.sage", and then run "sage input.sage", but this creates an auxiliary file "input.sage.py" before it does anything else, and so is not suitable for hundreds of millions of inputs/outputs.

Comments

@Gordon Please provide a minimal example of MyProgram1, MyProgram2, and the final output you would like. ( 2016-06-03 07:01:11 -0500 )

3 answers

You can try:

    sage -c <<< MyProgram1 | MyProgram2 > final_output

Also, imagine MyProgram1 contains the following (do not forget the ';' between the commands):

    #!/bin/bash
    echo 'print 2+2'
    sleep 4
    echo ';'
    echo 'print "toto"'

You can do, for example:

    sage -c $(./MyProgram1) | sed 's/4/5/g' > final_output


and get, in final_output:

5
toto
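One caveat about the first form: a here-string feeds the literal text after `<<<` to standard input, not the output of a program with that name, which may be why it can appear to do nothing. A quick illustration, using `cat` as a stand-in for `sage`:

```shell
#!/bin/bash
# <<< passes the literal word to stdin, not MyProgram1's output
cat <<< MyProgram1              # prints the text: MyProgram1

# to feed a program's *output*, use command substitution instead
cat <<< "$(echo 'print 2+2')"   # prints: print 2+2
```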


Thanks for your answer, but for some reason I am still having problems. If I use the <<< symbol, then I get nothing at all - no errors, but no output. I made the MyProgram1 file exactly as in your example, and when I run it from the shell it produces what I expect, but when I pipe it into Sage nothing happens. (This is on the Mac OS X command line, but I can't see why that would matter.)

For the second method I get a bit further, in that I can get some output, but not exactly what I want. My Sage code starts with a couple of function definitions and then lots and lots of calls using those newly defined functions. If I include the function definitions, then I get errors, but if I just have regular Sage code, then it works.

( 2016-06-02 08:51:12 -0500 )
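The errors with function definitions are consistent with the command substitution being unquoted: without double quotes, the shell splits $(...) on whitespace, so the newlines that Python's block syntax depends on are collapsed into spaces. A minimal sketch of the difference, with python3 -c standing in for sage -c and a hypothetical generator function gen (once the substitution is quoted, no ';' lines are needed):

```shell
#!/bin/bash
# hypothetical generator: emits a function definition plus a call
gen() { printf 'def f(x):\n    return x + 1\nprint(f(3))\n'; }

# Unquoted substitution collapses the newlines and fails:
#   python3 -c $(gen)        # SyntaxError
# Quoted substitution preserves the line structure:
python3 -c "$(gen)"          # prints: 4
```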

As far as I know, Sage is not aware of piped input. It does have the -c switch, with which it can run commands on the fly. You can use a while loop to collect the code into a string and pass it to sage. For this, you need to tell the while loop to stop collecting when some special marker string appears. Let us say the marker is EOD. Then the following is the basic syntax of the Sage part of the pipeline:

# IFS='' is needed to preserve leading spaces
# -r is needed to read in raw mode, for example to preserve \ line continuation
while IFS='' read -r line; do
    if [[ "${line}" = "EOD" ]]; then
        sage -c "${thecode}"
        thecode=""
    else
        # append a newline so multi-line code (e.g. function definitions) stays valid
        thecode="${thecode}${line}"$'\n'
    fi
done
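To try the loop end to end without Sage installed, here is a self-contained sketch: the loop is wrapped in a function and driven by a generator emitting two EOD-terminated chunks, with python3 -c standing in for sage -c (the function name run_chunks is made up for this sketch):

```shell
#!/bin/bash
# stand-in runner: replace python3 with sage to run real Sage code
run_chunks() {
    local thecode="" line
    while IFS='' read -r line; do
        if [[ "${line}" = "EOD" ]]; then
            python3 -c "${thecode}"   # with Sage: sage -c "${thecode}"
            thecode=""
        else
            # newline appended so multi-line chunks stay valid
            thecode="${thecode}${line}"$'\n'
        fi
    done
}

# generator emits two chunks, each terminated by an EOD line
printf '%s\n' 'def f(x):' '    return x + 1' 'print(f(10))' 'EOD' 'print(2 + 2)' 'EOD' | run_chunks
```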


I cannot seem to post this answer in full; my shell code seems to conflict with this website. Here is the complete example on pastebin: http://pastebin.com/c3mb6USs

Notes:

1. The three parts of the pipeline will not, in general, run in parallel. With myprogram1 | shell code | myprogram2, while any one part is running, the others might be idle. If the Sage part is the slowest, which it may well be since it launches a new Sage process for every chunk, it will remain the bottleneck.
2. The shell code is really just shell code, so you can put it in a bash script, called "sage_shell_code" for example, and then run the pipeline as myprogram1 | sage_shell_code | myprogram2.

To have Sage just run a single command, use sage -c "command", for instance

    $ sage -c "print 2 + 2"
