I want to get efficient code for a repetitive computation on a "grid" of arguments, using a runtime-defined function (passed as an argument). Basically:
    def myfunc(f, somerange, someotherrange):
        storage = initstorage()
        for x in somerange:
            for y in someotherrange:
                store(x, y, dosomething(f, x, y))
        return storage
In order to accelerate things, I'd like to write myfunc in Cython. I can safely assume that my numerical arguments (x and y) will be coerced to double, so I can further accelerate things by declaring them as typed. The problem is the function argument f. I can test the nature of f (e.g. before initstorage()) and set dosomething to a special_case function according to this nature; see the sketch after the list of cases below.
There are basically four cases:

1. a symbolic expression: it seems that, in most cases, it pays to generate a fast_callable from it;
2. a fast_callable;
3. a Python function or a lambda expression;
4. a Cython function declared as cpdef, or a C/C++ function suitably imported.
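Concretely, that pre-loop test could look like the following minimal sketch (assuming Sage: Expression is the class of symbolic expressions and fast_callable its compiler to fast numeric form; the helper name prepare and the arguments xvar and yvar are illustrative, not part of the original code):

    # Hypothetical dispatch helper: normalize f so that the loop only
    # ever sees something directly callable as f(x, y).
    from sage.symbolic.expression import Expression
    from sage.ext.fast_callable import fast_callable

    def prepare(f, xvar, yvar):
        if isinstance(f, Expression):
            # Case 1: compile the symbolic expression over machine doubles.
            return fast_callable(f, vars=(xvar, yvar), domain=float)
        # Cases 2 and 3: a fast_callable or a plain Python function/lambda
        # already supports f(x, y); case 4 is handled separately below.
        return f

After this normalization the loop only ever performs f(x, y) calls; the remaining question is whether those calls can themselves be statically typed.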
For this last case, I know the solution: it is enough to declare the relevant function signature with ctypedef:

    ctypedef double (*mytype)(double, double)

and use this type to declare myfunc:

    cdef myfunc(mytype f, double x, double y): # see above...

with the benefit of acceleration due to static typing (cpdef would not work here, since a C function pointer cannot be converted from a Python object).
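Spelled out, the C-function case might look like this minimal Cython sketch (assuming, beyond the original code, that the ranges come in as 1-D double memoryviews and that the storage is a 2-D NumPy array):

    import numpy as np

    ctypedef double (*mytype)(double, double)

    cdef myfunc(mytype f, double[:] xs, double[:] ys):
        cdef Py_ssize_t i, j
        cdef double[:, :] storage = np.empty((xs.shape[0], ys.shape[0]))
        for i in range(xs.shape[0]):
            for j in range(ys.shape[0]):
                # Each evaluation is a direct C call through the typed
                # pointer: no per-point Python overhead.
                storage[i, j] = f(xs[i], ys[j])
        return np.asarray(storage)

Since only Cython code can call such a myfunc directly, a small def or cpdef wrapper around each concrete f can expose it to Python when needed.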
Can something analogous be done in the three other cases (or possibly just in the fast_callable and Python function cases, an expression being convertible to the fast_callable case)?