# Timing cached functions

I have defined a recursive function that calls itself several times, often asking for a result that was previously computed. In this setting, defining the function as cached makes perfect sense.

But when I try to use timeit, I always get very low timings (I assume the function is computed once and then the cache is hit several times, making the average timing very low). I could define the function as non-cached, but in that case I cannot measure how much I gain by caching the intermediate calls the function makes to itself.

Is there some way to tell timeit to empty the cache each time the function is run?


I think I'm confused about what you want. It sounds like you just want to know the difference in timings with or without the caching, but you could get that by running the function with and without the cache decorator . . .

(2011-08-23 14:14:08 -0600)


OK, I think I didn't express my question clearly. Let me try to illustrate it with a simple example.

Imagine I want to write a function that computes the n-th term of the Fibonacci sequence. Consider this function:

@cached_function
def fibo(n):
    if n==0 or n==1:
        return 1
    return fibo(n-1)+fibo(n-2)


This function calls itself twice. Without the cached decorator, fibo(4) would compute fibo(3) and fibo(2); but to compute fibo(3) it would compute fibo(2) and fibo(1), and again, to compute fibo(2), it would compute fibo(1) and fibo(0). That is, it would compute the same intermediate steps several times. With the cached decorator this is avoided: each value between 0 and n only has to be computed once.
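The same blow-up and its cure can be seen outside Sage with Python's functools.lru_cache playing the role of @cached_function (a sketch; the call counter is my own addition, not part of either library):

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def fibo(n):
    # count how many times the body actually runs (cache hits don't get here)
    global calls
    calls += 1
    if n == 0 or n == 1:
        return 1
    return fibo(n - 1) + fibo(n - 2)

fibo(10)
print(calls)  # each value 0..10 is computed exactly once: 11 calls
```

Without the decorator, the same fibo(10) call would run the body 177 times instead of 11.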

But if I want to time the cached version of the function against the non-cached version, timeit is not useful: for the non-cached version it repeats the same computation several times and gives an average of the time spent, whereas the cached version computes the result once and then reads the cache on every subsequent run, giving an unrealistically low average.


As niles suggested, why not create a function identical to this one, but replacing "fibo" with "fibo2" everywhere? Don't make "fibo2" cached. Then time "fibo" vs. "fibo2". There are also useful methods for cached functions, like "clear_cache" and "precompute" which you might find helpful. See the documentation: http://sagemath.org/doc/reference/sage/misc/cachefunc.html.

(2011-08-24 07:07:17 -0600)
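The twin-function comparison can be sketched in plain Python, with functools.lru_cache standing in for Sage's @cached_function and cache_clear() standing in for clear_cache() (the names fibo/fibo2 follow the suggestion above; the input 20 and repeat count 100 are arbitrary choices of mine):

```python
from functools import lru_cache
from timeit import timeit

@lru_cache(maxsize=None)
def fibo(n):  # cached version
    if n == 0 or n == 1:
        return 1
    return fibo(n - 1) + fibo(n - 2)

def fibo2(n):  # identical, but uncached
    if n == 0 or n == 1:
        return 1
    return fibo2(n - 1) + fibo2(n - 2)

# clear the cache inside the timed statement so every run starts cold
t_cached = timeit('fibo.cache_clear(); fibo(20)', globals=globals(), number=100)
t_plain = timeit('fibo2(20)', globals=globals(), number=100)
print(t_cached < t_plain)  # even starting cold each run, caching wins
```

Starting each run with an empty cache still measures the gain from caching the intermediate recursive calls, which is exactly what a warm-cache timing hides.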

You could include an optional argument which clears the function's cache:

@cached_function
def fibo(n, clear_cache=False):
    if clear_cache:
        fibo.clear_cache()
    if n==0 or n==1:
        return 1
    # recurse with the default clear_cache=False, so the cache is
    # only emptied once, at the top-level call
    return fibo(n-1)+fibo(n-2)


You could modify this to only remove certain inputs from the cache (just remove certain keys from fibo.cache).
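In Sage, fibo.cache is an ordinary dict keyed by the call arguments. As a plain-Python sketch of evicting individual inputs, here is a hand-rolled dict cache (this cache dict and fibo are my own illustration, not Sage's implementation):

```python
cache = {}

def fibo(n):
    # memoize by hand so individual entries can be evicted later
    if n in cache:
        return cache[n]
    result = 1 if n == 0 or n == 1 else fibo(n - 1) + fibo(n - 2)
    cache[n] = result
    return result

fibo(10)                 # fills the cache with keys 0..10
del cache[10], cache[9]  # evict only these inputs; smaller values stay cached
print(sorted(cache))     # → [0, 1, 2, 3, 4, 5, 6, 7, 8]
```

Recomputing fibo(10) afterwards only has to redo the two evicted values; everything below is still served from the cache.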

Also, you can use %time instead of timeit, or tell timeit to just run once:

sage: timeit('fibo(12, True)',number=1, repeat=1)
1 loops, best of 1: 4.69 ms per loop

sage: timeit('fibo(12, False)',number=1, repeat=1)
1 loops, best of 1: 192 µs per loop


But is this really any better than @John Palmieri's suggestion?


Expanding on what niles said, you could have one function which is not cached and another that is. Run timeit on each one in the same Sage session.

If you have a method which is cached, then you can get around that by doing

sage: a = matrix(...)
# this will be fast because it calls the cached version:
sage: timeit('a.my_great_cached_method()')
# this won't be as fast, because the cache gets reset
# for each instance of the class:
sage: timeit('matrix(...).my_great_cached_method()')


There is no built-in way to tell timeit to ignore the cache, though.


If one uses the @cached_method decorator, then one can clear the cache with a.my_great_cached_method.clear_cache(). Using that in the commands passed to "timeit" avoids measuring the time spent creating the matrix, giving more accurate results.

(2011-08-23 20:10:20 -0600)