
Elementwise functions for deep learning

asked 2021-03-23 21:01:14 +0100


Let's say I want to define a softmax function over a vector input.

In Python/NumPy, that would look something like this:

import numpy as np

x1, x2, x3 = 1.0, 2.0, 3.0   # placeholder inputs so the snippet runs
X = np.array([x1, x2, x3])
numerators = np.exp(X)
denominator = np.sum(numerators)
softmax_probs = numerators / denominator  # [e^x1 / (e^x1 + e^x2 + e^x3), e^x2 / (e^x1 + e^x2 + e^x3), ...]
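
(A note not in the original question: in floating point, the usual refinement is to subtract the maximum before exponentiating; the shift cancels in the ratio, so the result is unchanged, but np.exp can no longer overflow for large inputs.)

shifted = X - np.max(X)                          # shift cancels in the ratio below
numerators = np.exp(shifted)
softmax_probs = numerators / np.sum(numerators)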

What would be a good way of going about this in Sage?


1 Answer


answered 2021-03-24 00:26:33 +0100 by Emmanuel Charpentier (updated 2021-03-24 00:49:49 +0100)

What about

def softmax_probs(L):
    R = list(map(exp, L))           # exponentiate each entry exactly once
    S = sum(R)                      # shared denominator, computed once
    return map(lambda u: u/S, R)    # lazy; wrap in list() to materialize

L = list(var("a, b, c"))
list(softmax_probs(L))
[e^a/(e^a + e^b + e^c), e^b/(e^a + e^b + e^c), e^c/(e^a + e^b + e^c)]

?
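
(An editorial aside, not part of the original answer: since the result is symbolic, concrete probabilities follow by substitution; a, b, c are the variables declared above.)

probs = list(softmax_probs(L))
[p.subs(a=1, b=2, c=3).n() for p in probs]   # ≈ [0.0900, 0.2447, 0.6652], summing to 1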

The point of the local variables is to avoid recomputing the same quantities more than once: the naive one-liner re-evaluates the whole denominator for every entry of L.

sage: %timeit list(map(lambda u:exp(u)/sum(map(exp, L)), L))
70.4 µs ± 942 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
sage: %timeit list(softmax_probs(L))
31.2 µs ± 699 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
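
Another aside, not from the answer: if the eventual goal is fast numeric evaluation rather than symbolic manipulation, the probabilities can be compiled with Sage's fast_callable. A sketch, assuming the softmax_probs and variables defined above:

fns = [fast_callable(p, vars=(a, b, c), domain=float)   # compile to float-valued functions
       for p in softmax_probs([a, b, c])]
[f(1.0, 2.0, 3.0) for f in fns]   # ≈ [0.0900, 0.2447, 0.6652]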

