I think you might be misunderstanding what the parallel decorator does to a function (or at least not understanding it in the limited way I do!). As I understand it, the basic functionality of `@parallel` is to convert a function that takes a single input into one that takes a list of inputs, and runs the original function on each item in the list. The matter is slightly confused because the new "decorated" function has the same name as the original one, so from a user's point of view it's hard to tell the two apart (and of course, that's the point of decorators :). The "parallel" part is just that each of those separate computations runs on a separate processor.
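To make that "function of one input becomes function of a list of inputs" idea concrete, here is a hypothetical single-process mock of the interface in plain Python. This is not Sage's implementation and does no actual parallelism -- it only imitates the shape of the transformation and of the output tuples:

```python
def parallel_mock(f):
    """Hypothetical single-process imitation of @parallel's interface.

    The decorated function takes a LIST of inputs and yields
    ((args, kwargs), result) pairs.  No real parallelism here, and
    results come back in input order rather than completion order.
    """
    def decorated(inputs):
        for item in inputs:
            # Bare items are treated as a single positional argument.
            args = item if isinstance(item, tuple) else (item,)
            yield ((args, {}), f(*args))
    return decorated

@parallel_mock
def double(n):
    return 2 * n

list(double([1, 2, 3]))
# -> [(((1,), {}), 2), (((2,), {}), 4), (((3,), {}), 6)]
```

The real `@parallel` differs mainly in that the work is farmed out to separate processes, so the pairs arrive in whatever order the processes finish.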

`@parallel` doesn't analyze or optimize the inner workings of your function. So I don't think you need to write the extra "list processing" functions; just decorate `calcArcLength` if you want to compute the arc lengths of a bunch of curves in parallel. And I think you are right that it won't speed up something like `numerical_integral` on just a single computation.
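`calcArcLength` is the function from the question, so I'm only guessing at its shape; a plain-Python stand-in (simple trapezoid-rule quadrature of sqrt(1 + f'(x)^2), no Sage needed) might look like this:

```python
from math import sqrt

def calc_arc_length(f, a, b, steps=10000):
    """Approximate the arc length of y = f(x) on [a, b].

    Hypothetical stand-in for the question's calcArcLength, not
    Sage's API: trapezoid rule on sqrt(1 + f'(x)^2), with f'
    estimated by a central difference.
    """
    h = (b - a) / float(steps)
    eps = 1e-6

    def integrand(x):
        deriv = (f(x + eps) - f(x - eps)) / (2 * eps)  # central difference
        return sqrt(1 + deriv * deriv)

    total = 0.5 * (integrand(a) + integrand(b))
    for i in range(1, steps):
        total += integrand(a + i * h)
    return total * h

# In Sage you would just put @parallel above the definition and then
# call it on a *list* of argument tuples, e.g.
# [(f1, 0, 1), (f2, 0, 2), ...] -- one tuple per curve.
```

For a sanity check, the line y = x on [0, 1] has length sqrt(2) ≈ 1.4142, and this returns that to within the quadrature error.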

So, in answer to the question in the title: use `@parallel` when you want to apply the same function to a long list of inputs -- write the function, and let `@parallel` distribute the separate function calls to separate processes.

I'm not sure if an example is necessary at this point, but I don't think there are enough `@parallel` examples, so I'll extend the one given by @benjaminfjones:

```
@parallel
def hard_computation(n, d=1, show_out=False):
    for j in range(n):
        sleep(.1)
    if show_out:
        print "factoring", n, "/", d, ".."
    return str(factor(floor(n/d)))
```

The function `hard_computation` just simulates some long calculation. The following is just its basic functionality, and would work with or without the `@parallel` decorator:

```
sage: hard_computation(16,2,True)
factoring 16 / 2 ..
'2^3'
sage: %time hard_computation(12)
'2^2 * 3'
CPU time: 0.00 s, Wall time: 1.20 s
```

Note that, even though the `for` loop could itself be parallelized, `@parallel` doesn't speed this up. Here's what `@parallel` makes possible:

```
sage: r = hard_computation([2*n for n in range(3,10)]) #this is instantaneous
sage: r
<generator object __call__ at 0x10d559e10>
```

So `@parallel` just sets up a generator object. None of the `hard_computation` code is run until you start getting items from `r`. Here's what I do:

```
for x in r:
    print x
```

Which returns:

```
(((6,), {}), '2 * 3')
(((8,), {}), '2^3')
(((10,), {}), '2 * 5')
(((12,), {}), '2^2 * 3')
(((14,), {}), '2 * 7')
(((16,), {}), '2^4')
(((18,), {}), '2 * 3^2')
```

In each line of output, `x[0]` holds the arguments for the computation and `x[1]` holds the output. This is important because the order of the output is just the order in which the processes finish, not the order in which they're called. For

```
r = hard_computation([2*n for n in range(1,8)])
for x in r:
    print x
```

I get the following (I'm running this on a machine with two processors):

```
(((12,), {}), '2^2 * 3')
(((14,), {}), '2 * 7')
(((10,), {}), '2 * 5')
(((8,), {}), '2^3')
(((4,), {}), '2^2')
(((6,), {}), '2 * 3')
(((2,), {}), '2')
```
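Since the results arrive out of order, a pattern I sometimes use is to collect them into a dict keyed by the input. Written here in plain, current Python (no Sage needed), with the tuples shaped exactly like `@parallel`'s output:

```python
# Each item from a @parallel generator looks like ((args, kwargs), value).
# Simulated here, with results arriving out of order:
results = [
    (((12,), {}), '2^2 * 3'),
    (((14,), {}), '2 * 7'),
    (((10,), {}), '2 * 5'),
    (((8,), {}), '2^3'),
]

# Key each value by its first positional argument to recover the
# original order afterward.
by_input = dict((args[0], value) for (args, kwargs), value in results)

sorted_values = [by_input[n] for n in sorted(by_input)]
# sorted_values == ['2^3', '2 * 5', '2^2 * 3', '2 * 7']
```

That way the nondeterministic completion order doesn't matter: you can always look up the result for a given input, or iterate the inputs in sorted order.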

And lastly, I'll give an example of calling `hard_computation` with more inputs. Basically, you can give either a tuple of positional inputs or a dict of keyword inputs, but not a combination of the two:

```
r = hard_computation([(2*n,3,True) for n in range(3,10)])
s = hard_computation([{'n':2*n,'d':3,'show_out':True} for n in range(3,10)])
for x in r:
    print x
factoring 6 / 3 ..
(((6, 3, True), {}), '2')
factoring 8 / 3 ..
(((8, 3, True), {}), '2')
factoring 10 / 3 ..
(((10, 3, True), {}), '3')
factoring 12 / 3 ..
(((12, 3, True), {}), '2^2')
factoring 14 / 3 ..
(((14, 3, True), {}), '2^2')
factoring 16 / 3 ..
(((16, 3, True), {}), '5')
factoring 18 / 3 ..
(((18, 3, True), {}), '2 * 3')
```

Note the difference in `x[0]` here:

```
for x in s:
    print x
factoring 6 / 3 ..
(((), {'show_out': True, 'd': 3, 'n': 6}), '2')
factoring 8 / 3 ..
(((), {'show_out': True, 'd': 3, 'n': 8}), '2')
factoring 10 / 3 ..
(((), {'show_out': True, 'd': 3, 'n': 10}), '3')
factoring 12 / 3 ..
(((), {'show_out': True, 'd': 3, 'n': 12}), '2^2')
factoring 14 / 3 ..
(((), {'show_out': True, 'd': 3, 'n': 14}), '2^2')
factoring 16 / 3 ..
(((), {'show_out': True, 'd': 3, 'n': 16}), '5')
factoring 18 / 3 ..
(((), {'show_out': True, 'd': 3, 'n': 18}), '2 * 3')
```

p.s. I think something like this should probably be included in the reference manual for `@parallel` (see ticket 11462). Suggestions for how to improve it would be welcome, either here or on the ticket page :)
