Lawrence D'Oliveiro wrote:
In message <77as23f1fhj3...@mid.uni-berlin.de>, Diez B. Roggisch wrote:

But reduce()? I can't see how you can parallelize reduce(). By its
nature, it has to run sequentially: it can't operate on the nth item
until it has operated on the (n-1)th item.
That depends on the operation in question. Addition, for example, would
work. My math skills are a bit too rusty to pin down the exact nature of
the operation required; commutativity springs to mind.

Associativity:

    ((A op B) op C) = (A op (B op C))

So for example

   A op B op C op D

could be grouped into

  (A op B) op (C op D)

and the two parenthesized subexpressions evaluated concurrently. But even with enough workers, regrouping like this can only shorten the chain of dependent op's from linear to logarithmic in the number of operands.
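
To make that concrete, here is a minimal sketch (mine, not from any of the quoted posts; it uses concurrent.futures, so it assumes Python 3.2+ or the futures backport, and the name pairwise_reduce is made up). It only gives the same answer as reduce() when op is associative:

    from concurrent.futures import ProcessPoolExecutor
    from operator import add

    def pairwise_reduce(op, items):
        # Combine adjacent pairs until one value remains.  Each pass
        # hands its op calls to the pool, which is only a valid
        # rearrangement of reduce() if op is associative.
        items = list(items)
        with ProcessPoolExecutor() as pool:
            while len(items) > 1:
                firsts, seconds = items[0::2], items[1::2]
                combined = list(pool.map(op, firsts, seconds))
                if len(items) % 2:        # odd trailing element carries over
                    combined.append(items[-1])
                items = combined
        return items[0]

    if __name__ == '__main__':
        print(pairwise_reduce(add, range(1, 9)))   # 36, same as reduce(add, ...)

Each pass halves the number of values, which is where the logarithmic depth comes from.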

It is worth noting, I think, that many realistic operations are not associative. If op combines a summary and an item to create an updated summary, it will probably croak or give a wrong answer when given a summary and a summary. Combining two summaries might be possible, but would be a different operation.
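
For instance (my example, not from the thread), keeping a running (count, total) summary with reduce() uses a (summary, item) function; combining two partial results needs a different (summary, summary) function:

    def add_item(summary, item):          # (summary, item) -> summary
        count, total = summary
        return count + 1, total + item

    def merge_summaries(s1, s2):          # (summary, summary) -> summary
        return s1[0] + s2[0], s1[1] + s2[1]

    # reduce(add_item, [1, 2, 3, 4], (0, 0)) == (4, 10).  Handing add_item
    # another summary instead of an item would give a wrong answer, so a
    # parallel version has to merge partial results with merge_summaries.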

For a specific example, consider list.append(some_list, some_item) versus list.extend(some_list, another_list). Imagine for the moment that these methods returned some_list. Then

[].append(a).append(b).append(c).append(d) # left-to-right grouping parentheses left out

would have to be conceptually converted to the associative

[].extend([a]).extend([b]).extend([c]).extend([d])

before being grouped something like

(([].extend([a])).extend([b].extend([c]))).extend([d])

Of course, one would not do the conversion to the slower associative operation unless one knew that there would be a compensating speedup due to the grouping.
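
A rough way to see the same trade-off with expressions that actually return lists (append and extend really return None, so + stands in for them here):

    from functools import reduce

    items = ['a', 'b', 'c', 'd']
    sequential = reduce(lambda acc, x: acc + [x], items, [])   # append-style
    grouped = (['a'] + ['b']) + (['c'] + ['d'])                # extend-style grouping
    assert sequential == grouped == items

Each + in the grouped form builds a new list of the combined length, and that extra copying is the cost one would be trading against the concurrency.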

Terry Jan Reedy
