On Sun, Mar 21, 2010 at 1:04 PM, Matt <macourt...@gmail.com> wrote:
> Throwing in my 2 cents:
>
> (def chunk-size 2000)
>
> (defn sum-tree-part [nums start length]
>   (reduce
>    #(+ %1 (nth nums (+ start %2)))
>    0
>    (range length)))
>
> (defn sum-partition [nums]
>   (reduce +
>           (pmap #(sum-tree-part nums % chunk-size)
>                 (range 0 (count nums) chunk-size))))
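For anyone who wants to try the quoted snippet, here it is as a self-contained, runnable whole. One caveat (my observation, not something stated in the thread): the last pmap chunk is not clipped to the input length, so (count nums) should be a multiple of chunk-size or nth will throw an out-of-bounds exception.

```clojure
;; Matt's chunked pmap sum, restated runnably.
;; Caveat (not claimed in the thread): the final chunk is not clipped,
;; so (count nums) must be a multiple of chunk-size, else nth throws.
(def chunk-size 2000)

(defn sum-tree-part [nums start length]
  ;; Sum the `length` elements of nums beginning at index `start`.
  (reduce #(+ %1 (nth nums (+ start %2))) 0 (range length)))

(defn sum-partition [nums]
  ;; Sum each fixed-size chunk in parallel, then combine the partial sums.
  (reduce +
          (pmap #(sum-tree-part nums % chunk-size)
                (range 0 (count nums) chunk-size))))

(sum-partition (vec (range 10000)))  ;; => 49995000 (5 chunks of 2000)
```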
Some of the performance gain you observed can be attributed to the built-in reduce function. I found it several times faster than a simple sequential recur loop. This is not surprising if you look at the internals of reduce:

[...]
(if (chunked-seq? s)
  (recur f (.reduce (chunk-first s) f val) (chunk-next s))
  (recur f (f val (first s)) (next s)))

Calling seq on a (non-empty) vector returns a chunked seq, but to use it efficiently one needs to iterate over whole chunks, as in the code above, not over single elements. One more reason to use higher-order functions.

Well, that raises the bar for a competing parallel reduce equivalent (sure, we can make any O(n) parallel reduce faster than the sequential one simply by using sufficiently large sequential chunks or heavy enough computations, but I'm still aiming at optimizing the general case).

Andrzej

--
You received this message because you are subscribed to the Google Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated - please be patient with your first post.
To unsubscribe from this group, send email to clojure+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/clojure?hl=en