Hi Lee,

Would it be difficult to try the following version of 'pmap'? It uses an executor service instead of futures, so at least this could help narrow the problem down: if the slowdown is due to the high number of futures spawned by pmap, then this should fix it...

(defn- with-thread-pool* [num-threads f]
  (let [pool (java.util.concurrent.Executors/newFixedThreadPool num-threads)]
    (try (f pool)
      (finally
        (when pool (.shutdown pool))))))

(defmacro with-thread-pool [[name num-threads] & body]
  `(with-thread-pool* ~num-threads (fn [~name] ~@body)))

(defn pmapp [f coll]
  (with-thread-pool [pool (.. Runtime getRuntime availableProcessors)]
    (doall
      (map #(.get ^java.util.concurrent.Future %)
        (.invokeAll ^java.util.concurrent.ExecutorService pool
          ;; wrap each item in a no-arg fn (a Callable) that applies f to it
          (map #(partial f %) coll))))))
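For completeness, here is the same idea collapsed into a single self-contained fn (equivalent to pmapp above, just without the macro; the name pmapp-flat is made up for illustration):

```clojure
(ns example
  (:import (java.util.concurrent Executors ExecutorService Future)))

(defn pmapp-flat
  "Eager parallel map over a fixed-size thread pool; returns results in order."
  [f coll]
  (let [n    (.availableProcessors (Runtime/getRuntime))
        pool (Executors/newFixedThreadPool n)]
    (try
      ;; invokeAll blocks until every Callable has finished,
      ;; and returns the Futures in input order
      (mapv #(.get ^Future %)
            (.invokeAll ^ExecutorService pool
                        (mapv (fn [x] #(f x)) coll)))
      (finally (.shutdown pool)))))

(pmapp-flat #(* % %) (range 5))  ;; => [0 1 4 9 16]
```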

And btw, reducers are great, but usually you need to reformulate your problem. r/map is not parallel: it is serial, but with no intermediate allocation (unlike lazy seqs). If you want to make it parallel you need to r/fold the result of r/map (the reducer), providing reducing & combining fns. Even so, it still doesn't make sense for it to be slower than map... If you've followed Rich's example with the pie-maker, consider this:

Both map & r/map will return immediately. The difference is that map will build a recipe for the first item, whereas r/map will build a recipe for the entire coll. In other words, the lazy pie-maker assistant provides 1 apple at a time (after being asked for an apple), but the reducer pie-maker assistant, as soon as you ask for the 1st apple, will do the entire bag (without intermediate asking). The lazy recipe is recursive, whereas the reducer-based one looks like a stream... It should still be faster, albeit not terribly faster. You need r/fold to see any parallelism...
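To make that concrete, here is a minimal sketch of r/fold over an r/map (the sum-of-squares fn is just for illustration; note the input must be a vector for r/fold to parallelize):

```clojure
(require '[clojure.core.reducers :as r])

(defn parallel-sum-of-squares [v]
  ;; r/map only describes the transformation; r/fold actually runs it,
  ;; splitting the vector into chunks worked on in parallel.
  ;; + serves as both the combining and reducing fn, since (+) => 0
  ;; supplies the identity for each partition.
  (r/fold + (r/map #(* % %) v)))

(parallel-sum-of-squares (vec (range 10)))  ;; => 285
```

For large vectors this is where the speedup over plain map should appear; a plain (into [] (r/map ...)) stays serial.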


Jim

On 09/12/12 05:19, Lee Spector wrote:
On Dec 8, 2012, at 8:16 PM, Marek Šrank wrote:
Yep, reducers don't use lazy seqs. But they return just something like transformed 
functions, which are applied when building the collection. So you can use 
them like this:

     (into [] (r/map burn (doall (range 4))))

See 
http://clojure.com/blog/2012/05/08/reducers-a-library-and-model-for-collection-processing.html
and http://clojure.com/blog/2012/05/15/anatomy-of-reducer.html for more info...

Thanks Marek. This does fix the "too quick to be true" issue, but alas, on my 
Mac laptop (a 4-core Intel i7):

57522.869 msecs for (time (into [] (r/map burn (doall (range 4)))))

58263.312 msecs for (time (doall (map burn (doall (range 4)))))

So while I'm not getting a terrible slowdown from using the reducers version of 
map, I'm also not getting any speedup over the single-thread map.

We should try this on our other architectures too, but it doesn't look 
promising.

  -Lee

--
Lee Spector, Professor of Computer Science
Cognitive Science, Hampshire College
893 West Street, Amherst, MA 01002-3359
lspec...@hampshire.edu, http://hampshire.edu/lspector/
Phone: 413-559-5352, Fax: 413-559-5438

