Stefan Kamphausen <ska2...@googlemail.com> writes:

> Chunked seqs are supposed to realize more elements than you
> consume. That's for performance reasons.  But since you will only ever
> apply side-effect-free functions to seqs, that will make no
> difference, no?

Sorry, yes, I'm talking about what happens inside the code of `pmap'.  It
creates a lazy seq of futures, each applying the passed-in function to
one element of the passed-in collection, via (map #(future (f %)) coll).
Realizing elements of *that* seq has the side-effect of
allocating/spawning a thread from the futures thread-pool.  If `coll'
can be turned into a chunked seq, then the futures will be realized --
and threads allocated or spawned -- in chunks of 32.  If `coll' cannot
be turned into a chunked seq, then only (+ 2 #CPUS) threads will be
allocated/spawned at a time.
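
To make the side-effect concrete, here's a quick REPL sketch of just the
(map #(future (f %)) coll) part that `pmap' builds on -- `f', `spawned',
and the input collections below are my own, and the 32 assumes the usual
32-element chunks you get from vectors and ranges:

    (def spawned (atom 0))

    (defn f [x]
      ;; runs on a thread from the futures pool; count how many have started
      (swap! spawned inc)
      (Thread/sleep 1000)
      (inc x))

    ;; Chunked input: a vector seqs in 32-element chunks, so realizing
    ;; just the *first* element of the mapped seq creates 32 futures.
    (let [rets (map #(future (f %)) (vec (range 100)))]
      (first rets)
      (Thread/sleep 100)
      @spawned)
    ;;=> 32

    ;; Unchunked input: only the element actually consumed gets a future.
    ;; (Inside `pmap' the step fn additionally keeps (+ 2 #CPUS)
    ;; elements ahead, so there you'd see that many rather than 1.)
    (reset! spawned 0)
    (let [rets (map #(future (f %)) (iterate inc 0))]
      (first rets)
      (Thread/sleep 100)
      @spawned)
    ;;=> 1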

I think clarifying that has convinced me that this is definitely a bug,
just because the side-effects are inconsistent.  I don't think that the
chunkability (chunkiness?) of the collection argument should affect the
degree of parallelism.

-Marshall
