On Mon, Mar 22, 2010 at 9:40 PM, Rich Hickey wrote:
>
> You really do need to look at par, and fork/join as well. In particular,
> your desire for even partitioning and balancing is obviated by work
> stealing. If you think about it, you'll see that that would become
> immediately necessary as soon…
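Rich's work-stealing point can be sketched with plain JDK7 interop from Clojure; `sum-task` and `fj-sum` below are illustrative names in a minimal sketch, not the code from Rich's branch:

```clojure
(import '(java.util.concurrent ForkJoinPool RecursiveTask))

;; Recursive splitting: idle workers steal the forked halves, so the
;; partitioning does not need to be balanced up front.
(defn sum-task [v lo hi threshold]
  (proxy [RecursiveTask] []
    (compute []
      (if (<= (- hi lo) threshold)
        (reduce + (subvec v lo hi))
        (let [mid   (quot (+ lo hi) 2)
              left  (sum-task v lo mid threshold)
              right (sum-task v mid hi threshold)]
          (.fork left)                        ; hand the left half to the pool
          (+ (.invoke right) (.join left))))))) ; compute right here, join left

(defn fj-sum [v]
  ;; creating a pool per call is wasteful; fine for a sketch
  (.invoke (ForkJoinPool.) (sum-task v 0 (count v) 1024)))
```

The threshold only bounds the leaf size; load balance comes from stealing, not from the split points.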
On Mar 21, 2010, at 8:29 AM, Andrzej wrote:
On Sun, Mar 21, 2010 at 6:37 PM, Jarkko Oranen wrote:
Rich has done some work on using the JDK7 ForkJoin Library to
parallelise map and reduce over vectors, since they are already
internally structured as trees. It hasn't been touched in a while, but
as far as I know the code lives in th…
On Mon, Mar 22, 2010 at 5:39 AM, Michał Marczyk wrote:
>
> Not sure if I want to draw any particular conclusion from this...
> Probably not, since I'm not yet done wondering to which degree I might
> be correct in thinking it. :-) I do want to stress once again the need
> for benchmarking with exp…
On Sun, Mar 21, 2010 at 1:04 PM, Matt wrote:
> Throwing in my 2 cents:
>
> (def chunk-size 2000)
>
> (defn sum-tree-part [nums start length]
>   (reduce
>    #(+ %1 (nth nums (+ start %2)))
>    0
>    (range length)))
>
> (defn sum-partition [nums]
>   (reduce +
>           (pmap #(sum-tree-part nums %
>                                 ;; clamp the last chunk so nth stays in bounds
>                                 (min chunk-size (- (count nums) %)))
>                 (range 0 (count nums) chunk-size))))
On 21 March 2010 13:29, Andrzej wrote:
> Yesterday I looked at the implementation of the PersistentVector
> class, trying to figure out how to exploit its internal structure to
> decompose the vector. I hit several issues though:
One thing that comes to my mind after reading through your list is…
On 20 March 2010 18:29, Andrzej wrote:
> Thanks, that's what I was going to do next. You're certainly right
> about using fewer, more coarse-grained partitions. BTW, you can check
> the number of available processors using (.. Runtime getRuntime
> availableProcessors) (that's how pmap is doing it)
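As an aside, the processor-count idiom quoted above can drive the chunk size directly; `chunked-sum` below is a hypothetical sketch of that, not pmap's own implementation:

```clojure
;; Number of hardware threads, via the idiom quoted above.
(def n-cpus (.. Runtime getRuntime availableProcessors))

(defn chunked-sum [nums]
  ;; One coarse chunk per core; partition-all keeps the ragged tail.
  (let [chunk (max 1 (quot (count nums) n-cpus))]
    ;; Each chunk is reduced on a pmap-managed thread, then the
    ;; partial sums are combined sequentially.
    (reduce + (pmap #(reduce + %) (partition-all chunk nums)))))
```

Fewer, larger chunks keep the per-task coordination overhead small relative to the work done in each chunk.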
On Sun, Mar 21, 2010 at 6:37 PM, Jarkko Oranen wrote:
>
> Rich has done some work on using the JDK7 ForkJoin Library to
> parallelise map and reduce over vectors, since they are already
> internally structured as trees. It hasn't been touched in a while, but
> as far as I know the code lives in th…
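The tree-shaped decomposition Jarkko describes can be imitated from user code with subvec; `split-reduce` below is a minimal sequential sketch (it assumes `f` is associative and `init` is its identity), not the code from Rich's branch:

```clojure
(defn split-reduce
  "Divide-and-conquer reduce over a vector: split with subvec until a
  piece is at most threshold long, then reduce it directly."
  [f init v threshold]
  (if (<= (count v) threshold)
    (reduce f init v)
    (let [mid (quot (count v) 2)]
      ;; Combine the two halves with f; each half folds in init,
      ;; which is why init must be an identity for f.
      (f (split-reduce f init (subvec v 0 mid) threshold)
         (split-reduce f init (subvec v mid) threshold)))))
```

Note that subvec is O(1) on persistent vectors, so the splitting itself is cheap; the cost Michał points out below comes from how many times it is called.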
On Mar 19, 6:53 pm, Andrzej wrote:
> I've been toying with various implementations of reduce-like
> functions, trying to do something "smarter" than a simple iteration
> over a collection of data. This hasn't worked out very well, my
> implementation is a lot (~50x) slower than a simple loop. Co…
Throwing in my 2 cents:
(def chunk-size 2000)
(defn sum-tree-part [nums start length]
  (reduce
   #(+ %1 (nth nums (+ start %2)))
   0
   (range length)))

(defn sum-partition [nums]
  (reduce +
          (pmap #(sum-tree-part nums %
                                ;; clamp the last chunk so nth stays in bounds
                                (min chunk-size (- (count nums) %)))
                (range 0 (count nums) chunk-size))))
; Save the…
On Sat, Mar 20, 2010 at 5:43 PM, Michał Marczyk wrote:
> Well, you're calling subvec about twice as many times (I guess) as
> there are elements in your input vector. Then you're calling count at
> least once for each of the intermediate vectors (and twice for those
> which will be split further).
On 19 March 2010 17:53, Andrzej wrote:
> I've been toying with various implementations of reduce-like
> functions, trying to do something "smarter" than a simple iteration
> over a collection of data. This hasn't worked out very well, my
> implementation is a lot (~50x) slower than a simple loop.
On Sat, Mar 20, 2010 at 2:15 AM, Sean Devlin wrote:
> What type of improvement are you expecting to see?
>
> 1. A linear improvement based on throwing more cores at the problem?
Yes.
> In this case you would need to use pmap to compute items in parallel.
I haven't looked at pmap yet. Thanks fo…
What type of improvement are you expecting to see?
1. A linear improvement based on throwing more cores at the problem?
In this case you would need to use pmap to compute items in parallel.
Your implementation appears to be single threaded.
2. An algorithmic improvement, like going from a DFT to an FFT?
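Sean's first suggestion is the standard pmap idiom; a minimal sketch, where `slow-square` is a made-up stand-in for genuinely expensive per-item work:

```clojure
;; pmap evaluates f on items in parallel (semi-lazy, staying ahead of
;; consumption by roughly the processor count); it only pays off when
;; f costs more than the coordination overhead.
(defn slow-square [x]
  (Thread/sleep 1)            ; stand-in for real per-item work
  (* x x))

;; doall forces the lazy result so the threads actually run here.
(def squares (doall (pmap slow-square (range 10))))
```

For cheap functions like `+` on longs, this overhead dominates, which is one reason the chunked approaches in this thread batch many elements per task.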