Re: fast parallel reduction into hash-set/map

2014-03-17 Thread Jules
Re: GPU and "memory is not contiguous" - you need to take a look at e.g.http://www.anandtech.com/show/7677/amd-kaveri-review-a8-7600-a10-7850k/6 "GPU can now access the entire CPU address space without any copies" Heterogeneous System Architecture (HSA) is already here. The h/w and libraries ar

Re: fast parallel reduction into hash-set/map

2014-03-17 Thread Jean Niklas L'orange
Hello, On Saturday, March 15, 2014 11:37:11 PM UTC+1, Jules wrote: > > 2. I've had a look at rrb-vector - very fast for catvec - agreed, but > there does appear to be a performance penalty in terms of conj[!]-ing - I > can post my results if you are interested. I read the Bagwell paper on > wh

Re: fast parallel reduction into hash-set/map

2014-03-15 Thread Jules
Ghadi, Thanks for your posting - please see the reply I have just posted. I'll give foldcat a look - sounds interesting. With regards to my code for splicing maps/sets together efficiently, I spent some time looking at how that could be integrated with core.reducers and did some timings: (def
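A minimal sketch of the kind of core.reducers timing being described; the combining function here is the stock `into` via `r/monoid`, not Jules' splice code, which lives in his clojure fork:

    (require '[clojure.core.reducers :as r])

    (def data (vec (range 1000000)))

    ;; Parallel fold into a hash-set whose combine step is the naive `into`;
    ;; a structural splice would replace the (r/monoid into hash-set) part.
    (time (r/fold (r/monoid into hash-set) conj data))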

Re: fast parallel reduction into hash-set/map

2014-03-15 Thread Jules
Sorry that it has taken me so long to reply to this - I wanted to make sure that I had a look at rrb-vector and properly digest the points that you have made. 1. I've dumped the supervector idea :-) I ran some timings and as the number of levels of supervector increased, iteration times degrade
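A rough sketch of the comparison being discussed, assuming the clojure.core.rrb-vector library; exact numbers will of course differ by machine:

    (require '[clojure.core.rrb-vector :as fv])

    (def a (vec (range 1000000)))
    (def b (vec (range 1000000 2000000)))

    ;; catvec concatenates by splicing trees (effectively O(log n))...
    (time (fv/catvec a b))
    ;; ...while `into` re-conj[!]es every element of b onto a.
    (time (into a b))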

Re: fast parallel reduction into hash-set/map

2014-02-20 Thread Ghadi Shayban
Jules, For recombination of parallel reductions into a vector, have you looked at foldcat? It works really well, and it's one of those wonderful gems in clojure.core waiting to be noticed. It gives you
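A small usage sketch of foldcat for anyone who hasn't met it:

    (require '[clojure.core.reducers :as r])

    ;; foldcat runs a parallel fold whose leaves are ArrayLists and whose
    ;; combine step is O(1) concatenation, so recombination stays cheap.
    (def result
      (->> (vec (range 1000000))
           (r/filter even?)
           (r/map inc)
           (r/foldcat)))

    (count result)     ;; the Cat result is counted...
    (take 5 result)    ;; ...and seqable/reducible => (1 3 5 7 9)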

Re: fast parallel reduction into hash-set/map

2014-02-20 Thread Jean Niklas L'orange
Hi Jules, On Thursday, February 20, 2014 11:59:03 PM UTC+1, Jules wrote: > > Subvec provides a view on a subseq of a vector that behaves like a full > vector. Supervec provides a similar view that makes two vectors behave like > a single one > This data structure ("supervec") is usually known a
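For context, a tiny illustration of the subvec view being generalised here; supervec would give the same illusion over two backing vectors:

    (def v (vec (range 10)))
    (def sv (subvec v 2 7))   ;; O(1): a view sharing structure with v
    sv                        ;; => [2 3 4 5 6]
    (conj sv 99)              ;; => [2 3 4 5 6 99] -- still behaves like a full vector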

Re: fast parallel reduction into hash-set/map

2014-02-20 Thread Jules
all true. Also, if you look at the internal structure of PersistentVector (perhaps using my seqspert lib - announced this evening), you would see that PersistentVector is actually implemented as a tree. Recombination through several levels of supervec could simply be thought of as extending thi

Re: fast parallel reduction into hash-set/map

2014-02-20 Thread Alan Busby
On Fri, Feb 21, 2014 at 8:51 AM, wrote: > One note about your SuperVecs idea though, it seems that using that > approach we could get a deeply nested tree to represent a vector, and I > think this is the exact thing vectors are trying to avoid... In general I think this is true for vectors,

Re: fast parallel reduction into hash-set/map

2014-02-20 Thread Jules
te: >> >> Hi, Andy. >> >> No I haven't - Thanks for the pointer - I'll take a look. >> >> I have a very specific usecase - the speeding up of the re-combination of >> the results of parallel reductions. >> >> I've found that the cost

Re: fast parallel reduction into hash-set/map

2014-02-20 Thread shlomivaknin
tion often impacts badly on the > overall timing of such a reduction, but with a little thought we can speed > it up because we know that both seqs involved in the recombination are of > the same type... > > Given the proliferation of cores and FP's promise of simple and eleg

Re: fast parallel reduction into hash-set/map

2014-02-20 Thread Jules
Hi, Andy. No, I haven't - thanks for the pointer - I'll take a look. I have a very specific use case - speeding up the recombination of the results of parallel reductions. I've found that the cost of this recombination often impacts badly on the overall timing of such a
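A rough illustration of the recombination cost in question; timings are machine-dependent:

    ;; Two sub-results from parallel reductions...
    (def s1 (into #{} (range 1000000)))
    (def s2 (into #{} (range 1000000 2000000)))

    ;; ...recombined the only way the core API offers: walk every entry of
    ;; s2 and re-insert it into s1, repeating work already done above.
    (time (into s1 s2))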

Re: fast parallel reduction into hash-set/map

2014-02-20 Thread Andy Fingerhut
into my clojure fork on github - see > beginning of this thread. Seqspert is also at github - see the announcement > I posted this evening. > > Finally 'supervec' is currently defined as: > > (defn supervec [l r] (APersistentVector$SuperVector. {} l r)) > > I am thinking o

Re: fast parallel reduction into hash-set/map

2014-02-20 Thread Jules
e-combination with another seq of the same type. This could be leveraged by e.g. 'into' to deliver significant performance benefits and footprint reduction. more as and when, Jules On Saturday, 15 February 2014 23:06:24 UTC, Jules wrote: > > Guys, > > I've been playing

Re: fast parallel reduction into hash-set/map

2014-02-17 Thread Jules
ter way of combining two >> hash-maps other than to read each entry from one and create a new >> corresponding association in the other. This means that each recombination >> in the above process essentially repeats most of the work already performed >> in the previous r

Re: fast parallel reduction into hash-set/map

2014-02-17 Thread Jules
Alex, thanks for the suggestion - I'll look at collection-check and raise the appropriate JIRA when I am happier with the code / idea. Jules On Monday, 17 February 2014 13:21:28 UTC, Alex Miller wrote: > > It is too late, but an enhancement jira would be appropriate. I would > highly encourag

Re: fast parallel reduction into hash-set/map

2014-02-17 Thread Jules
I've started doing some more serious testing and have not encountered any problems so far. I was a bit worried about the interaction of splice and transient/persistent!, but have not encountered any problems yet. I am going to read through the code again to satisfy myself that there is no issue

Re: fast parallel reduction into hash-set/map

2014-02-17 Thread Glen Mailer
rocess essentially repeats most of the work already performed > in the previous reduction stage. > > Hash-sets are implemented via hash-maps, and simpler with which to > demonstrate this problem: > > user=> (def a (doall (range 1000))) > #'user/a > user=> (de

Re: fast parallel reduction into hash-set/map

2014-02-17 Thread Alex Miller
It is too late, but an enhancement jira would be appropriate. I would highly encourage some generative tests in such a patch and perhaps looking at https://github.com/ztellman/collection-check. With simple.check moving into contrib as test.check, we expect to be able to use test.check within Cl
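A hedged sketch of what such a generative test could look like with test.check; `splice` here is only a placeholder standing in for the proposed fast merge, not the real patch:

    (require '[clojure.test.check :as tc]
             '[clojure.test.check.generators :as gen]
             '[clojure.test.check.properties :as prop])

    ;; Placeholder: the real patch would supply a structural splice.
    (defn splice [s1 s2] (into s1 s2))

    ;; Property: splicing two sets agrees with the naive `into` recombination.
    (def splice-matches-into
      (prop/for-all [xs (gen/vector gen/int)
                     ys (gen/vector gen/int)]
        (= (splice (set xs) (set ys))
           (into (set xs) (set ys)))))

    (tc/quick-check 100 splice-matches-into)
    ;; => {:result true, :num-tests 100, ...}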

Re: fast parallel reduction into hash-set/map

2014-02-17 Thread Jean Niklas L'orange
On Sunday, February 16, 2014 11:49:38 AM UTC+1, Mikera wrote: > > Wow - that's a pretty big win. I think we should try and get this into > Clojure ASAP. > > Are we too late for 1.6? > Yeah, this is probably too late for 1.6 =/ Anyway, cool stuff you got going on here. I'm playing around with s

Re: fast parallel reduction into hash-set/map

2014-02-16 Thread Jules
... specifically: getting as many associations into a hash-map as >>>>> I can in as short a time as possible. >>>>> >>>>> My understanding of the reason for this is that reducers practice a >>>>> divide and conquer strat

Re: fast parallel reduction into hash-set/map

2014-02-16 Thread Jules
t;> divide and conquer strategy. The incoming sequence is divided up. Each >>>> sub-sequence is reduced into a sub-result (possibly in parallel) and then >>>> the sub-results are combined into the the final outgoing result. >>>> >>>> Unfortu

Re: fast parallel reduction into hash-set/map

2014-02-16 Thread Mikera
a >>> divide and conquer strategy. The incoming sequence is divided up. Each >>> sub-sequence is reduced into a sub-result (possibly in parallel) and then >>> the sub-results are combined into the the final outgoing result. >>> >>> Unfortunately, the

Re: fast parallel reduction into hash-set/map

2014-02-16 Thread Jules
an to read each entry from one and create a new >> corresponding association in the other. This means that each recombination >> in the above process essentially repeats most of the work already performed >> in the previous reduction stage. >> >> Hash-sets are implemented

Re: fast parallel reduction into hash-set/map

2014-02-16 Thread Mikera
is means that each recombination > in the above process essentially repeats most of the work already performed > in the previous reduction stage. > > Hash-sets are implemented via hash-maps, and simpler with which to > demonstrate this problem: > > user=> (def a (doall (range

Re: fast parallel reduction into hash-set/map

2014-02-15 Thread Jules
gt;> hash-maps other than to read each entry from one and create a new >> corresponding association in the other. This means that each recombination >> in the above process essentially repeats most of the work already performed >> in the previous reduction stage. >> &

Re: fast parallel reduction into hash-set/map

2014-02-15 Thread Alex Miller
o > hash-maps other than to read each entry from one and create a new > corresponding association in the other. This means that each recombination > in the above process essentially repeats most of the work already performed > in the previous reduction stage. > > Hash-sets a

fast parallel reduction into hash-set/map

2014-02-15 Thread Jules
em to be a better way of combining two hash-maps other than to read each entry from one and create a new corresponding association in the other. This means that each recombination in the above process essentially repeats most of the work already performed in the previous reduction stage. Has
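The shape of the parallel reduction being discussed, with the combine step spelled out as a sketch (this is an illustration, not the poster's code); the `into` in combinef is exactly the entry-by-entry copy being complained about:

    (require '[clojure.core.reducers :as r])

    (r/fold
      (fn combinef
        ([] #{})
        ([s1 s2] (into s1 s2)))   ;; recombination: re-adds every entry of s2
      conj                        ;; reducef: build each sub-set
      (vec (range 1000000)))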

Re: reduction to tail recursion

2012-07-24 Thread Mimmo Cosenza
Thanks so much Alan. Very clear. Mimmo On Tuesday, July 24, 2012 8:19:00 AM UTC+2, Alan Malloy wrote: > > I don't see how the definition of primitive-recursive is relevant to this. > You can transform primitive recursion to tail recursion if you allocate a > "stack" of your own on the heap, and

Re: reduction to tail recursion

2012-07-24 Thread nicolas.o...@gmail.com
On Mon, Jul 23, 2012 at 3:14 PM, Mimmo Cosenza wrote: > hi Marek and thanks for the link. But it does not answer my question. > I was looking for a demonstration of the reducibility of (not tail) > recursion to tail recursion. Either there is a demonstration of that, or nobody > can say that a (not

Re: reduction to tail recursion

2012-07-23 Thread Alan Malloy
I don't see how the definition of primitive-recursive is relevant to this. You can transform primitive recursion to tail recursion if you allocate a "stack" of your own on the heap, and don't mind that it might grow without bound. Or you can transform to continuation-passing or trampolines (fun
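A small sketch of the explicit-stack idea in Clojure: a naive tree-sum recurses on both branches (not a tail call), but pushing pending nodes onto a heap-allocated stack turns it into a single tail-recursive loop:

    (defn tree-sum [tree]
      (loop [stack [tree] acc 0]
        (if (empty? stack)
          acc
          (let [node  (peek stack)
                stack (pop stack)]
            (if (sequential? node)
              (recur (into stack node) acc)   ;; push children instead of recursing
              (recur stack (+ acc node)))))))

    (tree-sum [1 [2 [3 4]] 5])   ;; => 15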

Re: reduction to tail recursion

2012-07-23 Thread Christian Mueller
No, not every recursion can be expressed as tail recursion. One example: the Ackermann function (http://en.wikipedia.org/wiki/Ackermann_function) On Saturday, July 21, 2012 11:15:33 AM UTC+2, Mimmo Cosenza wrote: > > Hi, > a very basic question. Any "not tail recursion" code can be reduced to >

Re: reduction to tail recursion

2012-07-23 Thread nicolas.o...@gmail.com
Yes. For example, you can have a look at CPS transformation. If you want tail recursion and not tail calls, you can add trampolining to the mix.
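A minimal sketch of that combination: CPS turns every call into a tail call, and wrapping each step in a thunk lets trampoline keep the JVM stack flat:

    (defn fact-cps [n k]
      (if (zero? n)
        (k 1)
        (fn [] (fact-cps (dec n)
                         (fn [r] (fn [] (k (*' n r))))))))

    ;; Every step returns a thunk, so the stack never grows.
    (trampoline fact-cps 10000 identity)   ;; => 10000!, no StackOverflowError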

Re: reduction to tail recursion

2012-07-23 Thread Mimmo Cosenza
hi Marek and thanks for the link. But it does not answer my question. I was looking for a demonstration of the reducibility of (not tail) recursion to tail recursion. Either there is a demonstration of that, or nobody can say that a (not tail) recursion definition is always reducible to a tail re

Re: reduction to tail recursion

2012-07-21 Thread mnicky
Discussion about this is in the penultimate paragraph of http://c2.com/cgi/wiki?TailCallOptimization and btw, there's a difference between: - tail recursion - function calls itself - tail call - function calls whatever Marek. On Saturday, July 21, 2012 11:15:33 AM UTC+2, Mimmo Cosenza wrote: >
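The classic illustration of the distinction in Clojure: recur only covers self-recursion (tail recursion), while tail calls between different functions need trampoline:

    (declare odd?*)

    ;; tail recursion: the function calls itself in tail position -> recur works
    (defn countdown [n]
      (if (pos? n) (recur (dec n)) :done))

    ;; tail calls between *different* functions: recur can't help, trampoline can
    (defn even?* [n] (if (zero? n) true  #(odd?* (dec n))))
    (defn odd?*  [n] (if (zero? n) false #(even?* (dec n))))

    (trampoline even?* 1000000)   ;; => true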

reduction to tail recursion

2012-07-21 Thread Mimmo Cosenza
Hi, a very basic question: can any "not tail recursion" code be reduced to tail recursion code? Thanks Mimmo

Re: Iteration or reduction on a transient map

2012-05-16 Thread Michał Marczyk
Well, I don't know if it's an explicit design goal of transients to not support these kinds of operations or if no one felt the need for them strongly enough to implement them. I believe transients are not meant to be the final solution to the problems which are currently addressed through their us

Re: Iteration or reduction on a transient map

2012-05-16 Thread nicolas.o...@gmail.com
So, if I have a transient and want to reduce it and then continue to use it transiently, I need to call persistent!, reduce and transient again. Is there a reason for that? (The documentation says that read operations are allowed. And it is actually possible to write a reduce for a transient vecto

Re: Iteration or reduction on a transient map

2012-05-16 Thread Michał Marczyk
If you want to reduce the map, there's nothing to be gained from the map being transient -- just use reduce-kv (or reduce pre-1.4) on the persistent map. If you want to map over the entries, again, there's nothing to be gained from the map being transient. (But if you want to pour the transformed
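A small sketch of that advice: reduce the persistent map directly with reduce-kv, and reserve the transient for the accumulator you pour results into:

    (def m {:a 1 :b 2 :c 3})

    ;; Reducing: no transient needed on the source map.
    (reduce-kv (fn [acc _k v] (+ acc v)) 0 m)   ;; => 6

    ;; Transforming: pour into a transient accumulator, then freeze it.
    (persistent!
      (reduce-kv (fn [acc k v] (assoc! acc k (inc v)))
                 (transient {})
                 m))
    ;; => {:a 2, :b 3, :c 4}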

Iteration or reduction on a transient map

2012-05-16 Thread nicolas.o...@gmail.com
Dear all, Is there a way to apply a function to all members of a transient map? Best regards, Nicolas.

Re: reduction

2009-01-05 Thread Stuart Sierra
On Mon, Jan 5, 2009 at 11:37 AM, Christophe Grand wrote: > Stuart, > > Do you think rec-cat, rec-cons and reductions belong to seq-utils? > > Christophe > Fine by me. I won't claim ownership over seq-utils or str-utils. As far as I'm concerned, any contrib committers can add to them. -Stuart

Re: reduction

2009-01-05 Thread Christophe Grand
Chouser wrote: > On Mon, Jan 5, 2009 at 9:53 AM, Christophe Grand > wrote: > >> Chouser wrote: >> >>> This isn't in clojure.core yet (any reason why not?) so would you mind >>> if I add it to clojure.contrib.seq-utils? Or of course you can do it >>> if you prefer. :-) >> What do y

Re: reduction

2009-01-05 Thread Christophe Grand
especially for rec-cons and rec-cat. The > 'reduction' in your blog makes a nice example, but I assume it runs > slower than your version above. > I never liked my iterate-based function: it is overly complicated (I wrote it because the intuition that iterate can be used kept nag

Re: reduction

2009-01-05 Thread Chouser
On Mon, Jan 5, 2009 at 9:53 AM, Christophe Grand wrote: > > Chouser wrote: >> On Dec 12 2008, 3:35 pm, Christophe Grand >> wrote: >> >>> I was sure it was a job for iterate: >>> >>> (defn reductions >>> "Returns a lazy seq

Re: reduction

2009-01-05 Thread Christophe Grand
Chouser wrote: > On Dec 12 2008, 3:35 pm, Christophe Grand > wrote: > >> I was sure it was a job for iterate: >> >> (defn reductions >> "Returns a lazy seq of the intermediate values of the reduction (as >> per reduce) of coll by f, starti

Re: reduction

2009-01-02 Thread Chouser
On Dec 12 2008, 3:35 pm, Christophe Grand wrote: > I was sure it was a job for iterate: > > (defn reductions >   "Returns a lazy seq of the intermediate values of the reduction (as >   per reduce) of coll by f, starting with init." >   ([f coll] >    (if (seq co

Re: reduction

2008-12-15 Thread Rich Hickey
On Dec 12, 3:35 pm, Christophe Grand wrote: > I was sure it was a job for iterate: > > (defn reductions > "Returns a lazy seq of the intermediate values of the reduction (as > per reduce) of coll by f, starting with init." > ([f coll] >(if (seq coll)

Re: reduction

2008-12-12 Thread Christophe Grand
I was sure it was a job for iterate: (defn reductions "Returns a lazy seq of the intermediate values of the reduction (as per reduce) of coll by f, starting with init." ([f coll] (if (seq coll) (for [s (iterate (fn [[x & s]] (if s

Re: reduction

2008-12-08 Thread Rich Hickey
On Dec 7, 4:10 pm, Christophe Grand <[EMAIL PROTECTED]> wrote: > Rich Hickey wrote: > I think the problem is that in the original and > subsequent versions, > > work was being done in the current case that needn't be (checking the > > status of coll), and that we need more laziness than lazy-

Re: reduction

2008-12-07 Thread Christophe Grand
Rich Hickey wrote: > I think the problem is that in the original and subsequent versions, > work was being done in the current case that needn't be (checking the > status of coll), and that we need more laziness than lazy-cons gives > us (we need to delay evaluation of one argument to the recur

Re: reduction

2008-12-07 Thread Rich Hickey
computing the entire chain up to each > > result -- O(n^2), demanding n stack frames for the nth result? > > The problem is that to compute the rest (reduction f init coll) you call > (reduction f init coll) again. So, as you said, at each step you are > computing the entire c

Re: reduction

2008-12-07 Thread Christophe Grand
e nth result? > The problem is that to compute the rest (reduction f init coll) you call (reduction f init coll) again. So, as you said, at each step you are computing the entire chain up to each result. The expansion of your code (by replacing (reduction f init coll) by its definition) is

Re: reduction

2008-12-06 Thread Chouser
On Sat, Dec 6, 2008 at 4:27 AM, Christophe Grand <[EMAIL PROTECTED]> wrote: > > But I don't think it's still O(n). I searched for a way to define > recursively such lists but the only way I found involves using mutation. > Now with an atom it must be cleaner. Your comprehension of such things is

Re: reduction

2008-12-06 Thread Christophe Grand
Chouser wrote: > How about this one? Same results as in my previous post. Still as > lazy as possible. Plus it's so cute! > > (defn reduction > "Returns a lazy seq of the intermediate values of the reduction (as > per reduce) of coll by f, starting with init."

Re: reduction

2008-12-05 Thread Chouser
How about this one? Same results as in my previous post. Still as lazy as possible. Plus it's so cute! (defn reduction "Returns a lazy seq of the intermediate values of the reduction (as per reduce) of coll by f, starting with init." ([f coll] (if (seq coll) (l

Re: reduction

2008-12-05 Thread Chouser
ch successive 'rest'. But I found it tricky to fully solve this last problem. Here is a definition that almost completely solves the problem: (defn reduction "Returns a lazy seq of the intermediate values of the reduction (as per reduce) of coll by f, starting with i

Re: reduction

2008-12-05 Thread Chouser
Third time's charm? The previous versions of 'reduction' returned nil for empty collection when no init was given. This version follows 'reduce' more closely, calling the given function with no arguments: user=> (re
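For reference, a sketch of a lazy reductions with exactly that empty-collection behaviour; it is close to, though not necessarily identical with, what later shipped in clojure.core (named reductions* here to avoid clashing with the core version):

    (defn reductions*
      "Returns a lazy seq of the intermediate values of the reduction (as
      per reduce) of coll by f, starting with init."
      ([f coll]
       (lazy-seq
        (if-let [s (seq coll)]
          (reductions* f (first s) (rest s))
          (list (f)))))          ;; empty coll, no init: call f with no args
      ([f init coll]
       (cons init
             (lazy-seq
              (when-let [s (seq coll)]
                (reductions* f (f init (first s)) (rest s)))))))

    (take 5 (reductions* + (range)))   ;; => (0 1 3 6 10)
    (reductions* + [])                 ;; => (0)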

Re: reduction

2008-12-05 Thread Chouser
On Fri, Dec 5, 2008 at 10:50 AM, Chouser <[EMAIL PROTECTED]> wrote: > Google groups files section is having issues. > Here's 'reduction' as discussed in IRC, mostly written by Rich. I messed it up anyway -- tried to use if-let too early. Try thi

Re: reduction

2008-12-05 Thread Randall R Schulz
On Friday 05 December 2008 08:00, Rich Hickey wrote: > On Dec 5, 10:50 am, Chouser <[EMAIL PROTECTED]> wrote: > > Google groups files section is having issues. > > Here's 'reduction' as discussed in IRC, mostly written by Rich. > > Attachments are

Re: reduction

2008-12-05 Thread Rich Hickey
On Dec 5, 10:50 am, Chouser <[EMAIL PROTECTED]> wrote: > Google groups files section is having issues. > Here's 'reduction' as discussed in IRC, mostly written by Rich. > Attachments are preferred - thanks! Rich

reduction

2008-12-05 Thread Chouser
Google groups files section is having issues. Here's 'reduction' as discussed in IRC, mostly written by Rich. --Chouser