Re: Transducers improve performance more than expected

2016-05-11 Thread JvJ
That is very good to know.

On Tuesday, 10 May 2016 20:30:41 UTC-7, Alex Miller wrote:
> range is reducible and boils down to just a local loop in most cases, so shouldn't create any heap garbage (well, other than whatever your reducing function does).
>
> See:
> https://github.com/cloju

Re: Transducers improve performance more than expected

2016-05-10 Thread Alex Miller
range is reducible and boils down to just a local loop in most cases, so shouldn't create any heap garbage (well, other than whatever your reducing function does). See: https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/LongRange.java#L229-L238 Additionally, it can act as a ch
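A minimal sketch of what this means in practice (these snippets are illustrations added here, not part of the original thread): because `range` implements the reduce protocol directly, both a plain `reduce` and a `transduce` can consume it without realizing an intermediate sequence.

```clojure
;; Reducing directly over a range: LongRange's internal reduce is a
;; tight primitive loop, so no seq of boxed chunks is realized.
(reduce + 0 (range 1000))
;; => 499500

;; The same range can feed a transducer, again with no intermediate
;; sequence between the source and the reducing function:
(transduce (map inc) + 0 (range 1000))
;; => 500500
```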

Re: Transducers improve performance more than expected

2016-05-10 Thread Timothy Baldridge
In addition, as of 1.7, (range 1000) no longer creates a lazy sequence. It creates something that acts a bit like a sequence, but is reducible. So doing something like (reduce + 0 (range 1000)) is super fast and creates almost no garbage at all.

On Tue, May 10, 2016 at 5:46 PM, Alan Thompson wrot

Re: Transducers improve performance more than expected

2016-05-10 Thread Alan Thompson
I don't understand what you mean. '(range 1000)' produces a lazy sequence, and '(reduce + ...)' doesn't hold onto the head of the lazy sequence. Therefore, each element can be GC'd as soon as it is added into the running total, and the lazy sequence only produces new elements as they are requested by the

Re: Transducers improve performance more than expected

2016-05-10 Thread JvJ
That brings me to another thing I've wondered about. It is a typical Clojure idiom to do something like (reduce + (range 1000)). But, unlike imperative loops, this will cache all those 1000 elements. Doesn't this kind of bloat memory, especially with large sequences? How can you get around it (ot

Re: Transducers improve performance more than expected

2016-05-10 Thread Alex Miller
Because some of the time you don't want caching. For example, if you want to (later) reduce over a large (larger than memory even) external resource. eductions allow you to define the source in one spot but defer the (eager) reduction until later. On Tuesday, May 10, 2016 at 11:22:24 AM UTC-5,
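A small sketch of the deferral Alex describes (an added illustration, not from the thread): an `eduction` just bundles a source with one or more transducers, and no work happens until something reduces it.

```clojure
;; Define the pipeline in one spot; nothing is computed yet.
(def xs (eduction (filter even?) (map #(* % %)) (range 10)))

;; The filtering and squaring happen only now, eagerly, in one pass:
(reduce + 0 xs)
;; => 120
```

Because the work is deferred, the same definition could wrap a larger-than-memory source (a file reader, a database cursor) and still be reduced in constant space later.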

Re: Transducers improve performance more than expected

2016-05-10 Thread JvJ
In that case, why aren't eductions just lazy sequences?

On Monday, 9 May 2016 16:07:55 UTC-7, Alex Miller wrote:
> eductions are non-caching (will re-perform their work each time they are used), so most of the time I would say lazy sequences are preferable.
>
> On Monday, May 9, 2016 at 4:54:

Re: Transducers improve performance more than expected

2016-05-09 Thread Alex Miller
eductions are non-caching (will re-perform their work each time they are used), so most of the time I would say lazy sequences are preferable.

On Monday, May 9, 2016 at 4:54:48 PM UTC-5, JvJ wrote:
> In a similar vein, do you think that eductions are generally a better idea than lazy sequenc
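The non-caching behavior can be made visible with a side-effecting counter (an added sketch, not from the thread): the mapping function runs again for every reduction of the eduction, whereas a realized lazy seq would run it once per element and cache the results.

```clojure
;; Count how many times the mapping fn is invoked.
(def calls (atom 0))

(def e (eduction (map (fn [x] (swap! calls inc) x)) (range 3)))

(reduce + 0 e)  ; 3 invocations
(reduce + 0 e)  ; 3 more invocations
@calls
;; => 6  (3 elements x 2 reductions)
```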

Re: Transducers improve performance more than expected

2016-05-09 Thread JvJ
In a similar vein, do you think that eductions are generally a better idea than lazy sequences/for comprehensions?

On Sunday, 8 May 2016 22:24:15 UTC-7, Herwig Hochleitner wrote:
> My theory has been, that transducer stacks inline much better, hence allow for more optimizations by the jit.
>

Re: Transducers improve performance more than expected

2016-05-08 Thread Herwig Hochleitner
My theory has been that transducer stacks inline much better and hence allow for more optimizations by the JIT. In particular, I suspect that escape analysis works better on them, so the compiler can even move some of the remaining allocations to the stack. To verify this, try running with -verbose:g

Re: Transducers improve performance more than expected

2016-05-08 Thread Rangel Spasov
In my experience, the more intermediate collections you eliminate, the more you gain:

*Transducers:*
(criterium.core/with-progress-reporting (criterium.core/quick-bench (into [] (comp (map inc)) (range 10
Execution time mean : 3.803073
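The shape being benchmarked above can be sketched without criterium (an added illustration; the inputs here are small stand-ins, not the benchmark's actual range): `into` with a composed transducer builds the output vector in a single pass, while the equivalent seq pipeline allocates one intermediate lazy sequence per step.

```clojure
;; One pass, no intermediate collections:
(into [] (comp (map inc) (filter odd?)) (range 10))
;; => [1 3 5 7 9]

;; Same result via the seq pipeline, with an intermediate
;; lazy seq for map and another for filter:
(vec (filter odd? (map inc (range 10))))
;; => [1 3 5 7 9]
```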