Yeah, rewrite that to not hold the head of "lines". In particular, get rid
of "counts" entirely, get rid of "(nth lines ...)", and use something like
(loop [lines (seq lines)
       res {}]
  (if lines
    (recur (next lines) (update-res res (first lines)))
    res))
as your loop.
On Tue, Nov
Hi guys,
I wrote the following code to parse log files.
It's fine for parsing small ones,
but with big log files I got the following error:
"OutOfMemoryError GC overhead limit exceeded clojure.core/line-seq
(core.clj:2679)"
(defn parse-file
  ""
  [file]
  (with-open [rdr (io/reader file)]
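For reference, a minimal sketch of the rewrite suggested in the reply above, applied to this function (update-res is the hypothetical per-line merge function named there; the rest of the original body isn't shown in this excerpt):

(require '[clojure.java.io :as io])

(defn parse-file
  ""
  [file]
  (with-open [rdr (io/reader file)]
    (loop [lines (line-seq rdr)   ; never give the whole seq a lasting name
           res   {}]
      (if lines
        (recur (next lines) (update-res res (first lines)))
        res))))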
Just a guess: You put 500 (or so) actions to the internal agent
queue and the agent isn't fast enough to keep up. The queue grows very
fast and goes out of memory.
On Wed, Aug 7, 2013 at 3:09 PM, Jérémie Campari
wrote:
> Hello !
>
> I don't understand why my code rai
Hello !
I don't understand why my code raises an Out of memory exception.
I have an agent that calls a function which appends a line to the "test.log"
file. The out of memory is on
PersistentHashMap$BitmapIndexedNode.assoc(PersistentHashMap.java:624).
(use 'clojure.java
On Saturday, July 6, 2013 4:01:31 PM UTC-4, tbc++ wrote:
>
> Go blocks are GC'd but not until they complete running. The problem is
> that you're creating go blocks faster than they can run. Creating go blocks
> is very cheap, taking/putting into channels is also cheap but not quite as
> cheap
2013/7/6 MikeM :
> Got an out of memory when experimenting with core.async channels in go
> blocks. The following is a simple example.
>
> (defn go-loop
>   []
>   (let [c0 (chan)]
>     (while true
>       (go
>         (>! c0 1))
>       (go
>         (println (<! c0))))))
> ;(.start (Th
Go blocks are GC'd but not until they complete running. The problem is that
you're creating go blocks faster than they can run. Creating go blocks is
very cheap, taking/putting into channels is also cheap but not quite as
cheap. Therefore the outer loop will eventually allocate so many blocks
that
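The restructuring suggested below ("move the loops into the go blocks") would look roughly like this; a sketch, not MikeM's original code. Only two go blocks are ever created, and each loops internally:

(require '[clojure.core.async :refer [chan go >! <!]])

(defn go-loop-fixed
  []
  (let [c0 (chan)]
    (go (while true          ; one producer block, looping inside the go
          (>! c0 1)))
    (go (while true          ; one consumer block, looping inside the go
          (println (<! c0))))))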
On Saturday, July 6, 2013 11:46:51 AM UTC-4, David Nolen wrote:
>
> This isn't a bug, you're in an infinite loop constructing go blocks. You
> should probably move the loops into the go blocks.
>
> I assumed go blocks are garbage collected when they go out of scope, but
maybe I don't understan
This isn't a bug, you're in an infinite loop constructing go blocks. You
should probably move the loops into the go blocks.
David
On Sat, Jul 6, 2013 at 7:31 AM, MikeM wrote:
> Got an out of memory when experimenting with core.async channels in go
> blocks. The following is
Got an out of memory when experimenting with core.async channels in go
blocks. The following is a simple example.
(defn go-loop
  []
  (let [c0 (chan)]
    (while true
      (go
        (>! c0 1))
      (go
        (println (<! c0))))))
If there was a Clojure -> LLVM -> CUDA pipeline... Sorry just thinking
about possibilities
On Jun 16, 2013 3:19 AM, "Jim - FooBar();" wrote:
> Hi guys,
>
> I tried for fun to write a parallel brute-force password cracker. I
> particularly thought that if I can generate lazily all the possible
>
Have you tried wrapping recursive calls with lazy-seq?
(defn brute-force "..."
  ([...] ...)
  ([check-pred possibilities]
   (lazy-seq
    (apply brute-force 4 check-pred possibilities
Here's a token-types rewrite:
(def token-types
  (let [-chars #(map char (range (int %1) (-> %2 int inc
Hi guys,
I tried for fun to write a parallel brute-force password cracker. In
particular, I thought that if I can generate all the possible
combinations lazily, I'll have no trouble finding the match (memory-wise).
Something like this:
(def token-types "All the possible characters grouped."
{
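A minimal sketch of what that could look like (the group names, the -chars helper body, and the generator below are illustrative assumptions, not the original poster's code):

(def token-types
  "All the possible characters grouped."
  (let [-chars #(map char (range (int %1) (-> %2 int inc)))]
    {:lower-case (-chars \a \z)
     :upper-case (-chars \A \Z)
     :digits     (-chars \0 \9)}))

;; Lazily generate every string of length n over the given characters;
;; for yields a lazy seq, so nothing is realized until it is consumed.
(defn combinations
  [chars n]
  (if (zero? n)
    [""]
    (for [c chars, suffix (combinations chars (dec n))]
      (str c suffix))))

;; e.g. (first (filter #(= "abc" %) (combinations (:lower-case token-types) 3)))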
derstand my problem. The exact same code throws out of
> memory when I change map to pmap.
>
> My monthly data is evenly divided into 30 sets. For e.g total monthly
> data = 9 records, daily data size for each day = 3000 records. I
> am trying to achieve performance gain by proce
You didn't understand my problem. The exact same code throws out of
memory when I change map to pmap.
My monthly data is evenly divided into 30 sets. E.g. total monthly
data = 90000 records, daily data size for each day = 3000 records. I
am trying to achieve performance gain by processin
<shoeb.bhinderw...@gmail.com> wrote:
> Problem summary: I am running out of memory using pmap but the same code
> works with regular map function.
>
> My problem is that I am trying to break my data into sets and process them
> in parallel. My data is for an entire month and
Problem summary: I am running out of memory using pmap but the same code
works with regular map function.
My problem is that I am trying to break my data into sets and process them
in parallel. My data is for an entire month and I am breaking it into 30/31
sets - one for each day. I run a
On Thu, Dec 23, 2010 at 11:29 AM, Paul Mooser wrote:
> So, doesn't this represent a bug at least ? I'm sometimes confused
> when this sort of issue doesn't get more attention, and I'm uncertain
> what the process is for filing a bug, since my impression is that we
> are supposed to have issues va
So, doesn't this represent a bug at least? I'm sometimes confused
when this sort of issue doesn't get more attention, and I'm uncertain
what the process is for filing a bug, since my impression is that we
are supposed to have issues validated by discussion on the group
before filing an actual tic
On Wed, Dec 22, 2010 at 6:38 PM, David Nolen wrote:
> On Wed, Dec 22, 2010 at 6:10 PM, Ken Wesson wrote:
>>
>> On Wed, Dec 22, 2010 at 6:08 PM, Chris Riddoch wrote:
>> > On Wed, Dec 22, 2010 at 3:46 PM, David Nolen
>> > wrote:
>> >> An entire collection of 5e7 *objects* is being realized into me
On Wed, Dec 22, 2010 at 6:10 PM, Ken Wesson wrote:
> On Wed, Dec 22, 2010 at 6:08 PM, Chris Riddoch wrote:
> > On Wed, Dec 22, 2010 at 3:46 PM, David Nolen
> wrote:
> >> An entire collection of 5e7 *objects* is being realized into memory as it
> is
> >> being reduced down to a single value to be
On Wed, Dec 22, 2010 at 6:08 PM, Chris Riddoch wrote:
> On Wed, Dec 22, 2010 at 3:46 PM, David Nolen wrote:
>> An entire collection of 5e7 *objects* is being realized into memory as it is
>> being reduced down to a single value to be stored into a var. I would expect
>> this to perform poorly in a
On Wed, Dec 22, 2010 at 3:46 PM, David Nolen wrote:
> An entire collection of 5e7 *objects* is being realized into memory as it is
> being reduced down to a single value to be stored into a var. I would expect
> this to perform poorly in any language.
Range doesn't return a lazy seq? Or reduce so
On Wed, Dec 22, 2010 at 5:32 PM, Chris Riddoch wrote
>
> If the workarounds mentioned actually work (I haven't tried) I really
> don't understand why. This *looks* like a genuine bug to me, but I
> really don't know Clojure's internals well enough (yet) to be able to
> have the slightest hint wh
On Wed, Dec 22, 2010 at 9:54 AM, Laurent PETIT wrote:
> 2010/12/22 Jeff Palmucci
>>
>> I've worked around this sort of thing in the past by wrapping the
>> initialization in a closure. My macros:
>
> Couldn't it just be a wrap with (let [] ), and let the choice of running it
> once or not by choo
> (defmacro once-fn [args & body]
>   `(^{:once true} fn* ~args ~@body))
>
> (defmacro top-level-run "work around a memory leak in the repl"
>   [& body]
>   `((once-fn []
>      ~@body)))
>
> You'll find that:
>
> (def out_of_mem (top-level-run (reduce + 0 (range 50000000))))
(defmacro top-level-run "work around a memory leak in the repl"
  [& body]
  `((once-fn []
     ~@body)))
You'll find that:
(def out_of_mem (top-level-run (reduce + 0 (range 50000000))))
does not run out of memory.
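The mechanism, for reference: ^{:once true} on fn* tells the compiler the closure will be invoked at most once, so it may clear the closed-over locals after use; invoking it immediately means no top-level expression keeps naming the intermediate seq. Roughly (namespace qualification omitted):

(macroexpand-1 '(top-level-run (reduce + 0 (range 50000000))))
;; => ((once-fn [] (reduce + 0 (range 50000000))))
;; which in turn is an immediately-invoked run-once closure:
;; ((^{:once true} fn* [] (reduce + 0 (range 50000000))))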
e I just changed the settings
> in the source, before install.
>
> There's probably a list of pros/cons to upping the default heap size
> that you may want to consider.
>
> Tim
>
> On Dec 21, 7:09 am, Miles Trebilco wrote:
> > Why does this cause an out of memory err
d the settings
in the source, before install.
There's probably a list of pros/cons to upping the default heap size
that you may want to consider.
Tim
On Dec 21, 7:09 am, Miles Trebilco wrote:
> Why does this cause an out of memory error:
>
> (def out_of_mem
>   (reduce + 0 (range 50000000)))
On Tue, Dec 21, 2010 at 9:09 AM, Miles Trebilco wrote:
> Why does this cause an out of memory error:
>
> (def out_of_mem
>   (reduce + 0 (range 50000000)))
>
> while this does not:
>
> (def not_out_of_mem
>   (let [result 0]
>     (reduce + result (range 50000000))))
Why does this cause an out of memory error:
(def out_of_mem
  (reduce + 0 (range 50000000)))
while this does not:
(def not_out_of_mem
  (let [result 0]
    (reduce + result (range 50000000))))
and neither does this in the REPL:
(reduce + 0 (range 50000000))
- Miles
Thanks Ken, your suggestion solved my problem with the OOM exception.
I tried your suggestion to run it in parallel but I didn't see much
difference. Instead I called future on the let call and that helped
the performance.
On Dec 17, 2:55 pm, Ken Wesson wrote:
> On Fri, Dec 17, 2010 at 5:39 PM, c
On Fri, Dec 17, 2010 at 5:39 PM, clj123 wrote:
> (defn persist-rows
>   [headers rows id]
>   (let [mrows (transform-rows rows id)]
>     (with-db *db* (try
>                     (apply insert-into-table
>                            :my-table
>                            [:col1 :col2 :col3]
>                            mrows)))
>     nil))
>
> (d
a database large number of records, however
> > it's not scaling correctly. For 100 records it takes 10 seconds, for
> > 100 records it takes 2 min to save. But for 2500000 records it
> > throws Java Heap out of memory exception.
>
> > I've tried separtin
On Thu, Dec 16, 2010 at 09:19, clj123 wrote:
> Hello,
>
> I'm trying to insert in a database large number of records, however
> it's not scaling correctly. For 100 records it takes 10 seconds, for
> 100 records it takes 2 min to save. But for 250 records it
You might be coming close to OOM with the in-memory processing without
knowing it, and the batched (lazy) version is probably holding onto
data, creating the memory leak. Would you be able to post the relevant
source?
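To illustrate the kind of batching being discussed, something along these lines writes the rows chunk by chunk without ever naming the whole lazy seq (a sketch only; insert-into-table, transform-rows, with-db and *db* come from the code quoted above, and the batch size is arbitrary):

(defn persist-rows-batched
  [headers rows id]
  (doseq [batch (partition-all 1000 (transform-rows rows id))]
    (with-db *db*
      (apply insert-into-table
             :my-table
             [:col1 :col2 :col3]
             batch)))
  nil)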
Hello,
I'm trying to insert a large number of records into a database, however
it's not scaling correctly. For 100 records it takes 10 seconds, for
100 records it takes 2 min to save. But for 2500000 records it
throws Java Heap out of memory exception.
I've tried separating the rec
On Tue, Nov 23, 2010 at 9:08 PM, Rick Moynihan wrote:
> On 23 November 2010 19:01, Ken Wesson wrote:
>> On Tue, Nov 23, 2010 at 7:49 AM, Laurent PETIT
>> wrote:
>>> try
>>> (def x #(iterate inc 1))
>>> (take 1 (drop 10 (x))
>>>
>>> if you do not want to blow up the memory.
>>
>> I wonde
On 23 November 2010 19:01, Ken Wesson wrote:
> On Tue, Nov 23, 2010 at 7:49 AM, Laurent PETIT
> wrote:
>> try
>> (def x #(iterate inc 1))
>> (take 1 (drop 10 (x))
>>
>> if you do not want to blow up the memory.
>
> I wonder if an uncached lazy seq variant that cannot hold onto its
> head
>> try
>> (def x #(iterate inc 1))
>> (take 1 (drop 10 (x))
>>
>> if you do not want to blow up the memory.
>
> I wonder if an uncached lazy seq variant that cannot hold onto its
> head would be useful to have in core?
I would argue that such a feature wouldn't be very useful. Let's
consid
On Tue, Nov 23, 2010 at 7:49 AM, Laurent PETIT wrote:
> try
> (def x #(iterate inc 1))
> (take 1 (drop 10 (x))
>
> if you do not want to blow up the memory.
I wonder if an uncached lazy seq variant that cannot hold onto its
head would be useful to have in core?
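Spelled out with balanced parens (the drop count in these excerpts is truncated; 100000000 below is just an illustrative large value):

(def x #(iterate inc 1))        ; x is a function, so no var ever holds the seq's head
(take 1 (drop 100000000 (x)))   ; each call rebuilds the seq and memory stays flat
;; => (100000001)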
2010/11/23 DarkMagus
>
> % (def x (iterate inc 1))
> % (take 1 (drop 1 x))
>
> Can someone, please, explain to me why the above code crashes with an
> out of memory exception? it works fine when the drop number is small.
> But as I increase the drop number clojure
% (def x (iterate inc 1))
% (take 1 (drop 1 x))
Can someone please explain to me why the above code crashes with an
out of memory exception? It works fine when the drop number is small,
but as I increase the drop number Clojure starts getting slower and
slower until the point where I
On Wed, Nov 4, 2009 at 6:46 AM, John Harrop wrote:
> On Tue, Nov 3, 2009 at 1:53 AM, Alex Osborne wrote:
>>
>> The new loop uses the outer-let to get around this:
>> (let [G__13697 s
>>       [x & xs] G__13697
>>       y xs]
>>   (loop* [G__13697 G__13697
>>           y y]
>>     (let [[x &
On Tue, Nov 10, 2009 at 7:21 AM, Rich Hickey wrote:
> Right - pervasive locals clearing will definitely do the trick here.
> Interestingly, when I was at Microsoft and asked them about handling
> this issue for the CLR they stated plainly it wasn't an issue at all -
> their system can fully detec
On Wed, Nov 4, 2009 at 8:47 AM, Christophe Grand wrote:
>
> On Tue, Nov 3, 2009 at 7:27 PM, Paul Mooser wrote:
>>
>> Ah -- I hadn't understood that when using destructuring, that
>> subsequent bindings could refer to the destructured elements. I should
>> have, since clojure "only" has let*, and
I imagine he's just busy. At this point, I plan to create a ticket on
assembla, if that's possible - I think I just need to create a login
and then file it.
On Nov 9, 2:07 pm, John Harrop wrote:
> On Mon, Nov 9, 2009 at 4:31 PM, Rock wrote:
> > I've been following this thread, and I must say I'
On Mon, Nov 9, 2009 at 4:31 PM, Rock wrote:
> I've been following this thread, and I must say I'm puzzled that Rich
> hasn't said anything at all about this issue yet. It seems important
> enough to hear his own opinion.
My observation over the past few months is that Rich has long absences awa
I've been following this thread, and I must say I'm puzzled that Rich
hasn't said anything at all about this issue yet. It seems important
enough to hear his own opinion.
On 6 Nov, 18:56, Paul Mooser wrote:
> So, I've been hoping that Rich (or someone?) would weigh in on this,
> and give the go
It does make me wonder, however, if having the lazy-seq cache things
is sort of conflating laziness and consistency, since as you point
out, not all ISeq implementations do any sort of caching.
I wonder if it would be interesting to decompose it into 'lazy-
seq' (uncached), and 'cached-seq'. I un
I completely understand the difference between the ISeq interface, and
the particular implementation (lazy-seq) that results in these
problems. It would be fairly straightforward, I think, to write some
kind of uncached-lazy-seq which doesn't exhibit these problems, but
I've felt that is sidestepp
On Tue, Nov 3, 2009 at 11:51 PM, Mark Engelberg
wrote:
>
> Clojure's built-in "range" function (last time I looked) essentially
> produces an uncached sequence. And that makes a lot of sense.
'range' has since changed and now produces a chunked lazy seq
(master branch post-1.0).
> Producing th
Well, I care (conceptually) more about the fix being made, rather than
the exact timeframe. If we had to wait until clojure-in-clojure, I
think I could live with that, since the issue can be readily avoided.
We'll see if Rich has a chance to chime-in to acknowledge whether or
not he considers this
On Tue, Nov 3, 2009 at 7:27 PM, Paul Mooser wrote:
>
> Ah -- I hadn't understood that when using destructuring, that
> subsequent bindings could refer to the destructured elements. I should
> have, since clojure "only" has let*, and this behavior seems
> consistent with that, for binding.
>
> Ee
On Tue, Nov 3, 2009 at 1:53 AM, Alex Osborne wrote:
> The new loop uses the outer-let to get around this:
>
> (let [G__13697 s
>       [x & xs] G__13697
>       y xs]
>   (loop* [G__13697 G__13697
>           y y]
>     (let [[x & xs] G__13697
>           y y]
>       ...)))
>
Now
I agree that seqs carry a large degree of risk. You have to work very
hard to avoid giving your large sequences a name, lest you
accidentally "hang on to the head".
In Clojure's early days, I complained about this and described some of
my own experiments with uncached sequences. Rich said he wa
I understand the pragmatism of your approach, but it's really
unfortunate. Seqs are a really convenient abstraction, and the ability
to model arbitrarily large or infinite ones (with laziness) is really
useful. In my opinion, only using seqs when all of the data can be fit
into memory really under
In the particular case given below, I'd assume that during the
invocation of print-seq, the binding to "s" (the head of the sequence)
would be retained, because my mental model for the execution
environment of a function is that it is the environment in which it
was declared, extended with the
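The shape of the case being described, roughly (print-seq is the name used above; the body here is an assumption for illustration):

(defn print-seq
  [s]
  (doseq [x s]      ; traverses the whole, possibly huge, sequence
    (println x)))

;; the question is whether the argument binding s pins the head of the
;; sequence for the entire traversal, e.g. (print-seq (range 50000000))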
We encountered similar problems at work trying to wrap I/O up into lazy
seqs. The problem is that it is very easy to accidentally hold on to the
head of a seq while enumerating its elements. In addition, we had problems
with not closing file descriptors. A common pattern was to open a file,
pr
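A sketch of the kind of pattern being alluded to (assumed, not the poster's actual code): returning a lazy seq of lines from inside with-open closes the reader before the seq is realized, and giving the seq a name keeps its head alive.

(require '[clojure.java.io :as io])

(defn broken-lines
  [file]
  (with-open [rdr (io/reader file)]
    (line-seq rdr)))           ; rdr is closed before the lazy seq is realized

(defn process-lines
  [file f]
  (with-open [rdr (io/reader file)]
    (doseq [line (line-seq rdr)]   ; consume fully while the reader is open,
      (f line))))                  ; without retaining the head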
On Tue, Nov 3, 2009 at 5:19 PM, Paul Mooser wrote:
>
> I understand the pragmatism of your approach, but it's really
> unfortunate. Seqs are a really convenient abstraction, and the ability
> to model arbitrarily large or infinite ones (with laziness) is really
> useful. In my opinion, only using
Ah -- I hadn't understood that when using destructuring,
subsequent bindings could refer to the destructured elements. I should
have, since clojure "only" has let*, and this behavior seems
consistent with that, for binding.
Eeww. It seems like quite a thorny issue to solve, even if simple to
Paul Mooser wrote:
> Good job tracking down that diff -- upon looking at it, unfortunately,
> I obviously don't understand the underlying issue being fixed (the
> inter-binding dependencies) because the "old code" basically matches
> what I would think would be the way to avoid introducing this in
Good job tracking down that diff -- upon looking at it, unfortunately,
I obviously don't understand the underlying issue being fixed (the
inter-binding dependencies) because the "old code" basically matches
what I would think would be the way to avoid introducing this in an
outer let form -- clear
This is great advice, of course. On the other hand, I feel it's
important to be explicitly clear about which forms will hold on to
(seemingly) transient data. Certain things are explicitly clear about
this (such as the docstring for doseq), and this particular case is
unfortunate because in the co
On Mon, Nov 2, 2009 at 2:39 PM, Christophe Grand wrote:
> Right now I can't see how loop can be made to support both cases.
> Hopefully someone else will.
In the meantime, remember that it's always worth trying to implement
seq-processing in terms of map, reduce, filter, for, and friends if
poss
Hi Paul,
It's indeed surprising and at first glance, it looks like a bug but
after researching the logs, this let form was introduced in the
following commit
http://github.com/richhickey/clojure/commit/288f34dbba4a9e643dd7a7f77642d0f0088f95ad
with comment "fixed loop with destructuring and inter-
I'm a little surprised I haven't seen more response on this topic,
since this class of bug (inadvertently holding onto the head of
sequences) is pretty nasty to run into, and is sort of awful to debug.
I'm wondering if there's a different way to write the loop macro so
that it doesn't expand into
From looking at the source code of the loop macro, it looks like this
might be particular to destructuring with loop, rather than being
related to destructuring in general?
A user on IRC named hiredman had the excellent idea (which should have
occurred to me, but didn't) to macroexpand my code.
A macro expansion of
(loop [[head & tail] (repeat 1)] (recur tail))
results in:
(let* [G__10 (repeat 1)
       vec__11 G__10
       head (clojure.core/nth vec__11 0 nil)
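The rest of the expansion follows the pattern quoted elsewhere in the thread; the gensym names below are invented for illustration. The point is that the outer let* keeps G__10 bound to the head of (repeat 1) for the lifetime of the loop:

(let* [G__10 (repeat 1)
       vec__11 G__10
       head (clojure.core/nth vec__11 0 nil)
       tail (clojure.core/nthnext vec__11 1)]
  (loop* [G__10 G__10]
    (let* [vec__12 G__10
           head (clojure.core/nth vec__12 0 nil)
           tail (clojure.core/nthnext vec__12 1)]
      (recur tail))))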
I actually restructured my code (not the toy example posted here) to
avoid the destructuring, and was disappointed to find it also
eventually blows up on 1.6 as well. I'm reasonably certain in that
case that I'm not holding on to any of the sequence (since I don't
refer to it outside the invocatio
On Fri, Oct 30, 2009 at 3:15 PM, Paul Mooser wrote:
> Is this behavior due to some artifact of destructuring I'm not aware
> of (or something else I'm missing), or is there a bug? If it sounds
> like a bug, can anyone else reproduce?
>
> Thanks!
I vaguely remember something like this coming up
I was working with a large data set earlier today, and I had written a
loop/recur where I was passing in a huge seq to the first iteration,
and I was surprised when I ran out of heap space, because I was very
careful not to hold on to the head of the seq, and I thought that loop
ended up rebinding
(ns Trial
  (:use queries)
  (:import (java.io BufferedReader IOException InputStreamReader)
           java.net.URL
           javafiles.Porter))
;;
(def *server* "fiji4.ccs.neu
Sorry to resurrect this, but I noticed that there isn't an issue to
track this - is this something unlikely to be fixed officially for
1.0? The workaround you posted certainly works for me, but I just
wanted to make sure the actual core.clj filter implementation receives
the fix eventually.
On Sat, Dec 13, 2008 at 5:51 AM, Rich Hickey wrote:
> No you can't, for the same reasons you can't for Iterator or
> Enumeration seqs. Again it comes down to abstractions, and the
> abstraction for (seq x) is one on persistent collections. It presumes
> that (seq x) is referentially transparent,
On Dec 13, 2008, at 2:18 AM, Mark Engelberg wrote:
>
> On Fri, Dec 12, 2008 at 9:28 PM, Rich Hickey
> wrote:
>> I think it's very important not to conflate different notions of
>> sequences. Clojure's model a very specific abstraction, the Lisp
>> list,
>> originally implemented as a singly
On Fri, Dec 12, 2008 at 9:28 PM, Rich Hickey wrote:
> I think it's very important not to conflate different notions of
> sequences. Clojure's model a very specific abstraction, the Lisp list,
> originally implemented as a singly-linked list of cons cells. It is a
> persistent abstraction, first/s
On Fri, Dec 12, 2008 at 10:09 PM, Mark Engelberg
wrote:
>
> On Fri, Dec 12, 2008 at 5:28 PM, Paul Mooser wrote:
>>
>> On Dec 12, 3:15 pm, "Mark Engelberg" wrote:
>>>And in fact, it turns out that in those languages, uncached lazy lists end
>>>up rarely used.
>>
>> Did you mean that the cached
On Fri, Dec 12, 2008 at 5:28 PM, Paul Mooser wrote:
>
> On Dec 12, 3:15 pm, "Mark Engelberg" wrote:
>>And in fact, it turns out that in those languages, uncached lazy lists end up
>>rarely used.
>
> Did you mean that the cached lazy lists are rarely used? Or does
> everyone actually choose to u
On Dec 12, 3:15 pm, "Mark Engelberg" wrote:
>And in fact, it turns out that in those languages, uncached lazy lists end up
>rarely used.
Did you mean that the cached lazy lists are rarely used? Or does
everyone actually choose to use the cached ones?
On Dec 12, 1:03 pm, Paul Mooser wrote:
> On Dec 12, 6:37 am, Rich Hickey wrote:
>
> > I appreciate the time you and others have spent on this, and will
> > improve filter, but I'm not sure where you are getting your
> > presumptions about lazy sequences. They are not a magic bullet that
> >
if you store something, and it requires the program to go off and look
> in the "slow part of memory" to find it, you're much worse off than if
> you had just recomputed.
>
> Sometimes, if you know you are going to be traversing a sequence more
> than once, you would be
On Friday 12 December 2008 15:15, Mark Engelberg wrote:
> ...
>
> --Mark
Not being nearly sophisticated enough in Clojure, FP or the relevant
concepts to say anything other than "that all makes complete sense to
me," I wonder only what would be the impact on existing programs were
the default
you need better
performance when traversing a lazy sequence multiple times, you may
benefit from explicitly realizing the result of the intermediate lazy
computations, or using a cached lazy sequence if that's what you
need." On the other hand, if you keep things as they are, I can
pretty m
On Dec 12, 6:37 am, Rich Hickey wrote:
> I appreciate the time you and others have spent on this, and will
> improve filter, but I'm not sure where you are getting your
> presumptions about lazy sequences. They are not a magic bullet that
> makes working with data bigger than memory transparent
On Dec 12, 12:29 am, "Mark Engelberg"
wrote:
> On Mon, Dec 8, 2008 at 6:51 PM, Rich Hickey wrote:
>
> I don't have the latest build of Clojure with atoms, so I
> reimplemented Rich's filter solution using refs, turning:
>
> > (defn filter
> >   [pred coll]
> >   (let [sa (atom (seq coll))
>
On Mon, Dec 8, 2008 at 6:51 PM, Rich Hickey wrote:
I don't have the latest build of Clojure with atoms, so I
reimplemented Rich's filter solution using refs, turning:
> (defn filter
>   [pred coll]
>   (let [sa (atom (seq coll))
>         step (fn step []
>                (when-let [s @sa
On Dec 8, 2008, at 8:56 PM, Stephen C. Gilardi wrote:
> I think I finally see the problem. The "rest expression" in filter's
> call to lazy-cons has a reference to "coll" in it. That's all it
> takes for coll to be retained during the entire calculation of the
> rest.
>
> (defn filter
> "
On Mon, Dec 8, 2008 at 5:56 PM, Stephen C. Gilardi <[EMAIL PROTECTED]> wrote:
> I think I finally see the problem. The "rest expression" in filter's call to
> lazy-cons has a reference to "coll" in it. That's all it takes for coll to
> be retained during the entire calculation of the rest.
>
Well
On Mon, Dec 8, 2008 at 5:56 PM, Stephen C. Gilardi <[EMAIL PROTECTED]> wrote:
> I think I finally see the problem. The "rest expression" in filter's call to
> lazy-cons has a reference to "coll" in it. That's all it takes for coll to
> be retained during the entire calculation of the rest.
>
Well
I think I finally see the problem. The "rest expression" in filter's
call to lazy-cons has a reference to "coll" in it. That's all it takes
for coll to be retained during the entire calculation of the rest.
(defn filter
  "Returns a lazy seq of the items in coll for which
  (pred item) return
On Dec 8, 2008, at 8:40 PM, Stephen C. Gilardi wrote:
This looked very promising to me. For one thing, I remembered that
the root of the big chain of LazyCons objects in memory (as
displayed by the YourKit profiler) was "f".
Now it's "r" and is listed as a "stack local".
--Steve
smime.
On Dec 8, 2008, at 7:45 PM, Mark Engelberg wrote:
I have an idea to try, but I'm not set up to build the java sources on
my computer, so maybe someone else can run with it:
This looked very promising to me. For one thing, I remembered that the
root of the big chain of LazyCons objects in me
I share your concern about the LazyCons problem - hopefully Rich and
others are looking into this.
I have continued to experiment to see if I can gain some understanding
that might help with a solution.
The following is something I thought of today, and I'd like to see if
others get the same res
Is there a place we should file an official bug, so that Rich and the
rest of the clojure people are aware of it? I imagine that they may
have read this thread, but I'm not sure if there is an "official"
process to make sure these things get addressed.
As I said in a previous reply, it's not clea
Has anyone made progress on this bug?
The simplest form of the bug was this:
(defn splode [n]
  (doseq [i (filter #(= % 20) (map inc (range n)))]))
This blows the heap, but it shouldn't.
I find this deeply troubling, because if this doesn't work, it
undermines my faith in the implementation of
On Dec 7, 1:52 am, Chouser <[EMAIL PROTECTED]> wrote:
> On Sun, Dec 7, 2008 at 1:16 AM, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> > I'm also running into, what I believe to be, the same problem. Every
> > time I run the following code I get "java.lang.OutOfMemoryError: Java
> > heap spac
On Sun, Dec 7, 2008 at 1:16 AM, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> I'm also running into, what I believe to be, the same problem. Every
> time I run the following code I get "java.lang.OutOfMemoryError: Java
> heap space".
>
> (use 'clojure.contrib.duck-streams)
> (count (line-seq (r
I'm also running into, what I believe to be, the same problem. Every
time I run the following code I get "java.lang.OutOfMemoryError: Java
heap space".
(use 'clojure.contrib.duck-streams)
(count (line-seq (reader "big.csv")))
If I change "count" to "dorun" then it will return without problem.
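For reference, the equivalent experiment in current terms (clojure.contrib.duck-streams is long gone): with the reader scoped by with-open and nothing else naming the seq, counting the lines of a large file should run in constant space.

(require '[clojure.java.io :as io])

(with-open [rdr (io/reader "big.csv")]
  (count (line-seq rdr)))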
On Dec 6, 8:38 pm, puzzler <[EMAIL PROTECTED]> wrote:
> Maybe LazyCons shouldn't cache. Make LazyCons something that executes
> its function every time. For most things, it's not a problem because
> sequences are often traversed only once. If a person wants to cache
> it for multiple traversals