On Fri, Jan 21, 2011 at 6:43 PM, Aaron Cohen wrote:
> max-key uses destructuring, which was one of the culprits for
> unexpectedly holding onto the head of lists before locals clearing was
> added.
This part of what I said is garbage, I'm sorry. I looked at max-key
too quickly, but there isn't any destructuring there.
That doesn't change that I th
On Thu, Jan 20, 2011 at 11:57 PM, Mark Engelberg wrote:
> On Thu, Jan 20, 2011 at 6:51 AM, Andreas Liljeqvist wrote:
>> I am sorry, I can't seem to reproduce the behavior at the moment :(
>> Mark, please tell me that I am not delusional...
>
> I definitely exhausted the heap running your program.
FWIW, no heap error on my system (Clojure 1.2, Java 1.6.0_13 -server).
--
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated - please be patient with your first post.
On Thu, Jan 20, 2011 at 6:51 AM, Andreas Liljeqvist wrote:
> I am sorry, I can't seem to reproduce the behavior at the moment :(
> Mark, please tell me that I am not delusional...
I definitely exhausted the heap running your program. I was using
Clojure 1.1, Java 1.6.0_21 with -server -Xmx1600M
I am sorry, I can't seem to reproduce the behavior at the moment :(
Mark, please tell me that I am not delusional...
Will try at home also.
2011/1/20 ka
> Andreas, how are you running that? Also, what do you see in the heap
> dump, and what is the JVM heap size?
Andreas, how are you running that? Also, what do you see in the heap
dump, and what is the JVM heap size?
Thank you, that explains the failure of the lazy-cseq using top-level defs.
I really hope it gets fixed; Clojure is going to be a hard sell if I have to
explain things like this to my coworkers :(
Anyhow there is still the problem with nonlazy cseq blowing the heap.
(defn cseq [n]
  (if (= 1 n)
    [1]
    (cons n (cseq (if (even? n)
                    (quot n 2)
                    (inc (* 3 n)))))))
David:
Here is a link to the exact source code of the program I was running
for my recently published results in the sister thread titled "Problem
with garbage collection? Was Euler 14".
https://github.com/jafingerhut/clojure-benchmarks/blob/master/collatz/collatz.clj-1.clj
Is this code al
This same problem was raised recently:
https://groups.google.com/group/clojure/browse_thread/thread/df4ae16ab0952786?tvc=2&q=memory
It isn't a GC problem; it is an issue in the Clojure compiler.
The issue seems to only affect top-level defs. At the top-level:
(reduce + (range 1000))
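A minimal sketch of the distinction being described (my own illustration; the `big` var name is hypothetical, not from the thread):

```clojure
;; Passing the lazy seq straight into reduce retains nothing but the
;; current element, so this is safe even at the top level:
(reduce + (range 1000))  ; => 499500

;; By contrast, naming the seq with a top-level def keeps its head
;; reachable for the whole reduction (hypothetical illustration,
;; left in comments so this sketch stays safe to run):
;;   (def big (range 100000000))
;;   (reduce + big)   ; every realized element stays reachable via #'big
```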
>> Running in VisualVM suggests that the % of used heap is actually
>> quite low. So the problem is that GC isn't running often enough, and
>> the JVM has to keep allocating more memory.
By "problem" I meant, in my case, excessive memory consumption.
> Odd. I'd expect the JVM to run a GC immediately
On Wed, Jan 19, 2011 at 6:48 PM, ka wrote:
> Running in VisualVM suggests that the % of used heap is actually
> quite low. So the problem is that GC isn't running often enough, and
> the JVM has to keep allocating more memory.
Odd. I'd expect the JVM to run a GC immediately before reporting that
the
To me this looks totally fine: max-key should keep at most two
sequences in memory. I don't think there should be any difference
between the non-lazy and lazy versions, as the depth of cseq is ~500.
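As a small illustration of that point (my own example, not from the thread), max-key compares candidates pairwise, so only the current best and the current candidate need to be realized at any one time:

```clojure
;; max-key picks the argument that maximizes the key fn; with
;; (comp count cseq) as the key, at most two chains need to be
;; realized at once during the reduction.
(apply max-key count [[1] [1 2 3] [1 2]])
;; => [1 2 3]
```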
The non-lazy version works for me (no heap error) for inputs 1M, 2M,
4M, but for 4M the java process
On Mon, Jan 17, 2011 at 11:55 AM, Andreas Liljeqvist wrote:
> I don't see why the cseqs have to be lazy; they are at most 525
> elements long.
> Shouldn't each sequence only be produced when it is reduced in max-key
> and then discarded?
You're right, the chains aren't as long as I thought.
I don't see why the cseqs have to be lazy; they are at most 525
elements long.
Shouldn't each sequence only be produced when it is reduced in max-key
and then discarded?
But it makes a difference:
(defn cseq [n]
  (if (= 1 n)
    [1]
    (lazy-seq (cons n (cseq (if (even? n)
                              (quot n 2)
                              (inc (* 3 n))))))))
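For context, a hypothetical driver for the Euler 14 problem the thread is about (the exact expression used by the posters is not shown; `longest-chain-start` and the max-key-over-chain-length approach are my assumptions):

```clojure
;; Collatz chain starting at n, as discussed above.
(defn cseq [n]
  (if (= 1 n)
    [1]
    (lazy-seq (cons n (cseq (if (even? n)
                              (quot n 2)
                              (inc (* 3 n))))))))

(defn longest-chain-start
  "Starting number below `limit` whose Collatz chain is longest."
  [limit]
  (apply max-key (comp count cseq) (range 1 limit)))

(longest-chain-start 1000)  ; => 871
```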
Your cseq is not lazy, and some of the sequences can be quite long, so
it wouldn't surprise me if that's the source of your problem.
You can test if this is the problem by doing something like:
(dorun (map cseq (range 1 100))) which removes the max-key from
the computation entirely.
You'll pr