On Wed, Aug 4, 2010 at 12:35 PM, David Nolen <dnolen.li...@gmail.com> wrote:

> On Wed, Aug 4, 2010 at 12:05 PM, Lee Spector <lspec...@hampshire.edu>wrote:
>
>> Thanks! I hadn't really thought about that. I realized there'd be a lot of
>> gc -- part of what makes my "burn" reliably slow is all of the allocation --
>> but I didn't consider how that would affect the concurrency. I don't yet
>> understand what's going on, but I can see how this can be a part of the
>> story. I normally run my real code with -Xmx8000m -XX:+UseParallelGC (more
>> on this below), but I wasn't doing that here.
>>
>> Can anyone suggest a good alternative to my "burn" function that doesn't
>> allocate so much? A lot of the other simple things I've tried end up getting
>> optimized away or have other problems e.g. with potentially shared state.
>>
>
> 8GB of RAM seems kinda low to me for this kind of microbenchmark,
> especially when running 16 or 48 cores. But I'm no expert. As a far-fetched
> comparison, when I played around with Aleph on an 8-core box, doing nothing
> but hitting the server as fast as possible ate up 2GB of RAM. Seems like
> there was less allocation there than here (I couldn't even serve more than
> 20k reqs a second, much less 1e6).
>
> Why not just replace burn with a large number of arithmetic operations?
> Memory issues have less of a chance of coming into play then.
>

Primitive arithmetic operations, like (+ (int a) (int b)).
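For example, a minimal sketch of an allocation-free burn along those lines (the name "burn", the iteration count, and the particular arithmetic are arbitrary placeholders, not from Lee's actual code) — looping over primitive longs with unchecked ops so nothing is boxed and the GC never comes into play:

    ;; Hypothetical allocation-free "burn": all locals stay primitive,
    ;; so the loop does pure unboxed arithmetic with no garbage created.
    (defn burn []
      (loop [i (long 0) acc (long 0)]
        (if (< i 1000000)
          (recur (unchecked-inc i)
                 (unchecked-add acc (unchecked-multiply i i)))
          acc)))

The unchecked-* variants skip overflow checks and keep everything in primitive long math; returning acc keeps the compiler from optimizing the loop away entirely.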


>
> David
>

-- 
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated - please be patient with your 
first post.
To unsubscribe from this group, send email to
clojure+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en
