On Sat, Aug 1, 2009 at 10:04, Henrik K<h...@hege.li> wrote:
>
> On Sat, Aug 01, 2009 at 12:04:08AM -0700, Linda Walsh wrote:
>> Well -- it's not just the cores -- what was the usage of the cores that
>> were being used?  Were 3 out of the 8 'pegged'?  Are these 'real' cores, or
>> HT cores?  In the Core2 and P4 archs, HT actually slowed down a good
>> many workloads unless they were tightly constructed to work on the same
>> data in cache.  Otherwise, the HT threads did just enough extra work to
>> evict useful cache contents more than anything else.
>
> I really doubt there's HT involved in a recent-looking 8-core, 16 GB machine.
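
(If anyone wants to double-check that, a quick way on Linux, assuming
the usual /proc/cpuinfo layout, is to compare the "siblings" and
"cpu cores" fields:

  grep -E 'siblings|cpu cores' /proc/cpuinfo | sort -u

If the two numbers match, those are real cores; if "siblings" is twice
"cpu cores", HT is enabled.)
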
>
>> What's the disk I/O look like?  I mean don't just focus on idle cores --
>> if the wait is on disk, maybe the cores can't get the data fast enough.
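
(To put numbers on that, something like iostat or vmstat is enough;
iostat needs the sysstat package on most distros:

  iostat -x 5    # per-device utilization and await times
  vmstat 5       # the "wa" column is CPU time spent waiting on I/O

High %util/await on the disk holding the AWL database, or a large "wa"
figure, points at the database rather than the CPUs.)
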
>
> As we already guessed, AWL (BerkeleyDB) caused disk I/O and slowness. For
> heavy loads you need to use SQL (or maybe the better BDB plugin in 3.3 if we
> get it working).
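
For reference, the SQL switch is only a few lines in local.cf. A minimal
sketch, assuming MySQL -- the DSN, credentials and table name are
placeholders, the AWL plugin is normally already loaded via v310.pre,
and the awl table has to be created first from the schema shipped with
the distribution:

  auto_whitelist_factory  Mail::SpamAssassin::SQLBasedAddrList
  user_awl_dsn            DBI:mysql:spamassassin:localhost
  user_awl_sql_username   sa_user
  user_awl_sql_password   secret
  user_awl_sql_table      awl
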
>
>> If the network is involved, well, that's a drag on any message checking.
>> I'm seeing rates of 0.3 msgs/sec, but I think that's with networking
>> turned on.  Pretty ugly.
>
> It affects single messages, but not total throughput. With network checks
> you just dedicate a lot more children. Waiting for network responses takes
> no CPU time, so you can process more messages simultaneously.

although you will also need to allocate more memory to ensure that no
swapping takes place.
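
As a rough sizing sketch (the per-child figure is only illustrative and
varies with the ruleset and plugins loaded): if a busy spamd child sits
at around 100 MB resident, then something like

  spamd --max-children=40    # ~4 GB resident for spamd alone

still leaves most of a 16 GB box for the OS page cache and the SQL
server. Check the actual resident size of a warmed-up child with top or
ps and size --max-children so the total stays well under physical RAM.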

-- 
--j.
