Hello devzero,
It would be nice to see whether that throughput in your configuration would also be
possible with OS 2008.11, or whether it comes from the enhancements in 105b and
above. You are running 110b, right?
Leal
[ http://www.eall.com.br/blog ]
james.ma...@sun.com said:
> I'm not yet sure what's broken here, but there's something pathologically
> wrong with the IO rates to the device during the ZFS tests. In both cases,
> the wait queue is getting backed up, with horrific wait queue latency
> numbers. On the read side, I don't understand
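For anyone who wants to look at those numbers themselves, the wait queue depth
and latency show up in the extended iostat output; a minimal sketch, assuming
the standard Solaris iostat and a one-second sampling interval:

  # Extended per-device statistics with descriptive device names.
  # "wait" is the average number of requests sitting in the wait queue,
  # "wsvc_t" is the average time (ms) a request spends in that queue.
  iostat -xn 1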
And I've blogged about it at
http://mbruning.blogspot.com/2009/03/faster-memstat-for-mdb.html
max
Ben Rockwood wrote:
m...@bruningsystems.com wrote:
Hi Jim,
Jim Mauro wrote:
mdb's memstat is cool in how it summarizes things, but it takes a very
long time to run on large systems. mems
>Please send "zpool status" output.
bash-3.2# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c0t0d0s0  ONLINE       0     0     0

errors: No known data errors
Hello Jim,
I double-checked again - but it's as I said:
echo zfs_prefetch_disable/W0t1 | mdb -kw
fixes my problem.
I did a reboot and set only this single parameter, which immediately makes the
read throughput go up from ~2 MB/s to ~30 MB/s.
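If the change needs to survive a reboot, the same tunable can be set at boot
time from /etc/system; a minimal sketch, assuming the stock ZFS module
parameter name from the tuning guides of that era:

  # /etc/system - persist the runtime mdb tweak across reboots
  set zfs:zfs_prefetch_disable = 1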
>I don't understand why disabling ZFS prefetch solv
m...@bruningsystems.com wrote:
> Hi Jim,
> Jim Mauro wrote:
>>
>> mdb's memstat is cool in how it summarizes things, but it takes a very
>> long time to run on large systems. memstat is walking page lists, so
>> it should be quite accurate.
>> If you can live with the run time of ::memstat, it's cu
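For reference, ::memstat is normally run against the live kernel; a minimal
sketch (command only, output omitted):

  # Summarize physical memory usage by walking the kernel page lists,
  # which is why it is accurate but slow on machines with lots of RAM.
  echo ::memstat | mdb -k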