Hello Peter,

Saturday, December 15, 2007, 7:45:50 AM, you wrote:

>> Use a faster processor or change to a mirrored configuration.
>> raidz2 can become processor bound in the Reed-Soloman calculations
>> for the 2nd parity set.  You should be able to see this in mpstat, and to
>> a coarser grain in vmstat.
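To make that suggested check concrete, the idea is to run something like this while the write load is going (the one-second interval is arbitrary):

  mpstat 1   # per-CPU breakdown; a parity-calculation bottleneck shows up as a core pegged in sys time
  vmstat 1   # coarser, system-wide view of the same thing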

PS> Hmm. Is the OP's hardware *that* slow? (I don't know enough about the Sun
PS> hardware models)

PS> I have a 5-disk raidz2 (cheap SATA) here on my workstation, which is an X2
PS> 3800+ (i.e., one of the earlier AMD dual-core offerings). Here's me dd'ing
PS> to a file on ZFS on FreeBSD running on that hardware:

PS> promraid     741G   387G      0    380      0  47.2M
PS> promraid     741G   387G      0    336      0  41.8M
PS> promraid     741G   387G      0    424    510  51.0M
PS> promraid     741G   387G      0    441      0  54.5M
PS> promraid     741G   387G      0    514      0  19.2M
PS> promraid     741G   387G     34    192  4.12M  24.1M
PS> promraid     741G   387G      0    341      0  42.7M
PS> promraid     741G   387G      0    361      0  45.2M
PS> promraid     741G   387G      0    350      0  43.9M
PS> promraid     741G   387G      0    370      0  46.3M
PS> promraid     741G   387G      1    423   134K  51.7M
PS> promraid     742G   386G     22    329  2.39M  10.3M
PS> promraid     742G   386G     28    214  3.49M  26.8M
PS> promraid     742G   386G      0    347      0  43.5M
PS> promraid     742G   386G      0    349      0  43.7M
PS> promraid     742G   386G      0    354      0  44.3M
PS> promraid     742G   386G      0    365      0  45.7M
PS> promraid     742G   386G      2    460  7.49K  55.5M

PS> At this point the bottleneck looks architectural rather than CPU. None of
PS> the cores are saturated, and the CPU usage of the ZFS kernel threads is
PS> pretty low.

PS> I say architectural because writes to the underlying devices are not
PS> sustained; the rate drops to almost zero for certain periods (this is more
PS> visible in iostat -x than in the zpool statistics). What I think is
PS> happening is that ZFS is too late in evicting data from the cache, and so
PS> it blocks the writing process. Once a transaction group with a bunch of
PS> data gets committed the application unblocks, but presumably ZFS waits a
PS> little while before resuming writes.

PS> Note that this is also being run on plain hardware; it's not even PCI
PS> Express. During throughput peaks, but not constantly, the bottleneck is
PS> probably the PCI bus.


This is the known problem with sequential writes and process throttling -
there has been an open bug for it for quite a while. Try lowering txg_time
to 1s; it should help a little bit.
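If you want to try that, here's a sketch of how I'd do it. This assumes the Solaris-side tunable (txg_time lives in the zfs module); on FreeBSD the equivalent knob is exposed as a sysctl, so check what your build actually provides:

  # live change, no reboot needed (0t1 is decimal 1 in mdb syntax)
  echo "txg_time/W 0t1" | mdb -kw

  # or make it persistent by adding this line to /etc/system and rebooting
  set zfs:txg_time = 1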

Can you also post 'iostat -xnz 1' output captured while you're running the dd,
as well as 'zpool status'?
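To be explicit about what I'm after, roughly this alongside the dd (the pool name below is just the one from your output):

  iostat -xnz 1           # extended per-device stats, 1s interval, skipping all-zero lines
  zpool status promraid   # pool layout and health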



-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com
