Thank you very much for your time, because you made me stronger.

[if misleading, plz excuse my french...]
;-)
z



----- Original Message ----- 
From: "Bob Friesenhahn" <bfrie...@simple.dallas.tx.us>
To: "Dmitry Razguliaev" <rdmitry0...@yandex.ru>
Cc: <zfs-discuss@opensolaris.org>
Sent: Saturday, January 10, 2009 10:28 AM
Subject: Re: [zfs-discuss] ZFS poor performance on Areca 1231ML


> On Sat, 10 Jan 2009, Dmitry Razguliaev wrote:
>
>> At the time of writing that post, no, I hadn't run zpool iostat -v
>> 1. I ran it afterwards. The operations column changed from 1 for
>> every device in the raidz to somewhere between 20 and 400 for the
>> raidz volume, and from 3 to somewhere between 200 and 450 for the
>> single-device zfs volume, but the end result stayed the same: the
>> single-disk zfs volume is only about twice as slow as the 9-disk
>> raidz zfs volume, which seems very strange. I expected a 6-7x
>> difference in performance.
>
> Your expectations were wrong.  Raidz and raidz2 improve bulk
> sequential read/write with large files, but they do nothing useful for
> random access or multi-user performance.  There is also the issue that
> a single slow disk in a raidz or raidz2 vdev can drag down the
> performance of the whole vdev.
>
> Bob
> ======================================
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 
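Bob's rule of thumb can be put into a rough back-of-envelope model: in a raidz vdev, every small random read touches a full stripe, so random-read IOPS stay near a single disk's, while large sequential transfers stream from the data disks in parallel. This is an illustrative sketch only, not ZFS internals, and the per-disk numbers (120 IOPS, 80 MB/s) are made-up assumptions:

```python
def raidz_estimates(n_disks, disk_iops, disk_mb_s, parity=1):
    """Rough per-vdev performance estimates for an n_disks raidz vdev.

    Simplifying assumption: small random reads are gated by the slowest
    disk in the stripe, while sequential reads scale with the number of
    data (non-parity) disks.
    """
    data_disks = n_disks - parity
    return {
        # random reads: roughly a single disk's IOPS, not n_disks times it
        "random_read_iops": disk_iops,
        # sequential reads: data disks stream in parallel
        "seq_read_mb_s": data_disks * disk_mb_s,
    }

# Hypothetical 9-disk raidz built from 120-IOPS / 80-MB/s disks:
est = raidz_estimates(n_disks=9, disk_iops=120, disk_mb_s=80)
```

Under this model a 9-disk raidz would show ~8x a single disk's sequential throughput but roughly the same random-read IOPS, which is why a random workload can make the raidz look only marginally faster than one disk.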

