Got excited too quickly on one thing... reading a single ZFS file does give me
almost the same speed as dd from /dev/dsk... around 78MB/s... however, a
2-drive stripe still doesn't perform as well as it ought to:

   r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 294.3    0.0 37675.6    0.0  0.0  0.4    0.0    1.4   0  40 c3d0
 293.0    0.0 37504.9    0.0  0.0  0.4    0.0    1.4   0  40 c3d1
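(For reference, a 2-drive stripe like the one tested above is created by
listing both disks with no redundancy keyword; the pool name "tank" is just an
example, not from this report. Admin fragment, needs root and real disks:)

```shell
# No "mirror" or "raidz" keyword between the disks means zpool
# builds a dynamic stripe across the two top-level vdevs.
zpool create tank c3d0 c3d1
zpool status tank    # verify both disks appear as top-level vdevs
```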

Simultaneous dd on those 2 drives from /dev/dsk runs at 46MB/s per drive.
   r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
 800.4    0.0 44824.6    0.0  0.0  1.8    0.0    2.2   0  99 c3d0
 792.1    0.0 44357.9    0.0  0.0  1.8    0.0    2.2   0  98 c3d1

(and in Linux it saturates the PCI bus at 60MB/s per drive)
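A sketch of that simultaneous raw-read test, wrapped in a small helper so the
device names stay explicit (c3d0/c3d1 are from this report; the s0 slice
suffix and the 1MB block size are assumptions):

```shell
# Read up to 1GB from two devices at the same time, discarding the
# data; per-device throughput can be watched with iostat meanwhile.
# Read-only, but double-check device names before running.
run_parallel_read() {
    dd if="$1" of=/dev/null bs=1024k count=1024 &
    dd if="$2" of=/dev/null bs=1024k count=1024 &
    wait    # block until both background dd jobs finish
}

# e.g.: run_parallel_read /dev/dsk/c3d0s0 /dev/dsk/c3d1s0
# and in another terminal: iostat -xn 5
```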

On 5/15/07, Marko Milisavljevic <[EMAIL PROTECTED]> wrote:

set zfs:zfs_prefetch_disable=1

bingo!

    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  609.0    0.0 77910.0    0.0  0.0  0.8    0.0    1.4   0  83 c0d0
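(For anyone finding this in the archives: that tunable is made persistent by
adding the `set zfs:zfs_prefetch_disable=1` line to /etc/system, which is read
at boot; on a live system the same kernel variable can be flipped with mdb, as
with other Solaris tunables. Config fragment, run as root:)

```shell
# Persistent (takes effect at next reboot) -- append to /etc/system:
#   set zfs:zfs_prefetch_disable=1
# Live toggle on the running kernel, no reboot:
echo zfs_prefetch_disable/W0t1 | mdb -kw
# verify the current value (decimal):
echo zfs_prefetch_disable/D | mdb -k
```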

only 1-2% slower than dd from /dev/dsk. Do you think this is a general
32-bit problem, or specific to this combination of hardware? I am
using a PCI SATA Sil3114 card, and other than ZFS, performance of this
interface has some limitations in Solaris. That is, a single drive gives
80MB/s, but doing dd from /dev/dsk/xyz simultaneously on 2 drives attached
to the card gives only 46MB/s each. On Linux, however, that gives
60MB/s each, close to saturating the theoretical throughput of the PCI bus.
Having both drives in a zpool stripe gives, with prefetch disabled,
close to 45MB/s each through dd from a zfs file.
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss