Thank you, following your suggestion improves things - reading a ZFS
file from a RAID-0 pair now gives me 95MB/s - about the same as from
/dev/dsk. What I find surprising is that reading from a 2-drive RAID-1
zpool gives me only 56MB/s - I imagined it would be roughly like
reading from RAID-0. I can see that it can't be identical - when
reading mirrored drives simultaneously, some data will need to be
skipped if the file is laid out sequentially - but it isn't
intuitively obvious how my broken drivers/card would affect it to that
degree, especially since reading a file from a one-disk zpool gives
me 70MB/s. My plan is to make a 4-disk RAID-Z - we'll see how it works
out when all the drives arrive.

Given how common the Sil3114 chipset is in the
my-old-computer-became-home-server segment, I am sure this workaround
will be appreciated by the many who google their way here. And just in
case it is not clear, what -j means below is to add these two lines
to /etc/system:

set zfs:zfs_vdev_min_pending=1
set zfs:zfs_vdev_max_pending=1
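
(The /etc/system lines take effect at the next boot. If you want to try
the change on a running system first, the same tunables can be poked into
the live kernel with mdb - a sketch assuming the stock Solaris mdb
kernel-write syntax; double-check the variable names against your build:)

```shell
# Set both vdev queue-depth tunables to 1 in the live kernel (run as root).
# /W writes a 32-bit value; 0t1 means decimal 1. These changes do not
# survive a reboot, so keep the /etc/system lines for persistence.
echo 'zfs_vdev_min_pending/W0t1' | mdb -kw
echo 'zfs_vdev_max_pending/W0t1' | mdb -kw

# Read the current values back to confirm the write took.
echo 'zfs_vdev_min_pending/D' | mdb -k
echo 'zfs_vdev_max_pending/D' | mdb -k
```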

I've been doing a lot of reading, and it seems unlikely that any effort
will be made to address the driver performance with either ATA or the
Sil311x chipset specifically - by the time the more pressing
enhancements to the various SATA drivers are done, this hardware will
be too obsolete to matter.

With your workaround things are working well enough for my purposes
that I am able to choose Solaris over Linux - thanks again.

Marko

On 5/16/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
Marko,
Matt and I discussed this offline some more and he had a couple of ideas
about double-checking your hardware.

It looks like your controller (or disks, maybe?) is having trouble with
multiple simultaneous I/Os to the same disk.  It looks like prefetch
aggravates this problem.

When I asked Matt what we could do to verify that it's the number of
concurrent I/Os that is causing performance to be poor, he had the
following suggestions:

        set zfs_vdev_{min,max}_pending=1 and run with prefetch on; then
        iostat should show 1 outstanding I/O and performance should be
        good.

        Or turn prefetch off and have multiple threads reading
        concurrently; then iostat should show multiple outstanding I/Os
        and performance should be bad.
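
In case it helps anyone reproduce this: the number to watch in
"iostat -xn 1" output is the actv column, which reports active
(outstanding) I/Os per device. A small awk sketch against a
hypothetical sample line - the field positions assume the usual
Solaris "iostat -xn" column layout (r/s w/s kr/s kw/s wait actv
wsvc_t asvc_t %w %b device), so actv is field 6 and the device name
is field 11:

```shell
# Hypothetical iostat -xn sample line (made-up values for illustration).
line="  120.0    0.0 95000.0    0.0  0.0  1.0    0.0    8.3   0  99 c0t0d0"

# Print the outstanding I/O count (actv, field 6) for the device (field 11).
echo "$line" | awk '{print "outstanding I/Os on", $11, "=", $6}'
# prints: outstanding I/Os on c0t0d0 = 1.0
```

With the workaround in place you would expect actv to hover around 1;
without it, multiple concurrent readers should push actv above 1 while
throughput drops.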

Let me know if you have any additional questions.

-j
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss