Right now, the AthlonXP machine is booted into Linux, and I'm getting the same
raw speed as it gets under Solaris, from the PCI Sil3114 card with a Seagate
320G (7200.10):

dd if=/dev/sdb of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 16.7756 seconds, 78.1 MB/s

same disk, reading a file through the file system:
sudo dd if=./test.mov of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 24.2731 seconds, 54.0 MB/s <-- some
overhead compared to the raw speed of the same disk above
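
(An easy way to make sure the file-system read above is really uncached on
Linux is to drop the page cache before the run; this needs a 2.6.16 or later
kernel, and the file is just the test file from above:)

sync                                         # flush dirty data first
echo 3 | sudo tee /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes
                                             # ...then rerun the dd read above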

same machine, onboard ATA, Seagate 120G:
dd if=/dev/hda of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 22.5892 seconds, 58.0 MB/s

On another machine, with a Pentium D 3.0GHz and ICH7 onboard SATA in AHCI
mode, running Darwin:

from a Seagate 500G (7200.10):
dd if=/dev/rdisk0 of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out
1310720000 bytes transferred in 17.697512 secs (74062388 bytes/sec)

same disk, accessed through the file system (HFS+):
dd if=./Summer\ 2006\ with\ Cohen\ 4 of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out
1310720000 bytes transferred in 20.381901 secs (64308035 bytes/sec) <-- only a
very small overhead compared to the raw access above!

same Intel machine, Seagate 200G (7200.8, I think):
dd if=/dev/rdisk1 of=/dev/null bs=128k count=10000
10000+0 records in
10000+0 records out
1310720000 bytes transferred in 20.850229 secs (62863578 bytes/sec)

Modern disk drives are definitely fast, pushing close to 80 MB/s of raw
sequential read performance, and some file systems can deliver over 85% of
that with simple sequential access. So far, on these particular hardware and
software combinations, filesystem performance as a percentage of raw disk
performance for a sequential uncached read looks like this:

HFS+: 86%
ext3 and UFS: 70%
ZFS: 45%
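
(The HFS+ and ext3 figures follow directly from the dd numbers above; a quick
sanity check with bc:)

echo "scale=1; 100*54.0/78.1" | bc            # ext3 vs raw, 320G disk: 69.1
echo "scale=1; 100*64308035/74062388" | bc    # HFS+ vs raw, 500G disk: 86.8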

On 5/14/07, Richard Elling <[EMAIL PROTECTED]> wrote:

Marko Milisavljevic wrote:
> I missed an important conclusion from j's data, and that is that single
> disk raw access gives him 56MB/s, and RAID 0 array gives him
> 961/46=21MB/s per disk, which comes in at 38% of potential performance.
> That is in the ballpark of getting 45% of potential performance, as I am
> seeing with my puny setup of single or dual drives. Of course, I don't
> expect a complex file system to match raw disk dd performance, but it
> doesn't compare favourably to common file systems like UFS or ext3, so
> the question remains, is ZFS overhead normally this big? That would mean
> that one needs to have at least 4-5 way stripe to generate enough data
> to saturate gigabit ethernet, compared to 2-3 way stripe on a "lesser"
> filesystem, a possibly important consideration in a SOHO situation.

Could you post iostat data for these runs?
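
(On Solaris, something like the following in a second terminal while the dd
runs would capture it; the output file name is only a placeholder:)

iostat -xnz 1 > iostat-during-dd.txt   # extended per-device stats every second,
                                       # idle devices suppressed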

Also, as I suggested previously, try with checksums off.  The AthlonXP doesn't
have a reputation as a speed demon.
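
(Checksumming can be switched off per dataset for a test like this and turned
back on afterwards; "tank/test" below is only a placeholder for whatever
dataset the test file lives in:)

zfs set checksum=off tank/test   # disable checksums for the test dataset
                                 # ...rerun the dd read, then...
zfs set checksum=on tank/test    # restore the default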

BTW, for 7,200 rpm drives, which are typical in desktops, 56 MBytes/s
isn't bad.  The media speed will range from perhaps [30-40] to [60-75]
MBytes/s, judging from a quick scan of disk vendor datasheets.  In other
words, it would not surprise me to see a 4-5 way stripe being required
to keep a GbE link saturated.
  -- richard
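
(Back-of-the-envelope on that last point: GbE is 125 MBytes/s on the wire, or
very roughly 115 MBytes/s of usable payload; dividing that by the per-disk
rates discussed above gives the stripe widths:)

echo "scale=1; 115/21" | bc   # ~5.4 disks at the ~21 MB/s per disk seen through ZFS
echo "scale=1; 115/56" | bc   # ~2.0 disks at the ~56 MB/s single-disk raw rate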

