G'Day Victor,

Let's shuffle this thread over to perf-discuss (or zfs-discuss).

On Thu, Jun 26, 2008 at 10:19:01AM -0700, victor wrote:
> Okay, no matter how hard I try I just can't get the disk speeds to be 
> remotely reasonable on Indiana snv_91.
> 
> Here are the stats on the box in question:
> 
> SunOS web 5.11 snv_91 i86pc i386 i86pc
> 
> pci8086,346c, instance #0 (driver name: ahci)
>     disk, instance #0 (driver name: sd)
>     disk, instance #1 (driver name: sd)
> 
>        0. c5t0d0 <DEFAULT cyl 60798 alt 2 hd 255 sec 63>
>           /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED],2/[EMAIL PROTECTED],0
>        1. c5t1d0 <ATA-WDC WD5000AAJS-2-1C01-465.76GB>
>           /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED],2/[EMAIL PROTECTED],0
> 
>       NAME          STATE     READ WRITE CKSUM
>       rpool         ONLINE       0     0     0
>         mirror      ONLINE       0     0     0
>           c5t0d0s0  ONLINE       0     0     0
>           c5t1d0s0  ONLINE       0     0     0
> 
> 
> Writing dd if=/dev/zero of=foo.bin count=55500 will write to disk at
> 1.1 MB/s to 3.3 MB/s *tops*; I can't get the disk to write any faster.
> The system I/O is just at a crawl.

How were you measuring system I/O?  I'd try this:

 window1# iostat -xne 1

 window2# zpool iostat -v 5

 window3# ptime dd if=/dev/urandom of=foo bs=128k count=1024

That should create a 128 Mbyte file.  Watch iostat/zpool beforehand to
make sure the system is idle, and remember that they show the I/O that
ZFS pushes to disk, not the I/O to the pool, which gets cached in
memory first.  To get an idea of what your application will see, you
can divide 128 Mbytes by the 'real' time from ptime.
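
For example (the numbers here are just an illustration), if ptime reported:

 real        2.530
 user        0.004
 sys         0.918

then dd wrote 128 Mbytes in about 2.5 seconds of wall time, so the
application saw roughly 128 / 2.53 =~ 50 Mbytes/sec, whatever the disks
were doing underneath.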

If compression is on and you were dd'ing from /dev/zero, then ZFS will
compress that down a lot before writing, and iostat/zpool won't show
much.  Another problem could be disk errors, hence the 'e' on iostat.
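
To check those two things (assuming your pool is still named rpool):

 # zfs get compression rpool
 # iostat -En

zfs get will show whether compression is set on the pool, and iostat -En
prints a per-device error summary if you suspect a bad disk.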

Brendan

-- 
Brendan
[CA, USA]
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org