G'Day Victor,

On Thu, Jun 26, 2008 at 03:09:04PM -0700, victor wrote:
> Okay, I got more information...
>
> I detached the second disk and started benchmarking it. Here are the results.
>
> Write to zvol:
> dd if=/dev/urandom of=/dev/zvol/rdsk/foo/ufs bs=1024k count=100
> 104857600 bytes (105 MB) copied, 86.3466 s, 1.2 MB/s
>
> Write to raw disk:
> dd if=/dev/urandom of=/dev/rdsk/c5t1d0s0 bs=1024k count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB) copied, 51.9522 s, 2.0 MB/s
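Those MB/s figures are just dd's byte count over the elapsed time, so the numbers themselves add up. For a wall-clock cross-check, the same raw-disk write test could also be wrapped in ptime, the way the read test below is (the device path is just the one from your mail):

# ptime dd if=/dev/urandom of=/dev/rdsk/c5t1d0s0 bs=1024k count=100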
Wow - something is seriously wrong with that disk - a /dev/rdsk write should fly.

> Read from zvol:
> [EMAIL PROTECTED]:/# dd if=/dev/zvol/rdsk/foo/ufs of=/dev/null bs=1024k count=100
> 104857600 bytes (105 MB) copied, 4.30601 s, 24.4 MB/s
>
> Read from raw disk:
> ptime dd if=/dev/rdsk/c1t0d0s0 of=/dev/null bs=1024k count=100
> 104857600 bytes (105 MB) copied, 1.13959 s, 92.0 MB/s

Right, so the disk can do 92 Mbytes/sec read but only 2 Mbytes/sec write; even the worst flash-memory-based disks I've seen write much faster than that.

The dd /dev/rdsk test helps eliminate many factors, including ZFS. If the disk isn't defective (replace with a known working disk?), I'd start checking for driver bugs in this Indiana release.

DTrace can help a lot too (if you have the time), as you'll be able to see what the drivers are doing, or eliminate them if this is truly time spent on the bus alone. Just taking it down to the block driver may be interesting:

# dtrace -n 'io:::start { start[args[0]->b_edev, args[0]->b_blkno] = timestamp; }
    io:::done /start[args[0]->b_edev, args[0]->b_blkno]/ {
        this->d = timestamp - start[args[0]->b_edev, args[0]->b_blkno];
        @[args[1]->dev_pathname] = quantize(this->d);
    }'

That should give you nanosecond service time by device path.

Brendan

-- 
Brendan
[CA, USA]
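For reference, here is the same tracing written out as a standalone D script with the steps commented. This is only a sketch: the disklat.d name is arbitrary, and it adds one step the one-liner skips (clearing the saved timestamp once each I/O completes). Run it with dtrace -s disklat.d.

#!/usr/sbin/dtrace -s

/*
 * disklat.d - distribution of block I/O service times, per device path.
 * Same probes and aggregation as the one-liner above.
 */

io:::start
{
        /* record issue time, keyed by device and block number */
        start[args[0]->b_edev, args[0]->b_blkno] = timestamp;
}

io:::done
/start[args[0]->b_edev, args[0]->b_blkno]/
{
        this->d = timestamp - start[args[0]->b_edev, args[0]->b_blkno];

        /* power-of-two histogram of nanosecond service time */
        @[args[1]->dev_pathname] = quantize(this->d);

        /* drop the saved timestamp so the array does not grow unbounded */
        start[args[0]->b_edev, args[0]->b_blkno] = 0;
}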