Thomas W wrote:
Hi, it's me again.
First of all, slicing the drive technically worked as it should.
I then started to experiment and ran into some issues I don't really understand.
My base playground setup:
- Intel D945GCLF2, 2GB RAM, OpenSolaris from EON
- 2 SATA Seagate 500GB drives
A normal zpool of the two drives to get a TB of space.
Now I added a 1TB USB drive (I sliced it into two 500GB partitions) and
attached the partitions to the SATA drives to mirror them.
Worked great...
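For reference, the commands were roughly these (the device names below are
only examples; yours will differ):

  # original striped pool of the two SATA disks
  zpool create sumpf c1t0d0 c1t1d0
  # attach one 500GB slice of the USB disk to each SATA disk,
  # turning each top-level vdev into a two-way mirror
  zpool attach sumpf c1t0d0 c2t0d0s0
  zpool attach sumpf c1t1d0 c2t0d0s1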
But suddenly the throughput dropped from around 15MB/s to 300KB/s. After
detaching the USB partitions it went back to 15MB/s.
My question:
Is it possible that mixing USB 2.0 external drives and SATA drives isn't a good
idea, or is the problem that I sliced the external drive?
After removing the USB drive I did a little benchmarking, as I was curious how
well the Intel system performs at all.
I wonder if this 'zpool iostat' output is okay (to me it doesn't look right):
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    178      0  22.2M      0
sumpf        804G   124G     78      0  9.85M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0
Why are there so many zeros in this output? No wonder I only get 15MB/s at most...
Thanks for helping a Solaris beginner. Your help is much appreciated.
Thomas
USB isn't great, but it's not responsible for your problem. Slicing the
1TB disk into 2 partitions is. Think about this: the original zpool
(with the two 500GB drives) is configured as a stripe - (most) data is
written across both drives simultaneously, so you get roughly 2x the
performance of a single drive. You've now added half of a SINGLE disk
as the mirror for each 500GB drive. When data is written to your zpool,
it has to go to each 500GB drive (which can happen independently), but
it also has to be written to each half of the 1TB USB drive - so that
drive is in serious I/O contention, because every write to the zpool
queues 2 writes to the 1TB drive (1 for each 500GB partition). That
causes both seek-time and access-time delays, which is going to thrash
your 1TB disk but good.
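To make that concrete, here's a sketch of what your pool layout looks like
now (device names are invented; substitute your own):

  sumpf
    mirror
      c1t0d0      <- SATA disk #1
      c2t0d0s0    <- slice 1 of the 1TB USB disk
    mirror
      c1t1d0      <- SATA disk #2
      c2t0d0s1    <- slice 2 of the SAME 1TB USB disk

Every write to the pool hits both mirrors, so the single USB spindle gets two
write streams at once and has to seek back and forth between its two slices.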
To take a look at what's going on, use this form of iostat:
% iostat -dnx
For instance, my current system shows (the trailing 10 is the sampling
interval in seconds):
$ iostat -dnx 10
extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.3    1.0    1.6  0.0  0.0    5.3   10.1   0   0 c7d0
    0.0    0.3    0.9    1.6  0.0  0.0    5.2   10.5   0   0 c8d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c7t1d0
    2.6    3.2  156.1   34.3  0.1  0.1   13.8   23.6   1   2 c9t2d0
    2.4    3.2  142.9   34.3  0.1  0.1   17.4   26.1   1   2 c9t3d0
    2.5    3.1  152.9   34.3  0.1  0.2   23.1   38.0   1   3 c9t4d0
    2.7    3.1  164.9   34.3  0.1  0.2   24.1   36.1   1   3 c9t5d0
    2.5    3.2  152.4   34.3  0.1  0.2   22.4   39.3   1   3 c9t6d0
    2.7    3.1  166.3   34.4  0.1  0.2   23.6   38.5   1   3 c9t7d0
I'm running a raidz pool on this, with all the drives on c9. As you can
see, it's quite balanced: all the c9 drives have roughly the same wait
and wsvc_t, and the %w is very low. I suspect you'll see a radically
different picture, with your 1TB drive showing very high service times
(wsvc_t/asvc_t) and %w numbers (or at least, much higher than your
500GB drives).
A drop from 15MB/s to 300KB/s seems a little radical, though (that's a
50x reduction), so I'm also a little suspicious of your USB connection.
Try this to see what your USB connection's throughput is: detach the 1TB
disk's mirror partitions, create a separate zpool with just one of the
partitions, and then run iostat on it (under some load, of course). That
will at least tell you the raw performance of the USB disk.
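Something along these lines should do it (pool and device names are just
examples):

  # after detaching the USB slices from your existing pool
  zpool create usbtest c2t0d0s0
  # generate some sequential write load in the background
  dd if=/dev/zero of=/usbtest/bigfile bs=1024k count=2048 &
  # watch the USB disk by itself
  iostat -dnx 10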
--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA