Hi,

First of all, my apologies for some of my posts appearing two or even three 
times here. The forum seems to be acting up: although I received a Java 
exception for those postings and they never appeared yesterday, they 
apparently still made it through eventually.

Back on topic: I tried, without success, to extract higher write speeds from 
the Seagate drives using an Addonics SATA controller based on the Silicon 
Image 3124. I got exactly the same 21 MB/s for each drive (booted from a 
Knoppix CD).
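
For reference, the write test I was running under Knoppix looked roughly like 
this (device name and block size are just examples; I varied bs up to 1048576 
bytes):

# dd if=/dev/zero of=/dev/sda bs=1048576 count=1000

That writes about 1 GB straight to the raw device and prints the throughput 
when it finishes.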

I was planning on contacting Seagate support about this, but in the meantime I 
absolutely had to start using this system, even if it meant low write speeds. 
So I installed Solaris on a 1 GB CF card and wanted to start configuring ZFS. 
I noticed that the first SATA disk was still shown with a different label by 
the "format" command (see my other post somewhere here). I tried to get rid of 
all the disk labels (unsuccessfully), so I decided to boot Knoppix again and 
zero out the start and end sectors manually, erasing all GPT data.
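
For anyone wanting to do the same: GPT keeps a primary header and partition 
table at the start of the disk and a backup copy at the very end, so both ends 
need to be wiped. From memory, what I ran under Knoppix was roughly this (34 
sectors should cover the protective MBR, the GPT header and the partition 
entries; obviously this destroys everything on the disk):

SECTORS=$(blockdev --getsz /dev/sda)   # disk size in 512-byte sectors (older blockdev versions use --getsize)
dd if=/dev/zero of=/dev/sda bs=512 count=34
dd if=/dev/zero of=/dev/sda bs=512 seek=$((SECTORS - 34)) count=34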

Back to Solaris. I ran "zpool create tank raidz c1t0d0 c1t1d0 c1t2d0" and then 
ran a dd test while monitoring with iostat -xn 1, to see the effect of not 
having a slice as part of the zpool (write cache, etc.). I was seeing write 
speeds in excess of 50 MB/s per drive! Whoa! I didn't understand this at all, 
because five minutes earlier I couldn't get more than 21 MB/s in Linux using 
block sizes up to 1048576 bytes. How could this be?
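
Concretely, the test was along these lines (the pool mounts at /tank by 
default; the file name is just an example):

# dd if=/dev/zero of=/tank/testfile bs=1048576 count=2000 &
# iostat -xn 1

The iostat output shows the per-device throughput once per second while dd is 
writing.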

I decided to destroy the zpool and try to dd from Linux once more. This is when 
my jaw dropped to the floor:

[EMAIL PROTECTED]:~# dd if=/dev/zero of=/dev/sda bs=4096
250916+0 records in
250915+0 records out
1027747840 bytes (1.0 GB) copied, 10.0172 s, 103 MB/s

Finally, the write speed one should expect from these drives, according to 
various reviews around the web.

I still get a healthy 52 MB/s at the end of the disk (seek=183000000 with 
bs=4096 puts the start of the write roughly 750 GB in, i.e. near the end of 
these drives):

# dd if=/dev/zero of=/dev/sda bs=4096 seek=183000000
dd: writing `/dev/sda': No space left on device
143647+0 records in
143646+0 records out
588374016 bytes (588 MB) copied, 11.2223 s, 52.4 MB/s

But how is it possible that I didn't get these speeds earlier? This may be part 
of the explanation:

[EMAIL PROTECTED]:~# dd if=/dev/zero of=/dev/sda bs=2048
101909+0 records in
101909+0 records out
208709632 bytes (209 MB) copied, 9.32228 s, 22.4 MB/s

Could it be that the firmware in these drives has issues with write requests of 
2048 bytes and smaller?
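
A quick sweep over block sizes should make the cut-off point obvious. 
Something like this would do it (again writing to the raw device, so 
destructive; 256 MB per pass, and dd's own summary line reports the rate):

for bs in 512 1024 2048 4096 8192 16384; do
    echo "bs=$bs"
    dd if=/dev/zero of=/dev/sda bs=$bs count=$((268435456 / bs)) 2>&1 | tail -1
done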

There must be more to it, though, because I'm absolutely sure that I used 
larger block sizes when testing with Linux earlier (like 16384, 65536 and 
1048576). It's impossible to tell now, but maybe there was something fishy 
going on that was fixed by zeroing parts of the drives. I cannot explain it 
otherwise.

Anyway, I'm still not seeing much more than 50 MB/s per drive from ZFS, but I 
suspect the 2048- vs. 4096-byte write block size effect may be influencing 
this. Having a slice as part of the pool earlier perhaps magnified this 
behavior as well. Caching or swap problems are certainly not an issue now.
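
By the way, if anyone wants to verify whether ZFS actually enabled the on-disk 
write cache for the whole-disk vdevs: as far as I know, format's expert mode 
can show and toggle it (at least for disks handled by the sd driver; the exact 
menus may differ):

# format -e
(select the disk)
format> cache
cache> write_cache
write_cache> display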

Any thoughts? I certainly want to thank everyone once more for your 
co-operation!

Greetings,

Pascal
 
 