Hi all,

The more reading and experimenting I do with ZFS, the more I like this
stack of technologies.
Since we all like to see real figures from real environments, I might as
well share some of my numbers.
The replication was done with zfs send / zfs receive piped through
mbuffer (http://www.maier-komor.de/mbuffer.html) during business hours,
so it's a live environment and *not* a controlled test
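For reference, the pipeline looked roughly like this (pool, dataset,
snapshot, and port names below are made-up examples, and the mbuffer
buffer size should be tuned to the available RAM):

```shell
# On the receiver (storageB): listen on TCP port 9090, buffer up to
# 1 GiB in RAM, and feed the incoming stream into zfs receive.
# "tank/backup" is an example dataset name.
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/backup

# On the sender (storageA): snapshot the dataset, then stream it
# through mbuffer to the receiver over the network.
zfs snapshot tank/data@repl1
zfs send tank/data@repl1 | mbuffer -s 128k -m 1G -O storageB:9090
```

The point of mbuffer here is to smooth out the bursty zfs send stream
so the network link stays busy instead of stalling on every pause.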

storageA

OpenSolaris snv_133
2 quad-core AMD CPUs
28 GB RAM

Seagate Barracuda SATA drives, 1.5 TB, 7,200 rpm (ST31500341AS) -
*non-enterprise class disks*
1 RAIDZ2 pool with 6 vdevs of 3 disks each, connected to an LSI
non-RAID controller

storageB

OpenSolaris snv_134
2 Intel Xeon 2.0 GHz CPUs
8 GB RAM


Seagate Barracuda SATA drives, 1 TB, 7,200 rpm (ST31000640SS) -
*enterprise class disks*
1 RAIDZ2 pool with 4 vdevs of 5 disks each, connected to an Adaptec
RAID controller (52445, 512 MB cache) with read and write cache
enabled. The Adaptec HBA exposes 20 volumes, where one volume = one
drive, which is similar to a JBOD.

Both systems are connected to a 3Com gigabit switch without VLANs, and
jumbo frames are disabled.

And now the results:

Dataset: around 26.5 GB in files bigger than 256 KB and smaller than 1 MB

summary: 26.6 GByte in  6 min 20.6 sec - average of *71.7 MB/s*

Dataset: around 160 GB of data with a mix of small files (less than
20 KB) and large files (bigger than 10 MB)

summary:  164 GByte in 34 min 41.9 sec - average of *80.6 MB/s*
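As a quick sanity check on the averages mbuffer reports (assuming its
"GByte" means GiB and its "MB/s" means MiB/s), the arithmetic works out
to within rounding:

```python
def avg_mib_s(gib, minutes, seconds):
    """Average throughput in MiB/s for `gib` GiB moved in the given time."""
    return gib * 1024 / (minutes * 60 + seconds)

# 26.6 GiB in 6 min 20.6 s -> ~71.6 MiB/s (reported: 71.7 MB/s)
print(round(avg_mib_s(26.6, 6, 20.6), 1))

# 164 GiB in 34 min 41.9 s -> ~80.7 MiB/s (reported: 80.6 MB/s)
print(round(avg_mib_s(164, 34, 41.9), 1))
```

The small differences from the reported figures are just rounding of
the elapsed time in mbuffer's summary line.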


I don't know about you, but to me that looks like very, very good
performance :), especially considering that these two systems combined
cost less than 12,000 EUR.

Does anyone else have numbers to share?

Bruno


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss