On 03/26/10 08:47 AM, Bruno Sousa wrote:
Hi all,

The more reading and experimenting I do with ZFS, the more I like this stack of technologies. Since we all like to see real figures from real environments, I might as well share some of my numbers. The replication was achieved with zfs send / zfs receive piped through mbuffer (http://www.maier-komor.de/mbuffer.html), during business hours, so it's a live environment and *not* a controlled test environment.
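For anyone curious about the plumbing, here is a minimal sketch of such a send/receive pipeline over mbuffer - the hostnames, dataset names, snapshot name, port and buffer sizes are illustrative, not taken from Bruno's setup:

    # on the receiving host (storageB): listen with mbuffer and feed zfs receive
    mbuffer -I 9090 -s 128k -m 1G | zfs receive -F tankB/replica

    # on the sending host (storageA): stream the snapshot through mbuffer to the receiver
    zfs send tankA/data@snap1 | mbuffer -s 128k -m 1G -O storageB:9090

mbuffer keeps a large memory buffer between the network and the disks, so short stalls on either side don't pull down the throughput of the whole pipe.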

storageA

OpenSolaris snv_133
2 quad-core AMD CPUs
28 GB RAM

Seagate Barracuda SATA drives, 1.5 TB, 7,200 rpm (ST31500341AS) - *non-enterprise class disks*
1 RAIDZ2 pool with 6 vdevs of 3 disks each, connected to an LSI non-RAID controller

As others have already said, raidz2 with 3 drives is Not A Good Idea! With only one data disk per 3-disk raidz2 vdev, you get the usable capacity of a 3-way mirror but pay parity overhead on every write.
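For context, a sketch of how a layout like storageA's (6 raidz2 vdevs of 3 disks each) would typically be created - the pool and device names are hypothetical, not from the post:

    # 18 disks as 6 raidz2 vdevs of 3 disks each (1 data + 2 parity per vdev)
    zpool create tankA \
        raidz2 c1t0d0 c1t1d0 c1t2d0 \
        raidz2 c1t3d0 c1t4d0 c1t5d0 \
        raidz2 c1t6d0 c1t7d0 c1t8d0 \
        raidz2 c1t9d0 c1t10d0 c1t11d0 \
        raidz2 c1t12d0 c1t13d0 c1t14d0 \
        raidz2 c1t15d0 c1t16d0 c1t17d0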

storageB

OpenSolaris snv_134
2 Intel Xeon 2.0 GHz CPUs
8 GB RAM


Seagate Barracuda SATA drives, 1 TB, 7,200 rpm (ST31000640SS) - *enterprise class disks*
1 RAIDZ2 pool with 4 vdevs of 5 disks each, connected to an Adaptec RAID controller (52445, 512 MB cache) with read and write cache enabled. The Adaptec HBA presents 20 volumes, where one volume = one drive - something similar to a JBOD.
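The equivalent sketch for storageB's layout (4 raidz2 vdevs of 5 disks each), again with hypothetical names:

    # 20 disks as 4 raidz2 vdevs of 5 disks each (3 data + 2 parity per vdev)
    zpool create tankB \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
        raidz2 c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
        raidz2 c2t10d0 c2t11d0 c2t12d0 c2t13d0 c2t14d0 \
        raidz2 c2t15d0 c2t16d0 c2t17d0 c2t18d0 c2t19d0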

Both systems are connected to a gigabit switch (a 3Com) without VLANs, and jumbo frames are disabled.
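If anyone wants to rule the MTU in or out, a rough sketch of checking and raising it on OpenSolaris - the interface name is an assumption, and some drivers need the link unplumbed before the MTU can change:

    # show the current MTU of each link (1500 means jumbo frames are off)
    dladm show-link

    # raise the MTU to 9000 on both hosts (the switch must support it too)
    dladm set-linkprop -p mtu=9000 e1000g0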

And now the results:

Dataset: around 26.5 GB in files bigger than 256 KB and smaller than 1 MB

summary: 26.6 GByte in  6 min 20.6 sec - average of *71.7 MB/s*

Dataset: around 160 GB of data with a mix of small files (less than 20 KB) and large files (bigger than 10 MB)

summary:  164 GByte in 34 min 41.9 sec - average of *80.6 MB/s*


Those numbers look right for a 1 Gig link. Try a tool such as bonnie++ to see what the block read and write numbers are for your pools, and if they are significantly better than these, try an aggregated link between the systems.
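A rough sketch of both suggestions - the mount point, benchmark size and interface names are assumptions, not from the thread:

    # sequential block read/write benchmark on the pool (size roughly 2x RAM)
    bonnie++ -d /tankA/bench -s 56g -u root

    # if the pools are clearly faster than ~80 MB/s, aggregate two gigabit ports
    # (repeat on both hosts, and configure the switch for link aggregation)
    dladm create-aggr -l e1000g0 -l e1000g1 aggr1
    ifconfig aggr1 plumb 192.168.0.10 netmask 255.255.255.0 up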

--
Ian.
