Did you measure CPU utilization by any chance during the tests?
It's a T2000, and the CPU cores on this box are quite slow, so they
might be a bottleneck.

Just a guess.
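If you still have the setup, something like this alongside a run would show it (standard Solaris observability tools; the T2000 presents 32 virtual CPUs, so watch for idle near zero across them, or for one thread pegged in sys):

```shell
# Per-virtual-CPU usr/sys/idle breakdown every 30 seconds
# while vdbench is running:
mpstat 30

# Per-thread microstates (USR/SYS/LAT) to see whether the
# vdbench threads are on CPU or waiting on I/O:
prstat -mL 30
```

These are only meaningful on the test box itself, of course, but they would settle the CPU question quickly.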

On Mon, 2007-04-16 at 13:10 -0400, Tony Galway wrote:
> I had previously undertaken a benchmark that pits the “out of box”
> performance of UFS via SVM, VxFS, and ZFS against one another, but I
> was waylaid by some outstanding availability issues in ZFS. These have
> been taken care of, and I am once again undertaking this challenge on
> behalf of my customer. The idea behind this benchmark is to show
> 
>  
> 
> a.      How ZFS might displace the current commercial volume and file
> system management applications being used.
> 
> b.     The learning curve of moving from current volume management
> products to ZFS.
> 
> c.      Performance differences across the different volume management
> products.
> 
>  
> 
> VDBench is the test bed of choice, as the customer has accepted it as
> a telling and accurate indicator of performance. The last time I
> attempted this test, it was suggested that VDBench is not appropriate
> for testing ZFS. I cannot see that being a problem: VDBench is a tool,
> and if it highlights performance problems, then it is a very effective
> tool, because it means we are better able to fix those deficiencies.
> 
>  
> 
> Now, to the heart of my problem!
> 
>  
> 
> The test hardware is a T2000 connected to a 12-disk SE3510 (presented
> as JBOD) through a Brocade switch, and I am using Solaris 10 11/06.
> For Veritas, I am using Storage Foundation Suite 5.0. The system was
> jumpstarted to the same configuration before testing each volume
> management product, to ensure no artifacts remained from any previous
> test.
> 
>  
> 
> I present my vdbench definition below for your information:
> 
>  
> 
> sd=FS,lun=/pool/TESTFILE,size=10g,threads=8
> wd=DWR,sd=FS,rdpct=100,seekpct=80
> wd=ETL,sd=FS,rdpct=0,  seekpct=80
> wd=OLT,sd=FS,rdpct=70, seekpct=80
> rd=R1-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
> rd=R1-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
> rd=R1-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
> rd=R2-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
> rd=R2-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
> rd=R2-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
> rd=R3-DWR,wd=DWR,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
> rd=R3-ETL,wd=ETL,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
> rd=R3-OLT,wd=OLT,iorate=max,elapsed=1800,interval=30,forxfersize=(1k,2k,4k,8k,16k,32k,64k,128k)
> 
>  
> 
> As you can see, it is fairly straightforward, and I take the average
> of the three runs in each of the ETL, OLT, and DWR workloads. As an
> aside, I am also performing this test for various file system block
> sizes where applicable.
> 
>  
> 
> I then ran this workload against a RAID-5 LUN created and mounted
> under each of the different file system types. Please note that one of
> the test criteria is that the volume management software itself create
> the RAID-5 LUN, not the disk subsystem.
> 
>  
> 
> 1.      UFS via SVM
> 
> # metainit d20 -r d1 … d8
> 
> # newfs /dev/md/dsk/d20
> 
> # mount /dev/md/dsk/d20 /pool
> 
>  
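(A side note on the UFS case: mounted like this, UFS buffers writes through the page cache, which is often the default-versus-default difference people hit in comparisons like yours. If the untuned rule ever allows a second data point, the usual counterpart is a direct-I/O remount of the same SVM device:

```shell
# Remount the SVM RAID-5 device with UFS direct I/O, bypassing
# the page cache; writes then go straight to the d20 volume.
mount -o forcedirectio /dev/md/dsk/d20 /pool
```

Just an aside, since the customer asked for untuned runs.)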
> 
> 2.      ZFS
> 
> # zpool create pool raidz d1 … d8
> 
>  
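One hunch on the ZFS case: a pool created this way gets the default 128k recordsize, while your runs use 1k-128k transfer sizes. Random writes smaller than the record mean ZFS must read, modify, and rewrite whole 128k records, which would hurt exactly the write-heavy ETL and OLTP workloads you list below. If the untuned rule ever permits it, matching the recordsize to the test block size is a one-liner (it affects newly written files only, so recreate TESTFILE afterwards):

```shell
# Inspect the dataset's current recordsize (defaults to 128k):
zfs get recordsize pool

# Match it to the 8k test case before recreating TESTFILE:
zfs set recordsize=8k pool
```

Again, just a hunch; I have not rerun your workload with this change.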
> 
> 3.      VxFS – Veritas SF5.0
> 
> # vxdisk init SUN35100_0 … SUN35100_7
> 
> # vxdg init testdg SUN35100_0  … 
> 
> # vxassist -g testdg make pool 418283m layout=raid5
> 
>  
> 
>  
> 
> Now to my problem: performance! Given the test as defined above, VxFS
> absolutely blows the doors off both UFS and ZFS during write
> operations. For example, for a single test at an 8k file system block
> size, I have the following average I/O rates:
> 
>  
> 
> 
>                 ETL         OLTP          DWR
> UFS          390.00      1298.44     23173.60
> VxFS       15323.10     27329.04     22889.91
> ZFS         2122.23      7299.36     22940.63
> 
> 
> Looking at these numbers as percentages, with VxFS set to 100%: for
> ETL, UFS runs at 2.5% and ZFS at 13.8% of VxFS's speed; for OLTP, UFS
> is at 4.8% and ZFS at 26.7%. In DWR, however, which is 100% reads with
> no writing, performance is similar, with UFS at 101.2% and ZFS at
> 100.2% the speed of VxFS.
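For what it's worth, the arithmetic checks out; a throwaway awk recomputation of the ratios from the table above, with VxFS as 100%:

```shell
# Recompute each file system's rate as a percentage of VxFS,
# using the average I/O rates from the 8k table.
awk 'BEGIN {
    printf "ETL:  UFS %.1f%%  ZFS %.1f%%\n",   390.00/15323.10*100,  2122.23/15323.10*100
    printf "OLTP: UFS %.1f%%  ZFS %.1f%%\n",  1298.44/27329.04*100,  7299.36/27329.04*100
    printf "DWR:  UFS %.1f%%  ZFS %.1f%%\n", 23173.60/22889.91*100, 22940.63/22889.91*100
}'
```

which reproduces the 2.5/13.8, 4.8/26.7, and 101.2/100.2 figures quoted.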
> 
> 
> Given these performance problems, VxFS quite obviously deserves to be
> the file system of choice, even at a cost premium. If anyone has any
> insight into why I am consistently seeing these very disappointing
> numbers, I would very much appreciate your comments. The numbers are
> very disturbing, as they indicate that write performance has issues.
> Please take into account that this benchmark is performed on non-tuned
> file systems, specifically at the customer's request, as this is
> likely the way they would be deployed in their production
> environments.
> 
>  
> 
> Maybe I should be configuring my workload differently for VDBench – if
> so, does anyone have any ideas on this?
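One cheap variation worth trying, reusing only parameters already in your definition: the single sd runs with threads=8, which across an 8-disk RAID-5 is at most one outstanding I/O per spindle. Raising the queue depth would show whether concurrency, rather than the file system, is the limiter for the write workloads:

```
sd=FS,lun=/pool/TESTFILE,size=10g,threads=32
```

Just a sketch; I have not verified that 32 is the right value for this box, and it would not explain why only the write-heavy runs diverge.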
> 
>  
> 
> Unfortunately, I have weeks' worth of test data to back up these
> numbers, and would welcome the opportunity to discuss these results in
> detail to discover whether my methodology has problems or the file
> system does.
> 
>  
> 
> Thanks for your time.
> 
>  
> 
> [EMAIL PROTECTED]
> 
> 416.801.6779
> 
>  
> 
> You can always tell who the Newfoundlanders are in Heaven. They're
> the ones who want to go home
> 
>  
> 
> 
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
-- 
Erast

