On Mar 2, 2010, at 9:43 PM, Abdullah Al-Dahlawi wrote:

> Greetings, Richard.
> 
> After spending almost 48 hours working on this problem, I believe I've 
> discovered the bug in Filebench!
> 
> I do not believe it is the "change dir" step you pointed out below, because 
> that directory is only used to dump the stats data at the end of the 
> benchmark; it is NOT used during the benchmark's I/O (DTrace proved that).
> 
> Anyway, what I discovered is that when you run Filebench in batch mode with 
> the randomread workload, Filebench does not honor the workingset size 
> specified in the config file that the user initially created.
> 
> Filebench generates another workload file on the user's behalf (with a ".f" 
> extension) and carries over pretty much all the settings the user chose in 
> his config file EXCEPT the workingset size.
> 
> This means (according to the Filebench documentation) that workingset 
> defaults to ZERO, which in turn means the WHOLE file (5 GB in my case, no 
> way to fit in the ARC) is used for random reads for 100 seconds (lots and 
> lots of seeks) and therefore terrible latency.

You might have something there. Check the source at
http://sourceforge.net/projects/filebench/

> However, when I run the SAME benchmark in interactive mode, my workingset 
> size (10m) is honored, which means that 10 MB of the file is loaded into the 
> ARC and the random reads are served from the ARC: 100% ARC hits, as shown by 
> my arcstat output.
> 
> The problem now is how to fix this bug so that batch mode can be used 
> effectively?

That would be through the filebench project on sourceforge.
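
In the meantime, a possible workaround is to patch the generated script by
hand and re-run it. A rough, untested sketch (the go_filebench path is a
guess based on your install layout, and I'm assuming the go_filebench
binary accepts -f to run a script, which is how the wrapper appears to
launch these scripts itself; nawk just injects the missing set line ahead
of the run command):

    nawk '/^run / { print "set $workingset=10m" } { print }' \
        /export/home/abdullah/bench.stat/HP_HDX_16-zfs-rrws10m-Mar_2_2010-03h_10m_46s/rr32k/thisrun.f \
        > /tmp/thisrun-fixed.f
    /usr/benchmarks/filebench/bin/go_filebench -f /tmp/thisrun-fixed.f
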
 -- richard

> 
> Any feedback?
> 
> 
> 
> On Tue, Mar 2, 2010 at 11:09 AM, Richard Elling <richard.ell...@gmail.com> wrote:
> see below...
> 
> On Mar 2, 2010, at 12:38 AM, Abdullah Al-Dahlawi wrote:
> 
> > Greeting All
> >
> > I am using the Filebench benchmark in interactive mode to test ZFS 
> > performance with the randomread workload.
> > My Filebench settings and run results are as follows:
> > ------------------------------------------------------------------------------------------
> > filebench> set $filesize=5g
> > filebench> set $dir=/hdd/fs32k
> > filebench> set $iosize=32k
> > filebench> set $workingset=10m
> > filebench> set $function=generic
> > filebench> set $filesystem=zfs
> > filebench> run 100
> >  1062: 106.866: Creating/pre-allocating files and filesets
> >  1062: 106.867: File largefile1: mbytes=5120
> >  1062: 106.867: Re-using file largefile1.
> >  1062: 106.867: Creating file largefile1...
> >  1062: 108.612: Preallocated 1 of 1 of file largefile1 in 2 seconds
> >  1062: 108.612: waiting for fileset pre-allocation to finish
> >  1062: 108.612: Starting 1 rand-read instances
> >  1063: 109.617: Starting 1 rand-thread threads
> >  1062: 112.627: Running...
> >  1062: 213.627: Run took 100 seconds...
> >  1062: 213.628: Per-Operation Breakdown
> > rand-rate                   0ops/s   0.0mb/s      0.0ms/op        0us/op-cpu
> > rand-read1              41845ops/s 1307.7mb/s      0.0ms/op       20us/op-cpu
> >
> >  1062: 213.628:
> > IO Summary:      4226337 ops, 41845.0 ops/s, (41845/0 r/w) 1307.7mb/s,   21us cpu/op,   0.0ms latency
> >  1062: 213.628: Shutting down processes
> > ---------------------------------------------------------------------------------------------
> > The output looks GREAT so far; notice the 1307.7 mb/s.
> >
> > **** HOWEVER *****
> >
> > When I run the SAME workload using a Filebench config file in batch mode, 
> > the performance dropped significantly!
> >
> > Here are my config file and Filebench results:
> >
> >
> > # Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
> > # Use is subject to license terms.
> > #
> > # ident    "%Z%%M%    %I%    %E% SMI"
> >
> > DEFAULTS {
> >     runtime = 30;
> >     dir = /hdd/fs32k;
> >     $statsdir = /export/home/abdullah/bench.stat/woow87;
> >     stats = /export/home/abdullah/bench.stat;
> >     filesystem = zfs;
> >     description = "ZFS-RR-WS-10M";
> > }
> >
> > CONFIG rr32k {
> >     function = generic;
> >     personality = randomread;
> >     filesize = 5g;
> >     iosize = 32k;
> >     nthreads = 1;
> >     workingset = 10m;
> > }
> >
> > And the run results:
> >
> > abdul...@hp_hdx_16:/usr/benchmarks/filebench/config# filebench rrws10m
> > parsing profile for config: rr32k
> > Creating Client Script /export/home/abdullah/bench.stat/HP_HDX_16-zfs-rrws10m-Mar_2_2010-03h_10m_46s/rr32k/thisrun.f
> > Running /export/home/abdullah/bench.stat/HP_HDX_16-zfs-rrws10m-Mar_2_2010-03h_10m_46s/rr32k/thisrun.f
> > FileBench Version 1.4.4
> >  1147: 0.004: Random Read Version 2.0 IO personality successfully loaded
> >  1147: 0.004: Creating/pre-allocating files and filesets
> >  1147: 0.005: File largefile1: mbytes=5120
> >  1147: 0.005: Re-using file largefile1.
> >  1147: 0.005: Creating file largefile1...
> >  1147: 1.837: Preallocated 1 of 1 of file largefile1 in 2 seconds
> >  1147: 1.837: waiting for fileset pre-allocation to finish
> >  1147: 1.837: Running '/usr/benchmarks/filebench/scripts/fs_flush zfs /hdd/fs32k'
> 
> This step flushes the cache, so unlike your interactive run (where the 
> re-used file was still warm in the ARC), the batch run starts cold.
> 
> >  1147: 1.845: Change dir to /export/home/abdullah/bench.stat/HP_HDX_16-zfs-rrws10m-Mar_2_2010-03h_10m_46s/rr32k
> >  1147: 1.845: Starting 1 rand-read instances
> >  1149: 2.850: Starting 1 rand-thread threads
> >  1147: 5.860: Running...
> >  1147: 36.159: Run took 30 seconds...
> >  1147: 36.160: Per-Operation Breakdown
> > rand-rate                   0ops/s   0.0mb/s      0.0ms/op        0us/op-cpu
> > rand-read1                 88ops/s   2.7mb/s     11.4ms/op       35us/op-cpu
> 
> This is right on spec for a single drive: seek + rotate = 11.3 ms
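> 
> To spell out the arithmetic (assuming a 7200 rpm drive with a typical 
> ~7 ms average seek):
> 
>   avg rotational delay:  60,000 / 7200 / 2        ~  4.2 ms
>   seek + rotate:         7 ms + 4.2 ms            ~ 11.2 ms per op
>   throughput:            (1 s / 11.4 ms) * 32 KB  ~ 88 ops/s ~ 2.7 MB/s
> 
> which matches the 88 ops/s, 2.7 mb/s, and 11.4 ms/op reported above.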
>  -- richard
> 
> >
> >  1147: 36.160:
> > IO Summary:       2660 ops,  87.8 ops/s, (88/0 r/w)   2.7mb/s,    443us cpu/op,  11.4ms latency
> >  1147: 36.160: Stats dump to file 'stats.rr32k.out'
> >  1147: 36.160: in statsdump stats.rr32k.out
> >  1147: 36.415: Shutting down processes
> > Generating html for /export/home/abdullah/bench.stat/HP_HDX_16-zfs-rrws10m-Mar_2_2010-03h_10m_46s
> > file = /export/home/abdullah/bench.stat/HP_HDX_16-zfs-rrws10m-Mar_2_2010-03h_10m_46s/rr32k/stats.rr32k.out
> > ------------------------------------------------------------------------------------------------
> >
> > The output for the same workload is disappointing; notice that the 
> > throughput dropped from 1307.7 mb/s to 2.7 mb/s!
> >
> > My ARC_max is 3 GB.
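> > (For reference, the usual way to cap the ARC on OpenSolaris is an 
> > /etc/system tunable; 3 GB = 0xC0000000 bytes:)
> > 
> >   set zfs:zfs_arc_max = 0xC0000000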
> >
> > Here is a snapshot of my arcstat output for the high-throughput case; 
> > notice the 100% hit ratio:
> >
> >
> > arcsz,read,hits,Hit%,miss,miss%,dhit,dh%,dmis,dm%,phit,ph%,pmis,pm%,mhit,mh%,mmis,mm%,mfug,mrug,
> >    1G, 31M, 31M,  99,111K,    0, 28M, 99, 99K,  0,  2M, 99, 12K,  0,  1M, 98, 13K,  1,  43,  43,
> >    1G,147K,145K,  99,  1K,    0, 14K, 99,   2,  0,131K, 99,  1K,  0,   0,  0,   0,  0,   0,   0,
> >    1G,166K,166K, 100,   0,    0, 37K,100,   0,  0,128K,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 42K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 42K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 41K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 42K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 42K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 41K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 42K,100,   0,  0, 256,100,   0,  0,  10,100,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 42K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 42K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 42K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 42K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 42K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 42K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 41K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 41K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 42K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 42K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 42K, 42K, 100,   0,    0, 42K,100,   0,  0, 256,100,   0,  0,   0,  0,   0,  0,   0,   0,
> >
> > And a snapshot for the low-throughput case; notice the low hit ratio:
> > arcsz,read,hits,Hit%,miss,miss%,dhit,dh%,dmis,dm%,phit,ph%,pmis,pm%,mhit,mh%,mmis,mm%,mfug,mrug,
> >    1G,   3,   3, 100,   0,    0,   3,100,   0,  0,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G,   0,   0,   0,   0,    0,   0,  0,   0,  0,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G,   0,   0,   0,   0,    0,   0,  0,   0,  0,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G,   0,   0,   0,   0,    0,   0,  0,   0,  0,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G,  40,   3,   7,  37,   92,   3,  7,  37, 92,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 113,  12,  10, 101,   89,  12, 10, 101, 89,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 105,  14,  13,  91,   86,  14, 13,  91, 86,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 108,  15,  13,  93,   86,  15, 13,  93, 86,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G,  99,  11,  11,  88,   88,  11, 11,  88, 88,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 103,  11,  10,  92,   89,  11, 10,  92, 89,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 101,  13,  12,  88,   87,  13, 12,  88, 87,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 107,  12,  11,  95,   88,  12, 11,  95, 88,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G,  99,  12,  12,  87,   87,  12, 12,  87, 87,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 100,   5,   5,  95,   95,   5,  5,  95, 95,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 114,  17,  14,  97,   85,  17, 14,  97, 85,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 106,  17,  16,  89,   83,  17, 16,  89, 83,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 107,   7,   6, 100,   93,   7,  6, 100, 93,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 100,  11,  11,  89,   89,  11, 11,  89, 89,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G,  99,   8,   8,  91,   91,   8,  8,  91, 91,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 101,   9,   8,  92,   91,   9,  8,  92, 91,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >    1G, 101,   9,   8,  92,   91,   9,  8,  92, 91,   0,  0,   0,  0,   0,  0,   0,  0,   0,   0,
> >
> >
> > Any feedback?
> >
> 
> -- 
> Abdullah Al-Dahlawi
> PhD Candidate
> George Washington University
> Department of Electrical & Computer Engineering
> ----
> Check The Fastest 500 Super Computers Worldwide
> http://www.top500.org/list/2009/11/100

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)




_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
