Andrew Wilson wrote:
FileBench followers,
I finally figured out all the nuts and bolts of getting new versions of FileBench into the build environment its SourceForge releases use. So, I have just put up a "Catch up" Release of FileBench v1.4.4, which includes the following:

Thanks for your efforts.

I succeeded in compiling & running filebench on our Linux box (CentOS 4) as follows:
- install libtecla (shouldn't configure have detected that it was missing?!)

- run autoconf to re-create configure from configure.in (running with the distributed configure gave wrong results for many trivial tests, like memcpy, ftok, etc.)

- apply some fixes (I can supply patches):
1. typedef the uint_t and ulong_t types uniformly in filebench.h (uint_t was already defined in one other place; that definition has to be removed)
2. fix the call of gethostbyname_r() to add the required **hostent parameter

- link without aio, socket, nsl

So, with some fixes/additions to configure.in, things should work out of the box. I ran configure without any arguments; maybe I missed an option there that would fix some of the above issues, but generally these things should be configured automatically.
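Something along these lines in configure.in might close the detection gaps; the macro arguments are my guesses (in particular cpl_complete_word as the libtecla probe function) and have not been tested against FileBench's build:

```
# Hypothetical additions to configure.in: fail early when libtecla
# is missing instead of producing a broken build, and probe the
# functions whose tests gave wrong results with the shipped configure.
AC_CHECK_LIB([tecla], [cpl_complete_word], [],
             [AC_MSG_ERROR([libtecla is required; please install it])])
AC_CHECK_FUNCS([memcpy ftok gethostbyname_r])
```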

How can such fixes be pushed back upstream?

Below is a test run - please let me know if this looks reasonable.

 Joachim

joac...@chall-5 $ filebench
FileBench Version 1.4.4
filebench> load varmail
30633: 4.797: Varmail Version 2.1 personality successfully loaded
30633: 4.797: Usage: set $dir=<dir>
30633: 4.798:        set $filesize=<size>    defaults to 16384
30633: 4.798:        set $nfiles=<value>     defaults to 1000
30633: 4.798:        set $nthreads=<value>   defaults to 16
30633: 4.798:        set $meaniosize=<value> defaults to 16384
30633: 4.798:        set $readiosize=<size>  defaults to 1048576
30633: 4.798:        set $meandirwidth=<size> defaults to 1000000
30633: 4.798: (sets mean dir width and dir depth is calculated as log (width, nfiles)
30633: 4.798:  dirdepth therefore defaults to dir depth of 1 as in postmark
30633: 4.798:  set $meandir lower to increase depth beyond 1 if desired)
30633: 4.798:
30633: 4.798:        run runtime (e.g. run 60)
filebench> set $dir=/tmp
filebench> run 10
30633: 20.764: Creating/pre-allocating files and filesets
30633: 20.765: Fileset bigfileset: 1000 files, 0 leafdirs avg dir = 1000000, avg depth = 0.5, mbytes=14
30633: 20.767: Removed any existing fileset bigfileset in 1 seconds
30633: 20.767: making tree for filset /tmp/bigfileset
30633: 20.768: Creating fileset bigfileset...
30633: 20.807: Preallocated 805 of 1000 of fileset bigfileset in 1 seconds
30633: 20.807: waiting for fileset pre-allocation to finish
30633: 20.807: Starting 1 filereader instances
30638: 21.808: Starting 16 filereaderthread threads
30633: 24.809: Running...
30633: 34.820: Run took 10 seconds...
30633: 34.821: Per-Operation Breakdown
closefile4                230ops/s   0.0mb/s      0.0ms/op        0us/op-cpu
readfile4                 230ops/s   3.0mb/s      0.0ms/op        0us/op-cpu
openfile4                 230ops/s   0.0mb/s      0.0ms/op        0us/op-cpu
closefile3                230ops/s   0.0mb/s      0.0ms/op        0us/op-cpu
fsyncfile3                230ops/s   0.0mb/s     38.0ms/op        0us/op-cpu
appendfilerand3           232ops/s   1.8mb/s      0.1ms/op        0us/op-cpu
readfile3                 232ops/s   3.2mb/s      0.0ms/op        0us/op-cpu
openfile3                 232ops/s   0.0mb/s      0.0ms/op        0us/op-cpu
closefile2                232ops/s   0.0mb/s      0.0ms/op        0us/op-cpu
fsyncfile2                232ops/s   0.0mb/s     29.4ms/op        0us/op-cpu
appendfilerand2           232ops/s   1.7mb/s      0.0ms/op        0us/op-cpu
createfile2               232ops/s   0.0mb/s      0.2ms/op        0us/op-cpu
deletefile1               232ops/s   0.0mb/s      0.5ms/op        0us/op-cpu

30633: 34.821:
IO Summary: 30119 ops, 3008.7 ops/s, (463/464 r/w) 9.9mb/s, 199us cpu/op, 17.0ms latency
30633: 34.821: Shutting down processes
filebench>


--
Joachim Worringen, Software Architect, Dolphin Interconnect Solutions
phone ++49/(0)228/324 08 17 - http://www.dolphinics.com
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org
