On Wed, Mar 18, 2009 at 10:57 AM, Asif Iqbal <vad...@gmail.com> wrote:

> Hi Drew,
>
> Now that I have all the tests completed with different scenarios (raid
> cache, disk cache, zfs cache w/ and w/o zil on separate disks)
> for my X4150 running sol 10 u6, how do I get the color report?
>
> This is what I am referring to
>
>
> http://opensolaris.org/os/community/performance/filebench/filebench_sample/;jsessionid=7033C555957837F2C001C50D0FA582E9
>
> My output looks pretty boring... here is a snapshot:
>
> Throughput breakdown (ops per second)
>
> Scenarios (columns 1-10):
>   1. fileio zfs with no zfs cache and raid cache and disk cache
>   2. fileio zfs with no zfs cache and disk cache only
>   3. fileio zfs with no zfs cache and raid cache only
>   4. fileio zfs with raid cache and disk cache
>   5. fileio zfs with disk cache only
>   6. fileio zfs with no disk or raid cache
>   7. fileio zfs with no raid and disk cache and zil on separate disk
>   8. fileio zfs with raid cache only and zil on separate disk
>   9. fileio zfs with raid and disk cache and zil on separate disk
>  10. fileio zfs with disk cache only and zil on separate disk
>
> Workload                      1      2      3      4      5      6      7      8      9     10
> multistreamread1m           226    228    226    227    229    224    103    102    104     99
> multistreamreaddirect1m     224    227    228    228    227    227    103    100    101    101
> multistreamwrite1m          265    263    260    282    285    259    107    104    123    114
> multistreamwritedirect1m    317    266    259    318    291    259    109    109    128    123
> randomread1m               1532   1519   1479   1504   1509   1516    243    364    383    360
> randomread2k              32036  31319  31334  31160  31311  31037   1184    870   1130    923
> randomread8k              23312  23081  22337  21797  22485  22792   1191    870   1138    922
> randomwrite1m               319    436    316    331    362    305     99     78    124     81
> randomwrite2k               714    797    639    805    796    643    318    264    397    327
> randomwrite8k               531    758    533    704    765    529    305    256    380    309
> singlestreamread1m          789    938    801    848    909    791     94    196     91    237
> singlestreamreaddirect1m    798    940    805    852    908    786    255    236    256    219
> singlestreamwrite1m         277    279    224    275    277    222    115    110    131    113
> singlestreamwritedirect1m   261    262    208    277    268    208    113    113    131    112
>
> Bandwidth breakdown (MB/s), same scenario columns:
>
> Workload                      1      2      3      4      5      6      7      8      9     10
> multistreamread1m           226    228    226    227    229    224    103    102    104     99
> multistreamreaddirect1m     224    227    228    228    227    226    103    100    101    101
> multistreamwrite1m          265    263    260    282    284    259    107    104    123    114
> multistreamwritedirect1m    317    266    259    318    291    259    109    109    128    123
> randomread1m               1532   1518   1479   1504   1509   1516    243    364    383    360
> randomread2k                 62     61     61     60     61     60      2      1      2      1
> randomread8k                182    180    174    170    175    178      9      6      8      7
> randomwrite1m               319    435    316    331    362    305     99     78    124     81
> randomwrite2k                 1      1      1      1      1      1      0      0      0      0
> randomwrite8k                 4      5      4      5      6      4      2      2      3      2
> singlestreamread1m          789    938    800    848    909    791     94    196     91    237
> singlestreamreaddirect1m    797    940    805    852    908    785    255    235    256    219
> singlestreamwrite1m         277    279    224    275    277    222    115    110    131    113
> singlestreamwritedirect1m   261    262    208    277    268    208    113    113    131    112
> No colors :-(
>
> Thanks
>
> --
> Asif Iqbal
> PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
> A: Because it messes up the order in which people normally read text.
> Q: Why is top-posting such a bad thing?
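
For reference, the colored page linked above appears to be just a hand-built HTML table with per-cell background colors, so something similar can be scripted over the numbers. Below is a minimal, hypothetical Python sketch (the shortened scenario labels, the 90%/50% color thresholds, and the output file name are my own choices, not anything filebench produces):

#!/usr/bin/env python
# Hypothetical sketch: render a filebench-style results matrix as an HTML
# table whose cells are colored relative to the best value in each row.
# Nothing here comes from filebench itself; the data is pasted in by hand.

scenarios = ["raid+disk", "disk only", "raid only", "zfs+raid+disk",
             "zfs+disk", "no cache", "no cache+zil", "raid+zil",
             "raid+disk+zil", "disk+zil"]   # shortened labels (assumption)

results = {  # ops per second, one value per scenario, copied from the post
    "multistreamread1m":   [226, 228, 226, 227, 229, 224, 103, 102, 104, 99],
    "randomread8k":        [23312, 23081, 22337, 21797, 22485, 22792,
                            1191, 870, 1138, 922],
    "singlestreamwrite1m": [277, 279, 224, 275, 277, 222, 115, 110, 131, 113],
}

def cell_color(value, best):
    """Green when close to the best value in the row, red when far behind."""
    ratio = value / float(best)
    if ratio >= 0.9:
        return "#88ff88"   # within 90% of the best scenario
    if ratio >= 0.5:
        return "#ffff88"   # noticeably slower
    return "#ff8888"       # far behind

rows = ["<tr><th>Workload</th>%s</tr>"
        % "".join("<th>%s</th>" % s for s in scenarios)]
for workload in sorted(results):
    values = results[workload]
    best = max(values)
    cells = "".join('<td style="background:%s">%d</td>'
                    % (cell_color(v, best), v) for v in values)
    rows.append("<tr><td>%s</td>%s</tr>" % (workload, cells))

with open("filebench_report.html", "w") as f:
    f.write("<table border='1'>\n%s\n</table>\n" % "\n".join(rows))
print("wrote filebench_report.html")

Opening the resulting filebench_report.html in a browser should give a cell-colored comparison table roughly along the lines of the sample report.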


-- 
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org
