Hi Drew,

FileBench is running on a Thumper box, so the disks are RAIDed rather than
a single drive; the pool layout is below.

Thanks,
Pavel


# zpool status
  pool: pool0
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool0       ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c5t4d0  ONLINE       0     0     0
            c8t0d0  ONLINE       0     0     0
            c8t4d0  ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c0t0d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c5t5d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c8t5d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0
            c5t1d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t6d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c6t6d0  ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
            c5t6d0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c8t6d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
            c6t7d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0
            c8t7d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
            c7t7d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
        spares
          c5t0d0    AVAIL  
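
For reference, the fileset in question (options prealloc=100, reuse,
paralloc, as described in my first mail below) is defined along these
lines in the workload file; the name, path, size and dirwidth here are
only placeholders:

define fileset name=bigfileset,path=/pool0/fb,size=1k,entries=1000000,dirwidth=1000,prealloc=100,reuse,paralloc

It is the paralloc option that makes go_filebench spawn the extra
allocation threads during the creation phase.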

Andrew Wilson wrote:
> Pavel,
>   It was supposed to use only 32. Thanks for reporting this. I'll look
> into it for you and try to get it fixed. Just out of curiosity, are
> you running this on a machine with some sort of RAIDed disk drives?
> If you only have one drive, paralloc doesn't really help anyway, as
> the single disk is the bottleneck.
>
> Drew
>
> Pavel Filipensky wrote:
>> Hi,
>>
>> I am running FileBench 1.2.4 on s10 x86. I have set the fileset to
>> contain 1,000,000 files, with the options prealloc=100,reuse,paralloc.
>> During the creation phase, I see that the go_filebench process is not
>> destroying threads; so far there are almost a million LWPs:
>>
>>
>> # ps -L -p 2746|wc -l
>>   741525
>>
>>
>> # ps -L -p 2746|head
>>    PID   LWP TTY        LTIME CMD
>>   2746     1 pts/1       4:03 go_fileb
>>   2746     2 pts/1       0:01 go_fileb
>>   2746     3 pts/1       0:00 <defunct>
>>   2746     4 pts/1       0:00 <defunct>
>>   2746     5 pts/1       0:00 <defunct>
>>   2746     6 pts/1       0:00 <defunct>
>>   2746     7 pts/1       0:00 <defunct>
>>   2746     8 pts/1       0:00 <defunct>
>>   2746     9 pts/1       0:00 <defunct>
>>
>> # ps -L -p 2746|tail
>>   2746 741473 pts/1       0:00 <defunct>
>>   2746 741474 pts/1       0:00 go_fileb
>>   2746 741475 pts/1       0:00 go_fileb
>>   2746 741476 pts/1       0:00 go_fileb
>>   2746 741477 pts/1       0:00 go_fileb
>>   2746 741478 pts/1       0:00 go_fileb
>>   2746 741479 pts/1       0:00 go_fileb
>>   2746 741480 pts/1       0:00 go_fileb
>>   2746 741481 pts/1       0:00 go_fileb
>>   2746 741482 pts/1       0:00 go_fileb
>>
>> Can this be fixed to save resources?
>>
>> Thanks,
>> Pavel
>>
>
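
On the <defunct> LWPs above: they appear to be allocation threads that
have exited but were never joined or detached. Below is a minimal sketch
(this is not FileBench's actual code, and allocate_file() is just a
hypothetical stand-in for the per-file prealloc work) of doing the
parallel allocation with a fixed pool of 32 worker threads that are
joined at the end, so no exited LWPs linger:

#include <pthread.h>
#include <stdio.h>

#define NFILES   1000000        /* files in the fileset */
#define NWORKERS 32             /* cap on concurrent allocation threads */

/* hypothetical stand-in for the real per-file creation/prealloc work */
static void
allocate_file(long idx)
{
        (void) idx;
}

static void *
worker(void *arg)
{
        long id = (long)arg;
        long i;

        /* each worker handles an interleaved slice of the fileset */
        for (i = id; i < NFILES; i += NWORKERS)
                allocate_file(i);
        return (NULL);
}

int
main(void)
{
        pthread_t tid[NWORKERS];
        long i;

        for (i = 0; i < NWORKERS; i++) {
                if (pthread_create(&tid[i], NULL, worker,
                    (void *)i) != 0) {
                        perror("pthread_create");
                        return (1);
                }
        }

        /*
         * Join every worker; threads that exit without being joined
         * (or created detached) show up as <defunct> LWPs in ps -L.
         */
        for (i = 0; i < NWORKERS; i++)
                (void) pthread_join(tid[i], NULL);

        return (0);
}

Linked with -lpthread, this keeps the process at the main thread plus
32 worker LWPs instead of accumulating one LWP per file.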
