Logan,
   Could you say how many threads you were trying to run, and what mean 
directory width you requested?
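
By threads and directory width I mean the $nthreads and $meandirwidth 
variables in your workload file.  For reference, in a typical 
fileserver-style personality they show up roughly like this (the names, 
paths, and values below are only placeholders, not your actual config):

    set $dir=/mnt/nfs
    set $nfiles=10000000
    set $meandirwidth=20
    set $filesize=30k
    set $nthreads=16

    define fileset name=bigfileset,path=$dir,size=$filesize,entries=$nfiles,dirwidth=$meandirwidth,prealloc=80

    define process name=filereader,instances=1
    {
      thread name=filereaderthread,memsize=10m,instances=$nthreads
      {
        flowop openfile name=openfile1,filesetname=bigfileset,fd=1
        flowop readwholefile name=readfile1,fd=1
        flowop closefile name=closefile1,fd=1
      }
    }

As far as I understand it, memsize is a per-thread buffer allocation, so a 
large memsize multiplied by a large thread count could account for part of 
a huge allocation attempt (though nothing like 500GB at sane values).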

Thanks,
Drew

Logan wrote:
> I'm doing a performance test of an NFS server using Filebench.  The test is 
> meant to emulate a cache such as Squid hitting an NFS server.  The NFS 
> server must be able to serve a large file set - tens of millions of roughly 
> 30KB files - as would be the case in a moderate-sized photo-sharing 
> consumer internet site.
>
> I have Filebench set up and it runs fine for small file sets of, say, 10K 
> files.  As soon as the file set size grows above about 40K files, Filebench 
> chokes, giving errors such as "out of memory" and "mutex lock failed".  The 
> machine running Filebench has 2GB of RAM and no other software running.  
> One error indicated that Filebench was trying to allocate over 500GB of 
> RAM.  So my question is: is Filebench just broken, or is there some secret 
> configuration that I'm missing?  I've tried setting the thread memsize as 
> high as 200m and as low as 10m, to no avail.
>
> The need to run tests on large file sets seems fairly basic - small file 
> sets get served out of RAM on the target NFS server, so the file set must 
> be larger than the target machine's 2GB of RAM (at roughly 30KB per file, 
> that is on the order of 70K files before the server's cache stops 
> absorbing everything).  Note that both the client and the NFS server are 
> Linux Fedora Core 7 boxes.
>
> Any ideas?

_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org
