Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
>   
>> I have a set of threads, each doing random reads to about 25% of its own,
>> previously written, large file ... a test run will read in about 20GB on a
>> server with 2GB of RAM
>> . . .
>> after several successful runs of my test application, a run will be going
>> fine, but at some point before it finishes, I see that all I/O to the pool
>> has stopped, and, while I can still use the system for other things, most
>> operations that involve the pool will also hang (e.g. a wc on a pool-based
>> file will hang)
>
> Bill,
>
> Unencumbered by full knowledge of the history of your project, I'll say
> that I think you need more RAM.  I've seen this behavior on a system
> with 16GB RAM (and no SSD for cache), if heavy I/O goes on long enough.
> If larger RAM is not feasible, or you don't have a 64-bit CPU, you could
> try limiting the size of the ARC as well (see the sketch below the quote).
>
> That's not to say you're not seeing some other issue, but 2GB for heavy
> ZFS I/O seems a little on the small side, given my experience.
>   
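
On Marion's point about limiting the ARC: on Solaris/OpenSolaris that is
normally done with the zfs_arc_max tunable in /etc/system. A minimal
sketch, assuming a 1GB cap suits this workload (the value is only an
example; pick one that leaves room for your working set, and note it
takes effect at the next reboot):

    * /etc/system: cap the ZFS ARC at 1GB (0x40000000 bytes)
    set zfs:zfs_arc_max = 0x40000000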

If this is the case, you might try using arcstat to view ARC usage.
    http://blogs.sun.com/realneel/entry/zfs_arc_statistics
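
For instance, assuming you have fetched the arcstat.pl script from that
page (column names can vary with the script version), this prints a line
of ARC statistics every 5 seconds:

    ./arcstat.pl 5

The raw counters are also available directly via kstat -n arcstats. If
arcsz stays pinned at the target size c and misses climb right when the
hang starts, that would support the memory-pressure theory.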
 -- richard
