> Isn't this a matter of not keeping enough free memory as a workspace?  By 
> free memory, I am referring to unallocated memory and also recoverable main 
> memory used for shrinkable read caches (shrinkable by discarding cached  
> data).  If the system keeps enough free and recoverable memory around for 
> workspace, why should the deadlock case ever arise?  Slowness and page 
> swapping might be expected to arise (as a result of a shrinking read cache 
> and high memory pressure), but deadlocks too?

> It sounds like deadlocks from the described scenario indicate the memory 
> allocation and caching algorithms do not perform gracefully in the face of 
> high memory pressure.  If the deadlocks do not occur when different memory 
> pools are involved (by using a second computer), that tells me that memory 
> allocation decisions are playing a role.  Additional data should not be 
> accepted for writes when the system determines memory pressure is so high 
> that it may not be able to flush everything to disk.

> Here is one article about memory pressure (on Windows, but the issues apply 
> cross-OS):
> http://blogs.msdn.com/b/slavao/archive/2005/02/01/364523.aspx

> (How does virtualization fit into this picture?  If both OpenSolaris systems 
> are actually running inside of different virtual machines, on top of the same 
> host, have we isolated them enough to allow pools inside pools without risk 
> of deadlocks? )

I haven't noticed any deadlock issues so far in low-memory conditions when 
doing nested pools (in a replicated configuration), at least in snv_134. Maybe I 
haven't tried hard enough. Anyway, wouldn't a log device in the inner pool help 
in this situation?
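For reference, the nested-pool setup under discussion (an inner pool built on a zvol exported by an outer pool) and the suggested log device can be sketched with standard zpool/zfs commands. The pool names, zvol size, and device names below are placeholders for illustration, not taken from the thread:

```shell
# Outer pool on a physical disk (device names here are hypothetical)
zpool create outerpool c0t0d0

# Carve a 10 GB zvol out of the outer pool and build the inner,
# nested pool on top of it
zfs create -V 10g outerpool/vol1
zpool create innerpool /dev/zvol/dsk/outerpool/vol1

# The suggestion above: attach a separate log device (slog) to the
# inner pool so its synchronous writes bypass the nested zvol path
zpool add innerpool log c0t1d0
```

Note that, as I understand it, a separate log device only absorbs the inner pool's synchronous writes (the ZIL); regular dirty data would still flush through the outer pool's zvol, so it may not remove the memory-pressure coupling entirely.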

Yours
Markus Kovero

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
