Mattias Pantzare wrote:
On Wed, Sep 22, 2010 at 20:15, Markus Kovero <markus.kov...@nebula.fi> wrote:
Such a configuration was known to cause deadlocks. Even if it works now (which I
don't expect to be the case), it will cause your data to be cached twice. The CPU
utilization will also be much higher, etc.
All in all, I strongly recommend against such a setup.
--
Pawel Jakub Dawidek http://www.wheelsystems.com
p...@freebsd.org http://www.FreeBSD.org
FreeBSD committer Am I Evil? Yes, I Am!
Well, CPU utilization can be tuned downwards by disabling checksums in the inner
pools, since checksumming is already done in the main pool. I'd be interested in bug
IDs for the deadlock issues and anything related. Caching twice is not an issue;
prefetching could be, and it can be disabled.
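Roughly, that tuning might look like this (using the testpool/anotherpool names from the setup described below; the volume name vol01 is made up, and the /etc/system tunable is global and takes effect after a reboot):

  # inner pool: skip data checksums, the outer pool already checksums these blocks
  zfs set checksum=off anotherpool
  # outer pool's zvol: cache metadata only, so the data is not held in the ARC twice
  zfs set primarycache=metadata testpool/vol01
  # /etc/system: disable file-level prefetch
  set zfs:zfs_prefetch_disable = 1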
I don't understand what makes it difficult for ZFS to handle this kind of
setup. The main pool (testpool) should simply accept any writes/reads to/from the
volume, not caring what they are, whereas anotherpool would just work like any
other pool built from any other devices.
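For concreteness, that setup could be built roughly like this (volume name and size are made up):

  # a zvol inside the main pool...
  zfs create -V 10g testpool/vol01
  # ...and a second pool layered on top of it
  zpool create anotherpool /dev/zvol/dsk/testpool/vol01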
This is quite a similar setup to an iSCSI-replicated mirror pool, where you
create a redundant pool from local and remote iSCSI volumes.
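Such a mirror might be set up along these lines on a Solaris initiator (target and device names are made up):

  # make the remote zvol visible as a local disk
  iscsiadm add static-config iqn.2010-09.fi.example:vol01,192.168.10.20
  iscsiadm modify discovery --static enable
  # mirror a local disk against the iSCSI-backed disk
  zpool create repltank mirror c0t1d0 c4t600144F04A2B8E00d0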
ZFS needs free memory for writes. If you fill your memory with dirty
data, ZFS has to flush that data to disk. If that disk is a virtual
disk backed by ZFS on the same computer, those writes need more memory from
the same memory pool, and you have a deadlock.
If you write to a zvol on a different host (via iSCSI), those writes
use memory in a different memory pool (on the other computer). No
deadlock.
Isn't this a matter of not keeping enough free memory as a workspace?
By free memory, I am referring to unallocated memory plus memory that
can be reclaimed from read caches by discarding cached data. If the
system keeps enough free and reclaimable memory around as workspace,
why should the deadlock case ever arise? Slowness and page swapping
might be expected (as a result of a shrinking read cache and high
memory pressure), but deadlocks too?
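(One blunt way to reserve such a workspace on OpenSolaris is to cap the ARC in /etc/system so the cache cannot grow into all of RAM; the 4 GB figure here is only illustrative:

  # keep the ARC at or below 4 GB, leaving the rest of RAM for the write path
  set zfs:zfs_arc_max = 0x100000000

That bounds the cache, though it does not by itself guarantee the write path can always allocate.)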
It sounds like deadlocks in the described scenario indicate that the memory
allocation and caching algorithms do not degrade gracefully in the face
of high memory pressure. If the deadlocks do not occur when different
memory pools are involved (by using a second computer), that tells me
that memory allocation decisions are playing a role. Additional data
should not be accepted for writes when the system determines that memory
pressure is so high that it may not be able to flush everything to disk.
Here is one article about memory pressure (on Windows, but the issues
apply cross-OS):
http://blogs.msdn.com/b/slavao/archive/2005/02/01/364523.aspx
(How does virtualization fit into this picture? If both OpenSolaris
systems are actually running inside different virtual machines, on
top of the same host, have we isolated them enough to allow pools inside
pools without risk of deadlocks?)