On Wed, Jun 25, 2008 at 11:09 PM, Jan Damborsky <[EMAIL PROTECTED]> wrote:
> Thank you very much all for this valuable input.
>
> Based on the collected information, I would take
> the following approach to calculating the size of
> swap and dump devices on ZFS volumes in the Caiman
> installer.
>
> [1] The following formula would be used to calculate
>    swap and dump sizes:
>
> size_of_swap = size_of_dump = MAX(512 MiB, MIN(physical_memory/2, 32 GiB))
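
For concreteness, a minimal shell sketch of that formula (the variable names are mine, and parsing prtconf is just one way to get physical memory on Solaris):

    # Physical memory in MiB; Solaris prtconf prints "Memory size: N Megabytes"
    phys_mb=$(prtconf | awk '/^Memory size:/ {print $3}')

    size_mb=$((phys_mb / 2))                                      # physical_memory / 2
    [ "$size_mb" -gt $((32 * 1024)) ] && size_mb=$((32 * 1024))   # 32 GiB cap
    [ "$size_mb" -lt 512 ] && size_mb=512                         # 512 MiB floor

    echo "swap = dump = ${size_mb} MiB"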

dump should scale with memory size, but the size this formula yields
is complete overkill.  On very active (heavy kernel activity) servers
with 300+ GB of RAM, I have never seen a (compressed) dump that needed
more than 8 GB.  Even uncompressed, the largest I've seen has been in
the 18 GB range.  This has been without zfs in the mix.  It is my
understanding that at one time the ARC was dumped as part of kernel
memory, but that was regarded as a bug and has since been fixed.  If
the ARC is dumped, a dump size much closer to physical memory is
likely to be appropriate.

As an aside, does having a dedicated dump device on all machines mean
that savecore no longer runs by default?  savecore just creates a lot
of extra I/O during boot (thereby slowing down boot after a crash) and
uses a lot of extra disk space for those who will never look at a
crash dump.  Those who actually use it (not the majority of the target
audience for OpenSolaris, I would guess) will be able to figure out
how to enable (the as-yet non-existent) svc:/system/savecore:default.
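
For the record, the relevant knobs would look something like this (dumpadm -n exists today; the svcadm line assumes the service described above eventually ships):

    # Keep the dedicated dump device but don't run savecore on reboot
    dumpadm -n

    # Hypothetical until svc:/system/savecore:default actually exists
    svcadm enable svc:/system/savecore:default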

-- 
Mike Gerdts
http://mgerdts.blogspot.com/