Hello Roch,

Thursday, April 26, 2007, 12:33:00 PM, you wrote:

RP> Robert Milkowski writes:
 >> Hello Brian,
 >> 
 >> Thursday, April 26, 2007, 3:55:16 AM, you wrote:
 >> 
 >> BG> If I recall, the dump partition needed to be at least as large as RAM.
 >> 
 >> BG> In Solaris 8(?) this changed, in that crash dump streams were
 >> BG> compressed as they were written out to disk. Although I've never read
 >> BG> this anywhere, I assumed the reasons this was done are as follows:
 >> 
 >> BG> 1) Large enterprise systems could support ridiculous (at the time)
 >> BG> amounts of physical RAM. Providing a physical disk/LUN partition that
 >> BG> could hold such a large crashdump seemed wasteful and expensive.
 >> 
 >> BG> 2) Compressing the dump before writing it to disk would be faster,
 >> BG> thus improving the chances of getting a full dump. (CPU performance
 >> BG> has progressed at a much higher rate than disk throughput has.)
 >> 
 >> BG> (I don't know what the compression ratios are, but I'd imagine they
 >> BG> would be pretty high).
 >> 
 >> By default, only kernel pages are saved to the dump device, so even
 >> without compression the dump can be smaller than a server's RAM. I
 >> often see compression ratios of 1.x or 2.x, nothing more (it's lzjb,
 >> after all).
 >> 
 >> Now with ZFS the story is a little different: its caches are treated
 >> as kernel pages, so on file servers you are basically dumping all of
 >> memory... there's an open bug for it.
 >> 
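(As an aside, dumpadm(1M) is how you check and change what gets
dumped. A rough example; the device path and savecore directory below
are just placeholders:

  # dumpadm
        Dump content: kernel pages
         Dump device: /dev/dsk/c0t0d0s1 (swap)
  Savecore directory: /var/crash/myhost
    Savecore enabled: yes

  # dumpadm -c all

The second command switches to dumping all of memory instead of just
kernel pages; "kernel pages" is the default I was referring to above.)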

RP> Correction: it's now Fix Delivered, in build snv_56.

RP>         4894692  caching data in heap inflates crash dump

Good to know.
I hope it will make it into U4.
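In the meantime, if you want to see how much the ARC would add to a
dump on a file server, its current size is visible through kstat(1M);
the statistic name below is what I see on recent builds:

  # kstat -p zfs:0:arcstats:size

That prints the ARC size in bytes; before the fix, roughly all of it
was written out with the kernel pages.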

-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
