On Apr 2, 2010, at 2:03 PM, Miles Nordin wrote:

>>>>>> "re" == Richard Elling <richard.ell...@gmail.com> writes:
> 
>    re> # ptime zdb -S zwimming
>    re> Simulated DDT histogram:
>    re>  refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
>    re>   Total    2.63M    277G    218G    225G    3.22M    337G    263G    270G
> 
>    re>        in-core size = 2.63M * 250 = 657.5 MB
> 
> Thanks, that is really useful!  It'll probably make the difference
> between trying dedup and not, for me.
> 
> It is not working for me yet.  It got to this point in prstat:
> 
>  6754 root     2554M 1439M sleep   60    0   0:03:31 1.9% zdb/106
> 
> and then ran out of memory:
> 
> $ pfexec ptime zdb -S tub
> out of memory -- generating core dump

This is annoying. By default, zdb is compiled as a 32-bit executable and
it can be a hog. Compiling it yourself is too painful for most folks :-(
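For anyone repeating the estimate quoted above, the arithmetic is just the number of unique DDT entries times an assumed per-entry RAM cost. A minimal sketch, assuming the ~250 bytes/entry figure used earlier in this thread (a rule of thumb, not a guaranteed number -- the real cost varies by release):

```python
def ddt_in_core_bytes(total_blocks, bytes_per_entry=250):
    """Estimate in-core DDT size from the 'Total' row of `zdb -S`.

    total_blocks:    unique (allocated) blocks, e.g. 2.63M -> 2.63e6
    bytes_per_entry: assumed RAM cost per DDT entry (~250 per this thread)
    """
    return total_blocks * bytes_per_entry

# The zwimming example above: 2.63M unique blocks
size_mb = ddt_in_core_bytes(2.63e6) / 1e6
print(round(size_mb, 1), "MB")  # 657.5 MB, matching the figure above
```

If the estimate approaches installed RAM (or the 2GB a 32-bit zdb can address), that is a strong hint dedup will hurt more than help on that pool.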

> I might add some swap I guess.  I will have to try it on another
> machine with more RAM and less pool, and see how the size of the zdb
> image compares to the calculated size of DDT needed.  So long as zdb
> is the same or a little smaller than the DDT it predicts, the tool's
> still useful, just sometimes it will report ``DDT too big but not sure
> by how much'', by coredumping/thrashing instead of finishing.

In my experience, more swap doesn't help break through the 2GB memory
barrier: a 32-bit process exhausts its address space, not backing store,
so paging room makes no difference.  As zdb is an intentionally unsupported
tool, methinks a 64-bit recompile may be required (or write your own).
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com 
