> From: Toomas Soome [mailto:toomas.so...@mls.ee]
>
> Well, do a bit of math. If I'm correct, with a 320-byte DDT entry, 1.75 GB of RAM can fit about 5.8M entries, while 1 TB of data at the default 128K recordsize would produce 8M entries... that's with the default metadata limit. Unless my calculations are wrong, that would explain the slowdown.
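The quoted arithmetic can be sanity-checked with a short sketch. The 320-byte per-entry figure and the 1.75 GB metadata limit are taken from the quoted mail; the constant names here are illustrative, not from any ZFS source:

```python
# Sanity check of the quoted DDT sizing math (a sketch; the 320-byte
# per-entry size and 1.75 GB metadata limit come from the quoted mail).
DDT_ENTRY_BYTES = 320                 # assumed in-core size of one dedup table entry
METADATA_LIMIT = int(1.75 * 2**30)    # 1.75 GiB of RAM available for metadata
RECORDSIZE = 128 * 2**10              # default ZFS recordsize: 128 KiB
POOL_DATA = 2**40                     # 1 TiB of unique data

entries_that_fit = METADATA_LIMIT // DDT_ENTRY_BYTES
entries_needed = POOL_DATA // RECORDSIZE
ddt_ram_needed = entries_needed * DDT_ENTRY_BYTES

print(f"entries that fit in RAM:  {entries_that_fit:,}")      # ~5.87M
print(f"entries needed for 1 TiB: {entries_needed:,}")        # 8,388,608
print(f"DDT RAM needed:           {ddt_ram_needed / 2**30:.2f} GiB")
```

The full table for 1 TiB would need about 2.5 GiB of RAM, i.e. more than the assumed 1.75 GB metadata budget, which is consistent with both the quoted slowdown explanation and the 1-3 GB-per-TB rule of thumb cited in the reply.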
Not sure where you're getting those numbers, but the rule of thumb is to add 1-3 GB of RAM for every 1 TB of unique dedup data:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup

_______________________________________________
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss