-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of erik.ableson
Sent: Thursday, January 21, 2010 6:05 PM
To: zfs-discuss
Subject: [zfs-discuss] Dedup memory overhead

Hi all,
I'm going to be trying out some tests using b130 [...]

On Fri, Jan 22, 2010 at 7:19 AM, Mike Gerdts wrote:
> On Thu, Jan 21, 2010 at 2:51 PM, Andrey Kuzmin wrote:
>> Looking at the dedupe code, I noticed that on-disk DDT entries are
>> compressed less efficiently than possible: the key is not compressed at
>> all (I'd expect roughly a 2:1 compression ratio with sha256 data) [...]

On 21 Jan 2010, at 22:55, Daniel Carosone wrote:
> On Thu, Jan 21, 2010 at 05:04:51PM +0100, erik.ableson wrote:
>
>> What I'm trying to get a handle on is how to estimate the memory
>> overhead required for dedup on that amount of storage.
>
> We'd all appreciate better visibility of this. [...]

On Thu, Jan 21, 2010 at 2:51 PM, Andrey Kuzmin wrote:
> Looking at the dedupe code, I noticed that on-disk DDT entries are
> compressed less efficiently than possible: the key is not compressed at
> all (I'd expect roughly a 2:1 compression ratio with sha256 data),
A cryptographic hash such as sha256 should produce output that is
indistinguishable from random data, so there is essentially nothing for a
compressor to gain on the key. [...]

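A quick way to check that claim is to concatenate a pile of sha256 digests and
try to compress them. The minimal Python sketch below is illustrative only
(nothing in it comes from the thread or from ZFS code): it builds 100,000
simulated 32-byte DDT keys and runs them through zlib, and the compressor
gains essentially nothing, which is why a ~2:1 ratio on the key is not
achievable.

# Illustrative only: sha256 output looks like random data, so the DDT key
# has essentially no redundancy for a compressor to exploit.
import hashlib
import zlib

# Simulate 100,000 DDT keys (32 bytes each) by hashing arbitrary inputs.
keys = b"".join(hashlib.sha256(str(i).encode()).digest()
                for i in range(100_000))

compressed = zlib.compress(keys, 9)
print(f"raw keys:   {len(keys):>9,d} bytes")
print(f"compressed: {len(compressed):>9,d} bytes "
      f"(ratio {len(keys) / len(compressed):.3f}:1)")
# Typically prints a ratio of about 1.00:1, i.e. no useful compression.
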
On Fri, Jan 22, 2010 at 08:55:16AM +1100, Daniel Carosone wrote:
> For performance (rather than space) issues, I look at dedup as simply
> increasing the size of the working set, with a goal of reducing the
> amount of IO (avoided duplicate writes) in return.
I should add "and avoided future duplicate [...]"

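To put a number on the "reduced IO in return" side of that trade: at a dedup
ratio of r, the fraction of logical block writes that become a DDT refcount
update instead of a full data write is 1 - 1/r. A minimal sketch of that
arithmetic (the ratios are illustrative, not measurements from the thread):

# Illustrative arithmetic only: share of logical block writes that dedup
# avoids, as a function of the achieved dedup ratio.
for ratio in (1.0, 1.2, 1.5, 2.0, 3.0, 10.0):
    avoided = 1.0 - 1.0 / ratio
    print(f"dedup ratio {ratio:4.1f}:1  ->  {avoided:6.1%} of block writes avoided")

The enlarged working set, on the other hand, is paid for in full even when the
achieved ratio turns out to be barely above 1:1.
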
On Thu, Jan 21, 2010 at 05:04:51PM +0100, erik.ableson wrote:
> What I'm trying to get a handle on is how to estimate the memory
> overhead required for dedup on that amount of storage.
We'd all appreciate better visibility of this. This requires:
- time and observation and experience, and
- [...]

On Thu, Jan 21, 2010 at 10:00 PM, Richard Elling wrote:
> On Jan 21, 2010, at 8:04 AM, erik.ableson wrote:
>
>> Hi all,
>>
>> I'm going to be trying out some tests using b130 for dedup on a server with
>> about 1.7 TB of usable storage (14x146 in two raidz vdevs of 7 disks). What
>> I'm trying to get a handle on is how to estimate the memory overhead
>> required for dedup on that amount of storage. [...]

On Jan 21, 2010, at 8:04 AM, erik.ableson wrote:
> Hi all,
>
> I'm going to be trying out some tests using b130 for dedup on a server with
> about 1.7 TB of usable storage (14x146 in two raidz vdevs of 7 disks). What
> I'm trying to get a handle on is how to estimate the memory overhead
> required for dedup on that amount of storage. [...]

Hi all,
I'm going to be trying out some tests using b130 for dedup on a server with
about 1.7 TB of usable storage (14x146 in two raidz vdevs of 7 disks). What
I'm trying to get a handle on is how to estimate the memory overhead required
for dedup on that amount of storage. From what I gather [...]

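A back-of-envelope answer can be had from (used space / average block size) x
(in-core cost per DDT entry). The sketch below plugs in the ~1.7 TB from the
post above; the 128 KB default recordsize and the roughly 320 bytes per
in-core DDT entry are commonly quoted ballpark assumptions, not measured
values, so substitute your own figures.

# Back-of-envelope DDT RAM estimate (a sketch under stated assumptions,
# not an authoritative formula).

def estimate_ddt_ram(used_bytes, avg_block_bytes, bytes_per_entry=320):
    """bytes_per_entry=320 is a commonly quoted ballpark for an in-core
    DDT entry, used here as an assumption."""
    unique_blocks = used_bytes // avg_block_bytes
    return unique_blocks, unique_blocks * bytes_per_entry

used = int(1.7e12)  # ~1.7 TB of usable space, per the original post
for kb in (128, 64, 8):
    blocks, ram = estimate_ddt_ram(used, kb * 1024)
    print(f"{kb:>3} KB avg block: ~{blocks / 1e6:5.1f}M entries, "
          f"~{ram / 2**30:4.1f} GiB of DDT")

The small-block rows show why average block size dominates the estimate: once
the table no longer fits in ARC (or at least L2ARC), DDT lookups go to disk
and dedup performance falls off sharply. If your build's zdb supports it,
zdb -S <pool> simulates dedup over existing data and prints a DDT histogram,
which gives a much better picture of the block-size distribution than a flat
average.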