On Fri, Nov 13, 2009 at 7:09 AM, Ross wrote:

> Isn't dedupe in some ways the antithesis of setting copies > 1? We go
> to a lot of trouble to create redundancy (n-way mirroring, raidz-n,
> copies=n, etc) to make things as robust as possible and then we reduce
> redundancy with dedupe and compression.

But are we reducing redundancy? I don't know the details of how dedupe
is implemented, but I'd have thought that if copies=2, you get 2 copies
of each deduped block. So your data is just as safe, since you haven't
actually changed the redundancy; it's just that identical blocks now
share the same (replicated) storage.
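If that reading is right, the two knobs are independent per-dataset
properties, plus a pool-wide safety valve for heavily referenced blocks.
A minimal sketch, assuming a pool "tank" with a dataset "tank/data"
(names are placeholders, not from the thread):

  # dedup and copies are set independently; a deduped block written
  # with copies=2 should still get two ditto copies on disk
  zfs set dedup=on tank/data
  zfs set copies=2 tank/data

  # dedupditto writes an extra copy of a deduped block once its
  # reference count crosses the given threshold
  zpool set dedupditto=100 tank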
On Nov 12, 2009, at 1:36 PM, Frank Middleton wrote:

Got some out-of-curiosity questions for the gurus if they have time
to answer:

Isn't dedupe in some ways the antithesis of setting copies > 1? We go
to a lot of trouble to create redundancy (n-way mirroring, raidz-n,
copies=n, etc) to make things as robust as possible and then we reduce
redundancy with dedupe and compression.
On Sun, 8 Nov 2009, Dennis Clarke wrote:

> That works well.
>
> You know what ... I'm a schmuck. I didn't grab a time-based seed first.
> All those files with random text .. have identical twins on the
> filesystem somewhere. :-P damn

That is one reason why I asked. Failure to get a good seed is the
usual way that "random" test data ends up full of identical twins.
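The pitfall generalizes: a PRNG reseeded with the same value (or never
reseeded) replays the same "random" stream. A small illustration with
awk, assuming a POSIX awk:

  # same seed, same stream -- these two commands print identical output
  awk 'BEGIN { srand(42); for (i = 0; i < 8; i++) printf "%d ", int(rand() * 100) }'
  awk 'BEGIN { srand(42); for (i = 0; i < 8; i++) printf "%d ", int(rand() * 100) }'

srand() with no argument seeds from the time of day at one-second
resolution, so a tight loop can still produce twins; reading
/dev/urandom sidesteps seeding entirely.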
> You can get more dedup information by running 'zdb -DD zp_dd'. This
> should show you how we break things down. Add more 'D' options and get
> even more detail.
>
> - George

Okay .. thank you. Looks like I have piles of numbers here :

# zdb -DDD zp_dd
DDT-sha256-zap-duplicate: 37317 entries,
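As an aside, zdb can also estimate dedup before the property is ever
enabled; a sketch, assuming a build whose zdb supports the -S dedup
simulation:

  # walk the pool, build an in-core DDT, and print the histogram and
  # estimated dedup ratio without changing anything on disk
  zdb -S zp_dd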
On Sat, 7 Nov 2009, Dennis Clarke wrote:

> Now the first test I did was to write 26^2 files [a-z][a-z].dat in 26^2
> directories named [a-z][a-z], where each file is 64K of random
> non-compressible data and then some English text.

What method did you use to produce this "random" data? The dedupe
results will depend on whether that data really is unique.
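A sketch of such a test run, assuming a shell with {a..z} brace
expansion and the pool mounted at /zp_dd; pulling from /dev/urandom
avoids the seeding problem mentioned above:

  for d in {a..z}{a..z}; do
    mkdir -p /zp_dd/$d
    # 64K of unique, non-compressible data per file
    dd if=/dev/urandom of=/zp_dd/$d/$d.dat bs=64k count=1 2>/dev/null
    # followed by the shared English text portion; english.txt is a
    # placeholder file name
    cat english.txt >> /zp_dd/$d/$d.dat
  done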
On Sat, 2009-11-07 at 17:41 -0500, Dennis Clarke wrote:

> Does the dedupe functionality happen at the file level or a lower
> block level?

It occurs at the block allocation level.
Dennis Clarke wrote:

> Does the dedupe functionality happen at the file level or a lower
> block level?

Block level, but remember that block size may vary from file to file.
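Since matching is per block, two files share storage only where
identical bytes land in identically sized blocks with the same
checksum. Keeping the recordsize consistent across the datasets under
test helps; a sketch, with zp_dd/test as a placeholder dataset name:

  # files smaller than the recordsize are stored as one block sized to
  # the file, so for small files dedup is effectively whole-file
  zfs get recordsize zp_dd/test
  zfs set recordsize=128k zp_dd/test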
Does the dedupe functionality happen at the file level or a lower block
level?

I am writing a large number of files that have the following structure :

-- file begins
1024 lines of random ASCII chars, 64 chars long
some tilde chars .. about 1000 of them
some text ( english ) for 2K
more text ( english )
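A sketch of building one such file, assuming head -c is available and
using english-2k.txt / more-english.txt as placeholder text files:

  {
    i=0
    while [ $i -lt 1024 ]; do
      # one line of 64 random printable ASCII characters
      LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 64
      echo
      i=$((i + 1))
    done
    # roughly 1000 tilde characters
    dd if=/dev/zero bs=1000 count=1 2>/dev/null | tr '\0' '~'
    echo
    cat english-2k.txt
    cat more-english.txt
  } > aa.dat

At roughly 65K of random data up front, each such file fits in a single
128K record, so with the default recordsize only byte-identical whole
files would dedup against each other.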