Tristan, there's another dedup system for "zfs send" in PSARC 2009/557. This
can be used independently of whether the in-pool data was deduped.
Case log: http://arc.opensolaris.org/caselog/PSARC/2009/557/
Discussion: http://www.opensolaris.org/jive/thread.jspa?threadID=115082
So I believe your
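For reference, a minimal sketch of how that might look on the command line,
assuming the -D option that PSARC 2009/557 adds to "zfs send" (please check
the case log above for the final flag name and syntax):

  # take a snapshot, then ship it as a deduplicated stream (flag assumed)
  zfs snapshot tank/data@today
  zfs send -D tank/data@today | ssh backuphost zfs receive backup/data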
I'm curious as to how send/recv intersects with dedupe... if I send/recv
a deduped filesystem, is the data sent in its de-duped form, i.e. just
sent once, followed by pointers for subsequent duplicate data, or is
the data sent in expanded form, with the recv-side system then having to
redo
On a related note, it looks like Constantin is developing a nice SMF service
for auto scrub:
http://blogs.sun.com/constantin/entry/new_opensolaris_zfs_auto_scrub
This is an adaptation of the well-tested auto snapshot service. Amongst other
advantages, this approach means that you don't have t
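Until that service ships, a plain cron entry is a workable stopgap; a rough
sketch, with the pool name as a placeholder:

  # root crontab: scrub the pool every Sunday at 03:00
  0 3 * * 0 /usr/sbin/zpool scrub tank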
Great stuff, Jeff and company. You all rock. =-)
A potential topic for the follow-up posts: auto-ditto, and the philosophy
behind choosing a default threshold for creating a second copy.
I just stumbled across a clever visual representation of deduplication:
http://loveallthis.tumblr.com/post/166124704
It's a flowchart of the lyrics to "Hey Jude". =-)
Nothing is compressed, so you can still read all of the words. Instead, all of
the duplicates have been folded together. -ch
zfs...@jeremykister.com said:
> # format -e c12t1d0
> selecting c12t1d0
> [disk formatted]
> /dev/dsk/c3t11d0s0 is part of active ZFS pool dbzpool. Please see zpool(1M).
>
> It is true that c3t11d0 is part of dbzpool. But why is Solaris upset about
> c3t11 when I'm working with c12t1? So I checke
Mike Gerdts wrote:
On Mon, Nov 2, 2009 at 7:20 AM, Jeff Bonwick wrote:
Terrific! Can't wait to read the man pages / blogs about how to use it...
Just posted one:
http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup
Enjoy, and let me know if you have any questions or suggestions for
follow-on p
Hi Alex,
I'm checking with some folks on how we handled this handoff
for the previous project.
I'll get back to you shortly.
Thanks,
Cindy
On 11/02/09 16:07, Alex Blewitt wrote:
The man pages documentation from the old Apple port
(http://github.com/alblue/mac-zfs/tree/master/zfs_documentation/man8/)
The man pages documentation from the old Apple port
(http://github.com/alblue/mac-zfs/tree/master/zfs_documentation/man8/)
don't seem to have a corresponding source file in the onnv-gate
repository (http://hub.opensolaris.org/bin/view/Project+onnv/WebHome)
although I've found the text on-line
(http
James Lever wrote:
On 03/11/2009, at 7:32 AM, Daniel Streicher wrote:
But how can I "update" my current OpenSolaris (2009.06) or Solaris 10
(5/09) to use this.
Or have I wait for a new stable release of Solaris 10 / OpenSolaris?
For OpenSolaris, you change your repository and switch to the
ZFS dedup will be in snv_128,
but putbacks to snv_128 will likely not close until the end of this week.
The OpenSolaris dev repository was updated to snv_126 last Thursday:
http://mail.opensolaris.org/pipermail/opensolaris-announce/2009-October/001317.html
So it looks like about 5 weeks before the
Looks great - and by the time an OpenSolaris build has it, I will have a
brand new laptop to put it on ;-)
One question though - I have a file server at home with 4x750GB on
raidz1. When I upgrade to the latest build and set dedup=on, given
that it does not have an offline mode, there is no way to op
On 03/11/2009, at 7:32 AM, Daniel Streicher wrote:
But how can I "update" my current OpenSolaris (2009.06) or Solaris
10 (5/09) to use this?
Or do I have to wait for a new stable release of Solaris 10 / OpenSolaris?
For OpenSolaris, you change your repository and switch to the
development branc
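The rough sequence is something like the sketch below; the dev repository URL
is from memory, so double-check it before running anything:

  # point the opensolaris.org publisher at the dev repository (URL assumed)
  pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
  # update the image to the latest development build, then boot into the new BE
  pkg image-update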
Okay, nice to hear ZFS can now use dedup.
But how can I "update" my current OpenSolaris (2009.06) or Solaris 10 (5/09) to
use this?
Or do I have to wait for a new stable release of Solaris 10 / OpenSolaris?
--
Daniel
On Mon, Nov 2, 2009 at 2:16 PM, Nicolas Williams
wrote:
> On Mon, Nov 02, 2009 at 11:01:34AM -0800, Jeremy Kitchen wrote:
>> forgive my ignorance, but what's the advantage of this new dedup over
>> the existing compression option? Wouldn't full-filesystem compression
>> naturally de-dupe?
>
> If
On Mon, Nov 02, 2009 at 11:01:34AM -0800, Jeremy Kitchen wrote:
> forgive my ignorance, but what's the advantage of this new dedup over
> the existing compression option? Wouldn't full-filesystem compression
> naturally de-dupe?
If you snapshot/clone as you go, then yes, dedup will do little
Jeremy Kitchen wrote:
On Nov 2, 2009, at 9:07 AM, Victor Latushkin wrote:
Enda O'Connor wrote:
It works at a pool-wide level with the ability to exclude at a
dataset level, or the converse: if set to off at the top-level dataset,
you can then set lower-level datasets to on, i.e. one can include and
ex
On Sat, 31 Oct 2009, Al Hopper wrote:
> Kudos to you - nice technical analysis and presentation. Keep lobbying
> your point of view - I think interoperability should win out if it comes
> down to an arbitrary decision.
Thanks; but so far that doesn't look promising. Right now I've got a cron
job
> "hj" == Henrik Johansson writes:
hj> A "überquota" property for the whole pool would have been nice
hj> [to get out-of-space errors instead of fragmentation]
Just make an empty filesystem with a reservation. That's what I do.
NAME                USED
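Roughly, as a sketch (the dataset name and size are placeholders):

  # carve out space no other dataset can consume; destroy or shrink the
  # reservation later if you ever need the room back in an emergency
  zfs create -o reservation=10G tank/slop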
On Thu, 29 Oct 2009 casper@sun.com wrote:
> Do you have the complete NFS trace output? My reading of the source code
> says that the file will be created with the proper gid, so I actually
> believe that the client "over-corrects" the attributes after creating
> the file/directory.
Just
>forgive my ignorance, but what's the advantage of this new dedup over
>the existing compression option?
It may provide another space-saving advantage. Depending on your data, the
savings can be very significant.
>Wouldn't full-filesystem compression
>naturally de-dupe?
No. Compression doesn't
On Mon, Nov 2, 2009 at 9:01 PM, Jeremy Kitchen
wrote:
>
> forgive my ignorance, but what's the advantage of this new dedup over the
> existing compression option? Wouldn't full-filesystem compression naturally
> de-dupe?
No, the compression works on the block level. If there are two
identical bl
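A quick way to see the difference on a test dataset, as a sketch (names are
placeholders, and it assumes a build with dedup, i.e. snv_128 or later):

  # compression squeezes each block on its own; dedup collapses identical
  # blocks across files, which compression alone cannot do
  zfs create -o compression=on -o dedup=on tank/test
  cp bigfile /tank/test/copy1
  cp bigfile /tank/test/copy2
  zpool get dedupratio tank    # should approach 2.00x for the two copies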
On Nov 2, 2009, at 9:07 AM, Victor Latushkin wrote:
Enda O'Connor wrote:
It works at a pool-wide level with the ability to exclude at a
dataset level, or the converse: if set to off at the top-level dataset,
you can then set lower-level datasets to on, i.e. one can include and
exclude depending on t
On Mon, Nov 02, 2009 at 12:58:32PM -0500, Dennis Clarke wrote:
> Looking at FIPS-180-3 in sections 4.1.2 and 4.1.3 I was thinking that the
> major leap from SHA256 to SHA512 was a 32-bit to 64-bit step.
ZFS doesn't have enough room in blkptr_t for 512-bit hashes.
Nico
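For anyone worried about collisions at 256 bits, the dedup property can
reportedly be combined with verification so matching blocks are also compared
byte-for-byte before being shared; a sketch, assuming the property values
described in the case materials and Jeff's blog:

  # use sha256 for dedup and verify candidate blocks byte-for-byte
  zfs set dedup=sha256,verify tank
  # 'verify' on its own is described as shorthand for the same thing
  zfs set dedup=verify tank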
On Mon, Nov 2, 2009 at 11:58 AM, Dennis Clarke wrote:
>
>>> Terrific! Can't wait to read the man pages / blogs about how to use
>>> it...
>>
>> Just posted one:
>>
>> http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup
>>
>> Enjoy, and let me know if you have any questions or suggestions for
>> fol
>> Terrific! Can't wait to read the man pages / blogs about how to use
>> it...
>
> Just posted one:
>
> http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup
>
> Enjoy, and let me know if you have any questions or suggestions for
> follow-on posts.
Looking at FIPS-180-3 in sections 4.1.2 and 4.1.3
Matthias Appel wrote:
> I am using 2x Gbit Ethernet and 4 Gig of RAM,
> 4 Gig of RAM for the iRAM should be more than sufficient (0.5 times RAM and
> 10s worth of IO)
>
> I am aware that this RAM is non-ECC so I plan to mirror the ZIL device.
>
> Any considerations for this setup? Will it work a
Enda O'Connor wrote:
It works at a pool-wide level with the ability to exclude at a dataset
level, or the converse: if set to off at the top-level dataset, you can then
set lower-level datasets to on, i.e. one can include and exclude depending on
the dataset's contents.
So largefile will get deduped in t
Ok, thanks everyone then (but still thanks to Victor for the heads up) :-)
On Mon, Nov 2, 2009 at 4:03 PM, Victor Latushkin
wrote:
> On 02.11.09 18:38, Ross wrote:
>>
>> Double WOHOO! Thanks Victor!
>
> Thanks should go to Tim Haley, Jeff Bonwick and George Wilson ;-)
>
On 02.11.09 18:38, Ross wrote:
Double WOHOO! Thanks Victor!
Thanks should go to Tim Haley, Jeff Bonwick and George Wilson ;-)
Double WOHOO! Thanks Victor!
This is truly awesome news!
What's the best way to dedup existing datasets? Will send/recv work, or
do we just cp things around?
Regards,
Tristan
Jeff Bonwick wrote:
Terrific! Can't wait to read the man pages / blogs about how to use it...
Just posted one:
http://blogs.sun.com/bon
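If send/recv does turn out to be the way to go, a rough sketch of the
rewrite-into-a-new-dataset approach (dataset names are placeholders, and it
assumes dedup is already enabled on the receiving side):

  zfs set dedup=on tank
  zfs snapshot tank/olddata@migrate
  # receiving rewrites every block, so the new copies pass through the dedup code
  zfs send tank/olddata@migrate | zfs receive tank/newdata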
On Mon, Nov 2, 2009 at 9:41 AM, Enda O'Connor wrote:
> It works at a pool-wide level with the ability to exclude at a dataset
> level, or the converse: if set to off at the top-level dataset, you can then
> set lower-level datasets to on, i.e. one can include and exclude depending on
> the dataset's contents.
It works at a pool-wide level with the ability to exclude at a dataset
level, or the converse: if set to off at the top-level dataset, you can then
set lower-level datasets to on, i.e. one can include and exclude depending on
the dataset's contents.
So largefile will get deduped in the example below.
End
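In command form, the include/exclude pattern described above would look
something like this sketch (pool and dataset names are placeholders):

  # dedup everything except one scratch dataset
  zfs set dedup=on tank
  zfs set dedup=off tank/scratch
  # or the converse: off at the top, on for selected children
  zfs set dedup=off tank
  zfs set dedup=on tank/dir1
  zfs get -r dedup tank    # shows what each dataset set or inherited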
On Mon, Nov 2, 2009 at 7:20 AM, Jeff Bonwick wrote:
>> Terrific! Can't wait to read the man pages / blogs about how to use it...
>
> Just posted one:
>
> http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup
>
> Enjoy, and let me know if you have any questions or suggestions for
> follow-on posts.
>
Does dedup work at the pool level or the filesystem/dataset level?
For example, if I were to do this:
bash-3.2$ mkfile 100m /tmp/largefile
bash-3.2$ zfs set dedup=off tank
bash-3.2$ zfs set dedup=on tank/dir1
bash-3.2$ zfs set dedup=on tank/dir2
bash-3.2$ zfs set dedup=on tank/dir3
bash-3.2$ cp /t
> Terrific! Can't wait to read the man pages / blogs about how to use it...
Just posted one:
http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup
Enjoy, and let me know if you have any questions or suggestions for
follow-on posts.
Jeff
I have the same card and might have seen the same problem. Yesterday I upgraded
to b126 and started to migrate all my data to an 8-disk raidz2 connected to such
a card. And suddenly ZFS reported checksum errors. I thought the drives were
faulty. But you suggest the problem could have been the drive
Hey,
On Sun, Nov 1, 2009 at 8:48 PM, Donald Murray, P.Eng.
wrote:
> Hi,
>
> I may have lost my first zpool, due to ... well, we're not yet sure.
> The 'zpool import tank' causes a panic -- one which I'm not even
> able to capture via savecore.
>
Looks like I've found the root cause. When I disco
Hey,
On Sat, Oct 31, 2009 at 5:03 PM, Victor Latushkin
wrote:
> Donald Murray, P.Eng. wrote:
>>
>> Hi,
>>
>> I've got an OpenSolaris 2009.06 box that will reliably panic whenever
>> I try to import one of my pools. What's the best practice for
>> recovering (before I resort to nuking the pool an
David Magda wrote:
Deduplication was committed last night by Mr. Bonwick:
Log message:
PSARC 2009/571 ZFS Deduplication Properties
6677093 zfs should have dedup capability
http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html
And "PSARC 2009/479 zpool recovery suppor
On Mon, Nov 2, 2009 at 2:25 PM, Alex Lam S.L. wrote:
> Terrific! Can't wait to read the man pages / blogs about how to use it...
Alex,
you may wish to check PSARC 2009/571 materials [1] for a sneak preview :)
[1] http://arc.opensolaris.org/caselog/PSARC/2009/571/
>
> Alex.
>
> On Mon, Nov 2, 2
Why didn't one of the developers from green-bytes do the commit? :P
Terrific! Can't wait to read the man pages / blogs about how to use it...
Alex.
On Mon, Nov 2, 2009 at 12:21 PM, David Magda wrote:
> Deduplication was committed last night by Mr. Bonwick:
>
>> Log message:
>> PSARC 2009/571 ZFS Deduplication Properties
>> 6677093 zfs should have dedup capability
Deduplication was committed last night by Mr. Bonwick:
Log message:
PSARC 2009/571 ZFS Deduplication Properties
6677093 zfs should have dedup capability
http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html
Via c0t0d0s0.org.
Donald Murray, P.Eng. wrote:
What steps are _you_ taking to protect _your_ pools?
Replication and tape backup.
How are you protecting your enterprise data?
Replication and tape backup.
How often are you losing an entire pool and restoring from backups?
Never (since I started using ZFS in