I just went through a BFU update to snv_127 on a V880 :
neptune console login: root
Password:
Nov 3 08:19:12 neptune login: ROOT LOGIN /dev/console
Last login: Mon Nov 2 16:40:36 on console
Sun Microsystems Inc. SunOS 5.11 snv_127 Nov. 02, 2009
SunOS Internal Development: root 2009-Nov-0
Hi,
Let's take a look:
# zpool list
NAME    SIZE   USED   AVAIL  CAP  DEDUP   HEALTH  ALTROOT
rpool   68G    13.9G  54.1G  20%  42.27x  ONLINE  -
# zfs get all rpool/export/data
NAME               PROPERTY  VALUE  SOURCE
rpool/export/data  type
On 2-Nov-09, at 3:16 PM, Nicolas Williams wrote:
On Mon, Nov 02, 2009 at 11:01:34AM -0800, Jeremy Kitchen wrote:
forgive my ignorance, but what's the advantage of this new dedup over
the existing compression option? Wouldn't full-filesystem compression
naturally de-dupe?
...
There are man
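For reference, compression and dedup are independent, per-dataset properties and
can be enabled together: compression shrinks each block, while dedup collapses
identical blocks. Something like this (pool name "tank" made up):
# zfs set compression=on tank
# zfs set dedup=on tank
# zfs get compression,dedup tank     (verify both properties are now "on")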
Dennis Clarke wrote:
I just went through a BFU update to snv_127 on a V880 :
neptune console login: root
Password:
Nov 3 08:19:12 neptune login: ROOT LOGIN /dev/console
Last login: Mon Nov 2 16:40:36 on console
Sun Microsystems Inc. SunOS 5.11 snv_127 Nov. 02, 2009
SunOS Internal Develo
Hi,
is it possible to link multiple machines into one storage pool using zfs?
Tristan Ball wrote:
I'm curious as to how send/recv intersects with dedupe... if I send/recv
a deduped filesystem, is the data sent in its de-duped form, i.e. just
sent once, followed by the pointers for subsequent dupe data, or is the
data sent in expanded form, with the recv side system
Miha Voncina wrote:
Hi,
is it possible to link multiple machines into one storage pool using zfs?
Depends what you mean by this.
Multiple machines cannot import the same ZFS pool at the same time;
doing so *will* cause corruption, and ZFS tries hard to protect against
multiple imports.
Ho
> Dennis Clarke wrote:
>> I just went through a BFU update to snv_127 on a V880 :
>>
>> neptune console login: root
>> Password:
>> Nov 3 08:19:12 neptune login: ROOT LOGIN /dev/console
>> Last login: Mon Nov 2 16:40:36 on console
>> Sun Microsystems Inc. SunOS 5.11 snv_127 Nov. 02, 2009
> So.. it seems that data is deduplicated, zpool has
> 54.1G of free space, but I can use only 40M.
>
> It's x86, ONNV revision 10924, debug build, bfu'ed from b125.
I think I'm observing the same (with changeset 10936) ...
I created a 2GB file, and a "tank" zpool on top of that file,
with compr
> I think I'm observing the same (with changeset 10936) ...
# mkfile 2g /var/tmp/tank.img
# zpool create tank /var/tmp/tank.img
# zfs set dedup=on tank
# zfs create tank/foobar
> dd if=/dev/urandom of=/tank/foobar/file1 bs=1024k count=512
512+0 records in
512+0 records out
On 11/2/2009 9:23 PM, Marion Hakanson wrote:
Could it be that c12t1d0 was at some time in the past (either in this
machine or another machine) known as c3t11d0, and was part of a pool
called "dbzpool"?
Quite possibly, but certainly not this host's dbzpool.
You'll need to give the same "dd" t
On Nov 2, 2009, at 2:38 PM, "Paul B. Henson" wrote:
On Sat, 31 Oct 2009, Al Hopper wrote:
Kudos to you - nice technical analysis and presentation. Keep lobbying
your point of view - I think interoperability should win out if it comes
down to an arbitrary decision.
Thanks; but so far tha
I was under the impression that you can create a new zfs dataset and turn on
the dedup functionality, and copy your data to it. Or am I wrong?
Alex,
You can download the man page source files from this URL:
http://dlc.sun.com/osol/man/downloads/current/
If you want a different version, you can navigate to the available
source consolidations from the Downloads page on opensolaris.org.
Thanks,
Cindy
On 11/02/09 16:39, Cindy Swearinge
Good morning all...
Great work on the De-Dupe stuff. Can't wait to try it out, but a quick question
about iSCSI and De-Dupe: will it work? If I share out a ZVOL to another
machine and copy some similar files to it (thinking VMs), will they get
de-duplicated?
Thanks.
--
Tiernan O'Toole
blog.lotas-sm
Orvar Korvar wrote:
I was under the impression that you can create a new zfs dataset and turn on
the dedup functionality, and copy your data to it. Or am I wrong?
You don't even have to create a new dataset, just do:
# zfs set dedup=on
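(It takes a dataset argument and is inherited by descendants; something like,
with a made-up dataset name:)
# zfs set dedup=on rpool/export
# zfs get -r dedup rpool/export      (children inherit the setting)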
--
Darren J Moffat
Tiernan OToole wrote:
Good morning all...
Great work on the De-Dupe stuff. cant wait to try it out. but quick
question about iSCSI and De-Dupe. will it work? if i share out a ZVOL to
another machine and copy some simular files to it (thinking VMs) will
they get de-duplicated?
It works but h
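For illustration only, a sketch of the setup being asked about, assuming COMSTAR
is used to export the zvol (pool name, size and target details made up). Dedup
works on the blocks underneath the zvol, so identical blocks written by the
initiator are candidates for deduplication like any other writes:
# zfs set dedup=on tank                       (inherited by the zvol below)
# zfs create -V 20g tank/vm1                  (the zvol to be exported)
# sbdadm create-lu /dev/zvol/rdsk/tank/vm1    (register it as a COMSTAR logical unit)
# stmfadm add-view <GUID reported by sbdadm>  (expose the LU to initiators)
# itadm create-target                         (create an iSCSI target to log in to)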
On Tue, 3 Nov 2009, Ross Walker wrote:
> Maybe this isn't an interoperability fix, but a security fix as it allows
> non-Sun clients to bypass security restrictions placed on a sgid
> protected directory tree because it doesn't properly test the existence
> of that bit upon file creation.
>
> If a
Hi Darren,
More below...
Darren J Moffat wrote:
Tristan Ball wrote:
Obviously sending it deduped is more efficient in terms of bandwidth
and CPU time on the recv side, but it may also be more complicated to
achieve?
A stream can be deduped even if the on disk format isn't and vice versa.
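As a sketch of that distinction on the command line (dataset names made up, and
assuming the -D option for deduplicated send streams is available in the build in
use), the two are enabled independently:
# zfs set dedup=on tank/fs                          (dedup on disk)
# zfs send -D tank/fs@snap | zfs recv backup/fs     (dedup in the replication stream)
Either one can be used without the other.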
We see the same issue on an x4540 Thor system with 500G disks:
lots of:
...
Nov 3 16:41:46 uva.nl scsi: [ID 107833 kern.warning] WARNING:
/p...@3c,0/pci10de,3...@f/pci1000,1...@0 (mpt5):
Nov 3 16:41:46 encore.science.uva.nl Disconnected command timeout for Target 7
...
This system is run
Kyle McDonald wrote:
Hi Darren,
More below...
Darren J Moffat wrote:
Tristan Ball wrote:
Obviously sending it deduped is more efficient in terms of bandwidth
and CPU time on the recv side, but it may also be more complicated
to achieve?
A stream can be deduped even if the on disk format i
On Nov 3, 2009, at 6:01 AM, Jürgen Keil wrote:
I think I'm observing the same (with changeset 10936) ...
# mkfile 2g /var/tmp/tank.img
# zpool create tank /var/tmp/tank.img
# zfs set dedup=on tank
# zfs create tank/foobar
This has to do with the fact that dedup space accounting
On Mon, November 2, 2009 20:23, Marion Hakanson wrote:
> You'll need to give the same "dd" treatment to the end of the disk as
> well;
> ZFS puts copies of its labels at the beginning and at the end.
Does anybody else see this as rather troubling? Obviously it's dangerous
to get in the habit of
Hi Eric and all,
Eric Schrock wrote:
On Nov 3, 2009, at 6:01 AM, Jürgen Keil wrote:
I think I'm observing the same (with changeset 10936) ...
# mkfile 2g /var/tmp/tank.img
# zpool create tank /var/tmp/tank.img
# zfs set dedup=on tank
# zfs create tank/foobar
This has to do wit
Hi,
It looks like an interesting problem.
Would it help if, as ZFS detects dedup blocks, it started increasing the
effective size of the pool?
That would create an anomaly with respect to total disk space, but it would
still be accurate from each file system's usage point of view.
Basically, dedup is at block level,
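One way to see the two views side by side (pool name made up): zpool list reports
physical allocation after dedup, while zfs list charges each dataset the full
logical space it references:
# zpool list tank         (SIZE/USED/AVAIL: physical, post-dedup)
# zfs list -r tank        (USED/REFER: logical, charged per dataset)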
Well, then you could have more "logical space" than "physical space", and that
would be extremely cool, but what happens if for some reason you wanted to turn
off dedup on one of the filesystems? It might exhaust all the pool's space to
do this. I think a good idea would be another pool's/filesyst
Hi David,
This RFE is filed for this feature:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6893282
Allow the zpool command to wipe labels from disks
Cindy
On 11/03/09 09:00, David Dyer-Bennet wrote:
On Mon, November 2, 2009 20:23, Marion Hakanson wrote:
You'll need to give th
Cyril Plisko wrote:
I think I'm observing the same (with changeset 10936) ...
# mkfile 2g /var/tmp/tank.img
# zpool create tank /var/tmp/tank.img
# zfs set dedup=on tank
# zfs create tank/foobar
This has to do with the fact that dedup space accounting is charged to all
f
On Mon, Nov 2, 2009 at 6:34 AM, Orvar Korvar wrote:
> I have the same card and might have seen the same problem. Yesterday I
> upgraded to b126 and started to migrate all my data to 8 disc raidz2
> connected to such a card. And suddenly ZFS reported checksum errors. I
> thought the drives were fa
On Tue, November 3, 2009 10:32, Bartlomiej Pelc wrote:
> Well, then you could have more "logical space" than "physical space", and
> that would be extremely cool, but what happens if for some reason you
> wanted to turn off dedup on one of the filesystems? It might exhaust all
> the pool's space t
Hello
A customer recently had a power outage. Prior to the outage, they did a
graceful shutdown of their system.
On power-up, the system is not coming up due to zfs errors as follows:
cannot mount 'rpool/export': Number of symbolic links encountered during
path name traversal exceeds MAXSYMLINKS
On 11/3/2009 3:49 PM, Marion Hakanson wrote:
If the disk is going to be part of whole-disk zpool, I like to make
sure there is not an old VTOC-style partition table on there. That
can be done either via some "format -e" commands, or with "fdisk -E",
to put an EFI label on there.
unfortunately
On Nov 3, 2009, at 12:24 PM, Cyril Plisko wrote:
I think I'm observing the same (with changeset 10936) ...
# mkfile 2g /var/tmp/tank.img
# zpool create tank /var/tmp/tank.img
# zfs set dedup=on tank
# zfs create tank/foobar
This has to do with the fact that dedup space accounting is
>I said:
>> You'll need to give the same "dd" treatment to the end of the disk as well;
>> ZFS puts copies of its labels at the beginning and at the end. Oh, and
zfs...@jeremykister.com said:
> I'm not sure what you mean here - I thought p0 was the entire disk in x86 -
> and s2 was the whole disk
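For illustration, a sketch of clearing the label areas by hand. ZFS keeps two 256K
labels at the front of the device and two at the end, so zeroing roughly the first
and last megabyte covers all four (device name and sector count below are made up;
get the real size from format or prtvtoc):
# DISK=/dev/rdsk/c3t11d0p0
# SECTORS=286739329                            (total 512-byte sectors on the device)
# dd if=/dev/zero of=$DISK bs=512 count=2048   (front labels)
# dd if=/dev/zero of=$DISK bs=512 seek=`expr $SECTORS - 2048` count=2048   (tail labels)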
Hi Cyril,
But: Isn't there an implicit expectation for a space guarantee associated
with a dataset? In other words, if a dataset has 1GB of data, isn't it
natural to expect to be able to overwrite that space with other data? One
I'd say that expectation is not [always] valid. Assume you have a
Kyle McDonald wrote:
Hi Darren,
More below...
Darren J Moffat wrote:
Tristan Ball wrote:
Obviously sending it deduped is more efficient in terms of bandwidth
and CPU time on the recv side, but it may also be more complicated to
achieve?
A stream can be deduped even if the on disk format
Trevor Pretty wrote:
Darren J Moffat wrote:
Orvar Korvar wrote:
I was under the impression that you can create a new zfs dataset and turn on
the dedup functionality, and copy your data to it. Or am I wrong?
you don't even have to create a new dataset just do:
# zfs set dedup=on
Green-bytes is publicly selling their hardware and dedup solution
today. From the feedback of others, and from testing by someone on our
team, we've found the quality of the initial putback to be buggy and not
even close to production ready. (That's fine since nobody has stated it
was production
Well, then you could have more "logical space" than "physical space"
Reconsidering my own question again, it seems to me that the question of space
management is probably more fundamental than I had initially thought, and I
assume members of the core team will have thought through much of it.
> No point in trying to preserve a naive mental model that
simply can't stand up to reality.
I kind of dislike the idea of talking about naivety here.
Being able to give guarantees (in this case: reserve space) can be vital for
running critical business applications. Think about the analogy i
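For what it's worth, the existing way to express such a guarantee is a
reservation; a sketch with made-up names and sizes:
# zfs set reservation=10g tank/db        (guarantee 10G of pool space to this dataset and its children)
# zfs set refreservation=10g tank/db     (guarantee space for the dataset's own data, not counting snapshots)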
> But: Isn't there an implicit expectation for a space guarantee associated
> with a
> dataset? In other words, if a dataset has 1GB of data, isn't it natural to
> expect to be able to overwrite that space with other
> data?
Is there such a space guarantee for compressed or cloned zfs?
On Mon, Nov 2, 2009 at 1:34 PM, Ramin Moazeni wrote:
> Hello
>
> A customer recently had a power outage. Prior to the outage, they did a
> graceful shutdown of their system.
> On power-up, the system is not coming up due to zfs errors as follows:
> cannot mount 'rpool/export': Number of symbolic
We recently found that the ZFS user/group quota accounting for disk usage worked
"opposite" to what we were expecting. I.e., any space saved from compression was
a benefit to the customer, not to us.
(We expected the Google style: Give a customer 2GB quota, and if compression
saves space, that
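For reference, the per-user accounting in question is set and inspected like this
(user and dataset names made up); the charge is based on space as stored on disk,
which is why compression savings accrue to the user:
# zfs set userquota@alice=2g tank/home
# zfs get userused@alice tank/home       (space currently charged to that user)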
Ramin
I don't know but..
Is the error not from mount and it's /export/home that can't be
created?
"mount '/export/home': failed to create mountpoint."
Have you tried mounting 'rpool/export' somewhere else, like /mnt?
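As a sketch of that suggestion (target path made up):
# zfs set mountpoint=/mnt/export rpool/export
# zfs mount rpool/export
# ls /mnt/export                              (inspect, then restore the old mountpoint)
# zfs set mountpoint=/export rpool/export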
Ramin Moazeni wrote:
Hello
A customer recently had a power outag
On Tue, Nov 3, 2009 at 10:54 PM, Nils Goroll wrote:
> Now to the more general question: If all datasets of a pool contained the
> same data and got de-duped, the sums of their "used" space still seems to be
> limited by the "locical" pool size, as we've seen in examples given by
> Jürgen and other
Miha
If you do want multi-reader,
multi-writer block access (and not use iSCSI) then QFS is what you
want.
http://www.sun.com/storage/management_software/data_management/qfs/features.xml
You can use ZFS pools as lumps of disk under SAM-QFS:
https://blogs.communication.utexas.edu/groups/te
On Tue, November 3, 2009 16:36, Nils Goroll wrote:
> > No point in trying to preserve a naive mental model that
>> simply can't stand up to reality.
>
> I kind of dislike the idea to talk about naiveness here.
Maybe it was a poor choice of words; I mean something more along the lines
of "simpli
On 11/ 2/09 07:42 PM, Craig S. Bell wrote:
I just stumbled across a clever visual representation of deduplication:
http://loveallthis.tumblr.com/post/166124704
It's a flowchart of the lyrics to "Hey Jude". =-)
Nothing is compressed, so you can still read all of the words. Instead, all of
th
On Tuesday, November 3, 2009, "C. Bergström" wrote:
> Green-bytes is publicly selling their hardware and dedup solution today.
> From the feedback of others with testing from someone on our team we've found
> the quality of the initial putback to be buggy and not even close to
> production rea
I am a bit of a Solaris newbie. I have a brand spankin' new Solaris 10u8
machine (x4250) that is running an attached J4400 and some internal drives.
We're using multipathed SAS I/O (enabled via stmsboot), so the device
names have been moved from their "normal" c0t5d0 to long strings -
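For reference, stmsboot can list the mapping between the old and new multipathed
device names, which helps when matching disks up with zpool output (assuming the
stock Solaris 10 tool):
# stmsboot -L        (prints non-STMS to STMS device name mappings)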
> Well, then you could have more "logical space" than
> "physical space", and that would be extremely cool,
I think we already have that, with zfs clones.
I often clone a zfs onnv workspace, and everything
is "deduped" between zfs parent snapshot and clone
filesystem. The clone (initially) needs
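A sketch of that clone pattern (names made up): the clone initially shares all of
its blocks with the parent snapshot, so very little new physical space is needed:
# zfs snapshot tank/onnv@base
# zfs clone tank/onnv@base tank/onnv-clone
# zfs list -o name,used,refer tank/onnv tank/onnv-clone    (clone's USED starts near zero)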
I'm fairly new to all this and I think that is the intended behavior.
Also, from my limited understanding, I believe dedup would
significantly cut down on access times.
For the most part, though, this is such new code that I would wait a bit to see
where they take it.
On Tue, Nov 3, 2009 a
>>> I think I'm observing the same (with changeset 10936) ...
>>
>> # mkfile 2g /var/tmp/tank.img
>> # zpool create tank /var/tmp/tank.img
>> # zfs set dedup=on tank
>> # zfs create tank/foobar
>
> This has to do with the fact that dedup space accounting is charged to all
> filesystems, reg
On Tue, November 3, 2009 15:06, Cyril Plisko wrote:
> On Tue, Nov 3, 2009 at 10:54 PM, Nils Goroll wrote:
>> But: Isn't there an implicit expectation for a space guarantee
>> associated
>> with a dataset? In other words, if a dataset has 1GB of data, isn't it
>> natural to expect to be able to o
Hi and hello,
I have a problem that is confusing me. I hope someone can help me with it.
I followed a "best practice" - I think - of using dedicated zfs filesystems for my
virtual machines.
Commands (for completeness):
zfs create rpool/vms
zfs create rpool/vms/vm1
zfs create -V 10G rpool/vms
Hi David,
simply can't stand up to reality.
I kind of dislike the idea to talk about naiveness here.
Maybe it was a poor choice of words; I mean something more along the lines
of "simplistic". The point is, "space" is no longer as simple a concept
as it was 40 years ago. Even without dedupl
Darren J Moffat wrote:
Orvar Korvar wrote:
I was under the impression that you can create a new zfs dataset and turn on the dedup functionality, and copy your data to it. Or am I wrong?
you don't even have to create a new dataset just do:
# zfs set dedup=on
But l
Hello,
I am actually using ZFS under FreeBSD, but maybe someone over here can
help me anyway. I'd like some advice on whether I can still rely on one of
my ZFS pools:
[u...@host ~]$ sudo zpool clear zpool01
...
[u...@host ~]$ sudo zpool scrub zpool01
...
[u...@host ~]$ sudo zpool status -v zpool0
Eric Schrock wrote:
On Nov 3, 2009, at 12:24 PM, Cyril Plisko wrote:
I think I'm observing the same (with changeset 10936) ...
# mkfile 2g /var/tmp/tank.img
# zpool create tank /var/tmp/tank.img
# zfs set dedup=on tank
# zfs create tank/foobar
This has to do with the fact that dedu