For de-duplication to perform well, you need to be able to fit the dedup table
in memory. Is a good rule of thumb for the needed RAM: size = (pool capacity /
average block size) * 270 bytes? Or perhaps it's that size divided by the
expected dedup ratio?
And if you limit dedup to certain datasets in the pool, how would this
calc
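As a worked example of the first rule of thumb (the 10 TiB pool, 64 KiB average block size, and 270-byte entry footprint below are illustrative assumptions, not figures from this thread):

```shell
# Estimate DDT RAM for a hypothetical 10 TiB pool with 64 KiB average blocks.
pool_bytes=$((10 * 1024 * 1024 * 1024 * 1024))
avg_block_bytes=$((64 * 1024))
entry_bytes=270                      # rough per-entry DDT footprint
entries=$((pool_bytes / avg_block_bytes))
echo "entries: $entries"
echo "DDT RAM: $((entries * entry_bytes / 1024 / 1024)) MiB"
```

That comes to 167,772,160 entries and roughly 43,200 MiB (~42 GiB). On the second question: the table stores one entry per unique block, so dividing the block count by the expected dedup ratio gives the tighter estimate; and if only some datasets are deduped, it is their capacity, not the whole pool's, that belongs in the numerator.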
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Freddie Cash
>
> The following works well:
> dd if=/dev/random of=/dev/disk-node bs=1M count=1 seek=whatever
>
> If you have long enough cables, you can move a disk outside the case
> and ru
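The same dd technique can be rehearsed safely against an ordinary file standing in for the disk (a sketch; /tmp/fake-disk is a placeholder path, and /dev/urandom is used so it also runs on Linux). One caveat worth noting: when the target is a regular file rather than a device node, conv=notrunc is needed, or dd truncates the file at the end of the write.

```shell
# Create a 16 MiB scratch file standing in for a disk device.
dd if=/dev/zero of=/tmp/fake-disk bs=1M count=16 2>/dev/null
# Overwrite 1 MiB of pseudo-random garbage at a 4 MiB offset,
# mimicking the corruption-injection command quoted above.
# conv=notrunc keeps the rest of the file intact.
dd if=/dev/urandom of=/tmp/fake-disk bs=1M count=1 seek=4 conv=notrunc 2>/dev/null
```

Against a real pool member, a subsequent scrub should then surface checksum errors in zpool status.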
Freddie,
Thank you very much for your help.
Regards,
Peter
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi Cindy,
thanks for clarifying that. Basically, the problem seems to lie within the
Netatalk afpd, which is what I use for our Mac clients.
For some reason, putting a new file or folder on a Netatalk-ZFS share doesn't
pull in the ACEs that the new object should inherit from its parent.
I have a
Thanks, I'm going to do that. I'm just worried about corrupting my data, or
other problems. I wanted to make sure there is nothing I really should be
careful with.
How do you know it is dedup causing the problem?
You can check how much time is being spent in dedup by looking at the threads
(look for ddt):
mdb -k
::threadlist -v
or dtrace it, e.g.:
dtrace -n 'fbt:zfs:ddt*:entry { @[probefunc] = count(); }'
You can disable dedup. I believe current dedup data stays until it gets
overwritten. I'm not sure what send w
"Can I disable dedup on the dataset while the transfer is going on?"
Yes. Only the blocks copied after disabling dedupe will not be deduped. The
stuff you have already copied will be deduped.
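That property change is a one-liner (a sketch; tank/dst is a hypothetical dataset name, not one from this thread):

```shell
# Stop deduplicating new writes to this dataset only; already-written
# blocks keep their dedup-table entries until freed or overwritten.
# "tank/dst" is a placeholder dataset name.
zfs set dedup=off tank/dst
# Confirm the property took effect.
zfs get dedup tank/dst
```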
"Can I simply Ctrl-C the procress to stop it?"
Yes, you can do that to a mv process.
Maybe stop the pr
Hello again!
2010/9/24 Gary Mills :
> On Fri, Sep 24, 2010 at 12:01:35AM +0200, Alexander Skwar wrote:
>> Yes. I was rather thinking about RAIDZ instead of mirroring.
>
> I was just using a simpler example.
Understood. Like I just wrote, we're actually now going
to use mirroring, so that I've go
ok - it makes sense.
Thanks !
Axelle.
Hello.
2010/9/24 Marty Scholes :
> ZFS will ensure integrity, even when the underlying device fumbles.
Yes.
> When you mirror the iSCSI devices, be sure that they are configured
> in such a way that a failure on one iSCSI "device" does not imply a
> failure on the other iSCSI device.
Very good
Hi all
I'm currently moving a fairly big dataset (~2TB) within the same zpool. Data is
being moved from one dataset to another, which has dedup enabled.
The transfer started at a fairly slow speed, maybe 12MB/s, but it is now
crawling to a near halt. Only 800GB has been moved in 48 hours
Due to a storage outage, some LUNs disappeared and were replaced by spares. When
the original LUNs came back, the customer ran "zpool clear zpool05" (instead of
zpool replace), which resulted in ZFS clearing all the errors on the original
LUN and placing it ONLINE. Now both the spare and the original device are
On 09/24/10 11:26, Peter Taps wrote:
Folks,
One of the zpool properties that is reported is "dedupditto." However, there is
no documentation available, either in man pages or anywhere else on the Internet. What
exactly is this property?
Thank you in advance for your help.
Regards,
Peter
On Fri, Sep 24, 2010 at 10:33 AM, Peter Taps wrote:
> Command "zpool status" reports disk status that includes read errors, write
> errors, and checksum errors. These values have always been 0 in our test
> environment. Is there any tool out there that can corrupt the state? At the
> very least
Folks,
Command "zpool status" reports disk status that includes read errors, write
errors, and checksum errors. These values have always been 0 in our test
environment. Is there any tool out there that can corrupt the state? At the
very least, we should be able to write to the disk directly and
Folks,
One of the zpool properties that is reported is "dedupditto." However, there is
no documentation available, either in man pages or anywhere else on the
Internet. What exactly is this property?
Thank you in advance for your help.
Regards,
Peter
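For what it's worth, dedupditto is the reference-count threshold at which ZFS stores an extra (ditto) copy of a deduplicated block, as insurance against losing a block that many files share. A sketch of setting it (tank is a placeholder pool name):

```shell
# Keep an additional physical copy of any deduped block once it is
# referenced more than 100 times. "tank" is a hypothetical pool name.
zpool set dedupditto=100 tank
zpool get dedupditto tank
```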
Hi Stephan,
Yes, the aclmode property was removed, but we're not sure how
this change is impacting your users.
Can you provide their existing ACL information and we'll take
a look.
Thanks,
Cindy
On 09/24/10 01:41, Stephan Budach wrote:
Hi,
I recently installed oi147 and I noticed that the p
Alexander Skwar wrote:
> Okay. This contradicts the ZFS Best Practices Guide,
> which states:
>
> # For production environments, configure ZFS so that
> # it can repair data inconsistencies. Use ZFS
> redundancy,
> # such as RAIDZ, RAIDZ-2, RAIDZ-3, mirror, or copies > 1,
> # regardless of the R
On Fri, Sep 24, 2010 at 12:01:35AM +0200, Alexander Skwar wrote:
> >
> > Suppose they gave you two huge lumps of storage from the SAN, and you
> > mirrored them with ZFS. What would you do if ZFS reported that one of
> > its two disks had failed and needed to be replaced? You can't do disk
> > ma
Axelle Apvrille wrote:
Hi all,
I would like to add a new partition to my ZFS pool but it looks like it's
trickier than expected.
The layout of my disk is the following:
- first partition for Windows. I want to keep it. (No formatting!)
- second partition for OpenSolaris.This is where I hav
Up :)
I still haven't found a way to do that. Is it impossible because this
partition is outside my Solaris slices? Isn't there a way to use the space
anyway?
Regards
Axelle.
On 9/24/2010 6:27 AM, Frank Middleton wrote:
On 09/23/10 19:08, Peter Jeremy wrote:
The downsides are generally that it'll be slower and less power-
efficient than a current generation server, and the I/O interfaces will
also be last generation (so you are more likely to be stuck with
parall
On 09/23/10 19:08, Peter Jeremy wrote:
The downsides are generally that it'll be slower and less power-
efficient than a current generation server, and the I/O interfaces will
also be last generation (so you are more likely to be stuck with
parallel SCSI and PCI or PCIx rather than SAS/SATA an
Hi,
I recently installed oi147 and I noticed that the property aclmode is no longer
present and has been nuked from my volumes when I imported a pool that had
previously been hosted on an OSol 134 system.
Anybody know if that's a bug, or has aclmode been removed on purpose?
Seems that my Macs ha