On 5/6/2011 5:46 PM, Richard Elling wrote:
On May 6, 2011, at 3:24 AM, Erik Trimble wrote:
Casper and Richard are correct - RAM starvation seriously impacts snapshot or
dataset deletion when a pool has dedup enabled. The reason behind this is that
ZFS needs to scan the entire DDT to check and update the entries for the blocks being freed.
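One quick way to tell whether a slow destroy is grinding on DDT metadata is to watch the ARC's metadata usage against its limit while the destroy runs. A minimal sketch, assuming a Solaris/illumos-style system where the arcstats kstats are exposed under zfs:0:arcstats (the statistic names are the commonly seen ones, so verify them on your build):

  # Print ARC metadata usage and its limit, refreshing every 5 seconds
  kstat -p zfs:0:arcstats:arc_meta_used zfs:0:arcstats:arc_meta_limit 5

If arc_meta_used stays pinned at arc_meta_limit while the destroy crawls, the DDT is probably not fitting in cache.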
On May 6, 2011, at 3:24 AM, Erik Trimble wrote:
> On 5/6/2011 1:37 AM, casper@oracle.com wrote:
>>> On 06-05-11 05:44, Richard Elling wrote:
As the size of the data grows, the need to have the whole DDT in RAM or L2ARC
decreases. With one notable exception, destroying a dataset or snapshot requires
the DDT entries for the destroyed blocks to be updated.
One of the quoted participants is Richard Elling, the other is Edward Ned
Harvey, but my quoting was screwed up enough that I don't know which is which.
Apologies.
>> >zdb -DD poolname
>> This just gives you the -S output, and the -D output all in one go. So I
>Sorry, zdb -DD only works for pools that are already dedup'd.
On Fri, May 6, 2011 at 9:15 AM, Ray Van Dolson wrote:
> We use dedupe on our VMware datastores and typically see 50% savings,
> oftentimes more. We do, of course, keep "like" VMs on the same volume
I think NetApp uses 4k blocks by default, so the block size and
alignment should match up for mos
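On the ZFS side, the analogous knob is the dataset recordsize: identical guest blocks only dedup if they land in identically sized and aligned ZFS blocks. A minimal sketch, with a made-up dataset name, keeping in mind that a small recordsize multiplies the number of DDT entries (and hence the RAM needed):

  # Hypothetical dataset for a VM datastore; 4K records so identical guest
  # blocks line up for dedup, at the cost of a much larger DDT
  zfs create -o recordsize=4k -o dedup=on tank/vmds1
  zfs get recordsize,dedup tank/vmds1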
Hi Rich,
With the Ultra 20M2 there is a very cheap/easy alternative
that might work for you (until you need to expand past 2
more external devices anyway)
Pick up an eSATA PCI bracket cable adapter, something like this:
http://www.newegg.com/Product/Product.aspx?Item=N82E16812226003&cm_re=eSATA-
On Wed, May 04, 2011 at 08:49:03PM -0700, Edward Ned Harvey wrote:
> > From: Tim Cook [mailto:t...@cook.ms]
> >
> > That's patently false. VM images are the absolute best use-case for dedup
> > outside of backup workloads. I'm not sure who told you/where you got the
> > idea that VM images are n
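Whatever the theory, the realized savings are easy to check on a live pool; a quick sketch (the pool name is a placeholder):

  # Realized dedup ratio across the whole pool
  zpool get dedupratio tank
  # The default 'zpool list' output also shows a DEDUP column
  zpool list tank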
Sounds like a nasty bug, and not one I've seen in illumos or
NexentaStor. What build are you running?
- Garrett
On Wed, 2011-05-04 at 15:40 -0700, Adam Serediuk wrote:
> Dedup is disabled (confirmed to be). Doing some digging, it looks like
> this is a very similar issue
> to http://forum
Hi all,
I'm looking at replacing my old D1000 array with some new external drives,
most likely these: http://www.g-technology.com/products/g-drive.cfm . In
the immediate term, I'm planning to use USB 2.0 connections, but the drive
I'm considering also supports eSATA, which is MUCH faster than USB
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> > zdb -DD poolname
> This just gives you the -S output, and the -D output all in one go. So I
Sorry, zdb -DD only works for pools that are already dedup'd.
If you want to calculate the DDT size for a pool that isn't dedup'd yet, use zdb -S poolname instead.
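Put differently, one command reports on an existing DDT and the other simulates one from the data already in the pool; a short sketch with a placeholder pool name:

  # Pool that already has dedup'd data: dump DDT statistics and histogram
  zdb -DD tank
  # Pool without dedup: simulate the DDT the existing data would produce
  zdb -S tank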
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> > --- To calculate size of DDT ---
> zdb -S poolname
Look at total blocks allocated. It is rounded, and uses a suffix like "K,
M, G" but it's in decimal (powers of 10) notation, so you have to remember
that... So
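As a worked example of that conversion (the block count and per-entry size below are assumptions, not output from a real pool; per-entry in-core figures in the 150-400 byte range are commonly quoted, and 320 is used purely for illustration; the arithmetic needs a shell with 64-bit integers such as ksh93 or bash):

  # Suppose zdb -S reports total allocated blocks as "25.3M".
  # The suffix is decimal, so that is 25.3 x 10^6 = 25,300,000 unique blocks.
  echo $(( 25300000 * 320 ))              # ~8.1e9 bytes of in-core DDT
  echo $(( 25300000 * 320 / 1073741824 )) # ~7 GiB (integer-truncated)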
On 05/ 5/11 10:02 PM, Joerg Schilling wrote:
Ian Collins wrote:
*ufsrestore works fine on ZFS filesystems (although I haven't tried it
with any POSIX ACLs on the original ufs filesystem, which would probably
simply get lost).
star -copy -no-fsync is typically 30% faster than ufsdump | ufsrestore.
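For reference, the two idioms being compared look roughly like this; the paths and UFS device are placeholders, and the star invocation follows its copy mode as I understand it, so check star(1) before relying on the exact options:

  # Copy a UFS tree into a ZFS filesystem with star (paths are examples)
  star -copy -p -no-fsync -C /export/home . /tank/home
  # The ufsdump | ufsrestore pipeline it is being compared against
  ufsdump 0f - /dev/rdsk/c0t0d0s7 | ( cd /tank/home && ufsrestore rf - )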
On 06 May, 2011 - Erik Trimble sent me these 1,8K bytes:
> If dedup isn't enabled, snapshot and data deletion is very light on RAM
> requirements, and generally won't need to do much (if any) disk I/O.
> Such deletion should take milliseconds to a minute or so.
.. or hours. We've had problem
On 5/6/2011 1:37 AM, casper@oracle.com wrote:
On 06-05-11 05:44, Richard Elling wrote:
As the size of the data grows, the need to have the whole DDT in RAM or L2ARC
decreases. With one notable exception, destroying a dataset or snapshot requires
the DDT entries for the destroyed blocks to be updated.
>On 06-05-11 05:44, Richard Elling wrote:
>> As the size of the data grows, the need to have the whole DDT in RAM or L2ARC
>> decreases. With one notable exception, destroying a dataset or snapshot requires
>> the DDT entries for the destroyed blocks to be updated. This is why people can
On 06-05-11 05:44, Richard Elling wrote:
> As the size of the data grows, the need to have the whole DDT in RAM or L2ARC
> decreases. With one notable exception, destroying a dataset or snapshot requires
> the DDT entries for the destroyed blocks to be updated. This is why people can go for
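Given that, the usual mitigations are more RAM or an L2ARC device large enough to keep the DDT warm; a minimal sketch with placeholder pool and device names:

  # Add an SSD as an L2ARC cache device so DDT entries can stay cached
  zpool add tank cache c7t0d0
  zpool status tank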