On Tue, Jul 20, 2010 at 9:48 AM, Hernan Freschi wrote:
> Is there a way to see which files are using dedup? Or should I just
> copy everything to a new ZFS?
Using 'zfs send' to copy the datasets will work and preserve other
metadata that copying will lose.
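A sketch of that approach, with hypothetical pool/dataset names ('tank/data'); receiving the stream rewrites every block, so with dedup switched off the copy lands un-deduped:

```shell
# Sketch only -- adjust names to your pool. Run as root.
zfs set dedup=off tank/data                       # new writes are no longer deduped
zfs snapshot tank/data@undedup
zfs send tank/data@undedup | zfs receive tank/data.new
# After verifying the copy, replace the original:
# zfs destroy -r tank/data && zfs rename tank/data.new tank/data
```

Note that the deduped blocks are only released from the DDT once the original dataset and all of its snapshots are destroyed.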
-B
--
Brandon High : bh...@freaks.co
Hi,
(quoting an earlier message:)
Is there a way to see which files have been deduped, so I can copy them again
and un-dedupe them?
Unfortunately, that's not easy (I've tried it :) ).
The issue is that the dedup table (which knows which blocks have been deduped)
doesn't know about files.
And if you pull block pointers fo
On Tue, Jul 20, 2010 at 1:40 PM, Ulrich Graef wrote:
> When you are writing to a file and currently dedup is enabled, then the
> Data is entered into the dedup table of the pool.
> (There is one dedup table per pool not per zfs).
>
> Switching off the dedup does not change this data.
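That per-pool scope is visible in the properties themselves (dataset and pool names below are placeholders):

```shell
zfs get dedup tank/data      # dedup is switched on/off per dataset
zpool get dedupratio tank    # but the DDT and its ratio are pool-wide
```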
Yes, I suppo
Hi,
Hernan Freschi wrote:
Hi, thanks for answering,
How large is your ARC / your main memory?
Probably too small to hold all metadata (1/1000 of the data amount).
=> metadata has to be read again and again
Main memory is 8GB. ARC (arcsz as reported by arcstat.pl) usually stays at 5-7GB.
It took about 20-30 mins to delete the files.
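A back-of-the-envelope check of why the DDT may not fit in an ARC that size (the ~320 bytes per DDT entry and the 8 TiB pool are illustrative assumptions, not your actual numbers):

```shell
# One DDT entry per unique 128 KiB block (the default recordsize),
# at roughly 320 bytes per entry.
data_bytes=$((8 * 1024 * 1024 * 1024 * 1024))    # 8 TiB of unique data
entries=$((data_bytes / (128 * 1024)))            # 67,108,864 entries
ddt_bytes=$((entries * 320))
echo "DDT: $((ddt_bytes / 1024 / 1024 / 1024)) GiB"   # 20 GiB -- well over 8 GB of RAM
```

Once the table spills out of the ARC, every block free on delete can mean an extra random read, which matches multi-minute deletes.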
Is there a way to see which files have been deduped, so I can copy them again
and un-dedupe them?
Thanks,
Hernan
--
This message posted from opensolaris.org
) I'm getting?
>
> so far nothing surprising...
>
>
> Regards,
>
> Ulrich
>
>
>
> - Original Message -
> From: drge...@gmail.com
> To: zfs-discuss@opensolaris.org
> Sent: Monday, July 19, 2010 5:14:03 PM GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
Subject: [zfs-discuss] Deleting large amounts of files
Hello,
I think this is the second time this has happened to me. A couple of years ago, I
deleted a big (500G) zvol and then the machine started to hang some 20 minutes
later (out of memory); even rebooting didn't help. But wit
If these files are deduped, and there is not a lot of RAM on the machine, it
can take a long, long time to work through the dedupe portion. I don't know
enough to know if that is what you are experiencing, but it could be the
problem.
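One way to gauge whether the DDT is the bottleneck is to ask zdb how big the table actually is (the pool name 'tank' is a placeholder):

```shell
zdb -D tank     # DDT summary: entry counts and in-core/on-disk size per entry
zdb -DD tank    # more detail: histogram of reference counts
```

If the reported in-core size exceeds what the ARC can hold, deletes of deduped data will be dominated by DDT reads.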
How much RAM do you have?
Scott
Hello,
I think this is the second time this has happened to me. A couple of years ago, I
deleted a big (500G) zvol and then the machine started to hang some 20 minutes
later (out of memory); even rebooting didn't help. But with the great support
from Victor Latushkin, who on a weekend helped me debug t