Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-20 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> New problem:
>
> I'm following all the advice I summarized into the OP of this thread, and testing on a test system. (A laptop). And it's just not working. I am ju

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-18 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> New problem:
>
> I'm following all the advice I summarized into the OP of this thread, and testing on a test system. (A laptop). And it's just not working. I am ju

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-11 Thread Frank Van Damme
On 10-05-11 06:56, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> BTW, here's how to tune it:
>>
>> echo "arc_meta_limit/Z 0x3000" | sudo mdb -kw
>>
>> echo "::arc" | sudo mdb -k | gre

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-10 Thread Frank Van Damme
On 09-05-11 15:42, Edward Ned Harvey wrote:
>>> in my previous post my arc_meta_used was bigger than my arc_meta_limit (by about 50%)
> I have the same thing. But as I sit here and run more and more extensive tests on it ... it seems like arc_meta_limit is sort of a soft limit. Or it

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-09 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> BTW, here's how to tune it:
>
> echo "arc_meta_limit/Z 0x3000" | sudo mdb -kw
>
> echo "::arc" | sudo mdb -k | grep meta_limit
> arc_meta_limit= 76

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-09 Thread Frank Van Damme
On 09-05-11 15:42, Edward Ned Harvey wrote:
>>> in my previous post my arc_meta_used was bigger than my arc_meta_limit (by about 50%)
> I have the same thing. But as I sit here and run more and more extensive tests on it ... it seems like arc_meta_limit is sort of a soft limit. Or it

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-09 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Frank Van Damme
>
> in my previous post my arc_meta_used was bigger than my arc_meta_limit (by about 50%)

I have the same thing. But as I sit here and run more and more extensive tests on i

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-09 Thread Frank Van Damme
On 09-05-11 14:36, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> So now I'll change meta_max and see if it helps...
>
> Oh, you know what? Never mind.
> I just looked at the source, and i

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-09 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> So now I'll change meta_max and see if it helps...

Oh, you know what? Never mind. I just looked at the source, and it seems arc_meta_max is just a gauge for you to use, so

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-09 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Frank Van Damme
>
> Otoh, I struggle to see the difference between arc_meta_limit and arc_meta_max.

Thanks for pointing this out. When I changed meta_limit and re-ran the test, there was no

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-09 Thread Frank Van Damme
On 08-05-11 17:20, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> But I'll go tune and test with this knowledge, just to be sure.
>
> BTW, here's how to tune it:
>
> echo "arc_meta_limit

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-09 Thread Garrett D'Amore
Just another data point. The DDT is considered metadata, and by default the ARC will not allow more than 1/4 of it to be used for metadata. Are you still sure it fits?

Erik Trimble wrote:
> On 5/7/2011 6:47 AM, Edward Ned Harvey wrote:
>>> See below. Right around 400,000 blocks, dedup is su

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Richard Elling
On May 8, 2011, at 7:56 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> That could certainly start to explain why my arc size arcstats:c never grew to any size I thought seemed reaso

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Neil Perrin
On 05/08/11 09:22, Andrew Gabriel wrote:
Toby Thain wrote:
On 08/05/11 10:31 AM, Edward Ned Harvey wrote:
...
Incidentally, do fsync() and sync return instantly or wait? Cuz "time sync" might produce 0 sec every time even if there were something waiting to be flushed to disk.
The

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Andrew Gabriel
Toby Thain wrote:
On 08/05/11 10:31 AM, Edward Ned Harvey wrote:
...
Incidentally, do fsync() and sync return instantly or wait? Cuz "time sync" might produce 0 sec every time even if there were something waiting to be flushed to disk.
The semantics need to be synchronous. Anything

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> But I'll go tune and test with this knowledge, just to be sure.

BTW, here's how to tune it:

echo "arc_meta_limit/Z 0x3000" | sudo mdb -kw

echo "::arc" | sudo mdb -k
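For reference, a slightly expanded version of that tuning sequence, as a sketch: the 0x30000000 value (768 MB) is only an example, since the value in the message above is truncated, and the mdb write does not survive a reboot.

    # Show the current metadata numbers (limit, used, max watermark)
    echo "::arc" | sudo mdb -k | grep meta

    # Raise arc_meta_limit to 768 MB (0x30000000 bytes); /Z writes an
    # 8-byte value at the named kernel symbol.  Not reboot-persistent.
    echo "arc_meta_limit/Z 0x30000000" | sudo mdb -kw

    # Verify the new value
    echo "::arc" | sudo mdb -k | grep meta_limit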

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Edward Ned Harvey
> From: Garrett D'Amore [mailto:garr...@nexenta.com]
>
> It is tunable, I don't remember the exact tunable name... Arc_metadata_limit or some such.

There it is:

echo "::arc" | sudo mdb -k | grep meta_limit
arc_meta_limit = 286 MB

Looking at my chart earlier in this discussion,
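That 286 MB is consistent with the old defaults, assuming arc_c_max = MAX(3/4 of physmem, physmem - 1 GB) and arc_meta_limit = arc_c_max / 4 (an assumption about this particular build, not something stated in the thread). A quick check against the 1.5 GB laptop described later, in POSIX shell arithmetic:

    physmem_mb=1536                     # 1.5 GB of RAM
    c_max_mb=$(( physmem_mb * 3 / 4 ))  # 1152 MB (3/4 beats physmem-1GB here)
    meta_limit_mb=$(( c_max_mb / 4 ))   # 288 MB -- close to the observed 286 MB
    echo "c_max=${c_max_mb}MB meta_limit=${meta_limit_mb}MB"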

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Garrett D'Amore
It is tunable, I don't remember the exact tunable name... Arc_metadata_limit or some such. -- Garrett D'Amore

On May 8, 2011, at 7:37 AM, "Edward Ned Harvey" wrote:
>> From: Garrett D'Amore [mailto:garr...@nexenta.com]
>>
>> Just another data point. The ddt is considered metadata, and by

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> That could certainly start to explain why my arc size arcstats:c never grew to any size I thought seemed reasonable...

Also now that I'm looking closer at arcstats, it

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Toby Thain
On 08/05/11 10:31 AM, Edward Ned Harvey wrote:
> ...
> Incidentally, do fsync() and sync return instantly or wait? Cuz "time sync" might produce 0 sec every time even if there were something waiting to be flushed to disk.

The semantics need to be synchronous. Anything else would be a horribl

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Toby Thain
On 06/05/11 9:17 PM, Erik Trimble wrote:
> On 5/6/2011 5:46 PM, Richard Elling wrote:
>> ...
>> Yes, perhaps a bit longer for recursive destruction, but everyone here knows recursion is evil, right? :-)
>> -- richard
> You, my friend, have obviously never worshipped at the Temple of the Lam

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Edward Ned Harvey
> From: Garrett D'Amore [mailto:garr...@nexenta.com]
>
> Just another data point. The ddt is considered metadata, and by default the arc will not allow more than 1/4 of it to be used for metadata. Are you still sure it fits?

That's interesting. Is it tunable? That could certainly star

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Edward Ned Harvey
> From: Erik Trimble [mailto:erik.trim...@oracle.com]
>
> (1) I'm assuming you run your script repeatedly in the same pool, without deleting the pool. If that is the case, that means that a run of X+1 should dedup completely with the run of X. E.g. a run with 12 blocks will dedup the fi

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-07 Thread Erik Trimble
On 5/7/2011 6:47 AM, Edward Ned Harvey wrote:
See below. Right around 400,000 blocks, dedup is suddenly an order of magnitude slower than without dedup.

40  10.7sec  136.7sec  143 MB  195 MB
80  21.0sec  465.6sec  287 MB  391

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-07 Thread Edward Ned Harvey
> See below. Right around 400,000 blocks, dedup is suddenly an order of magnitude slower than without dedup.
>
> 40  10.7sec  136.7sec  143 MB  195 MB
> 80  21.0sec  465.6sec  287 MB  391 MB

The interesting thing is
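For scale, a back-of-envelope estimate of the in-core DDT at the point where the knee appears, assuming roughly 376 bytes per DDT entry (a commonly quoted figure from this era, not a value measured in this thread):

    blocks=400000
    bytes_per_entry=376    # assumed in-core DDT entry size
    echo "$(( blocks * bytes_per_entry / 1024 / 1024 )) MB of DDT"
    # -> ~143 MB, already half of the ~286 MB default arc_meta_limit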

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-07 Thread Edward Ned Harvey
New problem: I'm following all the advice I summarized into the OP of this thread, and testing on a test system. (A laptop). And it's just not working. I am jumping into the dedup performance abyss far, far earlier than predicted... My test system is a laptop with 1.5G RAM, c_min = 150M, c_max
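The test script itself is not preserved in this preview; a minimal sketch of the kind of test being described (pool, device, dataset, and file names are hypothetical, as is the 8K block geometry):

    # Create a dedup-enabled dataset and fill it with N unique 8K blocks,
    # so every block costs one DDT entry, then time the write.
    zpool create testpool c0t1d0
    zfs create -o dedup=on -o recordsize=8k testpool/dedup
    time dd if=/dev/urandom of=/testpool/dedup/unique.dat bs=8k count=400000
    sync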

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-06 Thread Erik Trimble
On 5/6/2011 5:46 PM, Richard Elling wrote:
On May 6, 2011, at 3:24 AM, Erik Trimble wrote:
Casper and Richard are correct - RAM starvation seriously impacts snapshot or dataset deletion when a pool has dedup enabled. The reason behind this is that ZFS needs to scan the entire DDT to check t

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-06 Thread Richard Elling
On May 6, 2011, at 3:24 AM, Erik Trimble wrote:
> On 5/6/2011 1:37 AM, casper@oracle.com wrote:
>>> On 06-05-11 05:44, Richard Elling wrote:
>>>> As the size of the data grows, the need to have the whole DDT in RAM or L2ARC decreases. With one notable exception, destroying a data

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-06 Thread Yaverot
One of the quoted participants is Richard Elling, the other is Edward Ned Harvey, but my quoting was screwed up enough that I don't know which is which. Apologies.

>>> zdb -DD poolname
>> This just gives you the -S output, and the -D output all in one go. So I
> Sorry, zdb -DD only works f

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-06 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
>> zdb -DD poolname
> This just gives you the -S output, and the -D output all in one go. So I

Sorry, zdb -DD only works for pools that are already dedup'd. If you wa
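The distinction being drawn, as a short sketch (the pool name is a placeholder):

    # Simulate dedup on a pool that is NOT deduped yet: walks the pool and
    # prints the DDT histogram you would get, plus the projected ratio.
    zdb -S tank

    # Report the actual DDT of a pool that already contains dedup'd data;
    # on a never-deduped pool there is no DDT to report.
    zdb -DD tank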

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-06 Thread Edward Ned Harvey
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
>> --- To calculate size of DDT ---
>> zdb -S poolname

Look at total blocks allocated. It is rounded, and uses a suffix like "K, M, G", but it's in decimal (powers of 10) notation, so you have to remember that... So
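Turning that block count into a RAM estimate, as a sketch: the 376-byte per-entry figure is an assumption carried over from contemporary discussions, and per the note above the K/M/G suffixes are decimal.

    total_blocks=1080000    # e.g. "1.08M" from the zdb -S totals line
    bytes_per_entry=376     # assumed in-core DDT entry size
    echo "DDT core estimate: $(( total_blocks * bytes_per_entry / 1024 / 1024 )) MB"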

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-06 Thread Tomas Ögren
On 06 May, 2011 - Erik Trimble sent me these 1,8K bytes:

> If dedup isn't enabled, snapshot and data deletion is very light on RAM requirements, and generally won't need to do much (if any) disk I/O. Such deletion should take milliseconds to a minute or so.

.. or hours. We've had problem

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-06 Thread Erik Trimble
On 5/6/2011 1:37 AM, casper@oracle.com wrote:
> On 06-05-11 05:44, Richard Elling wrote:
>> As the size of the data grows, the need to have the whole DDT in RAM or L2ARC decreases. With one notable exception, destroying a dataset or snapshot requires the DDT entries for the destroyed blocks to

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-06 Thread Casper . Dik
> On 06-05-11 05:44, Richard Elling wrote:
>> As the size of the data grows, the need to have the whole DDT in RAM or L2ARC decreases. With one notable exception, destroying a dataset or snapshot requires the DDT entries for the destroyed blocks to be updated. This is why people can

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-06 Thread Frank Van Damme
On 06-05-11 05:44, Richard Elling wrote:
> As the size of the data grows, the need to have the whole DDT in RAM or L2ARC decreases. With one notable exception, destroying a dataset or snapshot requires the DDT entries for the destroyed blocks to be updated. This is why people can go for

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-05 Thread Richard Elling
On May 4, 2011, at 7:56 PM, Edward Ned Harvey wrote:
> This is a summary of a much longer discussion "Dedup and L2ARC memory requirements (again)"
>
> Sorry, even this summary is long. But the results vary enormously based on individual usage, so any "rule of thumb" metric that has been bouncing

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-05 Thread Edward Ned Harvey
> From: Karl Wagner [mailto:k...@mouse-hole.com]
>
> so there's an ARC entry referencing each individual DDT entry in the L2ARC?! I had made the assumption that DDT entries would be grouped into at least minimum block-sized groups (8k?), which would have led to a much more reasonable ARC re
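Karl's point, worked through numerically, assuming one in-RAM ARC header of roughly 176 bytes per L2ARC-resident buffer (both figures here are illustrative assumptions, not values given in this message):

    entries=10000000    # 10M unique blocks, i.e. 10M DDT entries in L2ARC
    hdr_bytes=176       # assumed in-RAM ARC header per L2ARC entry
    echo "RAM just for headers: $(( entries * hdr_bytes / 1024 / 1024 )) MB"
    # -> ~1678 MB of RAM consumed merely to reference the DDT from L2ARC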

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-05 Thread Karl Wagner
So there's an ARC entry referencing each individual DDT entry in the L2ARC?! I had made the assumption that DDT entries would be grouped into at least minimum block-sized groups (8k?), which would have led to a much more reasonable ARC requirement. Seems like a bad design to me, which leads to

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-05 Thread Edward Ned Harvey
> From: Erik Trimble [mailto:erik.trim...@oracle.com]
>
> Using the standard c_max value of 80%, remember that this is 80% of the TOTAL system RAM, including that RAM normally dedicated to other purposes. So long as the total amount of RAM you expect to dedicate to ARC usage (for all ZFS us

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-04 Thread Erik Trimble
Good summary, Ned. A couple of minor corrections.

On 5/4/2011 7:56 PM, Edward Ned Harvey wrote:
This is a summary of a much longer discussion "Dedup and L2ARC memory requirements (again)". Sorry, even this summary is long. But the results vary enormously based on individual usage, so any "rule o

[zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-04 Thread Edward Ned Harvey
This is a summary of a much longer discussion, "Dedup and L2ARC memory requirements (again)". Sorry, even this summary is long. But the results vary enormously based on individual usage, so any "rule of thumb" metric that has been bouncing around on the internet is simply not sufficient. You need to
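Condensed into a sketch, the measurement-first workflow the summary argues for (the pool name is a placeholder, and the per-entry byte count is an assumption discussed earlier in the thread):

    # 1. Measure, don't guess: simulate dedup and note total blocks allocated.
    zdb -S tank

    # 2. Estimate the in-core DDT: total_blocks * bytes_per_entry.

    # 3. Confirm it fits under arc_meta_limit (and leaves room for any
    #    L2ARC header overhead), raising the limit with mdb if necessary.
    echo "::arc" | sudo mdb -k | grep meta_limit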