[zfs-discuss] tuning zfs_arc_min

2011-10-06 Thread Frank Van Damme
Hello, quick and stupid question: I'm breaking my head over how to tune zfs_arc_min on a running system. There must be some magic word to pipe into mdb -kw but I forgot it. I tried /etc/system but it's still at the old value after reboot: ZFS Tunables (/etc/system): set zfs:zfs_arc_min =
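Two sketches of what the thread converges on (the 0x10000000 value is illustrative only, not from the thread). The /etc/system line is the supported route; the mdb write follows the arc_meta_limit pattern quoted later in this digest, but note that zfs_arc_min is normally consumed once at boot, so poking the tunable on a live system may not move the ARC's internal arc_c_min at all:

  # /etc/system -- takes effect at the next boot (see the
  # boot-archive caveat later in this thread):
  set zfs:zfs_arc_min = 0x10000000

  # live system, as root -- /Z writes a 64-bit value:
  echo "zfs_arc_min/Z 0x10000000" | mdb -kw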

Re: [zfs-discuss] ZFS

2011-10-06 Thread Frank Van Damme
io, possibly? -- Frank Van Damme

Re: [zfs-discuss] tuning zfs_arc_min

2011-10-10 Thread Frank Van Damme
2011/10/8 James Litchfield : > The value of zfs_arc_min specified in /etc/system must be over 64MB (0x4000000). > Otherwise the setting is ignored. The value is in bytes, not pages. Well, I've now set it to 0x800 and it stubbornly stays at 2048 MB... -- Frank Van Damme

Re: [zfs-discuss] tuning zfs_arc_min

2011-10-11 Thread Frank Van Damme
cially for a storage server. Can you explain your reasoning? Honestly? I don't remember. It might be a "leftover" setting from a year ago. By now, I figured out I need to "update the boot archive" in order for the new setting to take effect at boot time, which apparently in
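The missing step, for the record; bootadm is the standard tool on OpenSolaris-era systems (run as root):

  # /etc/system is part of the boot archive, so after editing it:
  bootadm update-archive
  reboot

Without this, the kernel keeps reading the stale copy of /etc/system baked into the old archive, which is exactly the "still at the old value after reboot" symptom that opened the thread.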

Re: [zfs-discuss] tuning zfs_arc_min

2011-10-12 Thread Frank Van Damme
On 12-10-11 02:27, Richard Elling wrote: On Oct 11, 2011, at 2:03 PM, Frank Van Damme wrote: Honestly? I don't remember. Might be a "leftover" setting from a year ago. By now, I figured out I need to "update the boot archive" in order for the new setting to have

[zfs-discuss] deduplication: l2arc size

2010-08-23 Thread Frank Van Damme
a total of 13,027,407 entries, meaning it's 6,670,032,384 bytes big. So suppose our data grows by a factor of 12; it will then take 80 GB. So it would be best to buy a 128 GB SSD as L2ARC cache. Correct? Thanks for enlightening me, -- Frank Van Damme
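Spelling out the arithmetic (the 512 bytes/entry figure is implied by the post's own numbers; the actual in-core DDT entry size varies by release):

  13,027,407 entries x 512 bytes/entry = 6,670,032,384 bytes (~6.2 GiB)
  ~6.2 GiB x 12 (expected data growth) = ~80 GiB
  => a 128 GB SSD leaves roughly 60% headroom for the L2ARC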

[zfs-discuss] very slow boot: stuck at mounting zfs filesystems

2010-12-08 Thread Frank Van Damme
oblem may have anything to do with dedup? -- Frank Van Damme

Re: [zfs-discuss] very slow boot: stuck at mounting zfs filesystems

2010-12-09 Thread Frank Van Damme
somewhat of a workaround for this, although I've not seen comprehensive figures for the gain it gives - http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6977913 -- Frank Van Damme

Re: [zfs-discuss] very slow boot: stuck at mounting zfs filesystems

2010-12-09 Thread Frank Van Damme
s with the wrong object size) that would cause other components to hang, waiting for memory allocations. This was so bad in earlier kernels that systems would become unresponsive for a potentially very long time (a phenomenon known as "bricking"). As I

Re: [zfs-discuss] ZFS ... open source moving forward?

2010-12-11 Thread Frank Van Damme
> Solaris 11 is released, there's really not much point in debating it. And if they don't, it will be sad, both in terms of useful code not being available to a wide community to review and amend, and in terms of Oracle not really getting the point about open source development. -- Frank Van Damme

[zfs-discuss] gaining speed with l2arc

2011-05-03 Thread Frank Van Damme
data). Bad idea, or would it even help to set primarycache=metadata too, to not let RAM fill up with file data? P.S. the system is: NexentaOS_134f (I'm looking into newer OpenSolaris variants with bugs fixed/better performance, too). -- Frank Van Damme
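The properties under discussion are per-dataset; a sketch with a hypothetical pool/dataset name:

  # keep only metadata in the L2ARC for this dataset:
  zfs set secondarycache=metadata tank/backup

  # and, as the post wonders, the same for the in-RAM ARC:
  zfs set primarycache=metadata tank/backup

Note that primarycache=metadata means file data is never cached in RAM for that dataset, which can hurt read performance badly; it is a trade-off, not a free win.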

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-06 Thread Frank Van Damme
On 06-05-11 05:44, Richard Elling wrote: > As the size of the data grows, the need to have the whole DDT in RAM or L2ARC decreases. With one notable exception: destroying a dataset or snapshot requires the DDT entries for the destroyed blocks to be updated. This is why people can go for

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-09 Thread Frank Van Damme
On 08-05-11 17:20, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org On Behalf Of Edward Ned Harvey >> But I'll go tune and test with this knowledge, just to be sure. > BTW, here's how to tune it: > echo "arc_meta_limit

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-09 Thread Frank Van Damme
On 09-05-11 14:36, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org On Behalf Of Edward Ned Harvey >> So now I'll change meta_max and see if it helps... > Oh, you know what? Never mind. > I just looked at the source, and i

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-09 Thread Frank Van Damme
On 09-05-11 15:42, Edward Ned Harvey wrote: >> in my previous post my arc_meta_used was bigger than my arc_meta_limit (by about 50%) > I have the same thing. But as I sit here and run more and more extensive tests on it ... it seems like arc_meta_limit is sort of a soft limit. Or it

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-10 Thread Frank Van Damme
On 09-05-11 15:42, Edward Ned Harvey wrote: >> in my previous post my arc_meta_used was bigger than my arc_meta_limit (by about 50%) > I have the same thing. But as I sit here and run more and more extensive tests on it ... it seems like arc_meta_limit is sort of a soft limit. Or it

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-11 Thread Frank Van Damme
On 10-05-11 06:56, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org On Behalf Of Edward Ned Harvey >> BTW, here's how to tune it: >> echo "arc_meta_limit/Z 0x3000" | sudo mdb -kw >> echo "::arc" | sudo mdb -k | gre
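A hedged reconstruction of the truncated read-back command (the grep pattern after "gre" is a guess, and the 0x3000... value above is cut off in the archive):

  echo "::arc" | sudo mdb -k | grep meta

On builds of that era the ::arc dcmd reports arc_meta_used, arc_meta_limit and arc_meta_max, so this verifies whether the /Z write actually took.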

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-19 Thread Frank Van Damme
On 03-05-11 17:55, Brandon High wrote: > -H: Hard links If you're going to do this for 2 TB of data, remember to expand your swap space first (or have tons of memory). Rsync will need it to store every inode number in the directory.
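The flag under discussion, in context (paths are hypothetical):

  # -a: archive mode; -H: preserve hard links -- the hard-link
  # tracking is what makes rsync hold every inode number in memory
  rsync -aH /ufs/data/ /tank/data/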

Re: [zfs-discuss] Solaris vs FreeBSD question

2011-05-20 Thread Frank Van Damme
On 20-05-11 01:17, Chris Forgeron wrote: > I ended up switching back to FreeBSD after using Solaris for some time because I was getting tired of weird pool corruptions and the like. Did you ever manage to recover the data you blogged about on Sunday, February 6, 2011?

Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Frank Van Damme
On 24-05-11 22:58, LaoTsao wrote: > With various forks of an open-source project, e.g. ZFS, OpenSolaris, OpenIndiana etc., they are all different. > They are not guaranteed to be compatible. I hope at least they'll try. Just in case I want to import/export zpools between Nexenta and OpenIndiana?
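Moving a pool between distributions is the standard export/import dance; the usual caveat is pool version (a pool upgraded on one system may be too new for the other). A sketch with a hypothetical pool name:

  # on the Nexenta box:
  zpool export tank

  # on the OpenIndiana box, after moving the disks over:
  zpool import tank

  # check which pool versions each system supports:
  zpool upgrade -v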

Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Frank Van Damme
On 25-05-11 14:27, joerg.moellenk...@sun.com wrote: > Well, at first ZFS development is no standard body and at the end everything has to be measured in compatibility to the Oracle ZFS implementation. Why? Given that ZFS is Solaris ZFS just as well as Nexenta ZFS just as well as illumos ZFS,

Re: [zfs-discuss] DDT sync?

2011-05-27 Thread Frank Van Damme
On 26-05-11 13:38, Edward Ned Harvey wrote: > Perhaps a property could be set, which would store the DDT exclusively on that device. Oh yes please, let me put my DDT on an SSD. But what if you lose it (the vdev), would there be a way to reconstruct the DDT (which you need to be able to delet

Re: [zfs-discuss] optimal layout for 8x 1 TByte SATA (consumer)

2011-05-27 Thread Frank Van Damme
> These are all 7200.11 Seagates, refurbished. I'd scrub once a week, that'd probably suck on raidz2, too? > Thanks. Sequential? Let's suppose no spares. 4 mirrors of 2 = sustained bandwidth of 4 disks. raidz2 with 8 disks = sustained bandwidth of 6 disks. So :)
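The two layouts being compared, as pool commands (disk names are hypothetical):

  # 4 x 2-way mirror: data stripes over 4 vdevs
  zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 \
                    mirror c0t4d0 c0t5d0 mirror c0t6d0 c0t7d0

  # 8-disk raidz2: one vdev, 6 data disks + 2 parity
  zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0

For streaming bandwidth the raidz2 wins on paper (6 data spindles vs 4), but for random IOPS the four mirrors win, since each vdev services roughly one random read at a time.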

Re: [zfs-discuss] offline dedup

2011-05-27 Thread Frank Van Damme
isable dedup, the system won't bother checking to see if there are duplicate blocks anymore. So the DDT won't need to be in arc+l2arc. I should say "shouldn't." Except when deleting deduped blocks. -- Frank Van Damme

Re: [zfs-discuss] DDT sync?

2011-06-01 Thread Frank Van Damme
ARC size on this box tends to drop far below arc_min after a few days, notwithstanding the fact it's supposed to be a hard limit. I call for an arc_data_max setting :) -- Frank Van Damme

Re: [zfs-discuss] zpool import hangs any zfs-related programs, eats all RAM and dies in swapping hell

2011-06-14 Thread Frank Van Damme
2011/6/10 Tim Cook : > While your memory may be sufficient, that CPU is sorely lacking. Is it even 64-bit? There's a reason Intel couldn't give those things away in the early 2000s and AMD was eating their lunch. A Pentium 4 is 32-bit. -- Frank Van Damme

Re: [zfs-discuss] question about COW and snapshots

2011-06-16 Thread Frank Van Damme
On 15-06-11 05:56, Richard Elling wrote: > You can even have applications like databases make snapshots when they want. Makes me think of a backup utility called mylvmbackup, which is written with Linux in mind - basically it locks MySQL tables, takes an LVM snapshot and releases the lock (and
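The same idea maps directly onto ZFS; a minimal sketch (the dataset name is hypothetical, and this assumes the mysql client's SYSTEM command, which runs a shell command from within the same session that holds the lock):

  mysql -e "FLUSH TABLES WITH READ LOCK; SYSTEM zfs snapshot tank/mysql@backup; UNLOCK TABLES;"

The point of doing it in one session is that the read lock is still held while the snapshot is taken, so the snapshot is consistent; the lock drops the moment the session ends.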

Re: [zfs-discuss] question about COW and snapshots

2011-06-16 Thread Frank Van Damme
On 15-06-11 14:30, Simon Walter wrote: > Anyone know how Google Docs does it? Anyone from Google on the list? :-) Seriously, this is the kind of feature to be found in Serious CMS applications, like, as already mentioned, Alfresco.

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-14 Thread Frank Van Damme
On 12-07-11 13:40, Jim Klimov wrote: > Even if I batch background RMs so a hundred processes hang and then they all at once complete in a minute or two. Hmmm. I only run one rm process at a time. You think running more processes at the same time would be faster?

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-14 Thread Frank Van Damme
On 14-07-11 12:28, Jim Klimov wrote: > Yes, quite often it seems so. > Whenever my slow "dcpool" decides to accept a write, it processes a hundred pending deletions instead of one ;) > Even so, it took quite a few pool or iscsi hangs and then reboots of both server and client, and about

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-14 Thread Frank Van Damme
On 15-07-11 04:27, Edward Ned Harvey wrote: > Is anyone from Oracle reading this? I understand if you can't say what you're working on and stuff like that. But I am merely hopeful this work isn't going into a black hole... > Anyway. Thanks for listening (I hope.) ttyl If they aren'

Re: [zfs-discuss] Zil on multiple usb keys

2011-07-18 Thread Frank Van Damme
iding cheap storage). -- Frank Van Damme

Re: [zfs-discuss] zfs send/receive and ashift

2011-07-26 Thread Frank Van Damme
On 26-07-11 12:56, Fred Liu wrote: > Any alternatives, if you don't mind? ;-) VPNs, openssl piped over netcat, a password-protected zip file... ;) ssh would be the most practical, probably.
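The practical option spelled out, since the thread is about zfs send/receive (host and dataset names are hypothetical):

  # plain ssh transport:
  zfs send tank/data@snap1 | ssh backuphost zfs receive backuppool/data

  # one of the joked-about alternatives -- raw netcat, unencrypted,
  # only sensible on a trusted LAN (flag syntax varies by nc variant):
  #   receiver: nc -l 9999 | zfs receive backuppool/data
  #   sender:   zfs send tank/data@snap1 | nc backuphost 9999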