Hello,
quick and stupid question: I'm racking my brain over how to tune
zfs_arc_min on a running system. There must be some magic word to pipe
into mdb -kw but I forgot it. I tried /etc/system but it's still at the
old value after reboot:
ZFS Tunables (/etc/system):
set zfs:zfs_arc_min =
io, possibly?
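A sketch of the runtime incantation being asked about, following the same pattern as the arc_meta_limit example further down in these mails; whether the ARC actually re-reads zfs_arc_min after boot is not confirmed here, so treat it as a starting point:

  # poke the tunable in the live kernel; the value is in bytes, 1 GiB here purely as an example
  echo "zfs_arc_min/Z 0x40000000" | mdb -kw
  # check what the ARC currently thinks its floor is
  echo "::arc" | mdb -k | grep -i c_min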
--
Frank Van Damme
No part of this copyright message may be reproduced, read or seen,
dead or alive or by any means, including but not limited to telepathy
without the benevolence of the author.
2011/10/8 James Litchfield :
> The value of zfs_arc_min specified in /etc/system must be over 64MB
> (0x4000000). Otherwise the setting is ignored. The value is in bytes, not pages.
Well, I've now set it to 0x800 and it stubbornly stays at 2048 MB...
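For reference, a sketch of the /etc/system line with the value written out in full bytes; 0x800 on its own would be far below the 64 MB floor, while a 2 GiB floor is 2147483648 bytes:

  # /etc/system -- zfs_arc_min is in bytes, so a 2 GiB floor is 0x80000000 (2147483648)
  set zfs:zfs_arc_min = 0x80000000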
--
Frank Van Damme
> ...cially for a storage server. Can
> you explain your reasoning?
Honestly? I don't remember. Might be a "leftover" setting from a year
ago. By now, I figured out I need to "update the boot archive" in
order for the new setting to have effect at boot time, which apparently
in
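A sketch of that boot-archive step on a stock OpenSolaris/Nexenta layout (verify the bootadm syntax on your release):

  # /etc/system is read from the boot archive, so rebuild it before rebooting
  bootadm update-archive
  reboot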
On 12-10-11 02:27, Richard Elling wrote:
> On Oct 11, 2011, at 2:03 PM, Frank Van Damme wrote:
>> Honestly? I don't remember. Might be a "leftover" setting from a year
>> ago. By now, I figured out I need to "update the boot archive" in
>> order for the new setting to have
a total of 13,027,407 entries, meaning
it's 6,670,032,384 bytes big. So suppose our data grows by a factor of
12, it will take 80 GB. So, it would be best to buy a 128 GB SSD as
L2ARC cache. Correct?
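For reference, the arithmetic above comes down to roughly 512 bytes per DDT entry; a sketch of how to pull the entry count and redo the sums (pool name "tank" is a placeholder):

  # the DDT entry count and its on-disk/in-core sizes come from zdb
  zdb -DD tank | head
  # back-of-the-envelope sizing at ~512 bytes per entry, then grown by a factor of 12
  echo '13027407 * 512' | bc            # 6670032384 bytes, about 6.2 GiB
  echo '6670032384 * 12 / 1024^3' | bc  # about 74 GiB, so a 128 GB SSD leaves headroom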
Thanks for enlightening me,
--
Frank Van Damme
...oblem may have anything to do with dedup?
--
Frank Van Damme
> ...somewhat of a workaround for this, although I've not seen comprehensive
> figures for the gain it gives
> - http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6977913
--
Frank Van Damme
> ...s with the wrong object size) that would cause other
> components to hang, waiting for memory allocations.
>
> This was so bad in earlier kernels that systems would become unresponsive
> for a potentially very long time (a phenomenon known as "bricking").
>
> As I
> Solaris 11 is released, there's really not much point in debating it.
And if they don't, it will be sad, both in terms of useful code not
being available to a wide community to review and amend, and in terms
of Oracle not really getting the point of open source development.
--
Frank Van Damme
...data).
Bad idea, or would it even help to set primarycache=metadata too, to not
let RAM fill up with file data?
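For concreteness, both knobs are per-dataset properties, e.g. (dataset name invented):

  # cache only metadata in RAM (ARC) for this dataset
  zfs set primarycache=metadata tank/backups
  # and, if an L2ARC device is attached, the equivalent knob for it
  zfs set secondarycache=metadata tank/backups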
P.S. the system is: NexentaOS_134f (I'm looking into newer OpenSolaris
variants with bugs fixed/better performance, too).
--
Frank Van Damme
On 06-05-11 05:44, Richard Elling wrote:
> As the size of the data grows, the need to have the whole DDT in RAM or L2ARC
> decreases. With one notable exception: destroying a dataset or snapshot requires
> the DDT entries for the destroyed blocks to be updated. This is why people can
> go for
On 08-05-11 17:20, Edward Ned Harvey wrote:
>>
>> But I'll go tune and test with this knowledge, just to be sure.
>
> BTW, here's how to tune it:
>
> echo "arc_meta_limit/Z 0x3000" | sudo mdb -kw
On 09-05-11 14:36, Edward Ned Harvey wrote:
>>
>> So now I'll change meta_max and
>> see if it helps...
>
> Oh, you know what? Never mind.
> I just looked at the source, and i
On 09-05-11 15:42, Edward Ned Harvey wrote:
>> > in my previous
>> > post my arc_meta_used was bigger than my arc_meta_limit (by about 50%)
> I have the same thing. But as I sit here and run more and more extensive
> tests on it ... it seems like arc_meta_limit is sort of a soft limit. Or it
>
On 10-05-11 06:56, Edward Ned Harvey wrote:
>>
>> BTW, here's how to tune it:
>>
>> echo "arc_meta_limit/Z 0x3000" | sudo mdb -kw
>>
>> echo "::arc" | sudo mdb -k | gre
On 03-05-11 17:55, Brandon High wrote:
> -H: Hard links
If you're going to do this for 2 TB of data, remember to expand your swap
space first (or have tons of memory): rsync will need it to store every
inode number in the directory tree.
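Expanding swap on a ZFS-root box would look roughly like this (size and volume name are only examples):

  # add a second swap zvol and activate it
  zfs create -V 16G rpool/swap2
  swap -a /dev/zvol/dsk/rpool/swap2
  swap -l   # verify the new device is listed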
--
On 20-05-11 01:17, Chris Forgeron wrote:
> I ended up switching back to FreeBSD after using Solaris for some time
> because I was getting tired of weird pool corruptions and the like.
Did you ever manage to recover the data you blogged about on Sunday,
February 6, 2011?
--
On 24-05-11 22:58, LaoTsao wrote:
> With the various forks of the open-source project,
> e.g. ZFS, OpenSolaris, OpenIndiana etc., they are all different.
> There is no guarantee they will be compatible.
I hope at least they'll try. Just in case I want to import/export zpools
between Nexenta and OpenIndiana?
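The one knob that helps with this today is the pool's on-disk version: keep it at whatever the older of the two implementations supports, for example (device names and the version number are placeholders; pick the lower of the two systems' versions):

  # list the pool versions this system understands
  zpool upgrade -v
  # create the pool pinned at an older version so the other OS can still import it
  zpool create -o version=28 tank mirror c0t0d0 c0t1d0
  # the actual move is then just
  zpool export tank
  zpool import tank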
--
On 25-05-11 14:27, joerg.moellenk...@sun.com wrote:
> Well, for a start ZFS development has no standards body, and in the end
> everything has to be measured by compatibility with the Oracle ZFS
> implementation
Why? Given that ZFS is Solaris ZFS just as well as Nexenta ZFS just as
well as illumos ZFS,
On 26-05-11 13:38, Edward Ned Harvey wrote:
> Perhaps a property could be
> set, which would store the DDT exclusively on that device.
Oh yes please, let me put my DDT on an SSD.
But what if you lose it (the vdev), would there be a way to reconstruct
the DDT (which you need to be able to delet
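Until such a property exists, the closest approximation is an L2ARC device with metadata caching, roughly (pool and device names invented):

  # add an SSD as a cache (L2ARC) device; DDT blocks are metadata, so they can be cached there
  zpool add tank cache c2t0d0
  # optionally restrict the L2ARC to metadata only
  zfs set secondarycache=metadata tank

A cache device can also be lost without harming the pool, unlike a hypothetical vdev that held the only copy of the DDT.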
> These are all 7200.11 Seagates, refurbished. I'd scrub
> once a week, that'd probably suck on raidz2, too?
>
> Thanks.
Sequential? Let's suppose no spares.
4 mirrors of 2 = sustained bandwidth of 4 disks
raidz2 with 8 disks = sustained bandwidth of 6 disks
So :)
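For concreteness, the two layouts being compared (disk names are placeholders):

  # 4 two-way mirrors out of 8 disks: sustained write bandwidth of roughly 4 disks
  zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 \
                     mirror c0t4d0 c0t5d0 mirror c0t6d0 c0t7d0
  # one 8-disk raidz2: sustained bandwidth of roughly 6 data disks
  zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0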
--
> ...isable dedup, the system won't bother checking to see if
> there are duplicate blocks anymore. So the DDT won't need to be in
> arc+l2arc. I should say "shouldn't."
Except when deleting deduped blocks.
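In other words, turning dedup off only affects new writes; a sketch (names invented):

  # stop deduplicating new writes on this dataset
  zfs set dedup=off tank/data
  # blocks written while dedup was on keep their DDT entries until they are freed;
  # zdb -D shows how many are still around
  zdb -D tank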
--
Frank Van Damme
ARC size on this box tends to drop far below arc_min after a
few days, notwithstanding the fact it's supposed to be a hard limit.
I call for an arc_data_max setting :)
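The numbers in question can be pulled straight from the arcstats kstat (names as on snv_134-era builds; verify locally):

  # live ARC size versus its configured floor and ceiling
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_min zfs:0:arcstats:c_max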
--
Frank Van Damme
2011/6/10 Tim Cook :
> While your memory may be sufficient, that cpu is sorely lacking. Is it even
> 64bit? There's a reason intel couldn't give those things away in the early
> 2000s and amd was eating their lunch.
A Pentium 4 of that vintage is 32-bit (only the later Prescott-based models added 64-bit EM64T support).
--
Frank Van Damme
On 15-06-11 05:56, Richard Elling wrote:
> You can even have applications like databases make snapshots when
> they want.
Makes me think of a backup utility called mylvmbackup, which is written
with Linux in mind - basically it locks mysql tables, takes an LVM
snapshot and releases the lock (and
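On ZFS the snapshot step itself is the easy part; a minimal sketch, assuming the MySQL datadir lives on its own dataset (names invented) and that the table lock is held around it by a MySQL session:

  # snapshot the dataset holding the MySQL datadir while the tables are locked
  zfs snapshot tank/mysql@nightly
  # confirm it is there
  zfs list -t snapshot -r tank/mysql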
On 15-06-11 14:30, Simon Walter wrote:
> Anyone know how Google Docs does it?
Anyone from Google on the list? :-)
Seriously, this is the kind of feature to be found in Serious CMS
applications, like, as already mentioned, Alfresco.
--
On 12-07-11 13:40, Jim Klimov wrote:
> Even if I batch background RM's so a hundred processes hang
> and then they all at once complete in a minute or two.
Hmmm. I only run one rm process at a time. You think running more
processes at the same time would be faster?
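Fanning the deletions out would look something like this with GNU find/xargs (paths are placeholders, and whether it actually helps on a deduped pool is exactly the open question):

  # delete with up to 8 concurrent rm processes, 100 files per invocation
  find /tank/olddata -type f -print0 | xargs -0 -P 8 -n 100 rm -f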
--
On 14-07-11 12:28, Jim Klimov wrote:
>>
> Yes, quite often it seems so.
> Whenever my slow "dcpool" decides to accept a write,
> it processes a hundred pending deletions instead of one ;)
>
> Even so, it took quite a few pool or iscsi hangs and then
> reboots of both server and client, and about
On 15-07-11 04:27, Edward Ned Harvey wrote:
> Is anyone from Oracle reading this? I understand if you can't say what
> you're working on and stuff like that. But I am merely hopeful this work
> isn't going into a black hole...
>
> Anyway. Thanks for listening (I hope.) ttyl
If they aren'
iding cheap storage).
--
Frank Van Damme
On 26-07-11 12:56, Fred Liu wrote:
> Any alternatives, if you don't mind? ;-)
VPNs, openssl piped over netcat, a password-protected zip file... ;)
ssh would be the most practical, probably.
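Assuming the data being moved is a zfs send stream, the ssh variant is simply (pool, dataset and host names invented):

  # send a snapshot over an encrypted channel and receive it on the other side
  zfs send tank/data@today | ssh backup@remotehost zfs receive -F backup/data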
--