Jason J. W. Williams writes:
> Hi Guys,
>
> Rather than starting a new thread I thought I'd continue this thread.
> I've been running Build 54 on a Thumper since mid-January and wanted
> to ask a question about the zfs_arc_max setting. We set it to
> "0x100000000 #4GB", however it's creeping over that till our kernel
> memory usage is near ...
So you're not really sure it's the ARC growing, but only that the kernel
is growing to 6.8GB.
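If you want to see where that kernel growth is going before blaming the ARC,
::memstat gives the breakdown (a sketch; on builds of this vintage the ARC's
buffers are counted in the "Kernel" line, and the dcmd can take a while on a
big-memory box):

# echo ::memstat | mdb -k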
Print the arc values via mdb:
# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc scsi_vhci ufs
ip hook neti sctp arp usba nca lofs zfs random sppp crypto ptm ipc ]
> arc::print -t size
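The other members of interest can go on the same command line (a sketch,
using the member names that show up in the arc::print -tad output later in
this thread):

> arc::print -t size p c c_min c_max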
With the latest Nevada builds, setting zfs_arc_max in /etc/system is
sufficient. Playing with mdb on a live system is trickier, and is what
caused the problem here.
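For a 4GB cap the /etc/system entry would look something like this (the
value is a byte count, and it only takes effect at the next boot):

* Cap the ZFS ARC at 4GB
set zfs:zfs_arc_max = 0x100000000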
-r
All righty...I set c_max to 512MB, c to 512MB, and p to 256MB...
Will try that now...
/jim
Following a reboot:
> arc::print -tad
{
. . .
c02e29e8 uint64_t size = 0t299008
c02e29f0 uint64_t p = 0t16588228608
c02e29f8 uint64_t c = 0t33176457216
c02e2a00 uint64_t c_min = 0t1070318720
c02e2a08 uint64_t c_max = 0t33176457216
. . .
}
>
I suppose I should have been more forward about making my last point.
If the arc_c_max isn't set in /etc/system, I don't believe that the ARC
will initialize arc.p to the correct value. I could be wrong about
this; however, next time you set c_max, set c to the same value as c_max
and set p to half of c_max.
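Done by hand, that's an mdb -kw session along these lines (a sketch only:
the addresses are illustrative and must be taken from the -a output on the
running system; 0x20000000 is 512MB, 0x10000000 is 256MB, and /Z writes a
64-bit value):

# mdb -kw
> arc::print -ad c_max c p
ffffffffc02e2a08 c_max = 0t33176457216
ffffffffc02e29f8 c = 0t33176457216
ffffffffc02e29f0 p = 0t16588228608
> ffffffffc02e2a08/Z 0x20000000
> ffffffffc02e29f8/Z 0x20000000
> ffffffffc02e29f0/Z 0x10000000

Setting c and p along with c_max keeps the ARC from steering toward the
stale targets; only the /etc/system route survives a reboot.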
How/when did you configure arc_c_max?
Immediately following a reboot, I set arc.c_max using mdb,
then verified it by reading the arc structure again.
arc.p is supposed to be
initialized to half of arc.c. Also, I assume that there's a reliable
test case for reproducing this problem?
Yep. I'm ...
Something else to consider: depending upon how you set arc_c_max, you
may want to set arc_c and arc_p at the same time. If you try setting
arc_c_max, then setting arc_c to arc_c_max, and then setting arc_p to
arc_c / 2, do you still get this problem?
-j
Gar. This isn't what I was hoping to see. Buffers that aren't
available for eviction aren't listed in the lsize count. It looks like
the MRU has grown to 10GB and most of this could be successfully
evicted. The calculation that determines whether we evict from the MRU
is in arc_adjust().
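A rough way to check that condition on a live system (a sketch; it assumes,
from my reading of arc_adjust() in this era of the code, that MRU eviction
only kicks in when ARC_anon.size + ARC_mru.size exceeds arc.p and
ARC_mru.lsize is non-zero):

# mdb -k
> ARC_anon::print -d size
> ARC_mru::print -d size lsize
> arc::print -d p

If that sum stays below arc.p, nothing is evicted from the MRU, which is one
way an arc.p left at a stale, large value can let the ARC grow well past
c_max.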
> ARC_mru::print -d size lsize
size = 0t10224433152
lsize = 0t10218960896
> ARC_mfu::print -d size lsize
size = 0t303450112
lsize = 0t289998848
> ARC_anon::print -d size
size = 0
>
So it looks like the MRU is running at 10GB...
What does this tell us?
Thanks,
/jim
[EMAIL PROTECTED] wrote:
This seems a bit strange. What's the workload, and also, what's the
output for:
> ARC_mru::print size lsize
> ARC_mfu::print size lsize
and
> ARC_anon::print size
For obvious reasons, the ARC can't evict buffers that are in use.
Buffers that are available to be evicted should be on the mru or mfu
lists.
Hi Jim,
My understanding is that the DNLC can consume quite a bit of memory
too, and the ARC limits (and the memory culler) don't clean out the
DNLC yet. So if you're working with a lot of smaller files, you can
still go way over your ARC limit. Anyone, please correct me if I've got
that wrong.
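To gauge whether the DNLC is a factor, you can look at its cap and current
population (a sketch; ncsize is the standard tunable, while dnlc_nentries is
my best guess at the counter name and may differ by build):

# echo "ncsize/D" | mdb -k
# echo "dnlc_nentries/D" | mdb -k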
-J
FYI - After a few more runs, ARC size hit 10GB, which is now 10X c_max:
> arc::print -tad
{
. . .
c02e29e8 uint64_t size = 0t10527883264
c02e29f0 uint64_t p = 0t16381819904
c02e29f8 uint64_t c = 0t1070318720
c02e2a00 uint64_t c_min = 0t1070318720
. . .
}
Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06
(update 3). All file IO is mmap(file), read memory segment, unmap, close.
Tweaked the arc size down via mdb to 1GB. I used that value because
c_min was also 1GB, and I was not sure whether c_max could be set below
c_min. Anyway ...