Aucoin wrote:
> From: Rene Herman [mailto:[EMAIL PROTECTED]
> ftruncate there and some similarity to a problem I once experienced
I can't honestly say I completely grasp the fundamentals of the issue
you experienced, but we are using ext3 with data=journal.
Rereading, I see ext3 isn't involved at all
> From: Nick Piggin [mailto:[EMAIL PROTECTED]
> Can you try getting the output of /proc/vmstat as well?
Output from vmstat, meminfo and bloatmon below.
vmstat
nr_dirty 0
nr_writeback 0
nr_unstable 0
nr_page_table_pages 361
nr_mapped 33077
nr_slab 8107
pgpgin 1433195947
pgpgout 148795046
pswpin 0
p
Nick Piggin wrote:
Aucoin wrote:
> Ummm, shm_open, ftruncate, mmap? Is it a trick question? The process
> responsible for initially setting up the shared area doesn't stay
> resident.
The issue is that the shm pages should show up in the active and
inactive lists. But they aren't, and you seem
Aucoin wrote:
> From: Linus Torvalds [mailto:[EMAIL PROTECTED]
> I actually suspect you should be _fairly_ close to such a situation
We run with min_free_kbytes set around 4k, to answer your earlier question.
> Louis, exactly how do you allocate that big 1.6GB shared area?
Ummm, shm_open, ftruncate, mmap? Is it a trick question? The process
responsible for initially setting up the shared area doesn't stay resident.
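
For reference, a minimal sketch of that allocation pattern, assuming POSIX shared
memory as described; the segment name and exact size below are illustrative, and
mlock() is included only because later messages mention the segment ends up mlocked:

/* Sketch of shm_open + ftruncate + mmap for a large shared area.
 * Illustrative only: name and size are made up, error handling is minimal.
 * Link with -lrt on older glibc. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        const char *name = "/big_shared_area";          /* hypothetical name */
        const size_t size = 1600UL * 1024 * 1024;       /* ~1.6GB as in the thread */

        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }

        if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

        void *area = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (area == MAP_FAILED) { perror("mmap"); return 1; }

        /* Later messages say the segment is mlocked; that step would be: */
        if (mlock(area, size) < 0)
                perror("mlock");        /* needs CAP_IPC_LOCK or a big RLIMIT_MEMLOCK */

        /* ... initialize the shared area here ... */

        /* The setup process can now exit: the segment persists under its name
         * until shm_unlink(), so other processes can shm_open + mmap it later. */
        close(fd);
        return 0;
}
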
On Mon, 4 Dec 2006, Aucoin wrote:
>
> If I'm going to go through all the trouble to change the kernel and maybe
> create a new proc file how much code would I have to touch to create a proc
> file to set something like, let's say, effective memory and have all the vm
> calculations use effective
> From: Nick Piggin [mailto:[EMAIL PROTECTED]
> I'd be interested to know how OOM and page reclaim behaves after these
> patches
> (or with a newer kernel).
We didn't get far today. The various suggestions everyone has for solving
this problem spurred several new discussions inside the office and
On Mon, 4 Dec 2006 15:25:47 -0600
"Aucoin" <[EMAIL PROTECTED]> wrote:
> > From: Jeffrey Hundstad [mailto:[EMAIL PROTECTED]
> > POSIX_FADV_NOREUSE flags. It seems these would cause the tar and patch
>
> I may be naïve as well, but that sounds interesting. Unless someone knows
> of an obvious re
> From: Horst H. von Brand [mailto:[EMAIL PROTECTED]
> How do you /know/ it won't just be recycled in the production case?
The production case is when oom fires and kills things. I can only assume
memory is not being freed fast enough; otherwise oom wouldn't get so upset.
> That is your ultimat
> From: Tim Schmielau [mailto:[EMAIL PROTECTED]
> I believe your OOM problem is not connected to these observations. There
I don't know what to tell you except that oom fires only when the update runs. I
know it's a pitiful datapoint, so I'll work on getting more data.
> From: Jeffrey Hundstad [mailto:[EMAIL PROTECTED]
> POSIX_FADV_NOREUSE flags. It seems these would cause the tar and patch
I may be naïve as well, but that sounds interesting. Unless someone knows
of an obvious reason this won't work, we can make a one-off tar command and
give it a whirl.
As a workaround, try this:
echo 2 > /proc/sys/vm/overcommit_memory
echo 0 > /proc/sys/vm/overcommit_ratio
Hopefully someone can fix this intrinsic swap-before-drop behaviour.
Thanks!
--
Al
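
A note on what that does, plus a small check, assuming standard strict-overcommit
semantics: with overcommit_memory=2 the commit limit is swap plus overcommit_ratio
percent of RAM, so ratio 0 caps it at roughly the swap size, and oversized
allocations then fail up front instead of being OOM-killed later. A throwaway test,
with an arbitrary request size:

/* Throwaway check of strict overcommit: with the settings above, an
 * allocation beyond the commit limit should fail immediately with ENOMEM
 * rather than succeed and risk the OOM killer later. The 3GB request is
 * arbitrary. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
        size_t len = (size_t)3 << 30;   /* 3GB, arbitrary oversized request */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
                printf("refused up front: %s\n", strerror(errno));
        else
                printf("allocation fit within the commit limit\n");
        return 0;
}
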
Aucoin <[EMAIL PROTECTED]> wrote:
> From: Horst H. von Brand [mailto:[EMAIL PROTECTED]
> > That means that there isn't a need for that memory at all (and so they
> In the current isolated, non-production, not-actually-bearing-a-load test
> case, yes. But if I can't get it to not swap on an idle syst
On Mon, 4 Dec 2006, Aucoin wrote:
> > From: Horst H. von Brand [mailto:[EMAIL PROTECTED]
> > That means that there isn't a need for that memory at all (and so they
>
> In the current isolated, non-production, not-actually-bearing-a-load test
> case, yes. But if I can't get it to not swap on an idle
Hello,
Please forgive me if this is naive. It seems that you could recompile
your tar and patch commands to use the posix_fadvise(2) call with the
POSIX_FADV_NOREUSE flags. It seems these would cause the tar and patch
commands to not clutter the page cache at all.
It'd be nice to be abl
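
To make the suggestion concrete, here is a rough sketch of that kind of change,
assuming a plain read/write copy loop; the wrapper below is purely illustrative and
is not taken from tar or patch, and posix_fadvise() is only a hint to the kernel:

/* Illustrative copy loop that hints the kernel the data will not be
 * reused, along the lines suggested above. Not actual tar/patch code. */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int copy_without_caching(int src, int dst)
{
        char buf[64 * 1024];
        ssize_t n;

        /* Sequential read, and we will not need the source data again. */
        posix_fadvise(src, 0, 0, POSIX_FADV_SEQUENTIAL);
        posix_fadvise(src, 0, 0, POSIX_FADV_NOREUSE);

        while ((n = read(src, buf, sizeof(buf))) > 0) {
                if (write(dst, buf, n) != n)
                        return -1;
        }

        /* A related option: once the output is on disk, explicitly ask the
         * kernel to drop it from the page cache. */
        if (fsync(dst) == 0)
                posix_fadvise(dst, 0, 0, POSIX_FADV_DONTNEED);

        return n < 0 ? -1 : 0;
}

int main(int argc, char **argv)
{
        if (argc != 3) {
                fprintf(stderr, "usage: %s src dst\n", argv[0]);
                return 1;
        }
        int src = open(argv[1], O_RDONLY);
        int dst = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (src < 0 || dst < 0) {
                perror("open");
                return 1;
        }
        int rc = copy_without_caching(src, dst);
        close(src);
        close(dst);
        return rc ? 1 : 0;
}

Note that how much POSIX_FADV_NOREUSE changes behaviour depends on the kernel
version; the POSIX_FADV_DONTNEED call after fsync() is what actually evicts clean,
unmapped pages from the cache.
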
On Mon, 4 Dec 2006, Andrew Morton wrote:
> but that's rather dumb. Better would be to remove mlocked pages from the
> LRU.
Could we generalize the removal of sections of a zone from the LRU? I
believe this would help various buffer allocation schemes. We have some
issues with heavy LRU scans i
On Mon, 04 Dec 2006 14:07:22 -0300
"Horst H. von Brand" <[EMAIL PROTECTED]> wrote:
> Please explain again:
>
> - What you are doing, step by step
That 2GB machine apparently has a 1.6GB shm segment which is mlocked. That will
cause the VM to do one heck of a lot of pointless scanning and could,
> From: Horst H. von Brand [mailto:[EMAIL PROTECTED]
> That means that there isn't a need for that memory at all (and so they
In the current isolated, non-production, not-actually-bearing-a-load test
case, yes. But if I can't get it to not swap on an idle system, I have no hope
of avoiding OOM on a
> From: David Lang [mailto:[EMAIL PROTECTED]
> I think that I am seeing two separate issues here that are getting mixed
> up.
Fair enough.
> however the real problem that Aucoin is running into is patching process
> (tar, etc) kicks off the system is choosing to use its
First name Louis, yes bu
Aucoin <[EMAIL PROTECTED]> wrote:
[...]
> The definition of perfectly good here may be up for debate, or
> someone can explain it to me. This perfectly good data was
> cached under the tar, yet hours after the tar has completed, the
> pages are still cached.
That means that there isn't a need for t
On Sun, 3 Dec 2006, Linus Torvalds wrote:
> Wouldn't it be much nicer to just lower the dirty-page limit?
>
> echo 1 > /proc/sys/vm/dirty_background_ratio
> echo 2 > /proc/sys/vm/dirty_ratio
Dirty ratio cannot be set to less than 5%. See
mm/page-writeback.c:get_dirty_limits().
> or
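
A paraphrase of the clamp being referred to, for readers without the source handy;
this is a reconstruction of the behaviour described above, not a verbatim copy of
mm/page-writeback.c:

/* Reconstruction (not verbatim kernel source): a requested vm.dirty_ratio
 * below 5 is silently raised to 5, so "echo 2 > /proc/sys/vm/dirty_ratio"
 * effectively gives 5%. */
static int effective_dirty_ratio(int requested_ratio)
{
        int ratio = requested_ratio;

        if (ratio < 5)
                ratio = 5;

        return ratio;
}
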
Aucoin wrote:
> The definition of perfectly good here may be up for debate, or
> someone can explain it to me. This perfectly good data was
> cached under the tar, yet hours after the tar has completed, the
> pages are still cached.
If nothing else has asked for that memory since the tar, there is no
re
I think that I am seeing two separate issues here that are getting mixed up.
1. while doing the tar + patch the system is choosing to use memory for
caching the tar (pushing program data out to cache).
2. after the tar has completed the data remains in the cache.
the answer for #2 is the one t
Aucoin wrote:
> From: Nick Piggin [mailto:[EMAIL PROTECTED]
> We had customers see similar incorrect OOM problems, so I sent in some
> patches merged after 2.6.16. Can you upgrade to latest kernel? (otherwise
> I guess backporting could be an option for you).
I will raise the question of moving the kernel forwa
> PS: No need to put a copy of the entire message
Apologies for the lapse in protocol.
> The point you're missing is that an "inactive" page is a free
> page that happens to have known clean data on it
I understand now where the inactive page count is coming from.
I don't understand why there
Aucoin wrote:
We want it to swap less for this particular operation because it is low
priority compared to the rest of what's going on inside the box.
We've considered both artificially manipulating swap on the fly, similar to
your suggestion, as well as a parallel thread that pumps a 3 into drop_cac
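
For concreteness, a minimal sketch of that second idea, assuming the parallel
thread is just a loop that writes to /proc/sys/vm/drop_caches; the interval is
arbitrary and this is not code from the actual system:

/* Sketch of a "pump a 3 into drop_caches" helper loop. Illustrative only:
 * interval and structure are made up, and it must run as root. Writing "3"
 * drops clean page cache plus dentries and inodes; dirty pages are untouched. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void drop_caches_loop(unsigned int interval_sec)
{
        for (;;) {
                int fd = open("/proc/sys/vm/drop_caches", O_WRONLY);

                if (fd >= 0) {
                        if (write(fd, "3", 1) != 1)
                                perror("write drop_caches");
                        close(fd);
                } else {
                        perror("open drop_caches");
                }
                sleep(interval_sec);
        }
}

int main(void)
{
        drop_caches_loop(30);   /* arbitrary 30 second interval */
        return 0;
}
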
On Dec 03, 2006, at 20:54:41, Aucoin wrote:
As a side note, even now, *hours* after the tar has completed and
even though I have swappiness set to 0, cache pressure set to ,
all dirty timeouts set to 1 and all dirty ratios set to 1, I still
have a 360+K inactive page count and my "free"
On Sun, 3 Dec 2006, Andrew Morton wrote:
> On Sun, 3 Dec 2006 17:56:30 -0600
> "Aucoin" <[EMAIL PROTECTED]> wrote:
>
> > I hope I haven't muddied things up even more but basically what we want to
> > do is find a way to limit the number of cached pages for disk I/O on the OS
> > filesystem, eve
On Sun, 3 Dec 2006 19:54:41 -0600
"Aucoin" <[EMAIL PROTECTED]> wrote:
> What, if anything, besides manually echoing a "3" to drop_caches will cause
> all those inactive pages to be put back on the free list?
There is no reason for the kernel to do that - a clean, inactive page is
immediately rec
On Sun, 3 Dec 2006 17:56:30 -0600
"Aucoin" <[EMAIL PROTECTED]> wrote:
> I hope I haven't muddied things up even more but basically what we want to
> do is find a way to limit the number of cached pages for disk I/O on the OS
> filesystem, even if it drastically slows down the untar and verify proc
From: Aucoin [mailto:[EMAIL PROTECTED]
Sent: Sunday, December 03, 2006 5:57 PM
To: 'Tim Schmielau'
Cc: 'Andrew Morton'; '[EMAIL PROTECTED]'; 'linux-kernel@vger.kernel.org';
'[EMAIL PROTECTED]'
Subject: RE: la la la la ... swappiness
We want it to swap less for this particu
Aucoin <[EMAIL PROTECTED]> wrote:
> We want it to swap less for this particular operation because it is low
> priority compared to the rest of what's going on inside the box.
The swapping is not an "operation" thing, it is global for /all/ that is
going on in the box. And having it swap less means
down the untar and verify process
because the disk I/O we really care about is not on any of the OS
partitions.
Louis Aucoin
-Original Message-
From: Tim Schmielau [mailto:[EMAIL PROTECTED]
Sent: Sunday, December 03, 2006 2:47 PM
To: Aucoin
Cc: 'Andrew Morton'; [EMAIL PROT
On Sun, 3 Dec 2006, Aucoin wrote:
> during tar extraction ... inactive pages reaches levels as high as ~375000
So why do you want the system to swap _less_? You need to find some free
memory for the additional processes to run in, and you have lots of
inactive pages, so I think you want to swap
From: Andrew Morton [mailto:[EMAIL PROTECTED]
Sent: Sunday, December 03, 2006 2:09 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; linux-kernel@vger.kernel.org; [EMAIL PROTECTED]
Subject: Re: la la la la ... swappiness
> On Sun, 3 Dec 2006 00:16:38 -0600 "Aucoin" <[EMAIL PROTECTED]> wrot
> On Sun, 3 Dec 2006 00:16:38 -0600 "Aucoin" <[EMAIL PROTECTED]> wrote:
> I set swappiness to zero and it doesn't do what I want!
>
> I have a system that runs as a Linux based data server 24x7 and occasionally
> I need to apply an update or patch. It's a BIIIG patch to the tune of
> several hundr
Reformatted as plain text.
From: Aucoin [mailto:[EMAIL PROTECTED]
Sent: Sunday, December 03, 2006 12:17 AM
To: '[EMAIL PROTECTED]'; '[EMAIL PROTECTED]'; 'linux-kernel@vger.kernel.org';
'[EMAIL PROTECTED]'
Subject: la la la la ... swappiness
I set swappin