Hello,
On Wed, Sep 27, 2000 at 01:55:52PM +0100, Hugh Dickins wrote:
> On Wed, 27 Sep 2000, Andrey Savochkin wrote:
> >
> > It's a waste of resources to reserve memory+swap for the case that every
> > running process decides to modify libc code (and, thus, should receive its
> > private copy of the pages). A real waste!
On Wed, 27 Sep 2000, Andrey Savochkin wrote:
>
> It's a waste of resources to reserve memory+swap for the case that every
> running process decides to modify libc code (and, thus, should receive its
> private copy of the pages). A real waste!
A real waste indeed, but a bad example: libc code i
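A minimal userspace sketch of the point being argued, assuming an
illustrative libc path: MAP_PRIVATE mappings are copy-on-write, so no
page is duplicated unless a process actually writes to it, yet a
strict no-overcommit policy would have to reserve memory+swap for the
whole mapping up front.

	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		size_t len = 1 << 20;				/* 1MB of library text */
		int fd = open("/lib/libc.so.6", O_RDONLY);	/* illustrative path */
		if (fd < 0)
			return 1;
		/* copy-on-write: pages stay shared until (and unless) written */
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
		if (p == MAP_FAILED)
			return 1;
		/* reading shares the page cache; strict accounting would still
		 * charge the full len of memory+swap for this mapping */
		char c = p[0];
		(void)c;
		munmap(p, len);
		close(fd);
		return 0;
	}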
Horst von Brand wrote:
> I'd call emacs consistently not being able to start an ls on a 16Mb machine
> much worse than a surprise...
>
> Hint: Think about how emacs would go about doing that...
vfork ;-)
-- Jamie
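A sketch of what the wink means, as a hedged example: vfork() lets the
child borrow the parent's address space until it execs, so a large
process can spawn a small one without the VM having to commit a full
copy of the parent.

	#include <stdlib.h>
	#include <sys/types.h>
	#include <sys/wait.h>
	#include <unistd.h>

	int main(void)
	{
		pid_t pid = vfork();
		if (pid == 0) {
			/* child: only exec or _exit are safe after vfork */
			execlp("ls", "ls", (char *)NULL);
			_exit(127);	/* exec failed */
		}
		if (pid < 0)
			return 1;
		waitpid(pid, NULL, 0);
		return 0;
	}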
On Tue, Sep 26, 2000 at 11:45:02AM -0600, Erik Andersen wrote:
[snip]
> "Overcommit" to me is the same things as Mark Hemment stated earlier in this
> thread -- the "fact that the system has over committed its memory resources.
> ie. it has sold too many tickets for the number of seats in the plan
Hello,
On Tue, Sep 26, 2000 at 01:10:30PM +0100, Mark Hemment wrote:
>
> On Mon, 25 Sep 2000, Stephen C. Tweedie wrote:
> > So you have run out of physical memory --- what do you do about it?
>
Why let the system get into the state where it is necessary to kill a
process?
Per-user/task resource counters should prevent unprivileged users from
soaking up too many
In message <[EMAIL PROTECTED]> you write:
> I suspect that the proper way to do this is to just make another gfp_flag,
> which is basically another hint to the mm layer that we're doing a multi-
> page allocation and that the MM layer should not try forever to handle it.
>
> In fact, that's inde
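A hypothetical sketch of the proposed hint (the __GFP_SOFT name below
comes from later in this thread; it and its value are illustrative,
not a real kernel flag): the caller of a multi-page allocation opts
out of the allocator's try-forever behaviour and handles failure
itself.

	#define __GFP_SOFT	0x100	/* hypothetical flag, illustrative value */

	static unsigned long alloc_multipage_buffer(void)
	{
		unsigned long buf;

		/* order-2 = 16K; with the hint the allocator may return 0 */
		buf = __get_free_pages(GFP_KERNEL | __GFP_SOFT, 2);
		if (!buf)
			return 0;	/* caller falls back to order-0 pages */
		return buf;
	}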
Erik Andersen <[EMAIL PROTECTED]> said:
[...]
> Another approach would be to let user space turn off overcommit.
> That way, user space can be assured there will be no surprises...
I'd call emacs consistently not being able to start an ls on a 16Mb machine
much worse than a surprise...
Hint: Think about how emacs would go about doing that...
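For reference, the user-space knob Erik is describing maps onto a
sysctl that kernels of this era already expose, though only with
heuristic (0) and always-overcommit (1) modes; a strict
never-overcommit mode is exactly what is being asked for. A minimal
sketch:

	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");

		if (!f)
			return 1;
		fputs("0\n", f);	/* 0 = heuristic overcommit checking */
		return fclose(f) ? 1 : 0;
	}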
Hello,
> > Another approach would be to let user space turn off overcommit.
>
> No. Overcommit only applies to pageable memory. Beancounter is
> really needed for non-pageable resources such as page tables and
> mlock()ed pages.
>
In addition to beancounter, do you think pageable page tables
On Tue Sep 26, 2000 at 06:08:20PM +0100, Stephen C. Tweedie wrote:
> Hi,
>
> On Tue, Sep 26, 2000 at 11:02:48AM -0600, Erik Andersen wrote:
>
> > Another approach would be to let user space turn off overcommit.
>
> No. Overcommit only applies to pageable memory. Beancounter is
> really needed for non-pageable resources such as page tables and
> mlock()ed pages.
Hi,
On Tue, Sep 26, 2000 at 11:02:48AM -0600, Erik Andersen wrote:
> Another approach would be to let user space turn off overcommit.
No. Overcommit only applies to pageable memory. Beancounter is
really needed for non-pageable resources such as page tables and
mlock()ed pages.
Cheers,
Stephen
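A small userspace sketch of Stephen's distinction: once mlock()
succeeds, the pages are pinned in RAM and no overcommit policy can
recover them, which is exactly the class of resource a beancounter
would have to charge.

	#include <stdlib.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 1 << 20;		/* 1MB */
		char *buf = malloc(len);

		if (!buf)
			return 1;
		if (mlock(buf, len) != 0) {	/* pin: pages become non-pageable */
			free(buf);
			return 1;
		}
		/* from here on the VM cannot swap these pages out */
		munlock(buf, len);
		free(buf);
		return 0;
	}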
On Tue Sep 26, 2000 at 05:04:06PM +0100, Stephen C. Tweedie wrote:
> Hi,
>
> On Tue, Sep 26, 2000 at 09:17:44AM -0600, [EMAIL PROTECTED] wrote:
>
> > Operating systems cannot make more memory appear by magic.
> > The question is really about the best strategy for dealing with low memory. In my
> > opinion, the OS should not try to out-think physical limitations. Instead, the
Hi,
On Tue, Sep 26, 2000 at 09:17:44AM -0600, [EMAIL PROTECTED] wrote:
> Operating systems cannot make more memory appear by magic.
> The question is really about the best strategy for dealing with low memory. In my
> opinion, the OS should not try to out-think physical limitations. Instead, the
On Mon, Sep 25, 2000 at 05:14:11PM -0600, Erik Andersen wrote:
> On Mon Sep 25, 2000 at 02:04:19PM -0600, [EMAIL PROTECTED] wrote:
> >
> > > all of the pending requests just as long as they are serialised, is
> > > this a problem?
> >
> > I think you are solving the wrong problem. On a small memory machine, the kernel,
> > utilities, and applications should be configured
On Tue, Sep 26, 2000 at 11:07:36AM +0100, Stephen C. Tweedie wrote:
> Hi,
>
> On Mon, Sep 25, 2000 at 03:12:50PM -0600, [EMAIL PROTECTED] wrote:
> > > >
> > > > I'm not too sure of what you have in mind, but if it is
> > > > "process creates vast virtual space to generate many page table
>
On Tue, Sep 26, 2000 at 10:54:23AM +0100, Stephen C. Tweedie wrote:
> Beancounter is a framework for user-level accounting. _What_ you
> account is up to the callers. Maybe this has been a miscommunication,
> but beancounter is all about allowing callers to account for stuff
> before allocation,
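An illustrative sketch of that protocol (names hypothetical, not the
real beancounter patch): the caller charges the counter before
allocating, so refusal happens up front instead of surfacing later as
a deadlock or an OOM kill.

	struct beancounter {
		long	pinned;		/* pages currently charged */
		long	limit;		/* per-user or per-task cap */
	};

	static int bc_charge(struct beancounter *bc, long pages)
	{
		if (bc->pinned + pages > bc->limit)
			return -1;	/* "no, you may not pin 20 pages" */
		bc->pinned += pages;	/* reservation taken; now allocate */
		return 0;
	}

	static void bc_uncharge(struct beancounter *bc, long pages)
	{
		bc->pinned -= pages;	/* release the charge on free */
	}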
Hi,
On Mon, 25 Sep 2000, Stephen C. Tweedie wrote:
> So you have run out of physical memory --- what do you do about it?
Why let the system get into the state where it is necessary to kill a
process?
Per-user/task resource counters should prevent unprivileged users from
soaking up too many
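A hedged sketch of what per-task counters look like through the
existing rlimit interface: RLIMIT_AS caps a process's virtual address
space (indirectly bounding how many page tables it can force the
kernel to build), and RLIMIT_MEMLOCK caps mlock()ed memory.

	#include <sys/resource.h>

	static int cap_task_memory(rlim_t as_bytes, rlim_t mlock_bytes)
	{
		struct rlimit as = { as_bytes, as_bytes };
		struct rlimit ml = { mlock_bytes, mlock_bytes };

		if (setrlimit(RLIMIT_AS, &as) != 0)
			return -1;
		return setrlimit(RLIMIT_MEMLOCK, &ml);
	}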
Hi,
On Mon, Sep 25, 2000 at 03:12:50PM -0600, [EMAIL PROTECTED] wrote:
> > >
> > > I'm not too sure of what you have in mind, but if it is
> > > "process creates vast virtual space to generate many page table
> > > entries -- using mmap"
> > > the answer is, virtual address space quotas and mmap should kill
> > > the process on low mem for page tables.
Hi,
On Mon, Sep 25, 2000 at 03:07:44PM -0600, [EMAIL PROTECTED] wrote:
> On Mon, Sep 25, 2000 at 09:46:35PM +0100, Alan Cox wrote:
> > > I'm not too sure of what you have in mind, but if it is
> > > "process creates vast virtual space to generate many page table
> > > entries -- using mmap"
> "Ingo" == Ingo Molnar <[EMAIL PROTECTED]> writes:
Ingo> On 26 Sep 2000, Jes Sorensen wrote:
>> 9.5KB blocks is common for people running Gigabit Ethernet with
>> Jumbo frames at least.
Ingo> yep, although this is more of a Linux limitation, the cards
Ingo> themselves are happy to DMA fragmented buffers as well. (sans some
Ingo> small penalty per new fragment.)
On 26 Sep 2000, Jes Sorensen wrote:
> 9.5KB blocks is common for people running Gigabit Ethernet with Jumbo
> frames at least.
yep, although this is more of a Linux limitation, the cards themselves are
happy to DMA fragmented buffers as well. (sans some small penalty per new
fragment.)
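An illustrative sketch in plain C (not a real driver API) of why
scatter-gather makes the multi-order allocation unnecessary: the 9.5K
jumbo frame lands in three independently allocated pages that a
capable NIC DMAs in sequence.

	#include <stdlib.h>

	struct sg_frag {
		void		*addr;
		unsigned int	len;
	};

	int main(void)
	{
		struct sg_frag rx[3];
		unsigned int remaining = 9728;	/* 9.5K frame */
		int i;

		for (i = 0; i < 3; i++) {
			unsigned int chunk = remaining > 4096 ? 4096 : remaining;

			rx[i].addr = malloc(4096);	/* one page-sized buffer each */
			if (!rx[i].addr)
				return 1;
			rx[i].len = chunk;
			remaining -= chunk;
		}
		/* a scatter-gather capable NIC fills rx[0..2] back to back */
		for (i = 0; i < 3; i++)
			free(rx[i].addr);
		return 0;
	}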
> "Ingo" == Ingo Molnar <[EMAIL PROTECTED]> writes:
Ingo> On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
>> > ie. 99.45% of all allocations are single-page! 0.50% is the 8kb
>>
>> You're right. That's why it's a waste to have so many orders in the
>> buddy allocator. [...]
Ingo> yep, i agree.
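A small sketch of how sizes map to buddy orders, for reference: order
n serves 2^n contiguous pages, so on 4K pages nearly everything is
order 0, while a 9.5K jumbo-frame buffer already needs order 2 (16K,
with 6.5K wasted).

	#include <stdio.h>

	static int order_for(unsigned long bytes, unsigned long page_size)
	{
		unsigned long block = page_size;
		int order = 0;

		while (block < bytes) {
			block <<= 1;
			order++;
		}
		return order;
	}

	int main(void)
	{
		printf("4K   -> order %d\n", order_for(4096, 4096));	/* 0 */
		printf("8K   -> order %d\n", order_for(8192, 4096));	/* 1 */
		printf("9.5K -> order %d\n", order_for(9728, 4096));	/* 2 */
		return 0;
	}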
On Mon Sep 25, 2000 at 02:04:19PM -0600, [EMAIL PROTECTED] wrote:
>
> > all of the pending requests just as long as they are serialised, is
> > this a problem?
>
> I think you are solving the wrong problem. On a small memory machine, the kernel,
> utilities, and applications should be configured
On Mon, Sep 25, 2000 at 04:47:21PM -0400, Benjamin C.R. LaHaise wrote:
> On Mon, 25 Sep 2000 [EMAIL PROTECTED] wrote:
>
> > On Mon, Sep 25, 2000 at 09:23:48PM +0100, Alan Cox wrote:
> > > > my prediction is that if you show me an example of
> > > > DoS vulnerability, I can show you a fix that does not require bean counting.
> > > > Am I wrong?
On Mon, Sep 25, 2000 at 09:46:35PM +0100, Alan Cox wrote:
> > I'm not too sure of what you have in mind, but if it is
> > "process creates vast virtual space to generate many page table
> > entries -- using mmap"
> > the answer is, virtual address space quotas and mmap should kill
> > the process on low mem for page tables.
> I'm not too sure of what you have in mind, but if it is
> "process creates vast virtual space to generate many page table
> entries -- using mmap"
> the answer is, virtual address space quotas and mmap should kill
> the process on low mem for page tables.
Those quotas being exactly
On Mon, 25 Sep 2000 [EMAIL PROTECTED] wrote:
> On Mon, Sep 25, 2000 at 09:23:48PM +0100, Alan Cox wrote:
> > > my prediction is that if you show me an example of
> > > DoS vulnerability, I can show you a fix that does not require bean counting.
> > > Am I wrong?
> >
> > I think so. Page tables are a good example
On Mon, Sep 25, 2000 at 09:23:48PM +0100, Alan Cox wrote:
> > my prediction is that if you show me an example of
> > DoS vulnerability, I can show you a fix that does not require bean counting.
> > Am I wrong?
>
> I think so. Page tables are a good example
I'm not too sure of what you have in mind, but if it is
"process creates vast virtual space to generate many page table
entries -- using mmap"
the answer is, virtual address space quotas and mmap should kill
the process on low mem for page tables.
Hi,
On Mon, Sep 25, 2000 at 02:04:19PM -0600, [EMAIL PROTECTED] wrote:
> > Right, but if the alternative is spurious ENOMEM when we can satisfy
>
> An ENOMEM is not spurious if there is not enough memory. UNIX does not ask the
> OS to do impossible tricks.
Yes, but the ENOMEM _is_ spurious if
> my prediction is that if you show me an example of
> DoS vulnerability, I can show you a fix that does not require bean counting.
> Am I wrong?
I think so. Page tables are a good example
On Mon, Sep 25, 2000 at 08:25:49PM +0100, Stephen C. Tweedie wrote:
> Hi,
>
> On Mon, Sep 25, 2000 at 12:34:56PM -0600, [EMAIL PROTECTED] wrote:
>
> > > > Process 1,2 and 3 all start allocating 20 pages
> > > > now 57 pages are locked up in non-swappable kernel space and the system
> > > > deadlocks OOM.
[EMAIL PROTECTED] said:
> Sometimes allocating such monster memory blocks could be supported,
> but it should not be expected to be *fast*. E.g. if doing it in
> "reliable" way needs possibly moving currently allocated pages
> away from memory to create such a hole(s), so be it
Hi,
On Mon, Sep 25, 2000 at 12:34:56PM -0600, [EMAIL PROTECTED] wrote:
> > > Process 1,2 and 3 all start allocating 20 pages
> > > now 57 pages are locked up in non-swappable kernel space and the system
> > > deadlocks OOM.
> >
> > Or go the beancounter route: process 1 asks "can I pin 20 pages", gets
> > told "yes", and goes allocating them, blocking as necessary until it
Hi,
On Mon, Sep 25, 2000 at 08:09:31PM +0100, Alan Cox wrote:
> > > Indeed. But we won't fail the kmalloc with a NULL return
> >
> > Isn't that the preferred behaviour, though? If we are completely out
> > of VM on a no-swap machine, we should be killing one of the existing
> > processes rather than preventing any progress and keeping all of the
> > old tasks alive but
> > Indeed. But we won't fail the kmalloc with a NULL return
>
> Isn't that the preferred behaviour, though? If we are completely out
> of VM on a no-swap machine, we should be killing one of the existing
> processes rather than preventing any progress and keeping all of the
> old tasks alive but
[Chopped the recipient list radically]
On Mon, Sep 25, 2000 at 06:06:11PM +0100, Alan Cox wrote:
> > > > Stupidity has no limits...
> > > Unfortunately it's frequently wired into the hardware to save a few cents on
> > > scatter gather logic.
> >
> > Since when did hardware folks become exempt from the rule above? 128K is
> > almost tolerable, there were requests for 64 _mega_bytes...
[EMAIL PROTECTED] wrote:
> > Or go the beancounter route: process 1 asks "can I pin 20 pages", gets
> > told "yes", and goes allocating them, blocking as necessary until it
>
> So you have a "pre-allocation allocator"? Leads to interesting and
> hard to detect bugs with old code that does not pr
On Mon, Sep 25, 2000 at 07:24:53PM +0100, Stephen C. Tweedie wrote:
> Hi,
>
> On Mon, Sep 25, 2000 at 12:13:15PM -0600, [EMAIL PROTECTED] wrote:
>
> > > Definitely not. GFP_ATOMIC is reserved for things that really can't
> > > swap or schedule right now. Use GFP_ATOMIC indiscriminately and you'll
> > > have to increase the number of atomic-allocatable pages.
Hi,
On Mon, Sep 25, 2000 at 12:13:15PM -0600, [EMAIL PROTECTED] wrote:
> > Definitely not. GFP_ATOMIC is reserved for things that really can't
> > swap or schedule right now. Use GFP_ATOMIC indiscriminately and you'll
> > have to increase the number of atomic-allocatable pages.
>
> Process 1,
Hi,
On Mon, Sep 25, 2000 at 07:13:27PM +0100, Alan Cox wrote:
> > there is no swap. If there is truly nothing kswapd can do to recover
> > here, then we are truly OOM. Otherwise, kswapd should be able to free
>
> Indeed. But we won't fail the kmalloc with a NULL return
Isn't that the preferred behaviour, though? If we are completely out
of VM on a no-swap machine, we should be killing one of the existing
processes rather than preventing any progress and keeping all of the
old tasks alive but
> there is no swap. If there is truly nothing kswapd can do to recover
> here, then we are truly OOM. Otherwise, kswapd should be able to free
Indeed. But we won't fail the kmalloc with a NULL return
On Mon, Sep 25, 2000 at 08:04:54PM +0200, Jamie Lokier wrote:
> [EMAIL PROTECTED] wrote:
> > > [EMAIL PROTECTED] wrote:
> > > > 	walk = out;
> > > > 	while(nfds > 0) {
> > > > 		poll_table *tmp = (poll_table *) __get_free_page(GFP_KERNEL);
> > > > 		if (!tmp) {
On Mon, Sep 25, 2000 at 11:51:39AM -0600, [EMAIL PROTECTED] wrote:
> It should probably be GFP_ATOMIC, if I understand the mm right.
poll_wait is called from the f_op->poll callback from select just before
a sleep, and since it's allowed to sleep too, it should be GFP_KERNEL
(not ATOMIC). Using
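A sketch of the rule Andrea is applying, using 2.4-era kernel calls:
GFP_KERNEL may sleep while the VM reclaims memory, so it belongs in
process context such as the select/poll path; GFP_ATOMIC never sleeps
and draws on the reserved pool, so it is for interrupt context.

	/* process context (e.g. inside sys_poll): sleeping is fine */
	poll_table *tmp = (poll_table *) __get_free_page(GFP_KERNEL);

	/* interrupt context (e.g. a NIC rx handler): must not sleep;
	 * dev_alloc_skb() uses GFP_ATOMIC internally */
	struct sk_buff *skb = dev_alloc_skb(pkt_len);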
[EMAIL PROTECTED] wrote:
> > [EMAIL PROTECTED] wrote:
> > > 	walk = out;
> > > 	while(nfds > 0) {
> > > 		poll_table *tmp = (poll_table *) __get_free_page(GFP_KERNEL);
> > > 		if (!tmp) {
> >
> > Shouldn't this be GFP_USER? (Which would also conveniently fix the
> > problem Victor's pointing out...)
On Mon, Sep 25, 2000 at 07:18:29PM +0200, Jamie Lokier wrote:
> [EMAIL PROTECTED] wrote:
> > 	walk = out;
> > 	while(nfds > 0) {
> > 		poll_table *tmp = (poll_table *) __get_free_page(GFP_KERNEL);
> > 		if (!tmp) {
>
> Shouldn't this be GFP_USER? (Which would also conveniently fix the
> problem Victor's pointing out...)
On Mon, 25 Sep 2000, Oliver Xymoron wrote:
> Sure about that? It's been a while, but I seem to recall NT enforcing a
> scatter-gather framework on all drivers because it only gave them virtual
> allocations. For the cheaper cards, the s-g was done by software issuing
> single span requests to the
Hi,
On Mon, Sep 25, 2000 at 05:51:49PM +0100, Alan Cox wrote:
> > > 2 active processes, no swap
> > >
> > > #1            #2
> > > kmalloc 32K   kmalloc 16K
> > > OK            OK
> > > kmalloc 16K
On Mon, 25 Sep 2000, Alan Cox wrote:
> > > > Stupidity has no limits...
> > >
> > > Unfortunately it's frequently wired into the hardware to save a few cents on
> > > scatter gather logic.
> >
> > Since when did hardware folks become exempt from the rule above? 128K is
> > almost tolerable, there were requests for 64 _mega_bytes...
Hi,
On Mon, Sep 25, 2000 at 06:05:00PM +0200, Andrea Arcangeli wrote:
> On Mon, Sep 25, 2000 at 04:42:49PM +0100, Stephen C. Tweedie wrote:
> > Progress is made, clean pages are discarded and dirty ones queued for
>
> How can you make progress if there isn't swap available and all the
> freeable page/buffer cache has just been freed?
[EMAIL PROTECTED] wrote:
> 	walk = out;
> 	while(nfds > 0) {
> 		poll_table *tmp = (poll_table *) __get_free_page(GFP_KERNEL);
> 		if (!tmp) {
Shouldn't this be GFP_USER? (Which would also conveniently fix the
problem Victor's pointing out...)
-- Jamie
On Mon, Sep 25, 2000 at 02:10:07PM -0300, Rik van Riel wrote:
> Not really. We could fix this by making the page freeing
> functions smarter and only free the pages we need.
That's what I proposed in the first place, in fact.
To free a large chunk of memory you may have to throw away lots of cache. We'r
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> On Mon, Sep 25, 2000 at 07:03:46PM +0200, Ingo Molnar wrote:
> > [..] __GFP_SOFT solves this all very nicely [..]
>
> s/very nicely/throwing away lots of useful cache for no good reason/
Not really. We could fix this by making the page freeing
f
On Mon, Sep 25, 2000 at 04:42:49PM +0100, Stephen C. Tweedie wrote:
> Hi,
>
> On Mon, Sep 25, 2000 at 04:16:56PM +0100, Alan Cox wrote:
> >
> > Unless I'm missing something here, think about this case
> >
> > 2 active processes, no swap
> >
> > #1            #2
> > kmalloc
> > > Stupidity has no limits...
> >
> > Unfortunately it's frequently wired into the hardware to save a few cents on
> > scatter gather logic.
>
> Since when did hardware folks become exempt from the rule above? 128K is
> almost tolerable, there were requests for 64 _mega_bytes...
Most cheap ass PC
On Mon, Sep 25, 2000 at 07:03:46PM +0200, Ingo Molnar wrote:
> [..] __GFP_SOFT solves this all very nicely [..]
s/very nicely/throwing away lots of useful cache for no good reason/
Andrea
On Mon, Sep 25, 2000 at 09:49:46AM -0700, Linus Torvalds wrote:
> [..] I
> don't think the balancing has to take the order of the allocation into
> account [..]
Why do you prefer to throw away most of the cache (potentially at fork time)
instead of freeing only the few contiguous bits that we need
On Mon, 25 Sep 2000, Alan Cox wrote:
> > > yep, i agree. I'm not sure what the biggest allocation is, some drivers
> > > might use megabytes of contiguous RAM?
> >
> > Stupidity has no limits...
>
> Unfortunately it's frequently wired into the hardware to save a few cents on
> scatter gather l
> > yep, i agree. I'm not sure what the biggest allocation is, some drivers
> > might use megabytes of contiguous RAM?
>
> Stupidity has no limits...
Unfortunately it's frequently wired into the hardware to save a few cents on
scatter gather logic.
We need 128K blocks for sound DMA buffers and m
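For scale, a sketch of what that costs the allocator: 128K of
physically contiguous memory on 4K pages is an order-5 request
(32 pages), and GFP_DMA further restricts it to ISA-reachable memory
on x86.

	static unsigned long alloc_sound_dma_buffer(void)
	{
		/* 4K << 5 = 128K; returns 0 if no such block is free */
		return __get_free_pages(GFP_KERNEL | GFP_DMA, 5);
	}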
> > kmalloc 16K   kmalloc 32K
> > block         block
> >
> 2) set PF_MEMALLOC on the task you're killing for OOM,
>that way this task will either get the memory or
>fail (note that PF_MEMALLOC tasks don't wait)
Nobody is out of memory
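A hedged kernel-side sketch of suggestion 2) above (2.4-era names):
the OOM victim is marked PF_MEMALLOC so its remaining allocations
either come out of the reserve or fail fast, letting it reach exit
and free its memory instead of blocking in the allocator.

	static void oom_kill_task(struct task_struct *p)
	{
		p->flags |= PF_MEMALLOC;	/* don't wait in the allocator */
		force_sig(SIGKILL, p);		/* let it run to exit and free */
	}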
> Frankly, how often do we allocate multi-order pages? I've just made quick
> statistics wrt. how allocation orders are distributed on a more or less
> typical system:
Often enough that failures of these crashed older 2.2 kernels, because the tcp code
ended up looping trying to get memory and the slab al
On Mon, 25 Sep 2000, Linus Torvalds wrote:
> Yes, I'm inclined to agree. Or at least not disagree. I'm more arguing
> that the order itself may not be the most interesting thing, and that
> I don't think the balancing has to take the order of the allocation
> into account - because it should be
> > 2 active processes, no swap
> >
> > #1            #2
> > kmalloc 32K   kmalloc 16K
> > OK            OK
> > kmalloc 16K   kmalloc 32K
> > block         block
> >
>
> ... and
On Mon, 25 Sep 2000, Rik van Riel wrote:
> >
> > Thinking about it, we do have it already. It's called
> > !__GFP_HIGH, and it used by all the GFP_USER allocations.
>
> Hmm, I think these two are orthogonal.
>
> __GFP_HIGH means that we are allowed to eat deeper into
> the free list (maybe ne
On Mon, 25 Sep 2000, Linus Torvalds wrote:
> On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> >
> > But I'd much prefer to pass not only the classzone from allocator
> > to memory balancing, but _also_ the order of the allocation,
> > and then shrink_mmap will know it isn't worth freeing anything
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
>
> But I'd much prefer to pass not only the classzone from allocator
> to memory balancing, but _also_ the order of the allocation,
> and then shrink_mmap will know it isn't worth freeing anything
> that isn't contiguous on the order of the alloca
On Mon, 25 Sep 2000, Alexander Viro wrote:
> On Mon, 25 Sep 2000, Ingo Molnar wrote:
> > yep, i agree. I'm not sure what the biggest allocation is, some drivers
> > might use megabytes of contiguous RAM?
> Stupidity has no limits...
Blame the hardware designers... and give me my big allocations.
On Mon, Sep 25, 2000 at 01:22:40PM -0300, Rik van Riel wrote:
> whereas the old allocator could break down even when
> we still had enough swap free
As far as I can see, that's a bug that you hid by introducing a deadlock.
Andrea
On Mon, 25 Sep 2000, Ingo Molnar wrote:
> On Mon, 25 Sep 2000, Andi Kleen wrote:
>
> > Another thing I would worry about are ports with multiple user page
> > sizes in 2.5. Another ugly case is the x86-64 port which has 4K pages
> > but may likely need a 16K kernel stack due to the 64bit stack bloat.
On Mon, Sep 25, 2000 at 06:18:17PM +0200, Andi Kleen wrote:
> On Mon, Sep 25, 2000 at 06:19:07PM +0200, Ingo Molnar wrote:
> > > Another thing I would worry about are ports with multiple user page
> > > sizes in 2.5. Another ugly case is the x86-64 port which has 4K pages
> > > but may likely need a 16K kernel stack due to the 64bit stack bloat.
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> On Mon, Sep 25, 2000 at 04:42:49PM +0100, Stephen C. Tweedie wrote:
> > Progress is made, clean pages are discarded and dirty ones queued for
>
> How can you make progress if there isn't swap available and all the
> freeable page/buffer cache has just been freed?
On Mon, Sep 25, 2000 at 06:22:42PM +0200, Ingo Molnar wrote:
> yep, i agree. I'm not sure what the biggest allocation is, some drivers
> might use megabytes or contiguous RAM?
I'm not sure (we should grep all the drivers to be sure...) but I bet the old
2.2.0 MAX_ORDER #define will work for every
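A sketch of what that #define bounds (the value below is illustrative,
not the historical one): the buddy allocator keeps free lists for
orders 0 through MAX_ORDER-1, so the largest contiguous block it can
ever hand out is PAGE_SIZE << (MAX_ORDER - 1).

	#define PAGE_SIZE	4096UL
	#define MAX_ORDER	6	/* illustrative: caps blocks at 128K */

	/* largest possible allocation: 4096UL << 5 = 131072 bytes (128K) */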
On Mon, 25 Sep 2000, Alan Cox wrote:
> > > GFP_KERNEL has to be able to fail for 2.4. Otherwise you can get
> > > everything jammed in kernel space waiting on GFP_KERNEL and if the
> > > swapper cannot make space you die.
> >
> > if one can get everything jammed waiting for GFP_KERNEL, and not being
> > able to deallocate anything, that's a VM o
On Mon, Sep 25, 2000 at 06:19:07PM +0200, Ingo Molnar wrote:
> > Another thing I would worry about are ports with multiple user page
> > sizes in 2.5. Another ugly case is the x86-64 port which has 4K pages
> > but may likely need a 16K kernel stack due to the 64bit stack bloat.
>
> yep, but thes
On Mon, 25 Sep 2000, Ingo Molnar wrote:
> On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
>
> > > ie. 99.45% of all allocations are single-page! 0.50% is the 8kb
> >
> > You're right. That's why it's a waste to have so many orders in the
> > buddy allocator. [...]
>
> yep, i agree. I'm not sure what the biggest allocation is, some drivers
> might use megabytes of contiguous RAM?
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> > ie. 99.45% of all allocations are single-page! 0.50% is the 8kb
>
> You're right. That's why it's a waste to have so many orders in the
> buddy allocator. [...]
yep, i agree. I'm not sure what the biggest allocation is, some drivers
might use megabytes of contiguous RAM?
On Mon, 25 Sep 2000, Andi Kleen wrote:
> An important exception in 2.2/2.4 is NFS with bigger rsize (will be fixed
> in 2.5, but 2.4 does it this way). For an 8K r/wsize you need reliable
> (=GFP_ATOMIC) 16K allocations.
the discussion does not affect GFP_ATOMIC - GFP_ATOMIC allocators *must
On Mon, Sep 25, 2000 at 06:02:18PM +0200, Ingo Molnar wrote:
> Frankly, how often do we allocate multi-order pages? I've just made quick
> statistics wrt. how allocation orders are distributed on a more or less
> typical system:
>
> (ALLOC ORDER)
> 0: 167081
> 1: 850
> 2:
On Mon, Sep 25, 2000 at 06:02:18PM +0200, Ingo Molnar wrote:
> Frankly, how often do we allocate multi-order pages? I've just made quick
The deadlock Alan pointed out can also happen with single-page allocations
if we put a loop in GFP_KERNEL in 2.4.x-current.
> ie. 99.45% of all allocations are
On Mon, Sep 25, 2000 at 04:42:49PM +0100, Stephen C. Tweedie wrote:
> Progress is made, clean pages are discarded and dirty ones queued for
How can you make progress if there isn't swap available and all the
freeable page/buffer cache has just been freed? The deadlock happens
in OOM condition (not
On Mon, 25 Sep 2000, Andrea Arcangeli wrote:
> Ingo's point is that the underlined line won't ever happen in the
> first place
please don't misinterpret my point ...
Frankly, how often do we allocate multi-order pages? I've just made quick
statistics wrt. how allocation orders are distributed o
Hi,
On Mon, Sep 25, 2000 at 04:16:56PM +0100, Alan Cox wrote:
>
> Unless I'm missing something here, think about this case
>
> 2 active processes, no swap
>
> #1            #2
> kmalloc 32K   kmalloc 16K
> OK            OK
>
On Mon, Sep 25, 2000 at 04:16:56PM +0100, Alan Cox wrote:
> Unless I'm missing something here, think about this case
>
> 2 active processes, no swap
>
> #1            #2
> kmalloc 32K   kmalloc 16K
> OK            OK
> kmalloc
> > GFP_KERNEL has to be able to fail for 2.4. Otherwise you can get
> > everything jammed in kernel space waiting on GFP_KERNEL and if the
> > swapper cannot make space you die.
>
> if one can get everything jammed waiting for GFP_KERNEL, and not being
able to deallocate anything, that's a VM o