On Thu, 2007-01-18 at 13:41 +0300, Evgeniy Polyakov wrote:
> > > What about the 'level-7' ack you described in the introduction?
> >
> > Take NFS, it does full data traffic in kernel.
>
> The NFS case is exactly the situation where you only need to generate an ACK.
No, it is not; it needs the full RPC re
On Wed, Jan 17, 2007 at 10:07:28AM +0100, Peter Zijlstra ([EMAIL PROTECTED])
wrote:
> > You operate with 'current' in different contexts without any locks which
> > looks racy and even is not allowed. What will be 'current' for
> > netif_rx() case, which schedules softirq from hard irq context -
>
On Wed, 2007-01-17 at 10:12 +0100, Pavel Machek wrote:
> Hi!
>
> > These patches implement the basic infrastructure to allow swap over
> > networked
> > storage.
> >
> > The basic idea is to reserve some memory up front to use when regular memory
> > runs out.
> >
> > To bound network behaviour
Hi!
> These patches implement the basic infrastructure to allow swap over networked
> storage.
>
> The basic idea is to reserve some memory up front to use when regular memory
> runs out.
>
> To bound network behaviour we accept only a limited number of concurrent
> packets and drop those packe
On Wed, 2007-01-17 at 07:54 +0300, Evgeniy Polyakov wrote:
> On Tue, Jan 16, 2007 at 05:08:15PM +0100, Peter Zijlstra ([EMAIL PROTECTED])
> wrote:
> > On Tue, 2007-01-16 at 18:33 +0300, Evgeniy Polyakov wrote:
> > > On Tue, Jan 16, 2007 at 02:47:54PM +0100, Peter Zijlstra ([EMAIL
> > > PROTECTED]
On Tue, Jan 16, 2007 at 05:08:15PM +0100, Peter Zijlstra ([EMAIL PROTECTED])
wrote:
> On Tue, 2007-01-16 at 18:33 +0300, Evgeniy Polyakov wrote:
> > On Tue, Jan 16, 2007 at 02:47:54PM +0100, Peter Zijlstra ([EMAIL
> > PROTECTED]) wrote:
> > > > > + if (unlikely(skb->emergency))
> > > > > +
On Tue, 2007-01-16 at 18:33 +0300, Evgeniy Polyakov wrote:
> On Tue, Jan 16, 2007 at 02:47:54PM +0100, Peter Zijlstra ([EMAIL PROTECTED])
> wrote:
> > > > + if (unlikely(skb->emergency))
> > > > + current->flags |= PF_MEMALLOC;
> > >
> > > Access to 'current' in netif_receive_skb()???
On Tue, Jan 16, 2007 at 02:47:54PM +0100, Peter Zijlstra ([EMAIL PROTECTED])
wrote:
> > > + if (unlikely(skb->emergency))
> > > + current->flags |= PF_MEMALLOC;
> >
> > Access to 'current' in netif_receive_skb()???
> > Why do you want to work with, for example keventd?
>
> Can this run i
On Tue, 2007-01-16 at 16:25 +0300, Evgeniy Polyakov wrote:
> On Tue, Jan 16, 2007 at 10:46:06AM +0100, Peter Zijlstra ([EMAIL PROTECTED])
> wrote:
> > @@ -1767,10 +1767,23 @@ int netif_receive_skb(struct sk_buff *sk
> > struct net_device *orig_dev;
> > int ret = NET_RX_DROP;
> > __be1
On Tue, Jan 16, 2007 at 10:46:06AM +0100, Peter Zijlstra ([EMAIL PROTECTED])
wrote:
> In order to provide robust networked storage there must be a guarantee
> of progress. That is, the storage device must never stall because of
> (physical)
> OOM, because the device itself might be needed to get
In order to provide robust networked storage there must be a guarantee
of progress. That is, the storage device must never stall because of (physical)
OOM, because the device itself might be needed to get out of it (reclaim).
This means that the device must always find enough memory to build/send
These patches implement the basic infrastructure to allow swap over networked
storage.
The basic idea is to reserve some memory up front to use when regular memory
runs out.
To bound network behaviour we accept only a limited number of concurrent
packets and drop those packets that are not aimed
On Thursday, June 28, 2001 01:21:28 PM +1000 Andrew Morton
<[EMAIL PROTECTED]> wrote:
> Chris Mason wrote:
>>
>> ...
>> The work around I've been using is the dirty_inode method. Whenever
>> mark_inode_dirty is called, reiserfs logs the dirty inode. This means
>> inode changes are _always_ r
Marcelo Tosatti wrote:
> On Wed, 27 Jun 2001, Xuan Baldauf wrote:
>
> > Hello,
> >
> > I'm not sure whether this is a reiserfs bug or a kernel bug,
> > so I'm posting to both lists...
> >
> > My linux box suddenly was not available using ssh|telnet,
> > but it responded to pings. On console logi
On Fri, 22 Sep 2000, Linus Torvalds wrote:
> On Fri, 22 Sep 2000, Molnar Ingo wrote:
> >
> > i'm still getting VM related lockups during heavy write load, in
> > test9-pre5 + your 2.4.0-t9p2-vmpatch (which i understand as being your
> > last VM related fix-patch, correct?). Here is a histogram of
On Fri, 22 Sep 2000, Molnar Ingo wrote:
>
> i'm still getting VM related lockups during heavy write load, in
> test9-pre5 + your 2.4.0-t9p2-vmpatch (which i understand as being your
> last VM related fix-patch, correct?). Here is a histogram of such a
> lockup:
Rik,
those VM patches are goin
I also encounter instant lockup of test9-pre3 + t9p2-vmpatch / SMP (two CPU).
under high I/O via UNIX domain sockets:
- running 10 simple tasks doing
#define BUFFERSIZE 204800
for (j = 0; ; j++) {
if (socketpair(PF_LOCAL, SOCK_STREAM, 0, p) == -1) {
If the process that barfed is swapper then this is the oops that I got
in test9-pre4 w/o any patches.
http://marc.theaimsgroup.com/?l=linux-kernel&m=96936789621245&w=2
On Fri, 22 Sep 2000, André Dahlqvist wrote:
> On Fri, Sep 22, 2000 at 07:27:30AM -0300, Rik van Riel wrote:
>
> > Linus,
> >
> I had to type the oops down by hand, but I will provide ksymoops
> output soon if you need it.
Let's hope I typed down the oops from the screen without mistakes. Here
is the ksymoops output:
ksymoops 2.3.4 on i586 2.4.0-test9. Options used
-V (default)
-k 2922143001.ksyms (spec
On Fri, Sep 22, 2000 at 07:27:30AM -0300, Rik van Riel wrote:
> Linus,
>
> could you please include this patch in the next
> pre patch?
Rik,
I just had an oops with this patch applied. I ran into BUG at
buffer.c:730. The machine was not under load when the oops occurred, I
was just reading e-ma
On Thu, 21 Sep 2000, Rik van Riel wrote:
> I've found and fixed the deadlocks in the new VM. They turned out
> to be single-cpu only bugs, which explains why they didn't crash my
> SMP test box ;)
Hi,
tried
> http://www.surriel.com/patches/2.4.0-t9p2-vmpatch
applied to 2.4.0-t9p4 on UP box b
On Fri, 22 Sep 2000, James Lewis Nance wrote:
> On Thu, Sep 21, 2000 at 01:44:35PM -0300, Rik van Riel wrote:
>
> > I've found and fixed the deadlocks in the new VM. They turned out
> > to be single-cpu only bugs, which explains why they didn't crash my
> > SMP test box ;)
>
> I applied the pa
On Fri, 22 Sep 2000, Molnar Ingo wrote:
> yep this has done the trick, the deadlock is gone. I've attached the full
> VM-fixes patch (this fix included) against vanilla test9-pre5.
Linus,
could you please include this patch in the next
pre patch?
(in the mean time, I'll go back to looking at t
yep this has done the trick, the deadlock is gone. I've attached the full
VM-fixes patch (this fix included) against vanilla test9-pre5.
Ingo
--- linux/fs/buffer.c.orig Fri Sep 22 02:31:07 2000
+++ linux/fs/buffer.c Fri Sep 22 02:31:13 2000
@@ -706,9 +706,7 @@
static void re
On Thu, Sep 21, 2000 at 01:44:35PM -0300, Rik van Riel wrote:
> I've found and fixed the deadlocks in the new VM. They turned out
> to be single-cpu only bugs, which explains why they didn't crash my
> SMP test box ;)
I applied the patches and ran my "build mozilla with mem=48M" test again.
It
On Fri, 22 Sep 2000, Rik van Riel wrote:
> 894 if (current->need_resched && !(gfp_mask & __GFP_IO)) {
> 895 __set_current_state(TASK_RUNNING);
> 896 schedule();
> 897 }
> The idea was to not allow processes which have IO locks
> to schedul
On Fri, 22 Sep 2000, Molnar Ingo wrote:
> i'm still getting VM related lockups during heavy write load, in
> test9-pre5 + your 2.4.0-t9p2-vmpatch (which i understand as being your
> last VM related fix-patch, correct?). Here is a histogram of such a
> lockup:
> this lockup happens both during va
btw. - no swapdevice here.
Ingo
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/
i'm still getting VM related lockups during heavy write load, in
test9-pre5 + your 2.4.0-t9p2-vmpatch (which i understand as being your
last VM related fix-patch, correct?). Here is a histogram of such a
lockup:
1 Trace; 4010a720 <__switch_to+38/e8>
5 Trace; 4010a74b <__switch_to+63/
On Thu, 21 Sep 2000, David S. Miller wrote:
> How did you get away with adding a new member to task_struct yet
> not updating the INIT_TASK() macro appropriately? :-) Does it
> really compile?
There are a lot of fields in the task_struct which
do not have fields declared in the INIT_TASK macro.
Date: Fri, 22 Sep 2000 02:18:05 +0200
From: Andrea Arcangeli <[EMAIL PROTECTED]>
As long as sleep_time is OK to be zero, its missing
initialization is correct.
Indeed.
Later,
David S. Miller
[EMAIL PROTECTED]
On Thu, Sep 21, 2000 at 03:23:17PM -0700, David S. Miller wrote:
>
> How did you get away with adding a new member to task_struct yet not
> updating the INIT_TASK() macro appropriately? :-) Does it really
> compile?
As long as sleep_time is OK to be zero, its missing initialization is
correct.
Hi again,
Further hints.
More testing (printks in refill_inactive and page_launder)
reveals that refill_inactive works ok (16 pages) but
page_launder never succeeds in my lockup state... (WHY)
alloc fails since there is no inactive_clean and free is
less than MIN. And then when page_launder fai
How did you get away with adding a new member to task_struct yet not
updating the INIT_TASK() macro appropriately? :-) Does it really
compile?
Later,
David S. Miller
[EMAIL PROTECTED]
Hi,
Tried your patch on 2.4.0-test9-pre4
with the included debug patch applied.
Rebooted, started mmap002
After a while it starts outputting (magic did not work
this time - usually does):
- - -
"VM: try_to_free_pages (result: 1) try_again # 12345"
"VM: try_to_free_pages (result: 1) try_again #
Hi,
I've found and fixed the deadlocks in the new VM. They turned out
to be single-cpu only bugs, which explains why they didn't crash my
SMP test box ;)
They have to do with the fact that processes schedule away while
holding IO locks after waking up kswapd. At that point kswapd
spends its ti