On Wed, Sep 05, 2007 at 05:14:06AM -0700, Christoph Lameter wrote:
> On Wed, 5 Sep 2007, Nick Piggin wrote:
>
> > However I really have an aversion to the near enough is good enough way of
> > thinking. Especially when it comes to fundamental deadlocks in the VM. I
> > do
On Thu, Sep 06, 2007 at 01:05:40AM +0200, Oleg Verych wrote:
> * Sun, 2 Sep 2007 20:36:19 +0200
> >
>
> I see that in many places all pre-checks are done in negative form,
> with a resulting return or jump out. In that case, if the function was
> called, what is the likely() path?
>
> > +static void resize
On Thu, Aug 02, 2007 at 05:52:28PM -0700, Christoph Lameter wrote:
> On Fri, 3 Aug 2007, Nick Piggin wrote:
>
> > > Add a (slow) kmalloc_policy? Strict Object round robin for interleave
> > > right? It probably needs its own RR counter otherwise it disturbs the
On Thu, Aug 02, 2007 at 11:33:39AM -0700, Martin Bligh wrote:
> Nick Piggin wrote:
> >On Wed, Aug 01, 2007 at 03:52:11PM -0700, Martin Bligh wrote:
> >>>And so forth. Initial forks will balance. If the children refuse to
> >>>die, forks will continue to bala
On Thu, Aug 02, 2007 at 05:44:47PM +0200, Ingo Molnar wrote:
>
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > > > > One thing to check out is whether the lmbench numbers are
> > > > > "correct". Especially on SMP systems, the lmbe
On Thu, Aug 02, 2007 at 12:58:13PM -0700, Christoph Lameter wrote:
> On Thu, 2 Aug 2007, Nick Piggin wrote:
>
> > > It does in the sense that slabs are allocated following policies. If you
> > > want to place individual objects then you need to use kmalloc_node().
>
On Thu, Aug 02, 2007 at 06:02:56PM -0700, Christoph Lameter wrote:
> On Fri, 3 Aug 2007, Nick Piggin wrote:
>
> > > Ok. So MPOL_BIND on a single node. We would have to save the current
> > > memory policy on the stack and then restore it later. Then you would need
>
On Thu, Aug 02, 2007 at 06:34:04PM -0700, Christoph Lameter wrote:
> On Fri, 3 Aug 2007, Nick Piggin wrote:
>
> > Yeah it only gets set if the parent is initially using a default policy
> > at this stage (and then is restored afterwards of course).
>
> Uggh. Looks li
On Fri, Aug 03, 2007 at 01:10:13PM -0700, Suresh B wrote:
> On Fri, Aug 03, 2007 at 02:20:10AM +0200, Nick Piggin wrote:
> > On Thu, Aug 02, 2007 at 11:33:39AM -0700, Martin Bligh wrote:
> > > Nick Piggin wrote:
> > > >On Wed, Aug 01, 2007 at 03:52:11PM -0700, Marti
Matthew Hawkins wrote:
On 7/25/07, Nick Piggin <[EMAIL PROTECTED]> wrote:
I guess /proc/meminfo, /proc/zoneinfo, /proc/vmstat, /proc/slabinfo
before and after the updatedb run with the latest kernel would be a
first step. top and vmstat output during the run wouldn't hurt either.
It has not. Concerns that were raised (specifically by Nick Piggin)
weren't being addressed.
I may have missed them, but what I saw from him weren't specific issues,
but instead a nebulous 'something better may come along later'
Something better, ie. the problems wit
On Sat, Aug 04, 2007 at 08:50:37AM +0200, Ingo Molnar wrote:
>
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > Oh good. Thanks for getting to the bottom of it. We have normally
> > disliked too much runtime tunables in the scheduler, so I assume these
> >
--- [EMAIL PROTECTED] wrote:
> On Mon, 6 Aug 2007, Nick Piggin wrote:
>
> > [EMAIL PROTECTED] wrote:
> >> On Sun, 29 Jul 2007, Rene Herman wrote:
> >>
> >> > On 07/29/2007 01:41 PM, [EMAIL PROTECTED] wrote:
> >> >
> >> > &g
On Mon, Aug 06, 2007 at 11:40:55AM -0700, Andrew Morton wrote:
> On Thu, 2 Aug 2007 07:24:46 +0200 Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > Rather than sign direct radix-tree pointers with a special bit, sign
> > the indirect one that hangs off the root.
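[Editor's note: for readers without the lockless-pagecache context, the idea above is to tag only the root's pointer to an interior node rather than tagging every direct item pointer. Below is a minimal userspace sketch of low-bit pointer tagging; the names are mine, not the kernel radix-tree code's.]

#include <assert.h>
#include <stdint.h>

/* An "indirect" pointer refers to an interior node rather than a data
 * item.  Node alignment guarantees the low bit is normally zero, so it
 * can be borrowed as the tag. */
#define INDIRECT_BIT	1UL

static inline void *tag_indirect(void *node)
{
	return (void *)((uintptr_t)node | INDIRECT_BIT);
}

static inline int is_indirect(const void *ptr)
{
	return (uintptr_t)ptr & INDIRECT_BIT;
}

static inline void *untag_indirect(void *ptr)
{
	return (void *)((uintptr_t)ptr & ~INDIRECT_BIT);
}

int main(void)
{
	long node;		/* stands in for a radix_tree_node */
	void *root = tag_indirect(&node);

	assert(is_indirect(root));
	assert(untag_indirect(root) == &node);
	return 0;
}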
do they even get bloated up
with that break_lock then?).
Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>
---
Index: linux-2.6/include/linux/sched.h
===
--- linux-2.6.orig/include/linux/sched.h
+++ linux-2.6/include/linux/s
tical sections short, and ensure locks are reasonably fair (which this
patch does).
Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>
---
Index: linux-2.6/include/asm-x86_64/spinlock.h
===
--- linux-2.6.orig/include/asm-x86_64/spinlock.
On Wed, Aug 08, 2007 at 01:31:58PM -0400, [EMAIL PROTECTED] wrote:
> On Wed, 08 Aug 2007 06:24:44 +0200, Nick Piggin said:
>
> > After this, we can no longer spin on any locks with preempt enabled,
> > and cannot reenable interrupts when spinning on an irq safe lock, because
&g
On Wed, Aug 08, 2007 at 12:26:55PM +0200, Andi Kleen wrote:
>
> > *
> > * (the type definitions are in asm/spinlock_types.h)
> > */
> >
> > +#if (NR_CPUS > 256)
> > +#error spinlock supports a maximum of 256 CPUs
> > +#endif
> > +
> > static inline int __raw_spin_is_locked(raw_spinlock_t
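[Editor's note: the 256-CPU limit in the quoted hunk follows from packing the ticket counters into single bytes. Below is a hedged userspace sketch of the scheme using C11 atomics; it illustrates the idea only and is not the asm-x86_64 code under review.]

#include <stdatomic.h>
#include <stdint.h>

/* "next" is the ticket handed to the next arriving CPU, "owner" the
 * ticket currently being served.  With 8-bit counters no more than 256
 * CPUs can be queued without wrap-around ambiguity, hence the
 * NR_CPUS > 256 build error above. */
struct ticket_lock {
	_Atomic uint8_t next;
	_Atomic uint8_t owner;
};

static void ticket_lock(struct ticket_lock *l)
{
	uint8_t me = atomic_fetch_add(&l->next, 1);	/* take a ticket */

	while (atomic_load(&l->owner) != me)
		;					/* wait to be served */
}

static void ticket_unlock(struct ticket_lock *l)
{
	atomic_fetch_add(&l->owner, 1);		/* serve the next ticket */
}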
On Thu, Aug 09, 2007 at 04:37:35PM +0100, Hugh Dickins wrote:
> On Thu, 9 Aug 2007, Mariusz Kozlowski wrote:
> > Hello,
> >
> > Nothing unusual happening, allmodconfig compiling etc.
> > Not sure why it says kernel was tainted though ... hmmm.
> >
> > [ cut here ]
> >
Andrew Morton wrote:
On Thu, 26 Jul 2007 15:53:37 +1000 Nick Piggin <[EMAIL PROTECTED]> wrote:
Not that I want to say anything about swap prefetch getting merged: my
inbox is already full of enough "helpful suggestions" about that,
give them the kernel interfaces, they can
On Thu, Jul 26, 2007 at 03:34:56PM -0700, Suresh B wrote:
> On Fri, Jul 27, 2007 at 12:18:30AM +0200, Ingo Molnar wrote:
> >
> > * Siddha, Suresh B <[EMAIL PROTECTED]> wrote:
> >
> > > Introduce SD_BALANCE_FORK for HT/MC/SMP domains.
> > >
> > > For HT/MC, as caches are shared, SD_BALANCE_FORK i
lt 64BIT
Index: linux-2.6/mm/Makefile
===
--- linux-2.6.orig/mm/Makefile
+++ linux-2.6/mm/Makefile
@@ -29,4 +29,4 @@ obj-$(CONFIG_FS_XIP) += filemap_xip.o
obj-$(CONFIG_MIGRATION) += migrate.o
obj-$(CONFIG_SMP) += allocpercpu.o
On Fri, Jul 27, 2007 at 10:30:47AM -0400, Lee Schermerhorn wrote:
> On Fri, 2007-07-27 at 10:42 +0200, Nick Piggin wrote:
> > Hi,
> >
> > Just got a bit of time to take another look at the replicated pagecache
> > patch. The nopage vs invalidate race and clear_page_dirt
On Mon, Jul 30, 2007 at 01:35:19PM -0700, Suresh B wrote:
> On Mon, Jul 30, 2007 at 11:20:04AM -0700, Christoph Lameter wrote:
> > On Fri, 27 Jul 2007, Siddha, Suresh B wrote:
>
> > > Observation #2: This introduces some migration overhead during IO
> > > submission.
> > > With the current protot
Hi,
I haven't given this idea testing yet, but I just wanted to get some
opinions on it first. NUMA placement still isn't ideal (eg. tasks with
a memory policy will not do any placement, and process migrations of
course will leave the memory behind...), but it does give a bit more
chance for the m
On Tue, Jul 31, 2007 at 10:01:14AM +0200, Ingo Molnar wrote:
>
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > This patch uses memory policies to attempt to improve this. It
> > requires that we ask the scheduler to suggest the child's new CPU
> >
On Tue, Jul 31, 2007 at 11:14:08AM +0200, Andi Kleen wrote:
> On Tuesday 31 July 2007 07:41, Nick Piggin wrote:
>
> > I haven't given this idea testing yet, but I just wanted to get some
> > opinions on it first. NUMA placement still isn't ideal (eg. tasks with
> &g
On Tue, Jul 31, 2007 at 10:14:03AM -0700, Suresh B wrote:
> On Tue, Jul 31, 2007 at 06:19:17AM +0200, Nick Piggin wrote:
> > On Mon, Jul 30, 2007 at 01:35:19PM -0700, Suresh B wrote:
> > > So any suggestions for making this clean and acceptable to everyone?
> >
> >
On Tue, Jul 31, 2007 at 05:55:13PM -0700, Suresh B wrote:
> On Wed, Aug 01, 2007 at 02:41:18AM +0200, Nick Piggin wrote:
> > On Tue, Jul 31, 2007 at 10:14:03AM -0700, Suresh B wrote:
> > > task by taking away some of its cpu time. With CFS micro accounting,
> > > per
On Wed, Aug 01, 2007 at 03:52:11PM -0700, Martin Bligh wrote:
>
> >And so forth. Initial forks will balance. If the children refuse to
> >die, forks will continue to balance. If the parent starts seeing short
> >lived children, fork()s will eventually start to stay local.
>
> Fork without ex
Hi,
I didn't follow all of the scheduler debates and flamewars, so apologies
if this was already covered. Anyway.
lmbench 3 lat_ctx context switching time with 2 processes bound to a
single core increases by between 25%-35% on my Core2 system (didn't do
enough runs to get more significance, but i
On Wed, Aug 01, 2007 at 07:31:26PM -0700, Linus Torvalds wrote:
>
>
> On Thu, 2 Aug 2007, Nick Piggin wrote:
> >
> > lmbench 3 lat_ctx context switching time with 2 processes bound to a
> > single core increases by between 25%-35% on my Core2 system (didn'
On Tue, Jul 31, 2007 at 04:40:18PM -0700, Christoph Lameter wrote:
> On Tue, 31 Jul 2007, Andi Kleen wrote:
>
> > On Tuesday 31 July 2007 07:41, Nick Piggin wrote:
> >
> > > I haven't given this idea testing yet, but I just wanted to get some
> > > o
not affect slot lookups which occur under lock -- they
can never return an invalid result. Is needed in future for lockless
pagecache.
Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>
Index: linux-2.6/include/linux/radix-
On Thu, Aug 02, 2007 at 09:19:56AM +0200, Ingo Molnar wrote:
>
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > > One thing to check out is whether the lmbench numbers are "correct".
> > > Especially on SMP systems, the lmbench numbers are actually
On Sat, Aug 11, 2007 at 02:07:43AM +0200, Andi Kleen wrote:
>
> Nick,
>
> These two patches make my P4 (single socket HT) test box not boot. I dropped
> them for now.
>
> Some oopses
Sorry, the trylock had a race where it would not work correctly :(
Have fixed it now and will do more testing a
On Fri, Aug 10, 2007 at 05:08:18PM -0400, Lee Schermerhorn wrote:
> On Wed, 2007-08-08 at 16:25 -0400, Lee Schermerhorn wrote:
> > On Fri, 2007-07-27 at 10:42 +0200, Nick Piggin wrote:
> > > Hi,
> > >
> > > Just got a bit of time to take another look at the r
On Mon, Aug 13, 2007 at 10:05:01AM -0400, Lee Schermerhorn wrote:
> On Mon, 2007-08-13 at 09:43 +0200, Nick Piggin wrote:
> >
> > Replication may be putting more stress on some locks. It will cause more
> > tlb flushing that can not be batched well, which could cause the
On Mon, Aug 13, 2007 at 08:00:38PM -0700, Andrew Morton wrote:
> On Mon, 13 Aug 2007 14:30:31 +0200 Jens Axboe <[EMAIL PROTECTED]> wrote:
>
> > On Mon, Aug 06 2007, Nick Piggin wrote:
> > > > > What CPU did you get these numbers on? Do the indirect calls hurt
&
Paul E. McKenney wrote:
On Mon, Aug 13, 2007 at 01:15:52PM +0800, Herbert Xu wrote:
Paul E. McKenney <[EMAIL PROTECTED]> wrote:
On Sat, Aug 11, 2007 at 08:54:46AM +0800, Herbert Xu wrote:
Chris Snook <[EMAIL PROTECTED]> wrote:
cpu_relax() contains a barrier, so it should do the right thing
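[Editor's note: this is the usual argument for why open-coded polling loops are fine without volatile. cpu_relax() includes a compiler barrier, so the flag really is reloaded on every pass. A minimal stand-alone illustration follows; the macro is a simplified x86-style stand-in, not the kernel's definition.]

/* The memory clobber forces the compiler to discard any register-cached
 * copy of memory, so "flag" is re-read on every iteration even though it
 * is not declared volatile. */
#define cpu_relax()	__asm__ __volatile__("rep; nop" ::: "memory")

extern int flag;	/* written by another CPU or an interrupt handler */

void wait_for_flag(void)
{
	while (!flag)
		cpu_relax();
}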
Chris Snook wrote:
David Howells wrote:
Chris Snook <[EMAIL PROTECTED]> wrote:
cpu_relax() contains a barrier, so it should do the right thing. For
non-smp
architectures, I'm concerned about interacting with interrupt
handlers. Some
drivers do use atomic_* operations.
I'm not sure that
On Tue, Aug 14, 2007 at 07:21:03AM -0700, Christoph Lameter wrote:
> The following patchset implements recursive reclaim. Recursive reclaim
> is necessary if we run out of memory in the writeout patch from reclaim.
>
> This is f.e. important for stacked filesystems or anything that does
> complica
Paul E. McKenney wrote:
On Tue, Aug 14, 2007 at 03:34:25PM +1000, Nick Piggin wrote:
Maybe it is the safe way to go, but it does obscure cases where there
is a real need for barriers.
I prefer burying barriers into other primitives.
When they should naturally be there, eg. locking or the
Segher Boessenkool wrote:
Please check the definition of "cache coherence".
Which of the twelve thousand such definitions? :-)
Every definition I have seen says that writes to a single memory
location have a serial order as seen by all CPUs, and that a read
will return the most recent write
Paul E. McKenney wrote:
On Wed, Aug 15, 2007 at 11:30:05PM +1000, Nick Piggin wrote:
Especially since several big architectures don't have volatile in their
atomic_get and _set, I think it would be a step backwards to add them in
as a "just in case" thin now (unless there is
Herbert Xu wrote:
On Wed, Aug 15, 2007 at 01:02:23PM -0400, Chris Snook wrote:
Herbert Xu wrote:
I'm still unconvinced why we need this because nobody has
brought up any examples of kernel code that legitimately
need this.
There's plenty of kernel code that *wants* this though. If we can
Segher Boessenkool wrote:
Part of the motivation here is to fix heisenbugs. If I knew where they
By the same token we should probably disable optimisations
altogether since that too can create heisenbugs.
Almost everything is a tradeoff; and so is this. I don't
believe most people would f
On Tue, Aug 14, 2007 at 08:30:21AM -0700, Christoph Lameter wrote:
> This is the extended version of the reclaim patchset. It enables reclaim from
> clean file backed pages during GFP_ATOMIC allocs. A bit invasive since
> many locks must now be taken with saving flags. But it works.
>
> Tested by r
On Tue, Aug 14, 2007 at 02:41:21PM -0500, Adam Litke wrote:
> It seems a simple mistake was made when converting follow_hugetlb_page()
> over to the VM_FAULT flags bitmask stuff:
> (commit 83c54070ee1a2d05c89793884bea1a03f2851ed4).
>
> By using the wrong bitmask, hugetlb_fault() failures ar
defined
> as PAGE_SHIFT, but should that ever change this calculation would break.
>
> Signed-off-by: Dean Nelson <[EMAIL PROTECTED]>
>
Acked-by: Nick Piggin <[EMAIL PROTECTED]>
>
> Index: linux-2.6/mm/memory.c
> ==
On Wed, Aug 15, 2007 at 03:12:06PM +0200, Peter Zijlstra wrote:
> On Wed, 2007-08-15 at 14:22 +0200, Nick Piggin wrote:
> > On Tue, Aug 14, 2007 at 07:21:03AM -0700, Christoph Lameter wrote:
> > > The following patchset implements recursive reclaim. Recursive reclaim
> >
Segher Boessenkool wrote:
Part of the motivation here is to fix heisenbugs. If I knew where
they
By the same token we should probably disable optimisations
altogether since that too can create heisenbugs.
Almost everything is a tradeoff; and so is this. I don't
believe most people would f
Chris Snook wrote:
Herbert Xu wrote:
On Thu, Aug 16, 2007 at 03:48:54PM -0400, Chris Snook wrote:
Can you find an actual atomic_read code snippet there that is
broken without the volatile modifier?
A whole bunch of atomic_read uses will be broken without the volatile
modifier once we start
Paul E. McKenney wrote:
On Thu, Aug 16, 2007 at 06:42:50PM +0800, Herbert Xu wrote:
In fact, volatile doesn't guarantee that the memory gets
read anyway. You might be reading some stale value out
of the cache. Granted this doesn't happen on x86 but
when you're coding for the kernel you can't
Paul Mackerras wrote:
Nick Piggin writes:
So i386 and x86-64 don't have volatiles there, and it saves them a
few K of kernel text. What you need to justify is why it is a good
I'm really surprised it's as much as a few K. I tried it on powerpc
and it only saved 40 bytes (
Paul Mackerras wrote:
Nick Piggin writes:
Why are people making these undocumented and just plain false
assumptions about atomic_t?
Well, it has only been false since December 2006. Prior to that
atomics *were* volatile on all platforms.
Hmm, although I don't think it has ever
Satyam Sharma wrote:
#define atomic_read_volatile(v)			\
	({					\
		forget((v)->counter);		\
		((v)->counter);			\
	})
where:
*vomit* :)
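[Editor's note: forget() in the quoted proposal is meant as a single-object compiler barrier. The line below is my reconstruction of its rough shape and may not match the exact definition posted in the thread.]

/* An empty asm that names the variable as a memory output: gcc must
 * assume it changed, so any register-cached copy is dropped and the next
 * access reloads it.  Unlike barrier(), nothing else is clobbered. */
#define forget(x)	__asm__ __volatile__("" : "=m" (x) : "m" (x))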
Stefan Richter wrote:
Nick Piggin wrote:
I don't know why people would assume volatile of atomics. AFAIK, most
of the documentation is pretty clear that all the atomic stuff can be
reordered etc. except for those that modify and return a value.
Which documentation is there?
Document
Satyam Sharma wrote:
On Fri, 17 Aug 2007, Herbert Xu wrote:
On Fri, Aug 17, 2007 at 01:43:27PM +1000, Paul Mackerras wrote:
BTW, the sort of missing barriers that triggered this thread
aren't that subtle. It'll result in a simple lock-up if the
loop condition holds upon entry. At which poi
Satyam Sharma wrote:
On Fri, 17 Aug 2007, Nick Piggin wrote:
Sure, now
that I learned of these properties I can start to audit code and insert
barriers where I believe they are needed, but this simply means that
almost all occurrences of atomic_read will get barriers (unless there
already
Satyam Sharma wrote:
On Fri, 17 Aug 2007, Nick Piggin wrote:
Also, why would you want to make these insane accessors for atomic_t
types? Just make sure everybody knows the basics of barriers, and they
can apply that knowledge to atomic_t and all other lockless memory
accesses as well
Satyam Sharma wrote:
On Fri, 17 Aug 2007, Nick Piggin wrote:
Satyam Sharma wrote:
[...]
Granted, the above IS buggy code. But, the stated objective is to avoid
heisenbugs.
^^
Anyway, why are you making up code snippets that are buggy in other
ways in order to support this
Satyam Sharma wrote:
On Fri, 17 Aug 2007, Nick Piggin wrote:
Satyam Sharma wrote:
It is very obvious. msleep calls schedule() (ie. sleeps), which is
always a barrier.
Probably you didn't mean that, but no, schedule() is not a barrier because
it sleeps. It's a barrier be
Satyam Sharma wrote:
On Fri, 17 Aug 2007, Nick Piggin wrote:
I think they would both be equally ugly,
You think both these are equivalent in terms of "looks":
|
while (!atomic_read(&v)) { | while (!atom
Satyam Sharma wrote:
On Fri, 17 Aug 2007, Nick Piggin wrote:
Because they should be thinking about them in terms of barriers, over
which the compiler / CPU is not to reorder accesses or cache memory
operations, rather than "special" "volatile" accesses.
This is ob
Satyam Sharma wrote:
On Fri, 17 Aug 2007, Nick Piggin wrote:
Satyam Sharma wrote:
On Fri, 17 Aug 2007, Nick Piggin wrote:
Satyam Sharma wrote:
It is very obvious. msleep calls schedule() (ie. sleeps), which is
always a barrier.
Probably you didn't mean that, but no, schedule() i
On Friday 19 October 2007 17:03, Nick Piggin wrote:
> On Friday 19 October 2007 16:05, Erez Zadok wrote:
> > David,
> >
> > I'm testing unionfs on top of jffs2, using 2.6.24 as of linus's commit
> > 4fa4d23fa20de67df919030c1216295664866ad7. All of my unionfs t
On Friday 19 October 2007 16:05, Erez Zadok wrote:
> David,
>
> I'm testing unionfs on top of jffs2, using 2.6.24 as of linus's commit
> 4fa4d23fa20de67df919030c1216295664866ad7. All of my unionfs tests pass
> when unionfs is stacked on top of jffs2, other than my truncate test --
> which tries to
removes the mfence from __clear_bit_unlock (which is already a useful
primitive for SLUB).
Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>
---
Index: linux-2.6/include/asm-x86/bitops_32.h
===
--- linux-2.6.orig/include/asm-x86/bitop
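[Editor's note: the reasoning behind dropping the mfence, roughly, is that x86 never reorders a store with earlier loads or stores, so releasing a bit lock only needs a compiler barrier plus a plain store; SLUB can use a non-atomic clear because nothing else modifies that word while the lock is held. The sketch below is my own illustration, not the bitops_32.h hunk.]

#define barrier()	__asm__ __volatile__("" ::: "memory")

/* Release a bit lock on x86-like (TSO) hardware: the compiler barrier
 * keeps critical-section accesses above the clearing store, and the
 * hardware already orders that store after them.  The non-atomic RMW is
 * only safe when no other CPU modifies other bits of *word concurrently,
 * which is the SLUB case referred to above. */
static inline void bit_unlock_sketch(unsigned long *word, int bit)
{
	barrier();
	*word &= ~(1UL << bit);
}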
On Friday 19 October 2007 13:28, Herbert Xu wrote:
> Nick Piggin <[EMAIL PROTECTED]> wrote:
> >> First of all let's agree on some basic assumptions:
> >>
> >> * A pair of spin lock/unlock subsumes the effect of a full mb.
> >
> > Not unless you
On Friday 19 October 2007 12:32, Herbert Xu wrote:
> First of all let's agree on some basic assumptions:
>
> * A pair of spin lock/unlock subsumes the effect of a full mb.
Not unless you mean a pair of spin lock/unlock as in
2 spin lock/unlock pairs (4 operations).
*X = 10;
spin_lock(&lock);
/*
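[Editor's note: the point being argued, in my own illustration below (not the continuation of the truncated example above), is that with only acquire on lock and release on unlock, a store before the lock and a load after the unlock can both migrate into the critical section and pass each other, which a full mb would forbid.]

#include <pthread.h>

/* Conceptual userspace stand-in.  Acquire on lock lets the earlier store
 * to *x sink into the critical section; release on unlock lets the later
 * load of *z hoist into it.  Once both are inside, nothing orders them
 * against each other. */
void example(int *x, int *z, int *r, pthread_mutex_t *lock)
{
	*x = 10;			/* may sink below the acquire  */
	pthread_mutex_lock(lock);
	pthread_mutex_unlock(lock);
	*r = *z;			/* may hoist above the release */
}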
On Friday 19 October 2007 11:21, Christoph Lameter wrote:
> On Fri, 19 Oct 2007, Nick Piggin wrote:
> > Ah, thanks, but can we just use my earlier patch that does the
> > proper __bit_spin_unlock which is provided by
> > bit_spin_lock-use-lock-bitops.patch
>
> Ok.
>
On Friday 19 October 2007 08:05, Richard Jelinek wrote:
> Hello guys,
>
> I'm not subscribed to this list, so if you find this question valid
> enough to answer it, please cc me. Thanks.
>
> This is what the top-output looks like on my machine after having
> copied about 550GB of data from a twofis
(only vice versa),
so you might have been confused by looking at x86's spinlocks
into thinking this will work. However on powerpc and sparc, I
don't think it gives you the right types of barriers.
Slub can use the non-atomic version to unlock because other flags will not
get mod
On Friday 19 October 2007 12:01, Christoph Lameter wrote:
> On Fri, 19 Oct 2007, Nick Piggin wrote:
> > > Yes that is what I attempted to do with the write barrier. To my
> > > knowledge there are no reads that could bleed out and I wanted to avoid
> > > a full fence
might have been confused by looking at x86's spinlocks
into thinking this will work. However on powerpc and sparc, I
don't think it gives you the right types of barriers.
Slub can use the non-atomic version to unlock because other flags will not
get modified with the loc
On Saturday 20 October 2007 07:27, Eric W. Biederman wrote:
> Andrew Morton <[EMAIL PROTECTED]> writes:
> > I don't think we little angels want to tread here. There are so many
> > weirdo things out there which will break if we bust the coherence between
> > the fs and /dev/hda1.
>
> We broke cohe
On Saturday 20 October 2007 08:51, Eric W. Biederman wrote:
> Currently the ramdisk tries to keep the block device page cache pages
> from being marked clean and dropped from memory. That fails for
> filesystems that use the buffer cache because the buffer cache is not
> an ordinary buffer cache u
On Sunday 21 October 2007 15:10, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > On Saturday 20 October 2007 08:51, Eric W. Biederman wrote:
> >> Currently the ramdisk tries to keep the block device page cache pages
> >> from being marked cle
On Sunday 21 October 2007 14:53, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > On Saturday 20 October 2007 07:27, Eric W. Biederman wrote:
> >> Andrew Morton <[EMAIL PROTECTED]> writes:
> >> > I don't think we littl
On Sunday 21 October 2007 16:48, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > Yes it does. It is exactly breaking the coherency between block
> > device and filesystem metadata coherency that Andrew cared about.
> > Whether or not that mat
On Sunday 21 October 2007 18:23, Eric W. Biederman wrote:
> Christian Borntraeger <[EMAIL PROTECTED]> writes:
> Let me put it another way. Looking at /proc/slabinfo I can get
> 37 buffer_heads per page. I can allocate 10% of memory in
> buffer_heads before we start to reclaim them. So it requir
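[Editor's note: a quick back-of-envelope on those figures; the arithmetic is mine, assuming 4 KiB pages and a 1 GiB machine purely for illustration.]

#include <stdio.h>

int main(void)
{
	unsigned long mem       = 1UL << 30;	/* assume a 1 GiB machine        */
	unsigned long page_size = 4096;		/* assume 4 KiB pages            */
	unsigned long per_page  = 37;		/* buffer_heads per slab page,
						 * per the slabinfo figure above */
	unsigned long budget = mem / 10;	/* the quoted 10%-of-memory cap  */
	unsigned long nr_bh  = budget / page_size * per_page;

	/* Prints roughly 969918: close to a million buffer_heads can be
	 * allocated before reclaim starts trimming them. */
	printf("~%lu buffer_heads before reclaim\n", nr_bh);
	return 0;
}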
On Sunday 21 October 2007 18:55, David Woodhouse wrote:
> On Fri, 2007-10-19 at 17:16 +1000, Nick Piggin wrote:
> > if (writtenlen) {
> > - if (inode->i_size < (pg->index << PAGE_CACHE_SHIFT) +
> > start + writtenlen) { -
On Thursday 18 October 2007 18:52, Takenori Nagano wrote:
> Vivek Goyal wrote:
> > > My stance is that _all_ the RAS tools (kdb, kgdb, nlkd, netdump, lkcd,
> > > crash, kdump etc.) should be using a common interface that safely puts
> > > the entire system in a stopped state and saves the state of
On Monday 22 October 2007 03:56, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > OK, I missed that you set the new inode's aops to the ramdisk_aops
> > rather than the bd_inode. Which doesn't make a lot of sense because
> > you just
On Monday 22 October 2007 04:39, Eric W. Biederman wrote:
> Nick Piggin <[EMAIL PROTECTED]> writes:
> > On Sunday 21 October 2007 18:23, Eric W. Biederman wrote:
> >> Christian Borntraeger <[EMAIL PROTECTED]> writes:
> >>
> >> Let me put it another w
On Monday 22 October 2007 14:28, dean gaudet wrote:
> On Sun, 21 Oct 2007, Jeremy Fitzhardinge wrote:
> > dean gaudet wrote:
> > > On Mon, 15 Oct 2007, Nick Piggin wrote:
> > >> Yes, as Dave said, vmap (more specifically: vunmap) is very expensive
> > >> b
On Tuesday 23 October 2007 10:55, Takenori Nagano wrote:
> Nick Piggin wrote:
> > One thing I'd suggest is not to use debugfs, if it is going to
> > be a useful end-user feature.
>
> Is /sys/kernel/notifier_name/ an appropriate place?
Hi list,
I'm curious about th
On Wednesday 24 October 2007 15:09, Randy Dunlap wrote:
> From: Randy Dunlap <[EMAIL PROTECTED]>
>
> Can we expand this macro definition, or should I look for a way to
> fool^W teach kernel-doc about this?
>
> scripts/kernel-doc says:
> Error(linux-2.6.24-rc1//include/asm-x86/bitops_32.h:188): cann
On Thursday 25 October 2007 11:14, Andrew Morton wrote:
> On Wed, 24 Oct 2007 18:13:06 +1000 [EMAIL PROTECTED] wrote:
> > Signed-off-by: Nick Piggin <[EMAIL PROTECTED]>
> >
> > ---
> > kernel/wait.c |2 +-
> > 1 file changed, 1 insertion(+), 1 dele
On Wednesday 24 October 2007 21:12, Kay Sievers wrote:
> On 10/24/07, Nick Piggin <[EMAIL PROTECTED]> wrote:
> > On Tuesday 23 October 2007 10:55, Takenori Nagano wrote:
> > > Nick Piggin wrote:
> > > > One thing I'd suggest is not to use debugfs, if it
On Thursday 25 October 2007 12:15, Christoph Lameter wrote:
> On Wed, 24 Oct 2007, Alexey Dobriyan wrote:
> > [12728.701398] DMA free:8032kB min:32kB low:40kB high:48kB active:2716kB
> > inactive:2208kB present:12744kB pages_scanned:9299 all_unreclaimable?
> > yes [12728.701567] lowmem_reserve[]: 0
On Thursday 25 October 2007 12:43, Christoph Lameter wrote:
> On Thu, 25 Oct 2007, Nick Piggin wrote:
> > > Ummm... all unreclaimable is set! Are you mlocking the pages in memory?
> > > Or what causes this? All pages under writeback? What is the dirty ratio
> > &g
Hi,
Andi spotted this exchange on the gcc list. I don't think he's
brought it up here yet, but it worries me enough that I'd like
to discuss it.
Starts here
http://gcc.gnu.org/ml/gcc/2007-10/msg00266.html
Concrete example here
http://gcc.gnu.org/ml/gcc/2007-10/msg00275.html
Basically, what the
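[Editor's note: for readers who don't follow the gcc list, the concern reduced to its simplest form is a compiler turning a conditional store into an unconditional one. The snippet below is my own reduction, not the exact test case from the linked messages.]

int flag;

/* What the programmer wrote: flag is stored to only when cond is true. */
void maybe_set(int cond)
{
	if (cond)
		flag = 1;
}

/* Hand-written equivalent of the transformation being discussed: the
 * branch is gone and flag is written unconditionally.  Equivalent for a
 * single thread, but it can silently overwrite a concurrent update to
 * flag made by another thread between the load and the store. */
void maybe_set_speculative(int cond)
{
	int tmp = flag;

	flag = cond ? 1 : tmp;
}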
On Friday 19 October 2007 08:25, Matthew Wilcox wrote:
> Abstracting away direct uses of TASK_ flags allows us to change the
> definitions of the task flags more easily.
>
> Also restructure do_wait() a little
>
> Signed-off-by: Matthew Wilcox <[EMAIL PROTECTED]>
> ---
> arch/ia64/kernel/perfmon.c
On Friday 19 October 2007 08:26, Matthew Wilcox wrote:
> Use TASK_KILLABLE to allow wait_on_retry_sync_kiocb to return -EINTR.
> All callers then check the return value and break out of their loops.
>
> Signed-off-by: Matthew Wilcox <[EMAIL PROTECTED]>
> ---
> fs/read_write.c | 17 --
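[Editor's note: the caller-side change described above follows a simple pattern. Below is a hedged, self-contained illustration with made-up helper names, not the actual fs/read_write.c hunk.]

#include <errno.h>

extern int do_one_attempt(void);	/* returns -EAGAIN to request a retry */
extern int wait_killable(void);		/* 0, or -EINTR on a fatal signal     */

int retry_loop(void)
{
	int ret;

	for (;;) {
		ret = do_one_attempt();
		if (ret != -EAGAIN)
			break;
		ret = wait_killable();
		if (ret)
			break;		/* propagate -EINTR instead of retrying */
	}
	return ret;
}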
On Friday 19 October 2007 08:25, Matthew Wilcox wrote:
> This series of patches introduces the facility to deliver only fatal
> signals to tasks which are otherwise waiting uninterruptibly.
This is pretty nice I think. It also is a significant piece of
infrastructure required to fix some of the m
On Thursday 25 October 2007 13:46, Arjan van de Ven wrote:
> On Thu, 25 Oct 2007 13:24:49 +1000
>
> Nick Piggin <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > Andi spotted this exchange on the gcc list. I don't think he's
> > brought it up here y
On Thursday 25 October 2007 14:11, Andrew Morton wrote:
> On Wed, 24 Oct 2007 08:24:57 -0400 Matthew Wilcox <[EMAIL PROTECTED]> wrote:
> > and associated infrastructure such as sync_page_killable and
> > fatal_signal_pending. Use lock_page_killable in
> > do_generic_mapping_read() to allow us to k
Hi David,
[BTW. can you retain cc lists, please?]
On Thursday 25 October 2007 14:29, David Schwartz wrote:
> > Well that's exactly right. For threaded programs (and maybe even
> > real-world non-threaded ones in general), you don't want to be
> > even _reading_ global variables if you don't need