Jeff Layton wrote:
The previous patch removes a kill_proc(... SIGKILL); this one adds it
back.
That makes me wonder if the intermediate state is 'correct'.
But I also wonder what "correct" means.
Do we want all locks to be dropped when the last nfsd thread dies?
The answer is presumably eithe
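For context, the kill_proc(... SIGKILL) being removed and re-added is, as far as I can tell, part of the signal-based lockd shutdown that the last-nfsd-thread path ends up triggering. A rough, from-memory sketch of what that path looks like in fs/lockd/svc.c of this era (user counting and error reporting trimmed; none of this is quoted from the patch itself):

void lockd_down(void)
{
        mutex_lock(&nlmsvc_mutex);
        if (nlmsvc_users && --nlmsvc_users)
                goto out;
        if (!nlmsvc_pid)
                goto out;
        /* ask the lockd kernel thread to exit; the locks it manages go with it */
        kill_proc(nlmsvc_pid, SIGKILL, 1);
out:
        mutex_unlock(&nlmsvc_mutex);
}

Whether dropping everything at that point is the desired semantics is exactly the question raised above.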
Adrian Bunk wrote:
On Thu, Jan 11, 2007 at 10:26:27PM -0800, Andrew Morton wrote:
...
Changes since 2.6.20-rc3-mm1:
...
git-gfs2-nmw.patch
...
git trees
...
This patch makes the needlessly global gfs2_change_nlink_i() static.
We will probably need to call this routine from other
Steven Whitehouse wrote:
Hi,
On Fri, 2006-12-01 at 11:09 -0800, Andrew Morton wrote:
I was taking my cue here from ext3, which does something similar. The
filemap_fdatawrite() is done by the VFS before this is called, with a
filemap_fdatawait() afterwards. This was intended to flush the metadata
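In other words, the VFS starts writeback before calling into the filesystem and waits for it afterwards. Collapsed into a single illustrative helper (the function name and error handling are mine; only filemap_fdatawrite()/filemap_fdatawait() are the real interfaces under discussion), the ordering is roughly:

static int flush_data_then_wait(struct inode *inode)
{
        struct address_space *mapping = inode->i_mapping;
        int err;

        /* kick off writeback of all dirty pages in the mapping */
        err = filemap_fdatawrite(mapping);

        /* ... the filesystem writes out its own metadata here ... */

        /* then wait for the I/O started above to complete */
        if (!err)
                err = filemap_fdatawait(mapping);
        return err;
}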
Russell Cattelan wrote:
Wendy Cheng wrote:
The Linux kernel, particularly the VFS layer, is starting to show signs
of inadequacy as the software components built upon it keep growing.
I have doubts that it can keep up and handle this complexity with a
development policy like you just described
Andrew Morton wrote:
On Sun, 03 Dec 2006 12:49:42 -0500
Wendy Cheng <[EMAIL PROTECTED]> wrote:
I read this as "It is ok to give system admin(s) commands (that this
"drop_pagecache_sb() call" is all about) to drop page cache. It is,
however, not ok to give filesystem d
Andrew Morton wrote:
On Thu, 30 Nov 2006 11:05:32 -0500
Wendy Cheng <[EMAIL PROTECTED]> wrote:
The idea is, instead of unconditionally dropping every buffer associated
with the particular mount point (that defeats the purpose of page
caching), base kernel exports the "drop_pagecache_sb()"
How about a simple and plain change with this uploaded patch
The idea is, instead of unconditionally dropping every buffer associated
with the particular mount point (that defeats the purpose of page
caching), base kernel exports the "drop_pagecache_sb()" call that allows
page cache to be
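For readers who have not looked at fs/drop_caches.c: the helper in question already exists there to back the drop_caches sysctl, and the proposal is essentially to export it so a filesystem module can invoke it on its own superblock. A sketch of what that would mean (the walk below is paraphrased from memory, and the export line plus the module-side caller are the hypothetical parts):

/* fs/drop_caches.c, abridged: drop clean pagecache for one superblock */
void drop_pagecache_sb(struct super_block *sb)
{
        struct inode *inode;

        spin_lock(&inode_lock);
        list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
                if (inode->i_state & (I_FREEING | I_WILL_FREE))
                        continue;
                invalidate_inode_pages(inode->i_mapping);
        }
        spin_unlock(&inode_lock);
}
EXPORT_SYMBOL_GPL(drop_pagecache_sb);   /* the proposed export */

/* hypothetical caller inside a filesystem module */
static void example_trim_pagecache(struct super_block *sb)
{
        drop_pagecache_sb(sb);
}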
Andrew Morton wrote:
We shouldn't export this particular implementation to modules because it
has bad failure modes. There might be a case for exposing an
i_sb_list-based API or, perhaps better, a max-unused-inodes mount option.
Ok, thanks for looking into this - it is appreciated. I'll tr
Andrew Morton wrote:
On Mon, 27 Nov 2006 18:52:58 -0500
Wendy Cheng <[EMAIL PROTECTED]> wrote:
Not sure about walking thru sb->s_inodes for several reasons
1. First, the changes made are mostly for file server setup with large
fs size - the entry count in sb->s_inodes
Andrew Morton wrote:
This search is potentially inefficient. It would be better walk
sb->s_inodes.
Not sure about walking thru sb->s_inodes for several reasons
1. First, the changes made are mostly for file server setup with large
fs size - the entry count in sb->s_inodes may not be s
ng impression that this call is implemented but just never works. We
have customer inquiries about this issue.
Upload a trivial patch to address this confusion.
Signed-off-by: S. Wendy Cheng <[EMAIL PROTECTED]>
--- linux-2.6.12/fs/aio.c 2005-06-17 15:48:29.0 -0400
Previously sent via private mail that doesn't seem to go thru - resend
via office mailer.
Note that, other than a few exceptions, most of the current filesystems
and/or drivers do not have aio cancel specifically defined
(the kiocb->ki_cancel field is mostly NULL). However, the sys_io_cancel system
call u
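The practical consequence, if I read the truncated message correctly, is that sys_io_cancel() has nothing to call for the vast majority of in-flight requests, because only the submitting driver can supply a ki_cancel method. A sketch of the check involved (2.6.12-era aio, from memory; the helper name, the omitted locking, and the exact error value are illustrative, since the snippet does not show what the patch actually returns):

/* hypothetical helper mirroring the decision made inside sys_io_cancel() */
static long try_cancel_kiocb(struct kiocb *kiocb, struct io_event *res)
{
        int (*cancel)(struct kiocb *, struct io_event *) = kiocb->ki_cancel;

        if (!cancel)
                return -EAGAIN;        /* no cancel method registered, cannot cancel */
        return cancel(kiocb, res);     /* driver-supplied cancellation */
}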