Hi,
I'm trying to test the btrfs and ceph contributions to 3.11, without
testing all of 3.11-rc1 (just yet), so I'm testing with the "next"
branch of Chris Mason's tree (commit cbacd76bb3 from
git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git)
merged into the for-linus branch of
On 05/30/2013 02:26 PM, Chuck Lever wrote:
>
> On May 30, 2013, at 4:19 PM, Jim Schutt wrote:
>
>> Hi,
>>
>> I've been trying to test 3.10-rc3 on some diskless clients, and found
>> that I can no longer mount my root file system via NFSv3.
>>
>
Hi,
I've been trying to test 3.10-rc3 on some diskless clients, and found
that I can no longer mount my root file system via NFSv3.
I poked around looking at NFS changes for 3.10, and found these two
commits:
d497ab9751 "NFSv3: match sec= flavor against server list"
4580a92d44 "NFS: Use server
On 05/15/2013 10:49 AM, Alex Elder wrote:
> On 05/15/2013 11:38 AM, Jim Schutt wrote:
>> > Ceph's encode_caps_cb() worked hard to not call __page_cache_alloc() while
>> > holding a lock, but it's spoiled because ceph_pagelist_addpage() always
>> > call
thread_worker+0x70/0x70
[13439.587132] ceph: mds0 reconnect success
[13490.720032] ceph: mds0 caps stale
[13501.235257] ceph: mds0 recovery completed
[13501.300419] ceph: mds0 caps renewed
Fix it up by encoding locks into a buffer first, and when the
number of encoded locks is stable, copy that into
er.cc and
src/include/encoding.h in the Ceph server code (git://github.com/ceph/ceph).
I also checked the server side for flock_len decoding, and I believe that
also happens correctly, by virtue of having been declared __le32 in
struct ceph_mds_cap_reconnect, in src/include/ceph_fs.h.
Signed-off-by: Jim Schutt
---
fs/ceph/locks.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c
index 202dd3d..ffc86cb 100644
--- a/fs/ceph/locks.c
+++ b/fs/ceph/locks.c
@@ -169,7 +169,7 @@ int ceph_flock(struct file *file, int cmd
n src/include/ceph_fs.h.
Jim Schutt (3):
ceph: fix up comment for ceph_count_locks() as to which lock to hold
ceph: add missing cpu_to_le32() calls when encoding a reconnect capability
ceph: ceph_pagelist_append might sleep while atomic
fs/ceph/locks.c |
On 05/14/2013 10:44 AM, Alex Elder wrote:
> On 05/09/2013 09:42 AM, Jim Schutt wrote:
>> Ceph's encode_caps_cb() worked hard to not call __page_cache_alloc while
>> holding a lock, but it's spoiled because ceph_pagelist_addpage() always
>> calls kmap(), which
_worker+0x70/0x70
[13439.587132] ceph: mds0 reconnect success
[13490.720032] ceph: mds0 caps stale
[13501.235257] ceph: mds0 recovery completed
[13501.300419] ceph: mds0 caps renewed
Fix it up by encoding locks into a buffer first, and when the
number of encoded locks is stable, copy that into
On 12/11/2012 06:37 PM, Liu Bo wrote:
> On Tue, Dec 11, 2012 at 09:33:15AM -0700, Jim Schutt wrote:
>> On 12/09/2012 07:04 AM, Liu Bo wrote:
>>> On Wed, Dec 05, 2012 at 09:07:05AM -0700, Jim Schutt wrote:
>>> Hi Jim,
>>>
>>> Could you please apply the
On 12/09/2012 07:04 AM, Liu Bo wrote:
> On Wed, Dec 05, 2012 at 09:07:05AM -0700, Jim Schutt wrote:
>> > Hi,
>> >
>> > I'm hitting a btrfs locking issue with 3.7.0-rc8.
>> >
>> > The btrfs filesystem in question is backing a Ceph OSD
On 12/05/2012 09:07 AM, Jim Schutt wrote:
Hi,
I'm hitting a btrfs locking issue with 3.7.0-rc8.
The btrfs filesystem in question is backing a Ceph OSD
under a heavy write load from many cephfs clients.
I reported this issue a while ago:
http://www.spinics.net/lists/linux-btrfs/msg19370.html
when I was testing what I thought might be
Hi Mel,
On 08/12/2012 02:22 PM, Mel Gorman wrote:
I went through the patch again but only found the following which is a
weak candidate. Still, can you retest with the following patch on top and
CONFIG_PROVE_LOCKING set please?
I've gotten in several hours of testing on this patch with
no i
On 08/10/2012 05:02 AM, Mel Gorman wrote:
On Thu, Aug 09, 2012 at 04:38:24PM -0600, Jim Schutt wrote:
Ok, this is an untested hack and I expect it would drop allocation success
rates again under load (but not as much). Can you test again and see what
effect, if any, it has please?
---8<---
On 08/09/2012 02:46 PM, Mel Gorman wrote:
On Thu, Aug 09, 2012 at 12:16:35PM -0600, Jim Schutt wrote:
On 08/09/2012 07:49 AM, Mel Gorman wrote:
Changelog since V2
o Capture !MIGRATE_MOVABLE pages where possible
o Document the treatment of MIGRATE_MOVABLE pages while capturing
o Expand
der load -
to 0% in one case. There is a proposed change to that patch in this series
and it would be ideal if Jim Schutt could retest the workload that led to
commit [7db8889a: mm: have order > 0 compaction start off where it left].
On my first test of this patch series on top of 3.5, I ran i
ess rates under load -
to 0% in one case. There is a proposed change to that patch in this series
and it would be ideal if Jim Schutt could retest the workload that led to
commit [7db8889a: mm: have order > 0 compaction start off where it left].
I was successful at resolving my Ceph issue on 3.6-rc
On 08/07/2012 08:52 AM, Mel Gorman wrote:
On Tue, Aug 07, 2012 at 10:45:25AM -0400, Rik van Riel wrote:
On 08/07/2012 08:31 AM, Mel Gorman wrote:
commit [7db8889a: mm: have order > 0 compaction start off where it left]
introduced a caching mechanism to reduce the amount of work the free page
scan
On Thu, 2006-11-23 at 12:24 +0100, Jens Axboe wrote:
> On Wed, Nov 22 2006, Jim Schutt wrote:
> >
> > On Wed, 2006-11-22 at 09:57 +0100, Jens Axboe wrote:
> > > On Tue, Nov 21 2006, Jim Schutt wrote:
> > [snip]
> > > >
> > > >
On Thu, 2006-11-16 at 21:25 +0100, Jens Axboe wrote:
> On Thu, Nov 16 2006, Jim Schutt wrote:
> > Hi,
> >
> > My test program can do one of the following:
> >
> > send data:
> > A) read() from file into buffer, write() buffer into socket
> > B) mma
harder at my test program?
Or is read+write really the fastest way to get data off a
socket and into a file?
-- Jim Schutt
(Please Cc: me, as I'm not subscribed to lkml.)
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
er - bet that helps
It does -- no more spurious interrupts.
Thanks -- Jim
--
Jim Schutt <[EMAIL PROTECTED]>
Sandia National Laboratories, Albuquerque, New Mexico USA
=y
Details available on request.
Thanks -- Jim
--
Jim Schutt <[EMAIL PROTECTED]>
Sandia National Laboratories, Albuquerque, New Mexico USA
Jeff Garzik wrote:
>
>
> de4x5 is becoming EISA-only in 2.5.x too, since its PCI support is
> duplicated now in tulip driver.
>
I've got some DEC Miatas with DECchip 21142/43 ethernet cards, and I
don't get the same link speeds when using the de4x5 and tulip drivers,
as of 2.4.0-test10. The m