Greg, thanks for your comment. Could you please share which OS, kernel and
nfs/cephfs settings you used to achieve that stability? Also, what kind of
tests have you run to check it?

Thanks 

----- Original Message -----

> From: "Gregory Farnum" <[email protected]>
> To: "Ilya Dryomov" <[email protected]>, "Andrei Mikhailovsky"
> <[email protected]>
> Cc: "ceph-users" <[email protected]>
> Sent: Saturday, 29 November, 2014 10:19:32 PM
> Subject: Re: [ceph-users] Giant + nfs over cephfs hang tasks

> Ilya, do you have a ticket reference for the bug?
> Andrei, we run NFS tests on CephFS in our nightlies and it does
> pretty well so in the general case we expect it to work. Obviously
> not at the moment with whatever bug Ilya is looking at, though. ;)
> -Greg

> > On Sat, Nov 29, 2014 at 4:51 AM Ilya Dryomov <[email protected]> wrote:
> > > On Sat, Nov 29, 2014 at 3:49 PM, Ilya Dryomov <[email protected]> wrote:
> > > > On Sat, Nov 29, 2014 at 3:22 PM, Andrei Mikhailovsky <[email protected]> wrote:
> > >> Ilya,
> > >>
> 
> > >> I think I spoke too soon in my last message. I've now given it more
> > >> load (running 8 concurrent dds with bs=4M) and about a minute or so
> > >> after starting I've seen problems in the dmesg output. I am attaching
> > >> the kern.log file for your reference.
> > >>
> 
> > >> Please check starting with the following line: Nov 29 12:07:38
> > >> arh-ibstorage1-ib kernel: [ 3831.906510]. This is when I started the
> > >> 8 concurrent dds.
> > >>
> > >> The command that caused this is:
> > >>
> 
> > >> time dd if=/dev/zero of=4G00 bs=4M count=5K oflag=direct &
> > >> time dd if=/dev/zero of=4G11 bs=4M count=5K oflag=direct &
> > >> time dd if=/dev/zero of=4G22 bs=4M count=5K oflag=direct &
> > >> time dd if=/dev/zero of=4G33 bs=4M count=5K oflag=direct &
> > >> time dd if=/dev/zero of=4G44 bs=4M count=5K oflag=direct &
> > >> time dd if=/dev/zero of=4G55 bs=4M count=5K oflag=direct &
> > >> time dd if=/dev/zero of=4G66 bs=4M count=5K oflag=direct &
> > >> time dd if=/dev/zero of=4G77 bs=4M count=5K oflag=direct &
> > >>
> > >>
> 
> > >> I've run the same test about 10 times but with only 4 concurrent dds
> > >> and that didn't cause the issue.
> > >>
> > >> Should I try the 3.18 kernel again to see if 8 dds produce similar
> > >> output?
> > >
> > > Missing attachment.

> > Definitely try the 3.18 testing kernel.
> >
> > Thanks,
> > Ilya
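For what it's worth, the quoted 8-way dd workload can also be driven by a small
shell function instead of one long command line. This is only a sketch, not the
command Andrei actually ran: the function name, output directory and file-name
pattern are made up here, and any extra dd flags (such as oflag=direct) are
passed through at the end.

```shell
#!/bin/sh
# Sketch: launch JOBS concurrent dd writers and wait for all of them.
# Usage: run_dd_test DIR JOBS BS COUNT [EXTRA_DD_FLAGS...]
# (run_dd_test and the ddtest file prefix are hypothetical names)
run_dd_test() {
    dir=$1 jobs=$2 bs=$3 count=$4
    shift 4
    i=0
    while [ "$i" -lt "$jobs" ]; do
        # each writer gets its own output file, as in the original test
        dd if=/dev/zero of="$dir/ddtest$i" bs="$bs" count="$count" "$@" 2>/dev/null &
        i=$((i + 1))
    done
    wait    # block until every background dd has exited
}

# The original 8-job run on a cephfs/nfs mount would then be roughly:
# run_dd_test /mnt/cephfs 8 4M 5120 oflag=direct
```

Dropping the job count back to 4 (run_dd_test /mnt/cephfs 4 4M 5120 oflag=direct)
would correspond to the variant that didn't trigger the hang.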
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
