Re: [ceph-users] co-located cephfs client deadlock

2019-05-02 Thread Dan van der Ster
> ... by restarting the osd that it is reading from?
>
> -----Original Message-----
> From: Dan van der Ster [mailto:d...@vanderster.com]
> Sent: Thursday, 2 May 2019 8:51
> To: Yan, Zheng
> Cc: ceph-users; pablo.llo...@cern.ch
> Subject: Re: [ceph-users] co-located cephfs client deadlock
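For reference, a minimal sketch of what that suggestion (restarting the OSD a stuck read is waiting on) could look like; the osd id and paths are made-up examples, not taken from the thread:

  # on the client: the kernel cephfs client lists its in-flight OSD requests
  # in debugfs, which shows which osd a hung read is waiting on
  $ cat /sys/kernel/debug/ceph/*/osdc

  # on the host running that osd: inspect the ops stuck on it
  $ ceph daemon osd.12 dump_ops_in_flight

  # if an op never completes, restart just that one OSD daemon
  $ systemctl restart ceph-osd@12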

Re: [ceph-users] co-located cephfs client deadlock

2019-05-02 Thread Marc Roos
...@vanderster.com]
Sent: Thursday, 2 May 2019 8:51
To: Yan, Zheng
Cc: ceph-users; pablo.llo...@cern.ch
Subject: Re: [ceph-users] co-located cephfs client deadlock

On Mon, Apr 1, 2019 at 1:46 PM Yan, Zheng wrote:
>
> On Mon, Apr 1, 2019 at 6:45 PM Dan van der Ster wrote:
> >
> > Hi all,

Re: [ceph-users] co-located cephfs client deadlock

2019-05-01 Thread Dan van der Ster
On Mon, Apr 1, 2019 at 1:46 PM Yan, Zheng wrote:
>
> On Mon, Apr 1, 2019 at 6:45 PM Dan van der Ster wrote:
> >
> > Hi all,
> >
> > We have been benchmarking a hyperconverged cephfs cluster (kernel
> > clients + osd on same machines) for a while. Over the weekend (for the
> > first time) we had one cephfs mount deadlock while some clients were
> > running ior.
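The ior workload mentioned above is an MPI job doing parallel writes to a single shared file; a hedged sketch of such a run (the rank count, sizes and the /cephfs path are illustrative, not the actual parameters from the thread):

  # 64 ranks each write 31 GiB in 4 MiB transfers to one shared file
  # on the cephfs mount (~1.94 TiB aggregate)
  $ mpirun -np 64 ior -w -t 4m -b 31g -o /cephfs/scratch/ior.testfile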

Re: [ceph-users] co-located cephfs client deadlock

2019-04-01 Thread Yan, Zheng
On Mon, Apr 1, 2019 at 6:45 PM Dan van der Ster wrote:
>
> Hi all,
>
> We have been benchmarking a hyperconverged cephfs cluster (kernel
> clients + osd on same machines) for a while. Over the weekend (for the
> first time) we had one cephfs mount deadlock while some clients were
> running ior.
>

Re: [ceph-users] co-located cephfs client deadlock

2019-04-01 Thread Dan van der Ster
It's the latest CentOS 7.6 kernel. Known pain there?

The user was running a 1.95 TiB ior benchmark -- so, trying to do parallel writes to one single 1.95 TiB file. We have max_file_size 2199023255552 (exactly 2 TiB) so it should fit.

Thanks!

Dan

On Mon, Apr 1, 2019 at 1:06 PM Paul Emmerich wrote:
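For context, max_file_size is a per-filesystem cephfs setting that caps the largest allowed file; a sketch of checking and raising it (the filesystem name "cephfs" is a placeholder):

  # show the current limit in bytes
  $ ceph fs get cephfs | grep max_file_size

  # raise it to exactly 2 TiB (2 * 2^40 = 2199023255552 bytes)
  $ ceph fs set cephfs max_file_size 2199023255552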

Re: [ceph-users] co-located cephfs client deadlock

2019-04-01 Thread Paul Emmerich
Which kernel version are you using? We've had lots of problems with random deadlocks in kernels with cephfs but 4.19 seems to be pretty stable.

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster?
Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
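Not from the thread, but roughly how one would answer that question on a co-located client/OSD node, i.e. confirm the running kernel and that the mount uses the kernel cephfs client (paths are examples):

  # CentOS 7.6 ships a 3.10.0-957.x kernel
  $ uname -r

  # list kernel cephfs mounts (ceph-fuse mounts show up as type fuse.ceph-fuse instead)
  $ mount -t ceph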