On Mon, Jan 28, 2019 at 7:31 AM ST Wong (ITSC) wrote:
>
> > That doesn't appear to be an error -- that's just stating that it found a
> > dead client that was holding the exclusive-lock, so it broke the dead
> > client's lock on the image (by blacklisting the client).
>
> As there is only 1 RBD [...] the client host?
> Thanks a lot.
> /st

-----Original Message-----
From: Jason Dillaman
Sent: Friday, January 25, 2019 10:04 PM
To: ST Wong (ITSC)
Cc: dilla...@redhat.com; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RBD client hangs

That doesn't appear to be an error -- that's just stating that it found a
dead client that was holding the exclusive-lock, so it broke the dead
client's lock on the image (by blacklisting the client).
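
For anyone hitting this later: the broken lock and the resulting blacklist
entry can be inspected with the stock rbd/ceph CLI. A minimal sketch -- the
image name below is a placeholder, not from this thread:

--- cut here ---
# Who currently holds a lock on the image?
rbd lock ls 2copy/testimage

# Which clients were blacklisted when the dead client's lock was broken?
ceph osd blacklist ls

# Once the old client is confirmed gone, the entry can be removed manually
# (it also expires on its own, after 1 hour by default):
ceph osd blacklist rm <addr:port/nonce>
--- cut here ---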
>
> Would you help? Thanks.
> /st

-----Original Message-----
From: ceph-users On Behalf Of ST Wong (ITSC)
Sent: Friday, January 25, 2019 5:58 PM
To: dilla...@redhat.com
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RBD client hangs

Hi, it works. Thanks a lot.
/st

-----Original Message-----
From: Jason Dillaman
Sent: Tuesday, January 22, 2019 9:29 PM
To: ST Wong (ITSC)
Cc: Ilya Dryomov; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RBD client hangs
Your "mon" cap should be "profile rbd" ins
"allow r"
> caps osd = "allow rwx pool=2copy, allow rwx pool=4copy"
> --- cut here ---
>
> Thanks a lot.
> /st

-----Original Message-----
From: Ilya Dryomov
Sent: Monday, January 21, 2019 7:33 PM
To: ST Wong (ITSC)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] RBD client hangs

On Mon, Jan 21, 2019 at 11:43 AM ST Wong (ITSC) wrote:
>
> Hi, we're trying mimic on a VM farm. It consists of 4 OSD hosts (8 OSDs)
> and 3 MONs. We tried mounting as RBD and CephFS (fuse and kernel mount) on
> different clients without problems.
Is this an upgraded or a fresh cluster?
>
> Then one day we performed a failover test and stopped one of the OSDs. Not
> sure if it's related, but after that test [...]
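
Side note for anyone repeating this failover test: with systemd-managed OSDs
(an assumption -- the deployment method isn't stated in the thread), a single
OSD can be stopped and the cluster watched roughly like this; the OSD id is a
placeholder:

--- cut here ---
# On the OSD host: stop one OSD daemon.
systemctl stop ceph-osd@0

# Watch recovery and any degraded/undersized PGs:
ceph -s
ceph osd tree

# Bring the OSD back afterwards:
systemctl start ceph-osd@0
--- cut here ---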