On Tue, May 8, 2018 at 8:49 PM, Ryan Leimenstoll wrote:
Hi Gregg, John,

Thanks for the warning. It was definitely conveyed that they are dangerous. I
thought the online part was implied to be a bad idea, but just wanted to verify.

John,

We were mostly operating off of what the mds logs reported. After bringing the
mds back online and active, we mo
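
For reference, the damage the MDS reports can be listed directly; a sketch
only, where "mds.0" assumes the single active rank 0 described in this thread:

    # Overall cluster and filesystem health:
    ceph status
    ceph fs status

    # List the damage entries the MDS has recorded (rank 0 assumed):
    ceph tell mds.0 damage ls
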
On Mon, May 7, 2018 at 8:50 PM, Ryan Leimenstoll wrote:
> Hi All,
>
> We recently experienced a failure with our 12.2.4 cluster running a CephFS
> instance that resulted in some data loss due to a seemingly problematic OSD
> blocking IO on its PGs. We restarted the (single active) mds daemon during
Absolutely not. Please don't do this. None of the CephFS disaster recovery
tooling in any way plays nicely with a live filesystem.

I haven't looked at these docs in a while; are they not crystal clear about
all these operations being offline and in every way dangerous? :/

-Greg
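
The CephFS disaster-recovery documentation describes roughly this offline
sequence; a minimal sketch, assuming a single-rank Luminous (12.2.x)
filesystem named "cephfs", with "backup.bin" as a placeholder path:

    # Take the filesystem offline before running any recovery tooling
    # (Luminous syntax; newer releases use "ceph fs set cephfs down true"):
    ceph fs set cephfs cluster_down true
    ceph mds fail 0

    # Export a backup of the journal first; the recovery tools are destructive:
    cephfs-journal-tool journal export backup.bin

    # Recover what can be salvaged from the damaged journal:
    cephfs-journal-tool event recover_dentries summary

    # Only as a last resort, truncate the journal and wipe the session table:
    cephfs-journal-tool journal reset
    cephfs-table-tool all reset session

Only after the recovery steps complete should the rank be marked repaired and
the filesystem brought back online.
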
On Mon, May 7, 2018 at 8:50 PM, Ryan Leimenstoll wrote:
Hi All,

We recently experienced a failure with our 12.2.4 cluster running a CephFS
instance that resulted in some data loss due to a seemingly problematic OSD
blocking IO on its PGs. We restarted the (single active) mds daemon during
this, which caused damage due to the journal not having the