Hi!

> We are fixing the release note. https://github.com/ceph/ceph/pull/22445

Thank you! It will help others.

Cheers,
Tobias Florek
On Thu, Jun 7, 2018 at 2:44 PM, Tobias Florek wrote:
Hi!

Thank you for your help! The cluster has been running healthily for a day now.

Regarding the problem, I just checked in the release notes [1] and on
docs.ceph.com and did not find the right invocation after an upgrade.
Maybe that ought to be fixed.
>> [upgrade from luminous to mimic with prior cephfs snapshots]
On Wed, Jun 6, 2018 at 22:21, Tob wrote:
> Hi!
>
> Thank you for your reply.
>
> I just did:
>
> > The correct commands should be:
> >
> > ceph daemon <mds of rank 0> scrub_path / force recursive repair
> > ceph daemon <mds of rank 0> scrub_path '~mdsdir' force recursive repair
>
> They returned instantly and in the mds' logfile only the f
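(A concrete sketch of those scrub invocations, in case it helps anyone
following the thread; the daemon name mds.a is only a placeholder for
whichever daemon currently holds rank 0, and the commands have to be run on
the host where that MDS lives, since "ceph daemon" talks to the local admin
socket:

  ceph fs status                                          # shows which daemon holds rank 0, e.g. mds.a
  ceph daemon mds.a scrub_path / force recursive repair
  ceph daemon mds.a scrub_path '~mdsdir' force recursive repair
)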
On Wed, Jun 6, 2018 at 3:25 PM, Tobias Florek wrote:
Hi,

I upgraded a ceph cluster to mimic yesterday according to the release
notes. Specifically, I stopped all standby MDS daemons and then restarted
the only active MDS with the new version.

The cluster was installed with luminous. Its cephfs volume had snapshots
prior to the update, but only one active MDS.
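(Roughly what that MDS step looks like on a systemd-managed cluster; the
ceph-mds@<id> unit names and <fs_name> below are placeholders, and a cluster
running more than one active rank would first need "ceph fs set <fs_name>
max_mds 1":

  systemctl stop ceph-mds@<standby-id>      # on each host running a standby MDS
  ceph fs status                            # confirm only the single active MDS is left
  systemctl restart ceph-mds@<active-id>    # restart the active MDS on the mimic packages
)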