Hi Gregory,
Sorry, I was sure I mentioned it. We installed it as Luminous, upgraded to
Mimic, and this happened on Nautilus (14.2.0).
The data was moving until the fasthdds pool1 was "empty". The PGs do not
migrate; the cluster goes up to 377 PGs active+clean and then the following
appears in ceph -w:
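Alongside ceph -w, these are the kind of commands that show the PG states
and per-pool usage while this is happening (the pool name below is a
placeholder for the real one):

  # overall cluster health plus recovery/backfill progress
  ceph -s
  # per-pool usage and object counts, to see a pool draining
  ceph df detail
  # PGs of a single pool with their current state
  ceph pg ls-by-pool <poolname>
  # one-line summary of PG states across the cluster
  ceph pg stat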
On Wed, May 8, 2019 at 2:37 AM Marco Stuurman wrote:
>
> Hi,
>
> I've got an issue with the data in our pool. An RBD image containing 4TB+
> of data has moved over to a different pool after a crush rule set change,
> which should not be possible. Besides that, it loops over and over, starting
> to remap and backfill (goes up to 377 PGs active+clean, then suddenly drops).
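
For completeness, the kind of commands that show which pool an image
actually lives in and which crush rule each pool uses (pool and image
names below are placeholders):

  # image details, including the data pool if a separate one is set
  rbd info <pool>/<image>
  # every pool with its crush_rule id
  ceph osd pool ls detail
  # the rule assigned to one pool, by name
  ceph osd pool get <pool> crush_rule
  # the crush rule definitions themselves
  ceph osd crush rule dump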