Hi!

So I've got an old Dumpling production cluster which has slowly been upgraded
to Jewel.
Now I'm facing the Ceph health warning that straw_calc_version = 0.

According to an old thread from 2016 and the docs, it could trigger a small to
moderate amount of data migration:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-May/009702.html
http://docs.ceph.com/docs/master/rados/operations/crush-map/#straw-calc-version-tunable-introduced-with-firefly-too
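
For reference, here's what I'm looking at; the set-tunable command is the one
that docs page suggests, and as I read it, setting the tunable alone shouldn't
move any data until a straw bucket is changed or reweight-all is run:

    # Show the current CRUSH tunables, including straw_calc_version:
    ceph osd crush show-tunables

    # The fix suggested by the docs:
    ceph osd crush set-tunable straw_calc_version 1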
Since we're heading to Luminous, and later on to Mimic, I'm not sure it's wise
to leave it as it is. As this is a FileStore cluster (HDDs with SSD journals),
a moderate migration might cause issues for our production servers.
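If we do go ahead, I assume throttling backfill/recovery beforehand would
soften the impact; something like this, using the standard OSD recovery
options (runtime-only, not persistent across OSD restarts):

    # Limit concurrent backfills and recovery ops per OSD at runtime:
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'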
Is there any way to "test" how much migration it would cause? The
servers/disks are homogeneous.
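The best I've come up with so far is simulating the change offline with
crushtool; a rough sketch (rule 0 and size 3 are just examples, and I'm
assuming --reweight is what forces the straw weights to be recomputed):

    # Grab the current CRUSH map, flip the tunable and reweight offline:
    ceph osd getcrushmap -o crush.orig
    crushtool -i crush.orig --set-straw-calc-version 1 --reweight -o crush.new

    # Simulate PG-to-OSD mappings with both maps and diff the results:
    crushtool -i crush.orig --test --show-mappings \
        --rule 0 --num-rep 3 --min-x 0 --max-x 9999 > map.orig
    crushtool -i crush.new --test --show-mappings \
        --rule 0 --num-rep 3 --min-x 0 --max-x 9999 > map.new
    diff map.orig map.new | grep -c '^<'

I believe newer crushtool also has a --compare option that does this in one
step, but I haven't checked whether it's in Jewel.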
Also, would ignoring it cause any issues with Luminous/Mimic? The plan is to
set up another pool and replicate all data to the new pool on the same OSDs
(not sure that's in Mimic yet, though?).
Kind Regards,
David Majchrzak
> Moving to straw_calc_version 1 and then adjusting a straw bucket (by adding, 
> removing, or reweighting an item, or by using the reweight-all command) can 
> trigger a small to moderate amount of data movement if the cluster has hit 
> one of the problematic conditions.

