Have you tried a more aggressive reweight value?

I've seen some stubborn CRUSH maps that don't start moving data until 0.9 or
lower.
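
For example (just a sketch, and assuming osd.0 is still the near-full OSD
from your command below):

    # push the override weight down harder
    ceph osd reweight osd.0 0.85

    # confirm the REWEIGHT column changed and compare utilization
    ceph osd df tree

    # watch for PGs entering remapped/backfilling states
    ceph -s

If "ceph -s" never shows remapped or backfilling PGs after that, the
reweight isn't triggering any data movement at all.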

Reed

> On Mar 11, 2021, at 10:29 AM, Brent Kennedy <bkenn...@cfl.rr.com> wrote:
> 
> We have a Ceph Octopus cluster running 15.2.6, and it's indicating a near-full
> OSD which I can see is not weighted equally with the rest of the OSDs.  I
> tried the usual "ceph osd reweight osd.0 0.95" to force it down a little,
> but unlike on the Nautilus clusters, I see no data movement when issuing
> the command.  If I run "ceph osd tree", it shows the reweight setting, but
> no data movement appears to be occurring.
> 
> 
> 
> Is there some new thing in Octopus I am missing?  I looked through the
> release notes for 15.2.7, 15.2.8, and 15.2.9 and didn't see any fixes that
> jumped out as resolving a bug related to this.  The Octopus cluster was
> deployed using ceph-ansible and upgraded to 15.2.6.  I plan to upgrade to
> 15.2.9 in the coming month.
> 
> 
> 
> Any thoughts?
> 
> 
> 
> Regards,
> 
> -Brent
> 
> 
> 
> Existing Clusters:
> 
> Test: Octopus 15.2.5 (all virtual on NVMe)
> 
> US Production(HDD): Nautilus 14.2.11 with 11 OSD servers, 3 mons, 4
> gateways, 2 iSCSI gateways
> 
> UK Production(HDD): Nautilus 14.2.11 with 18 OSD servers, 3 mons, 4
> gateways, 2 iSCSI gateways
> 
> US Production(SSD): Nautilus 14.2.11 with 6 OSD servers, 3 mons, 4 gateways,
> 2 iSCSI gateways
> 
> UK Production(SSD): Octopus 15.2.6 with 5 OSD servers, 3 mons, 4 gateways
> 
> 
> 
> 
> 