Hi,
I have a ceph cluster with 26 OSDs in 4 hosts, used only for rbd for an
OpenStack cluster (started at 0.48 I think), currently running 0.94.2 on
Ubuntu 14.04. A few days ago one of the OSDs was at 85% disk usage while
only 30% of the raw disk space is used. I ran reweight-by-utilization with
1
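For completeness, the commands involved look roughly like this on hammer
(the threshold of 120 below is only an illustrative value, not necessarily
the one used here):

  ceph osd df                           # per-OSD utilization and weight
  ceph health detail                    # lists any stuck/unclean pgs
  ceph osd reweight-by-utilization 120  # lowers reweight of OSDs more than 20% above mean utilization
  ceph osd reweight 19 0.9              # or lower a single OSD's reweight by hand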
isk+write+known_if_redirected e61799) currently no flag points reached
rbd_data.47a42a1fba00d3.130e is an object in a VM disk that OpenStack is
trying to delete.
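In case it helps with tracking it down, the object prefix can be tied back
to an image and to a pg roughly like this (the pool name "volumes" and the
full object name are placeholders, not taken from this cluster):

  # find which image owns the rbd_data.47a42a1fba00d3.* objects
  rbd -p volumes ls | while read img; do
      rbd -p volumes info "$img" | grep -q 47a42a1fba00d3 && echo "$img"
  done
  # show which pg and which osds the blocked object maps to
  ceph osd map volumes <full rbd_data object name>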
gr,
Bart
On 08/17/2015 03:44 PM, minchen wrote:
It looks like the crush rule doesn't work properly after the osdmap changed;
there are 3 unclean pgs: 5.6c7, 5.2c7, 15.2bd.
I think you can try the following method to help locate the problem.
1st, run ceph pg query to look up the details of the pg state,
e.g. which osd it is blocked by
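For example, for one of the unclean pgs (writing the output to a file is
just for readability):

  ceph pg 5.6c7 query > /tmp/pg.5.6c7.json   # full pg state; the recovery_state section shows what it is blocked on
  ceph pg dump_stuck unclean                 # lists all stuck-unclean pgs and their acting osds
  ceph health detail | grep 5.6c7            # one-line summary of why the pg is flagged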
ting and 19 client ops,
> 1. check osd.19's log to see if there are any errors
> 2. if not, mark osd.19 out in the osdmap to remap pg 5.6c7:
> ceph osd out 19  // this will cause data migration
> I am not sure whether this will help you!
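A minimal sketch of those two steps (the log path below is only the default
location on Ubuntu; adjust as needed):

  grep -iE 'error|fail|slow request' /var/log/ceph/ceph-osd.19.log   # 1: look for problems reported by osd.19
  ceph osd out 19    # 2: marks it out, its pgs get remapped/backfilled elsewhere
  ceph -w            # watch recovery progress
  ceph osd in 19     # optionally bring it back in once the pgs are active+clean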