[...] rebalanced anyway?
Regards,
Quenten Grasso
-Original Message-
From: Christian Balzer [mailto:ch...@gol.com]
Sent: Tuesday, 27 January 2015 11:53 AM
To: ceph-users@lists.ceph.com
Cc: Quenten Grasso
Subject: Re: [ceph-users] OSD removal rebalancing again
On Tue, 27 Jan 2015 01:37:52 +0000, Quenten Grasso wrote:
> 52    1       osd.52  up      1
> 53    1       osd.53  up      1
> 54    1       osd.54  up      1
>
> Regards,
> Quenten Grasso
>
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> To: ceph-users@lists.ceph.com
> Cc: Quenten Grasso
> Subject: Re: [ceph-users] OSD removal rebalancing again
Hello,
A "ceph -s" and "ceph osd tree" would have been nice, but my guess is that
osd.0 was the only osd on that particular storage server?
In that case the removal of the bucket (host) by removing the last OSD in
it also triggered a re-balancing.
Not really/well documented AFAIK and annoying, b[...]
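For reference, a commonly suggested way to avoid this second data movement is to drain the OSD's CRUSH weight to zero before removing it, so all migration happens in one rebalance. The sketch below uses the standard ceph CLI; `osd.0` is taken from the thread, and the sysvinit-style service name is an assumption (adjust for your distro/init system):

```shell
# Set the OSD's CRUSH weight to 0 first: data drains off it in a single
# rebalance, instead of once on "out" and again when the host bucket
# disappears along with its last OSD.
ceph osd crush reweight osd.0 0

# Wait for HEALTH_OK / all PGs active+clean, then finish the removal:
ceph osd out osd.0
sudo service ceph stop osd.0     # service name is distro-dependent
ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd rm osd.0
```

Because the `crush reweight ... 0` step already moved all data off osd.0, the later `crush remove` (which also deletes the now-empty host bucket if osd.0 was its last member) should not trigger further migration.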
Hi All,
I just removed an OSD from our cluster following the steps on
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
First I set the OSD as out,
ceph osd out osd.0
This emptied the OSD, and eventually the health of the cluster came back
to normal/OK and the OSD was up and out (took abo[...]
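The drain described above can be followed from the CLI. A minimal sketch, assuming the osd.0 from the thread:

```shell
# Mark the OSD out; its placement groups are remapped to the remaining OSDs:
ceph osd out osd.0

# Follow recovery until the cluster reports HEALTH_OK again:
ceph -s          # one-shot status summary (health, PG states)
ceph -w          # or watch the cluster event stream live

# Confirm the OSD's state: it should still show "up", with reweight 0 (out):
ceph osd tree
```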