What's the current health of the cluster?
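The output of the standard status commands would help here, e.g.:

    ceph -s
    ceph health detail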
It may help to compact the monitors' LevelDB stores if they have grown in
size:
http://www.sebastien-han.fr/blog/2014/10/27/ceph-mon-store-taking-up-a-lot-of-space/
Depending on the size of the mon's store, compaction may take some time;
make sure to do only one monitor at a time.
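As a rough sketch (mon.a is a placeholder for your monitor's ID), an online
compaction can be triggered per monitor with:

    ceph tell mon.a compact

or, to compact at daemon startup instead, set in ceph.conf:

    [mon]
    mon compact on start = true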
*Kobi Laredo*
*Cloud Systems Engineer* | *(408) 409-KOBI*

On Fri, Mar 27, 2015 at 10:31 AM, Chu Duc Minh <chu.ducm...@gmail.com>
wrote:

> All my monitors are running.
> But I am deleting the pool .rgw.buckets, which currently holds 13 million
> objects (just test data).
> The reason I must delete this pool is that my cluster became unstable:
> sometimes an OSD goes down, PGs get stuck peering, incomplete, ...
> Therefore I must delete this pool to re-stabilize my cluster.  (radosgw is
> too slow at deleting objects once one of my buckets reaches a few million
> objects).
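> For reference, the pool removal itself was roughly the following (this
> destroys all data in the pool, and radosgw needs it recreated afterwards):
>
>     ceph osd pool delete .rgw.buckets .rgw.buckets --yes-i-really-really-mean-it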
>
> Regards,
>
>
> On Sat, Mar 28, 2015 at 12:23 AM, Gregory Farnum <g...@gregs42.com> wrote:
>
>> Are all your monitors running? Usually a temporary hang means that the
>> Ceph client tries to reach a monitor that isn't up, then times out and
>> contacts a different one.
>>
>> I have also seen it just be slow if the monitors are processing so many
>> updates that they're behind, but that's usually on a very unhappy cluster.
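>> A quick way to check, assuming a standard setup, is to ask for the quorum
>> state and to query a specific monitor directly (<mon-ip> is a placeholder):
>>
>>     ceph quorum_status --format json-pretty
>>     ceph -m <mon-ip>:6789 mon_status
>>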
>> -Greg
>> On Fri, Mar 27, 2015 at 8:50 AM Chu Duc Minh <chu.ducm...@gmail.com>
>> wrote:
>>
>>> On my Ceph cluster, "ceph -s" returns results quite slowly.
>>> Sometimes it returns a result immediately; sometimes it hangs for a few
>>> seconds before returning.
>>>
>>> Do you think this problem (slow "ceph -s" responses) relates only to the
>>> ceph-mon processes, or could it relate to the ceph-osds too?
>>> (I am deleting a big bucket, .rgw.buckets, and the ceph-osds' disk
>>> utilization is quite high.)
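>>> One thing I can try, I suppose, is timing a status query against each
>>> monitor in turn (<mon-ip> is a placeholder) to see whether a particular
>>> mon is the slow one:
>>>
>>>     time ceph -m <mon-ip>:6789 -s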
>>>
>>> Regards,
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
