Hi Irek,
the default replication level in firefly is 3, while in emperor it is 2;
I think this is the reason my cluster has become unstable.
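For example, I think the per-pool replication factor can be checked and
lowered back with something like this (the pool name "rbd" here is only an
example):

  ceph osd pool get rbd size     # show the current replication factor
  ceph osd pool set rbd size 2   # go back to 2 replicas, as in emperor

The default for newly created pools can also be pinned in ceph.conf under
[global] with "osd pool default size = 2".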
I have another issue:
the omap folder of some OSDs is very big, about 2GB - 8GB. Is there any
way to clean up this folder?
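For reference, the omap size of each OSD on a node can be checked roughly
like this (assuming the default FileStore data path):

  du -sh /var/lib/ceph/osd/ceph-*/current/omap   # leveldb omap directory per OSD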
Best
Hi Irek,
I stopped radosgw, then deleted all of the pools that are created by
default for radosgw as below, waited for Ceph to delete the objects, and
re-created the pools. I stopped and started the whole cluster, including
radosgw. Now it is very unstable: OSDs are frequently marked down or they
crash. Please see a part of
Thanks Irek, it is correct, as you said.
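For completeness, the steps look roughly like this for one pool (the
pg_num of 128 is only an example; the same was done for the other default
radosgw pools, with radosgw stopped the whole time):

  ceph osd pool delete .rgw.bucket .rgw.bucket --yes-i-really-really-mean-it
  ceph osd pool create .rgw.bucket 128 128   # pg_num and pgp_num, 128 is just an example
  ceph -s                                    # watch cluster health while old objects are removed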
Best regards,
Thanh Tran
On Wed, May 7, 2014 at 2:15 PM, Irek Fasikhov wrote:
> Yes, it will delete all the objects stored in the pool.
>
>
> 2014-05-07 6:58 GMT+04:00 Thanh Tran :
>
>> Hi,
>>
>> If I use the command "ceph osd pool delete .rgw.bucket .rgw.bucket
>> --yes-i-really-really-mean-it" to delete the pool .rgw.bucket, will this
>> delete the pool and its objects, and clean up the data on the OSDs?
Yes, it will delete all the objects stored in the pool.
2014-05-07 6:58 GMT+04:00 Thanh Tran :
> Hi,
>
> If I use the command "ceph osd pool delete .rgw.bucket .rgw.bucket
> --yes-i-really-really-mean-it" to delete the pool .rgw.bucket, will this
> delete the pool and its objects, and clean up the data on the OSDs?
>
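For example, the removal can be watched after the delete with something
like this:

  ceph df    # the pool disappears and its used space drops as objects are removed
  rados df   # per-pool object counts

The objects are removed asynchronously by the OSDs in the background, so
the space is freed gradually rather than immediately.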
Hi,
If I use the command "ceph osd pool delete .rgw.bucket .rgw.bucket
--yes-i-really-really-mean-it" to delete the pool .rgw.bucket, will this
delete the pool and its objects, and clean up the data on the OSDs?
Best regards,
Thanh Tran