On 30/01/19 17:00, Paul Emmerich wrote:
Quick and dirty solution: take the full OSD down to issue the deletion
command ;)
Better solutions: temporarily increase the full limit (ceph osd
set-full-ratio) or reduce the OSD's reweight (ceph osd reweight)
Paul
Many thanks
___
On 30/01/19 17:04, Amit Ghadge wrote:
A better way is to increase osd set-full-ratio slightly (.97) and then
remove the buckets.
Many thanks
___
A better way is to increase osd set-full-ratio slightly (.97) and then
remove the buckets.
-AmitG
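
A minimal sketch of that sequence; the bucket name is a placeholder and 0.95
is assumed to be your original full ratio (the Ceph default):

~# ceph osd set-full-ratio 0.97                                  # temporarily raise the full threshold
~# radosgw-admin bucket rm --bucket=my-bucket --purge-objects    # delete the bucket and its objects
~# ceph osd set-full-ratio 0.95                                  # restore the previous threshold once space is freed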
On Wed, 30 Jan 2019, 21:30 Paul Emmerich wrote:
> Quick and dirty solution: take the full OSD down to issue the deletion
> command ;)
>
> Better solutions: temporarily increase the full limit (ceph osd
> set-full-ratio) or reduce the OSD's reweight (ceph osd reweight)
Quick and dirty solution: take the full OSD down to issue the deletion
command ;)
Better solutions: temporarily increase the full limit (ceph osd
set-full-ratio) or reduce the OSD's reweight (ceph osd reweight)
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https:
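
A rough illustration of both suggestions, using osd.2 from the report below
and assuming a systemd-based deployment (the OSD id and the 0.90 weight are
only illustrative; reweighting moves data, so the other OSDs need room):

~# systemctl stop ceph-osd@2      # quick and dirty: stop the full OSD while you issue the deletion
~# ceph osd reweight 2 0.90       # or: lower osd.2's override weight so PGs backfill elsewhere
~# ceph osd df                    # watch per-OSD utilization while data moves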
Hello guys,
I have a Ceph cluster with full S3 (RGW) storage:
~# ceph health detail
HEALTH_ERR 1 full osd(s); 1 near full osd(s)
osd.2 is full at 95%
osd.5 is near full at 85%
I want to delete some buckets, but when I tried to list the buckets:
~# radosgw-admin bucket list
2019-01-30 11:41:47.933621 7f467a9d0780 0
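
Once the full condition is lifted (see the replies above), per-bucket usage
can help decide what to remove; the bucket name here is only a placeholder:

~# ceph df                                          # cluster- and per-pool usage
~# radosgw-admin bucket stats --bucket=my-bucket    # size and object count for one bucket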