Still haven't found out what will happen when the pool is full - but I tried a
little bit in our testing environment and I was not able to get the pool full
before an OSD got full. So at first one OSD reached the full ratio (pool
not quite full, about 98%) and IO stopped (as expected when a
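For reference, the thresholds involved and the current usage can be checked
with the commands below (the ratios in the comment are the Ceph defaults, not
values from our cluster):

  # show the nearfull / backfillfull / full thresholds (defaults 0.85 / 0.90 / 0.95)
  ceph osd dump | grep ratio
  # per-pool and per-OSD usage
  ceph df detail
  ceph osd df
  # the full ratio can be raised temporarily to get IO going again, e.g.
  ceph osd set-full-ratio 0.96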
Hey ceph-users,
may I ask (nag) again about this issue? I am wondering if anybody can
confirm my observations.
I raised a bug, https://tracker.ceph.com/issues/54136, but apart from the
assignment to a dev a while ago there has been no response yet.
Maybe I am just holding it wrong, please someone
Hi,
Still interested in some feedback... FYI, today I changed the
configuration of the RGW to https (for reasons unrelated to this
problem), and it seems the problem that was preventing the use of an
https RGW with the dashboard is fixed now. The problem described in my
previous email remains the same (
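In case it helps to reproduce the setup, switching an RGW to https and letting
the dashboard accept it usually comes down to something like the following
(the instance name, port and certificate path are placeholders, not the exact
values I used):

  # serve the RGW frontend over https (beast)
  ceph config set client.rgw.<instance> rgw_frontends "beast ssl_port=443 ssl_certificate=/etc/pki/rgw/<cert>.pem"
  # if the certificate is self-signed, tell the dashboard not to verify it
  ceph dashboard set-rgw-api-ssl-verify False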
Dear Ceph-users,
in the meantime I found this ticket, which seems to have the same assertion /
stack trace but was resolved: https://tracker.ceph.com/issues/44532
Does anyone have an idea how it could still happen in 16.2.7?
Greetings
André
- On 17 Apr 2023 at 10:30, Andre Gemuend wrote:
andre
Hi,
On 21.04.23 05:44, Tao LIU wrote:
I built a Ceph cluster with cephadm.
Every Ceph node has 4 OSDs. These 4 OSDs were built with 4 HDDs (block) and 1
SSD (DB).
At present, one HDD is broken, and I am trying to replace the HDD and
build the OSD with the new HDD and the free space of the SSD. I
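With cephadm, this kind of replacement is usually done by removing the dead
OSD with --replace and then applying an OSD service spec that pins the data
device to the new HDD and the DB device to the shared SSD. A minimal sketch
(OSD id, hostname and device paths are placeholders, adjust to your layout):

  # mark the dead OSD as destroyed but keep its ID for reuse
  ceph orch osd rm <OSD_ID> --replace

  # the old OSD's DB LV may still sit on the SSD; it may have to be removed
  # by hand (lvremove) before cephadm sees free space on the DB device

  # contents of osd-replace.yaml
  service_type: osd
  service_id: osd_hdd_db_on_ssd
  placement:
    hosts:
      - <hostname>
  spec:
    data_devices:
      paths:
        - /dev/<new_hdd>
    db_devices:
      paths:
        - /dev/<ssd>

  # preview, then apply
  ceph orch apply -i osd-replace.yaml --dry-run
  ceph orch apply -i osd-replace.yaml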