Hi @all,
I have good news!
Indeed, by setting the datastore to kernel RBD (krbd) in the Proxmox config and by
listing the IPs separated by commas (they were separated by spaces before), the VMs
no longer shut down.
I will do further testing to confirm the info.
Pierre.
- Original Mail -
> From: "Pierre Bel
Hi Lokendra,
To have the monitors looked up through DNS, ceph-mon also needs to be resolved
correctly by the DNS server, just like _ceph-mon._tcp. And ceph-mon is the default
service name, so it doesn't need to be in the conf file anyway.
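As an illustration, the records might look roughly like this in the zone file
(domain, hostname, and address below are made up; 3300/6789 are the usual
msgr v2/v1 monitor ports):

    _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 3300 mon1.example.com.
    _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon1.example.com.
    mon1.example.com.           3600 IN A   192.168.1.11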
Ben
> On Feb 3, 2023, at 12:14, Lokendra Rathour wrote:
>
> Hi Robert and Team,
>
Hi! 😊
It would be very kind of you to help us with that!
We have pools in our Ceph cluster that are set to replicated size 2, min_size 1.
Obviously we want to go to size 3 / min_size 2, but we run into problems with that:
USED goes to 100% instantly and MAX AVAIL goes to 0. Write operations see
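For reference, the change being described boils down to something like this, with
"mypool" standing in for the real pool name:

    ceph osd pool set mypool size 3
    ceph osd pool set mypool min_size 2
    ceph df    # USED / MAX AVAIL per pool are reported here while the extra replicas backfill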
We've been running into a strange issue with ceph-fuse on some nodes
lately. After some job runs on the node (and finishes or gets killed),
ceph-fuse gets stuck, busily requesting objects from the OSDs even though no
process on the node is using CephFS. When this happens, ceph-fuse uses
2-3 cores, s
Question:
What does the future hold with regard to cephadm vs. rpm/deb packages? If it is
now suggested to use cephadm, and thus containers, to deploy new clusters, is there
an intent, at some point in the future, to no longer support rpm/deb packages for
Linux systems, a
Hello Ceph community.
The company that recently hired me has a 3-node Ceph cluster that has been
running and stable. I am the new lone administrator here; I do not know Ceph, and
this is my first experience with it.
The issue is that it was running out of space, which is why I made a 4th
Congrats on landing a fun new job! That’s quite the mess you have to untangle
there.
I’d suggest, since all of those versions will support orchestrator/cephadm,
running through the cephadm conversion process here:
https://docs.ceph.com/en/latest/cephadm/adoption/
That should get you to the point
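For what it's worth, the broad strokes of that adoption process look roughly like
the following (daemon names are placeholders; follow the linked doc for the exact
sequence on your versions):

    cephadm adopt --style legacy --name mon.node1   # convert each legacy monitor
    cephadm adopt --style legacy --name mgr.node1   # then each manager
    ceph mgr module enable cephadm
    ceph orch set backend cephadm
    cephadm adopt --style legacy --name osd.0       # then the OSDs, one by one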
>
> As for why the balancer isn’t working, first is 89 the correct number of OSDs
> after you added the 4th host?
OSDs 0-88 are on hosts 1-3, for a total of 89 OSDs. Host #4 has 20 drives, and
while Ceph is trying to add them, it gets as far as trying to mkfs and then it
errors out; you can see
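For anyone following along, a few places I'd look when OSD creation dies at mkfs
like that (assuming a cephadm-managed cluster, which may not apply here; "host4"
is a placeholder hostname):

    ceph osd tree                   # confirm which OSDs exist and on which hosts
    ceph orch device ls host4       # how the orchestrator sees the new host's drives
    ceph log last 50 info cephadm   # recent orchestrator messages, including failed OSD creations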