Hi all,
I'm looking for some advice on reducing my Ceph cluster by half. I
currently have 40 hosts and 160 OSDs on a cephadm-managed Pacific
cluster. The storage space is only 12% utilized. I want to reduce the
cluster to 20 hosts and 80 OSDs while keeping the cluster operational.
I'd prefer
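For context, a hedged sketch of how removing a host might proceed under cephadm's orchestrator, one host at a time, on Pacific or later (the host name is hypothetical):

```shell
# Drain the host: cephadm labels it _no_schedule and gracefully
# removes its OSDs, migrating their data onto the remaining hosts.
ceph orch host drain host21

# Watch the OSD removal and rebalance progress before touching
# the next host.
ceph orch osd rm status
ceph -s

# Once no daemons remain on the host, remove it from the cluster.
ceph orch ps host21
ceph orch host rm host21
```

Repeating this per host keeps the cluster serving I/O throughout, at the cost of repeated rebalances.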
Thank you Matt, Etienne, and Frank for your great advice. I'm going to
set up a small test cluster to familiarize myself with the process
before making the change in my production environment. Thank you all
again, I really appreciate it!
Jason
On 2022-02-21 17:58, Jason Borden wrote:
Greetings,
I have a question regarding the use of cephadm and disk partitions. I've
noticed that the cephadm documentation says a device cannot have
partitions to be considered "available" for use. In my situation I don't want
to use a device with partitions, but rather a partition itself
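By way of illustration, one common workaround is to wrap the partition in LVM and hand the logical volume to the orchestrator, since cephadm treats raw partitioned devices as unavailable. This is a sketch under that assumption; the device, VG, LV, and host names are hypothetical:

```shell
# Build an LVM stack on the partition you want to dedicate to the OSD.
pvcreate /dev/sdb2
vgcreate ceph-vg-sdb2 /dev/sdb2
lvcreate -l 100%FREE -n osd-lv ceph-vg-sdb2

# Point the orchestrator at the logical volume on that host.
ceph orch daemon add osd myhost:ceph-vg-sdb2/osd-lv
```

The trade-off is exactly the one discussed below: the LVM layer works, but it adds indirection between the OSD and the underlying disk.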
Hi Robert!
Thanks for answering my question. I take it you're working a lot with Ceph
these days! On my pre-Octopus clusters I did use LVM backed by partitions, but
I always kind of wondered whether it was good practice or not, as it added an
additional layer and obscured the underlying disk topology.
We have been using ceph-deploy in our existing cluster, running as a non-root
user with sudo permissions. I've been working on getting an Octopus cluster
working using cephadm. During bootstrap I ran into an
"execnet.gateway_bootstrap.HostNotFound" issue. It turns out that the problem
was caused
Thanks for the quick reply! I am using the cephadm package. I just wasn't aware
of the user that was created as part of the package install. My
/etc/sudoers.d/cephadm seems to be incorrect. It gives root permission to
/usr/bin/cephadm, but cephadm is installed in /usr/sbin. That is easily fixed.
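For illustration, the corrected sudoers entry might look like the fragment below; the username and exact rule are assumptions based on the description above, not the package's actual content:

```
# /etc/sudoers.d/cephadm (hypothetical corrected content)
# Point the rule at the actual install path, /usr/sbin rather than /usr/bin.
cephadm ALL=(root) NOPASSWD: /usr/sbin/cephadm
```

Editing it through `visudo -f /etc/sudoers.d/cephadm` avoids locking yourself out with a syntax error.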
Ok, I've been digging around a bit in the code and made progress, but haven't
got it all working yet. Here's what I've done:
# yum install cephadm
# ln -s ../sbin/cephadm /usr/bin/cephadm  # needed to reference the correct path
# cephadm bootstrap --output-config /etc/ceph/ceph.conf --output-key
I missed a line while pasting the previous message:
# ceph orchestrator set backend cephadm
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io