Hi Community,
currently I'm installing an NVMe-only storage cluster from scratch with cephadm
(v17.2.5). Everything works fine. Each of my 6 nodes has 3 enterprise NVMe
drives with 7 TB capacity.
At the beginning I installed only one OSD per NVMe; now I want to use four
instead of one, but I'm struggling to get the additional OSDs created.
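For reference, the kind of OSD spec I'm trying to apply looks roughly like this
(the file name, service_id and host_pattern are just placeholders, not what
cephadm generated for me):

cat <<EOF > osd-4-per-nvme.yaml
service_type: osd
service_id: four-per-nvme
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 0      # match the non-rotational (NVMe) devices
  osds_per_device: 4   # carve each NVMe into 4 OSDs
EOF
ceph orch apply -i osd-4-per-nvme.yaml

Devices that already carry a single OSD are of course not redeployed by this;
they first have to be drained and zapped.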
This post took a while to be approved by a moderator, and in the meantime I
found a service spec that had claimed all my available disks. I deleted it, and
after that all commands worked as expected.
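For anyone who runs into the same thing, this is roughly what I did. The service
name below is only an example (osd.all-available-devices is the name cephadm
uses for the catch-all spec); take whatever "ceph orch ls osd" shows on your
cluster:

ceph orch ls osd --export                # list the OSD specs, including the one grabbing every free disk
ceph orch rm osd.all-available-devices   # delete that spec; this does not remove the OSDs it created
ceph orch apply -i osd-4-per-nvme.yaml   # then apply the intended spec from above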
Thanks to all for reading.
___
Hi,
I’m in the process of upgrading all our ceph servers from 14.2.9 to 16.2.7.
Two of the three monitors are on 16.2.6 and one is on 16.2.7; I will update
them soon.
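To keep an eye on which daemons are on which release I'm going by:

ceph versions    # counts of running daemons per version
ceph mon stat    # quick check of the monitor quorum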
Before updating to 16.2.6/7 I set bluestore_fsck_quick_fix_on_mount to false,
and I have already upgraded more than half of my OSDs.
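For completeness, this is roughly how I set it (via the central config database;
if you still manage options through ceph.conf, the equivalent line goes into the
[osd] section):

ceph config set osd bluestore_fsck_quick_fix_on_mount false
ceph config get osd bluestore_fsck_quick_fix_on_mount   # verify it really reads back as false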
Hello,
I have an SSD pool that was created years ago with 128 PGs, which seems
suboptimal to me. The pool spans 32 OSDs of 1.6 TiB each: 8 servers with 4 OSDs
per server.
ceph osd pool autoscale-status recommends 2048 PGs.
Is it safe to enable the autoscale mode? Is the pool still accessible during
the change?
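For reference, this is what I'm considering running; the pool name is just a
placeholder, and the last line is only the manual alternative to the autoscaler:

ceph osd pool autoscale-status                    # current vs. recommended pg_num per pool
ceph osd pool set my-ssd-pool pg_autoscale_mode on
ceph osd pool set my-ssd-pool pg_num 2048         # or raise pg_num by hand instead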