[ceph-users] Problem with cephadm and deploying 4 OSDs on NVMe storage

2023-03-03 Thread claas . goltz
Hi Community,
Currently I'm installing an NVMe-only storage cluster from scratch with cephadm
(v17.2.5). Everything works fine. Each of my six nodes has three enterprise NVMe
drives with 7 TB capacity each.

At the beginning I installed only one OSD per NVMe; now I want to use four per
device instead of one, but I'm struggling with that.

First of all, I set the following option in my cluster:
ceph orch apply osd --all-available-devices --unmanaged=true

As I understand it, this option should prevent cephadm from automatically picking
up new available disks and deploying OSD daemons on them. But that does not seem
to work.
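
For what it's worth, the OSD service specs that are currently applied (including
whether they are marked unmanaged) can be inspected with, for example:

ceph orch ls osd --export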

If I delete and purge my OSD and zap the disk with
ceph orch device zap ceph-nvme01 /dev/nvme2n1 --force

the disk becomes available to the cluster again, and seconds later an OSD with the
same ID as before is deployed on it. I checked that the old OSD was completely
removed and that its Docker container was not started.

My next attempt was to:
set the label with ceph orch host label add ceph-nvme01 _no_schedule
purge the OSD
zap the disk and then run
ceph orch daemon add osd ceph-nvme01:/dev/nvme2n1,osds_per_device=4
and finally remove the _no_schedule label again.

And again: the old single OSD was recreated instead of four.
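
An alternative, declarative route (just a sketch; the service_id, host name, and
device filter below are placeholders, not from my cluster) would be an OSD service
spec with osds_per_device, applied via ceph orch apply -i <file>:

service_type: osd
service_id: nvme-four-per-device
placement:
  hosts:
    - ceph-nvme01
spec:
  data_devices:
    all: true
  osds_per_device: 4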

So where is my mistake?
Thank you!
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Problem with cephadm and deploying 4 OSDs on NVMe storage

2023-03-07 Thread claas . goltz
This post took a while to be approved by a moderator, and in the meantime I found a
service spec that was picking up all my available disks. I deleted it, and after
that all commands worked as expected.
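
For anyone who hits the same thing: the leftover spec can be listed with
ceph orch ls osd and removed with ceph orch rm <service_name>, for example (the
name shown is just the default created by --all-available-devices; yours may
differ):

ceph orch ls osd
ceph orch rm osd.all-available-devices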

Thanks to all for reading.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] OSD(s) reporting legacy (not per-pool) BlueStore omap usage stats

2022-03-10 Thread Claas Goltz
Hi,

I'm in the process of upgrading all our Ceph servers from 14.2.9 to 16.2.7.

Two of the three monitors are on 16.2.6 and one is on 16.2.7. I will update them
soon.



Before updating to 16.2.6/7 I set bluestore_fsck_quick_fix_on_mount to false, and I
have already upgraded more than half of my OSD hosts (10 so far) to the latest
version without any problems. My health check now says:

“92 OSD(s) reporting legacy (not per-pool) BlueStore omap usage stats”
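
For reference, the flag mentioned above would typically be set cluster-wide with
something like:

ceph config set osd bluestore_fsck_quick_fix_on_mount false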



How should I handle the warning now?

Thanks!
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Question about auto scale and changing the PG Num

2022-03-18 Thread Claas Goltz
Hello,
I have an SSD pool that was initially created years ago with 128 PGs. This seems
suboptimal to me. The pool spans 32 OSDs of 1.6 TiB each, spread across 8 servers
with 4 OSDs per server.

ceph osd pool autoscale-status recommends 2048 PGs.

Is it safe to enable autoscale mode? Will the pool remain accessible during that
time? Or should I rather increase pg_num step by step: my idea would be to go first
to 512, then 1024, and finally to 2048. I would set the recovery priority to low
during working hours and back to the default outside that time.
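
For reference, the stepwise variant could look roughly like this (a sketch;
'ssd-pool' is a placeholder pool name and the throttling values are only examples):

ceph osd pool set ssd-pool pg_num 512
(wait for the data movement to finish, then repeat with 1024 and 2048)
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1

while enabling the autoscaler instead would be:

ceph osd pool set ssd-pool pg_autoscale_mode on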
Thank you very much!
Claas
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io