[ceph-users] Re: Cephadm recreating osd with multiple block devices

2022-12-15 Thread Ali Akil
I think the issue has been described in this note <https://docs.ceph.com/en/quincy/cephadm/services/osd/#remove-an-osd> in the documentation. On 15.12.22 11:47, Ali Akil wrote: Hello folks, I am encountering a weird behavior from Ceph when I try to remove an OSD to replace it w
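For context, the replacement flow from that documentation section boils down to something like the sketch below; the OSD id, hostname and device path are placeholders.

```
# Drain the OSD and mark it "destroyed" so its id is kept for the replacement disk.
ceph orch osd rm 22 --replace

# Watch the drain/removal progress.
ceph orch osd rm status

# After swapping the disk, wipe leftover LVM/partition data so cephadm can reuse it.
ceph orch device zap <hostname> /dev/sdX --force
```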

[ceph-users] not all pgs not evicted after reweight

2022-12-15 Thread Ali Akil
Hello folks, I want to replace an OSD, so I reweighted it to 0 with `ceph osd reweight osd. 0`. The OSD had 24 PGs and the number went down to 7, but got stuck there. `ceph osd tree` shows: 22    hdd  0 0  0 B  0 B  0 B  0 B  0 B  0 B 0 0    7  up
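A hedged sketch for inspecting the stuck PGs (osd.22 is taken from the `ceph osd tree` excerpt above; everything else is an assumption):

```
# Which PGs are still mapped to this OSD, and in which state?
ceph pg ls-by-osd 22

# `ceph osd reweight` only adjusts the override weight; dropping the CRUSH weight
# as well (or draining via `ceph orch osd rm`) is the usual way to move the rest off.
ceph osd crush reweight osd.22 0

# Confirm the OSD can be destroyed without reducing data durability.
ceph osd safe-to-destroy osd.22
```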

[ceph-users] Cephadm recreating osd with multiple block devices

2022-12-15 Thread Ali Akil
cient. I am running Ceph Quincy, version 17.2.0. Best regards, Ali Akil

[ceph-users] osd encryption is failing due to device-mapper

2022-11-22 Thread Ali Akil
or directory 2022-11-22 17:32:30,408 7fb201442000 INFO /usr/bin/podman: stderr --> Was unable to complete a new OSD, will rollback changes ``` This seems to be an issue with the device-mapper. Without the encryption option, the OSDs are deployed fine. Does anybody have an idea how to resolve this?
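Not a root-cause fix, but a hedged cleanup sketch that is often the first step when stale LVM/dm-crypt state blocks a redeploy; hostname and device path are placeholders.

```
# Inspect leftover device-mapper targets and LVs on the host.
dmsetup ls
lsblk

# Wipe the device (LVM metadata, partition table) so ceph-volume can start from scratch.
ceph orch device zap <hostname> /dev/sdX --force
```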

[ceph-users] Re: encrypt OSDs after creation

2022-10-19 Thread Ali Akil
2022 at 00:32, Ali Akil wrote: Hello folks, a couple of months ago I created a Quincy Ceph cluster with cephadm. I didn't encrypt the OSDs at that time. What would be the process to encrypt these OSDs afterwards? The documentation states only adding `encrypted: true` to the OSD manifest, which
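For reference, the `encrypted: true` flag sits in the OSD service spec as in this minimal sketch (the device selection and service_id are examples, not the poster's actual spec):

```
service_type: osd
service_id: osd_encrypted
placement:
  host_pattern: "*"
spec:
  data_devices:
    all: true
  encrypted: true
```

Applied with `ceph orch apply -i osd_spec.yaml`, it only affects OSDs created from then on.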

[ceph-users] encrypt OSDs after creation

2022-10-11 Thread Ali Akil
Hello folks, a couple of months ago I created a Quincy Ceph cluster with cephadm. I didn't encrypt the OSDs at that time. What would be the process to encrypt these OSDs afterwards? The documentation states only adding `encrypted: true` to the OSD manifest, which will work only upon creation. R
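Since the flag is only honoured at creation time, the usual answer is to recreate the OSDs one at a time under an updated spec; a hedged sketch with placeholder id, hostname and device:

```
# Drain one OSD, keeping its id for the re-created (now encrypted) OSD.
ceph orch osd rm 0 --replace
ceph orch osd rm status

# Wipe the freed device so cephadm redeploys it under the spec that has encrypted: true.
ceph orch device zap <hostname> /dev/sdX --force
```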

[ceph-users] Re: [cephadm] ceph config as yaml

2022-07-19 Thread Ali Akil
e added to any service spec. cephadm will parse it and apply all the values included in the same. There's no documentation because this wasn't documented so far. I've just created a
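Presumably this refers to the `config:` section of a service spec; a minimal sketch (the option and value are only examples):

```
service_type: osd
service_id: osd_spec_a
placement:
  host_pattern: "*"
config:
  osd_deep_scrub_interval: "1209600"
spec:
  data_devices:
    all: true
```

Applying the file with `ceph orch apply -i spec.yaml` should then set the listed options for that service.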

[ceph-users] Re: [cephadm] ceph config as yaml

2022-07-15 Thread Ali Akil
15, 2022 at 2:45 PM Ali Akil wrote: Hello, I used to set the configuration for Ceph using the CLI, e.g. `ceph config set global osd_deep_scrub_interval `. However, I would like to store this configuration in my git repository. Is there a way to apply these configuration
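One hedged alternative for keeping options under version control is an ini-style conf file that gets assimilated into the monitors' config database (the file name and option are arbitrary examples):

```
# cluster.conf, tracked in git, ini format:
#   [global]
#   osd_deep_scrub_interval = 1209600

ceph config assimilate-conf -i cluster.conf
```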

[ceph-users] [cephadm] ceph config as yaml

2022-07-15 Thread Ali Akil
cephadm. Best Regards, Ali Akil

[ceph-users] Re: Is Ceph with rook ready for production?

2022-07-04 Thread Ali Akil
ach as I believe that K8s nodes should be immutable, which won't be the case with Ceph on K8s. Ali Akil, Senior Infrastructure Engineer @Codesphere <https://codesphere.com/> On 04.07.22 08:15, Szabo, Istvan (Agoda) wrote: Hi, is Ceph with Rook ready for production? Not really cl

[ceph-users] cephadm export config

2022-04-22 Thread Ali Akil
Hello everybody, I noticed that setting a config option, for example `ceph config set global public_network `, doesn't update ceph.conf. Is there any way to export the global configuration as a file? Regards, Ali
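A hedged sketch of exporting the centralized configuration to files (the output paths are arbitrary):

```
# Dump every option stored in the monitors' config database.
ceph config dump > cluster-config.txt

# Generate a minimal ceph.conf with just what clients need to reach the cluster.
ceph config generate-minimal-conf > ceph.conf.minimal
```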

[ceph-users] cephadm db size

2022-04-21 Thread Ali Akil
Hello everybody, I want to split my OSDs' BlueStore DB across 2 NVMes (250G) and 1 SSD (900G). I used the following configuration:
```
service_type: osd
service_id: osd_spec_a
placement:
  host_pattern: "*"
spec:
  data_devices:
    paths:
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
```
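A hedged completion of such a spec, assuming both NVMes and the SSD should carry block.db for the HDD OSDs; the device paths and the size are placeholders. Without `block_db_size`, ceph-volume splits each db device evenly among the OSDs assigned to it.

```
service_type: osd
service_id: osd_spec_a
placement:
  host_pattern: "*"
spec:
  data_devices:
    paths:
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
  db_devices:
    paths:
      - /dev/nvme0n1
      - /dev/nvme1n1
      - /dev/sdb
  block_db_size: 64424509440   # ~60 GiB per OSD, given in plain bytes
```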

[ceph-users] cephadm filter OSDs

2022-04-20 Thread Ali Akil
Hello everybody, I have the following hardware, consisting of 3 nodes with these specs: * 8 HDDs, 8 TB * 1 SSD, 900 G * 2 NVMes, 260 G. I planned to use the HDDs for the OSDs and the other devices for the BlueStore DB. According to the documentation, 2% of the storage is needed for the DB as
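With that layout, filtering on the rotational flag instead of listing paths might look like this sketch; treating the SSD and both NVMes together as db devices is an assumption.

```
service_type: osd
service_id: hdd_osds_with_fast_db
placement:
  host_pattern: "*"
spec:
  data_devices:
    rotational: 1     # the 8 TB HDDs
  db_devices:
    rotational: 0     # the 900G SSD and the two NVMes
```

`ceph orch apply -i osd_spec.yaml --dry-run` previews which devices each host would actually use.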

[ceph-users] mons on osd nodes with replication

2022-04-06 Thread Ali Akil
Hello all, I am planning a Ceph cluster on 3 storage nodes (12 OSDs per cluster, with BlueStore). Each node has 192 GB of memory and 24 CPU cores. I know it's recommended to have separate MON and OSD hosts, in order to minimize disruption, since monitor and OSD daemons are not inactive a
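If the mons do end up colocated on the three storage nodes, their placement can be pinned with a label; the hostnames below are placeholders.

```
# Label the storage nodes and restrict the mon daemons to that label.
ceph orch host label add storage-node-01 mon
ceph orch host label add storage-node-02 mon
ceph orch host label add storage-node-03 mon
ceph orch apply mon --placement="label:mon"
```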

[ceph-users] memory recommendation for monitors

2022-04-05 Thread Ali Akil
Hello everybody, the official documentation recommends 32 GB for the monitor nodes of a small cluster. Is that per node? I.e., would I need 3 nodes with 32 GB RAM each in addition to the OSD nodes? My cluster will consist of 3 replicated OSD nodes (12 OSDs each); how can I calculate the required a
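A back-of-the-envelope sketch under default settings; the 4 GiB figure is the default `osd_memory_target`, while the mon/mgr overhead is an estimate rather than a documented number.

```
# Per OSD node: 12 OSDs x 4 GiB (default osd_memory_target) = 48 GiB for the OSDs,
# plus OS/page-cache headroom and, if colocated, a few GiB for a mon/mgr pair.
ceph config get osd osd_memory_target   # 4294967296 (4 GiB) unless overridden
```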

[ceph-users] ceph bluestore

2022-04-05 Thread Ali Akil
Hello everybody, I have two questions regarding BlueStore; I am struggling to understand the documentation :/ I am planning to deploy 3 Ceph nodes with 10 HDDs for OSD data and 2 SSDs in RAID 0 for block.db, with replication at the host level. First question: Is it possible to deploy block.db on RAID 0 p
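For reference, a single OSD with an explicitly placed block.db can be created as in this sketch (device paths are placeholders); whether that db partition lives on a RAID 0 set or directly on one of the SSDs is a separate trade-off.

```
# One HDD-backed OSD whose block.db goes to a dedicated partition/LV on a fast device.
ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1
```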