I think the issue has been described in this note
<https://docs.ceph.com/en/quincy/cephadm/services/osd/#remove-an-osd> in
the documentation.
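For reference, a minimal sketch of the replacement workflow that note describes (the OSD id 22 is taken from the `ceph osd tree` output quoted below; flags as in the quincy cephadm docs):

```
# Drain the OSD and remove it; --replace keeps its id reserved so the
# replacement disk is deployed with the same id:
ceph orch osd rm 22 --replace
# Watch the drain/removal progress:
ceph orch osd rm status
```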
On 15.12.22 11:47, Ali Akil wrote:
Hello folks,
I am encountering a weird behavior from Ceph when I try to remove an
OSD to replace it w
Hello folks,
I want to replace an OSD, so I reweighted it to 0 with `ceph osd reweight
osd.22 0`.
The OSD had 24 PGs and the number went down to 7, but it is stuck there.
`ceph osd tree` shows:
`22  hdd  0  0  0 B  0 B  0 B  0 B  0 B  0 B  0  0  7  up`
cient.
I am running Ceph quincy version 17.2.0
Best regards,
Ali Akil
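A quick way to see why the count is stuck at 7 (a check, not a fix) is to list the PGs still mapped to that OSD:

```
# PGs still referencing osd.22 (id from the tree output above):
ceph pg ls-by-osd 22
# Overall cluster/recovery state:
ceph -s
```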
```
or directory
2022-11-22 17:32:30,408 7fb201442000 INFO /usr/bin/podman: stderr -->
Was unable to complete a new OSD, will rollback changes
```
This seems to be an issue with the device-mapper.
Without the encryption option, the OSDs are deployed fine.
Does anybody have an idea how to resolve it?
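In case the rolled-back attempt left stale LVM/dm-crypt mappings behind (just an assumption based on the error), it may help to inspect and then wipe the device before retrying; the host and device names below are placeholders:

```
# Look for leftover device-mapper entries and LVs from the failed attempt:
dmsetup ls
lsblk
# Wipe the disk through the orchestrator so ceph-volume can start clean:
ceph orch device zap <host> /dev/sdX --force
```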
2022 at 00:32, Ali Akil wrote:
Hello folks,
a couple of months ago I created a quincy Ceph cluster with cephadm. I
didn't encrypt the OSDs at that time.
What would be the process to encrypt these OSDs afterwards?
The documentation only mentions adding `encrypted: true` to the OSD
manifest, which will work only upon creation.
Regards,
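For what it's worth, the spec change itself is just the one flag; the existing OSDs would then have to be recreated one by one (e.g. with the `ceph orch osd rm ... --replace` workflow mentioned above, probably together with `--zap` so the old LV is wiped) before it takes effect. A sketch, with the service id and placement made up for illustration:

```
service_type: osd
service_id: osd_encrypted
placement:
  host_pattern: "*"
spec:
  data_devices:
    all: true
  encrypted: true
```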
e added to any service spec. cephadm will parse it and
apply all the values included in the same.
There's no documentation because this wasn't documented so far. I've just
created a
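Presumably that is a `config` section inside the spec; the exact layout below is an assumption on my side, with the option name taken from the question quoted underneath and an example value:

```
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: "*"
config:
  osd_deep_scrub_interval: "1209600"
spec:
  data_devices:
    all: true
```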
15, 2022 at 2:45 PM Ali Akil wrote:
Hello,
I used to set the configuration for Ceph using the CLI, e.g. `ceph config
set global osd_deep_scrub_interval <value>`. I would like, though, to
store these configurations in my git repository. Is there a way to apply
these configurations with cephadm?
Best Regards,
Ali Akil
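Not cephadm-specific, but another way to keep the options in a file under version control and load them into the cluster is `ceph config assimilate-conf`; the file name here is made up:

```
# cluster-config.ini kept in git, in ceph.conf syntax, e.g.:
#   [global]
#   osd_deep_scrub_interval = 1209600
#
# Import it into the central config database:
ceph config assimilate-conf -i cluster-config.ini
```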
ach as I believe that K8s nodes should be
immutable, which won't be the case with Ceph on K8s.
Ali Akil,
Senior Infrastructure Engineer
@Codesphere <https://codesphere.com/>
On 04.07.22 08:15, Szabo, Istvan (Agoda) wrote:
Hi,
Is ceph with rook ready for production?
Not really cl
Hello everybody,
I noticed that adding a config, for example `ceph config set global
public_network <subnet>`, doesn't update ceph.conf.
Is there any way to export the global configuration as a file?
Regards,
Ali
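That is expected with cephadm, since options set this way live in the monitors' central config database rather than in ceph.conf; two ways to get them out as files:

```
# Dump everything stored in the central config database:
ceph config dump
# Generate a minimal ceph.conf (fsid, mon addresses) for clients:
ceph config generate-minimal-conf
```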
Hello everybody,
I want to split my OSDs across 2 NVMEs (250 G) and 1 SSD (900 G) for
BlueStore DB storage. I used the following configuration:
```
service_type: osd
service_id: osd_spec_a
placement:
  host_pattern: "*"
spec:
  data_devices:
    paths:
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
```
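To also place the DB on the NVMEs and the SSD, a `db_devices` section can be added to the same spec; a sketch, with the NVMe/SSD device names being guesses:

```
service_type: osd
service_id: osd_spec_a
placement:
  host_pattern: "*"
spec:
  data_devices:
    paths:
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
  db_devices:
    paths:
      - /dev/nvme0n1
      - /dev/nvme1n1
      - /dev/sdb
```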
Hello everybody,
I have the following hardware, consisting of 3 nodes with the
following specs:
* 8 HDDs, 8 TB each
* 1 SSD, 900 G
* 2 NVMEs, 260 G each
I planned to use the HDDs for the OSDs and the other devices for
BlueStore (block.db).
According to the documentation, 2% of storage is needed for the BlueStore
DB as
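Taking that 2% figure at face value, a rough back-of-the-envelope check for this layout (my arithmetic, assuming the listed devices are per node):

```
# Per node:
#   8 HDD OSDs x 8 TB        = 64 TB of data devices
#   2% of 64 TB              ~ 1.28 TB of block.db wanted
#   available fast storage   = 900 GB SSD + 2 x 260 GB NVMe ~ 1.42 TB
# so the SSD/NVMe capacity is roughly in the right range if all of it
# is dedicated to block.db.
```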
Hello all,
I am planning a Ceph cluster on 3 storage nodes (12 OSDs per cluster,
with BlueStore). Each node has 192 GB of memory and 24 CPU cores.
I know it's recommended to have separate MON and OSD hosts, in order to
minimize disruption, since monitor and OSD daemons are not inactive a
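As a rough memory sanity check for colocating the daemons (my numbers, based on the default `osd_memory_target` of 4 GiB):

```
# Worst case with all 12 OSDs on one node:
#   12 OSDs x 4 GiB osd_memory_target  ~ 48 GiB
#   + a few GiB for mon/mgr daemons and the OS
# which leaves plenty of headroom out of 192 GiB, so memory should not
# be the limiting factor for colocating mon and OSD daemons here.
```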
Hello everybody,
the official documentation recommends 32 GB for the monitor nodes of a
small cluster. Is that per node?
So would I need 3 nodes with 32 GB RAM each in addition to the OSD nodes?
My cluster will consist of 3 replicated OSD nodes (12 OSDs each); how can
I calculate the required amount of RAM?
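The knob that dominates OSD-node memory is `osd_memory_target` (4 GiB per OSD by default), so one way to reason about or bound the per-node total is:

```
# Check the current per-OSD memory target:
ceph config get osd osd_memory_target
# Example: pin it explicitly to the 4 GiB default (value in bytes):
ceph config set osd osd_memory_target 4294967296
```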
Hello everybody,
I have two questions regarding BlueStore. I am struggling to understand
the documentation :/
I am planning to deploy 3 Ceph nodes with 10 HDDs for OSD data and a
RAID 0 of 2 SSDs for block.db, with replication on host level.
First question:
Is it possible to deploy block.db on RAID 0 p
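Leaving aside whether RAID 0 under block.db is a good idea (losing it takes down all ten OSDs on that node), ceph-volume itself just takes any block device or LV for the DB; a sketch with made-up device names, using one partition of the md device per OSD:

```
# One OSD, data on an HDD, block.db on a partition of the RAID 0 device:
ceph-volume lvm create --data /dev/sda --block.db /dev/md0p1
```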