[ceph-users] Re: OSDs cannot join cluster anymore

2023-06-23 Thread Malte Stroem
Hello Eugen, thanks. We found the cause. Somehow all /var/lib/ceph/fsid/osd.XX/config files on every host were still filled with expired information about the mons, so refreshing those files brought the OSDs up again. Damn. All other configs for the mons, MDSs, RGWs and so on were u
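A minimal sketch of how such a stale per-OSD config could be refreshed, assuming a cephadm deployment; the fsid and OSD id below are placeholders, not values from the thread:

    # regenerate a minimal ceph.conf containing the current mon addresses
    ceph config generate-minimal-conf > /tmp/minimal.conf
    # overwrite the stale per-OSD config on the host
    cp /tmp/minimal.conf /var/lib/ceph/<fsid>/osd.<id>/config
    # restart the OSD so it picks up the new mon addresses
    systemctl restart ceph-<fsid>@osd.<id>.service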

[ceph-users] Re: users caps change unexpected

2023-06-23 Thread Eugen Block
Hi, without knowing the details I just assume that it's simply "translated": the syntax you set is the older way of specifying RBD caps, and for a couple of years now it has been sufficient to use "profile rbd". Do you notice client access issues (which I would not expect) or are you just curious about the
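For reference, a sketch of the two equivalent ways of granting RBD client caps; the client name and pool name are placeholders:

    # older, explicit cap syntax
    ceph auth caps client.example mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd'
    # newer, profile-based syntax
    ceph auth caps client.example mon 'profile rbd' osd 'profile rbd pool=rbd'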

[ceph-users] ceph-dashboard python warning with new pyo3 0.17 lib (debian12)

2023-06-23 Thread DERUMIER, Alexandre
Hi, on debian12, ceph-dashboard is throwing a warning "Module 'dashboard' has failed dependency: PyO3 modules may only be initialized once per interpreter process". Seems to be related to the pyo3 0.17 change https://github.com/PyO3/pyo3/blob/7bdc504252a2f972ba3490c44249b202a4ce6180/guide/src/migrat
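A quick way to confirm which mgr module raises the warning and see its error string, assuming the mgr is reachable:

    # failed module dependencies surface as a health warning
    ceph health detail
    # list mgr modules and any reported errors
    ceph mgr module ls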

[ceph-users] users caps change unexpected

2023-06-23 Thread Alessandro Italiano
Hi, we have a brand new Ceph instance deployed by the ceph puppet module. We are experiencing a funny issue: user caps change unexpectedly. The logs do not report any message about the user caps, even with auth/debug_auth: 5/5. Who/what can change the caps? Thanks in advance, Ale root@cephmon1:~# ceph v
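A sketch of how the stored caps and cap-changing commands might be inspected; the client name is a placeholder, and the audit log path varies by deployment and logging configuration:

    # show the caps currently stored for a client
    ceph auth get client.example
    # cap changes issued via the CLI/API are normally recorded in the mon audit log
    grep 'auth caps' /var/log/ceph/ceph.audit.log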

[ceph-users] Re: Ceph iSCSI GW not working with VMware VMFS and Windows Clustered Storage Volumes (CSV)

2023-06-23 Thread Work Ceph
Thanks for the help so far, guys! Has anybody gotten the default ceph-iscsi implementation to work with VMware and/or the Windows CSV storage system using a single iSCSI target/portal? On Wed, Jun 21, 2023 at 6:02 AM Maged Mokhtar wrote: > > On 20/06/2023 01:16, Work Ceph wrote: > > I see, th

[ceph-users] Re: Ceph iSCSI GW is too slow when compared with Raw RBD performance

2023-06-23 Thread Work Ceph
Awesome, thanks for the info! By any chance, do you happen to know what configurations you needed to adjust to make Veeam perform a bit better? On Fri, Jun 23, 2023 at 10:42 AM Anthony D'Atri wrote: > Yes, with someone I did some consulting for. Veeam seems to be one of the > prevalent uses fo

[ceph-users] Re: radosgw hang under pressure

2023-06-23 Thread Rok Jaklič
We are experiencing something similar (slow GET responses) when sending, for example, 1k delete requests in ceph v16.2.13. Rok On Mon, Jun 12, 2023 at 7:16 PM grin wrote: > Hello, > > ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy > (stable) > > There is a single (test) ra
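A rough reproduction sketch for generating such a burst of delete requests against a test bucket; the bucket and object names are hypothetical, and s3cmd is assumed to be configured for the RGW endpoint:

    # fire ~1k DELETE requests in parallel against a test bucket
    for i in $(seq 1 1000); do
      s3cmd del "s3://test-bucket/obj-$i" &
    done
    wait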

[ceph-users] cephfs - unable to create new subvolume

2023-06-23 Thread karon karon
Hello, I recently started using CephFS on version 17.2.6. I have a pool named "data" and an fs "kube". It was working fine until a few days ago; now I can no longer create a new subvolume, it gives me the following error: Error EINVAL: invalid value specified for ceph.dir.subvolume > here is the comman
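For context, the generic form of the command that fails here would look roughly like the following; the subvolume and group names are placeholders, not the poster's actual command:

    # create a subvolume in the filesystem "kube"
    ceph fs subvolume create kube mysubvol
    # optionally place it in a subvolume group
    ceph fs subvolume create kube mysubvol --group_name mygroup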

[ceph-users] Re: Grafana service fails to start due to bad directory name after Quincy upgrade

2023-06-23 Thread Adiga, Anantha
Hi Nizam, Thanks much for the detail. Regards, Anantha From: Nizamudeen A Sent: Friday, June 23, 2023 12:25 AM To: Adiga, Anantha Cc: Eugen Block ; ceph-users@ceph.io Subject: Re: [ceph-users] Re: Grafana service fails to start due to bad directory name after Quincy upgrade Hi, You can u

[ceph-users] Re: Ceph iSCSI GW is too slow when compared with Raw RBD performance

2023-06-23 Thread Maged Mokhtar
On 23/06/2023 04:18, Work Ceph wrote: Hello guys, We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows clients. We started noticing some unexpected performance issues with iSCSI. I mean, an SSD poo
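A sketch of how the raw-RBD side of such a comparison might be measured, bypassing the iSCSI gateway entirely; the pool and image names are placeholders:

    # write benchmark directly against an RBD image
    rbd bench --io-type write --io-size 4K --io-threads 16 --io-total 1G ssd-pool/test-image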

[ceph-users] Re: changing crush map on the fly?

2023-06-23 Thread Nino Kotur
You are correct, but that will involve massive data movement. You can change the failure domain (osd/host/rack/datacenter/etc.). You can change the replica count (2, 3, 4, 5, 6). You *CAN'T* change the EC value, e.g. 4+2, to something else. Kind regards, Nino
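A sketch of how a replicated pool's failure domain and replica count can be changed on a live cluster; the pool and rule names are placeholders:

    # create a replicated rule with failure domain "rack" under the default root
    ceph osd crush rule create-replicated rep-rack default rack
    # switch an existing pool to the new rule (this triggers data movement)
    ceph osd pool set mypool crush_rule rep-rack
    # change the replica count
    ceph osd pool set mypool size 3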

[ceph-users] Re: Grafana service fails to start due to bad directory name after Quincy upgrade

2023-06-23 Thread Nizamudeen A
Hi, You can upgrade the grafana version individually by setting the config option for the grafana container image, like: ceph config set mgr mgr/cephadm/container_image_grafana quay.io/ceph/ceph-grafana:8.3.5 and then redeploy the grafana container again, either via the dashboard or cephadm. Regards, Nizam
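For completeness, the redeploy step via the cephadm orchestrator would look something like this, assuming the service is named "grafana":

    # redeploy the grafana service so it picks up the new container image
    ceph orch redeploy grafana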