[ceph-users] Re: Cephfs path based restriction without cephx

2025-01-07 Thread Rok Jaklič
We understand and we have restricted network access. Thx. Rok On Wed, Jan 8, 2025 at 12:28 AM Dan van der Ster wrote: > Hi Rok, > > Without cephx enabled, any ceph client having network access to the > Ceph mon/osd/mds can connect to the cluster and do whatever they want. > E.g. delete any obj

[ceph-users] Re: Cephfs path based restriction without cephx

2025-01-07 Thread Dan van der Ster
Hi Rok, Without cephx enabled, any ceph client with network access to the Ceph mon/osd/mds can connect to the cluster and do whatever they want, e.g. delete any objects or pools or anything. The only way I can see this being workable would be to restrict Ceph to an isolated network and re-e
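
For reference, the standard way to get a path-based restriction is with cephx MDS caps; a minimal sketch, assuming a filesystem named cephfs, a hypothetical client.foo, and a subdirectory /appdata:

  $ ceph fs authorize cephfs client.foo /appdata rw
  # roughly equivalent explicit caps:
  $ ceph auth get-or-create client.foo \
      mon 'allow r' \
      mds 'allow rw fsname=cephfs path=/appdata' \
      osd 'allow rw tag cephfs data=cephfs'

Without cephx these caps are never evaluated, which is why an isolated network is the only fallback.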

[ceph-users] Re: squid 19.2.1 RC QE validation status

2025-01-07 Thread Laura Flores
I am checking a few things for core and the upgrade suites, but should have a response soon. Laura Flores She/Her/Hers Software Engineer, Ceph Storage Chicago, IL lflo...@ibm.com | lflo...@redhat.com M: +17087388804 On Tue, Jan 7, 2025 at 11:25 AM Adam Emerson wrote: >

[ceph-users] Re: Slow initial boot of OSDs in large cluster with unclean state

2025-01-07 Thread Dan van der Ster
Hi Tom, On Tue, Jan 7, 2025 at 10:15 AM Thomas Byrne - STFC UKRI wrote: > I realise the obvious answer here is don't leave a big cluster in an unclean > state for this long. Currently we've got PGs that have been remapped for 5 > days, which matches the 30,000 OSDMap epoch range perfectly. This i

[ceph-users] 18.2.5 readiness for QE Validation

2025-01-07 Thread Yuri Weinstein
Happy New Year! We are getting closer to starting testing for this point release. Almost all PRs marked with "milestone:v18.2.5" have been merged. Dev leads, please tag the PRs that must be included appropriately, so we can test and merge them soon. TIA

[ceph-users] Re: Slow initial boot of OSDs in large cluster with unclean state

2025-01-07 Thread Wesley Dillingham
It went from the normal osdmap range of 500-1000 maps to 30,000 maps in 5 days? That seems like excessive accumulation for a 5-day period. Respectfully, *Wes Dillingham* LinkedIn w...@wesdillingham.com On Tue, Jan 7, 2025 at 1:18 PM Thomas Byrne - S
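
As context, the span of osdmaps the mons are retaining can be read out of the cluster report; a minimal sketch, assuming a release whose report carries the osdmap_first_committed/osdmap_last_committed fields and that jq is available:

  $ ceph report 2>/dev/null | jq '.osdmap_last_committed - .osdmap_first_committed'
  # the mons only trim old osdmaps once PGs are clean, so this gap
  # keeps growing while the cluster stays in flux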

[ceph-users] Re: Slow initial boot of OSDs in large cluster with unclean state

2025-01-07 Thread Anthony D'Atri
> On our 6000+ HDD OSD cluster (pacific) That’s the bleeding edge in a number of respects. Updating to at least Reef would bring various improvements, and I have some suggestions I'd *love* to run by you wrt upgrade speed in such a cluster, if you’re using cephadm / ceph orch. Would

[ceph-users] Slow initial boot of OSDs in large cluster with unclean state

2025-01-07 Thread Thomas Byrne - STFC UKRI
Hi all, On our 6000+ HDD OSD cluster (pacific), we've been noticing it takes significantly longer for brand new OSDs to go from booting to active when the cluster has been in a state of flux for some time. It can take over an hour for a newly created OSD to be marked up in some cases! We've just p
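
A quick way to tell whether a booting OSD is still catching up on maps is its admin socket status, which reports the osdmap range the daemon holds; a minimal sketch (osd.123 is a placeholder, run on that OSD's host):

  $ ceph daemon osd.123 status
  # "state" stays in preboot/booting while "newest_map" trails the
  # cluster's current osdmap epoch (compare with: ceph osd stat)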

[ceph-users] Re: squid 19.2.1 RC QE validation status

2025-01-07 Thread Adam Emerson
On 16/12/2024, Yuri Weinstein wrote: > rgw - Eric, Adam E Approved for RGW. Failures were in tests and we've got fixes for those now.

[ceph-users] Cephfs path based restriction without cephx

2025-01-07 Thread Rok Jaklič
Hi, is it possible to somehow restrict a client in cephfs to a subdirectory without cephx enabled? We do not have any auth requirements enabled in ceph:

  auth cluster required = none
  auth service required = none
  auth client required = none

Kind regards, Rok

[ceph-users] check Nova keyring file

2025-01-07 Thread Michel Niyoyita
Hello Team, I am integrating OpenStack 2023.2 with a Ceph Reef cluster running on Ubuntu 22.04. I have configured the OpenStack globals as follows:

  ceph_nova_keyring: "ceph.client.nova.keyring"
  ceph_nova_user: "nova"
  ceph_nova_pool_name: "vms"

I have copied the nova keyring from /etc/ceph to /etc/kolla/c
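
For context, the keyring being copied is usually created with RBD profile caps on the pools Nova touches; a minimal sketch following the pattern in the Ceph/OpenStack docs (the pool list is an assumption; adjust to your layout):

  $ ceph auth get-or-create client.nova \
      mon 'profile rbd' \
      osd 'profile rbd pool=vms, profile rbd pool=images' \
      -o /etc/ceph/ceph.client.nova.keyring

The file name must match the ceph_nova_keyring setting above.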

[ceph-users] Re: How to configure prometheus password in ceph dashboard.

2025-01-07 Thread Redouane Kachach
The Ceph dashboard should automatically get the Prometheus user/password, so there's no need to configure anything there. If you want to change the default user/password, you should follow the instructions from the docs, as pointed out by Eugen. BTW: when security is enabled, that will affect the whole mo
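
For reference, on cephadm deployments the monitoring-stack security being discussed is toggled with a mgr option; a minimal sketch, assuming the secure_monitoring_stack option of recent cephadm releases (verify the option name against your version's docs):

  $ ceph config set mgr mgr/cephadm/secure_monitoring_stack true
  # redeploy so the daemons pick up the generated credentials
  $ ceph orch redeploy prometheus
  $ ceph orch redeploy alertmanager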