[ceph-users] Re: Ceph image delete error - NetHandler create_socket couldnt create socket

2024-04-18 Thread Konstantin Shalygin
Hi, your shell seems to have reached the default file descriptor limit (1024 in most cases), and your cluster may have more than 1000 OSDs. Try running `ulimit -n 10240` before the rbd rm task. k Sent from my iPhone > On 18 Apr 2024, at 23:50, Pardhiv Karri wrote: > > Hi, > > Trying to delete images in a C
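The `ulimit -n` advice above corresponds to the process resource limit RLIMIT_NOFILE. A minimal sketch of inspecting and raising that limit from Python (an illustration of the mechanism only, not Ceph code; raising the soft limit beyond the hard limit needs privileges, so this only raises it up to the hard limit):

```python
import resource

# The soft limit is what `ulimit -n` shows in a shell; the hard limit is
# the ceiling an unprivileged process may raise the soft limit to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# Raise the soft limit as far as the hard limit allows. With >1000 OSDs,
# a client opening one connection per OSD can exhaust a 1024 soft limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
new_soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"new soft={new_soft}")
```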

[ceph-users] Re: reef 18.2.3 QE validation status

2024-04-18 Thread Christian Rohmann
On 18.04.24 8:13 PM, Laura Flores wrote: Thanks for bringing this to our attention. The leads have decided that since this PR hasn't been merged to main yet and isn't approved, it will not go in v18.2.3, but it will be prioritized for v18.2.4. I've already added the PR to the v18.2.4 milestone s

[ceph-users] Ceph image delete error - NetHandler create_socket couldnt create socket

2024-04-18 Thread Pardhiv Karri
Hi, Trying to delete images in a Ceph pool is causing errors in one of the clusters. I rebooted all the monitor nodes sequentially to see if the error went away, but it still persists. What is the best way to fix this? The Ceph cluster is in an OK state, with no rebalancing or scrubbing happening
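The "couldn't create socket" error is the typical symptom of file descriptor exhaustion in the client process rather than a cluster problem. A hedged sketch that reproduces this failure mode locally by lowering the soft fd limit and opening sockets until the kernel refuses (illustrative only, not Ceph code):

```python
import errno
import resource
import socket

# Lower the soft RLIMIT_NOFILE, then open sockets until socket creation
# fails with EMFILE ("Too many open files") -- the same condition behind
# "NetHandler create_socket couldn't create socket".
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

socks, err = [], None
try:
    for _ in range(1024):
        socks.append(socket.socket())  # each socket consumes one fd
except OSError as e:
    err = e
finally:
    for s in socks:
        s.close()
    # Restore the original soft limit.
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

print(err)
```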

[ceph-users] Re: Working ceph cluster reports large amount of pgs in state unknown/undersized and objects degraded

2024-04-18 Thread Tobias Langner
Hey Alwin, thanks for your reply; answers inline. I'd assume (without the pool config) that the EC 2+1 profile is what is putting PGs inactive, because for EC you need n-2 for redundancy and n-1 for availability. Yeah, I guess this is likely related to the issue. The output got a bit mangled. Could you please pr
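The arithmetic behind a 2+1 erasure-coded pool on three hosts can be spelled out in a small sketch (assuming Ceph's usual default of min_size = k + 1 for EC pools, which should be verified against the actual pool settings):

```python
# Erasure coding k+m: each object is split into k data shards plus m coding
# shards; any k of the k+m shards suffice to reconstruct the object.
k, m = 2, 1
shards = k + m              # 3 shards, one per failure domain (host here)
raw_overhead = shards / k   # 1.5x raw capacity per byte stored
max_lost_shards = m         # data survives losing at most 1 shard

# Assumed EC default: min_size = k + 1. With only 3 hosts, min_size equals
# the shard count, so losing a single host/OSD drops PGs below min_size
# and I/O on them stops (PGs show up undersized/inactive).
min_size = k + 1
print(raw_overhead, max_lost_shards, min_size == shards)
```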

[ceph-users] Re: reef 18.2.3 QE validation status

2024-04-18 Thread Laura Flores
Hi Christian, Thanks for bringing this to our attention. The leads have decided that since this PR hasn't been merged to main yet and isn't approved, it will not go in v18.2.3, but it will be prioritized for v18.2.4. I've already added the PR to the v18.2.4 milestone so it's sure to be picked up.

[ceph-users] Working ceph cluster reports large amount of pgs in state unknown/undersized and objects degraded

2024-04-18 Thread Tobias Langner
We operate a tiny ceph cluster (v16.2.7) across three machines, each running two OSDs and one of each mds, mgr, and mon. The cluster serves one main erasure-coded (2+1) storage pool and a few other management-related pools. The cluster has been running smoothly for several months. A few weeks a

[ceph-users] Re: Prevent users to create buckets

2024-04-18 Thread Michel Raabe
Hi Sinan, On 17.04.24 14:45, si...@turka.nl wrote: Hello, I am using Ceph RGW for S3. Is it possible to create (sub)users that cannot create/delete buckets and are limited to specific buckets? At the end, I want to create 3 separate users and for each user I want to create a bucket. The use

[ceph-users] Re: reef 18.2.3 QE validation status

2024-04-18 Thread Christian Rohmann
Hey Laura, On 17.04.24 4:58 PM, Laura Flores wrote: There are two PRs that were added later to the 18.2.3 milestone concerning debian packaging: https://github.com/ceph/ceph/pulls?q=is%3Apr+is%3Aopen+milestone%3Av18.2.3 The user is asking if these can be included. I know everybody always want

[ceph-users] Re: Prevent users to create buckets

2024-04-18 Thread Ondřej Kukla
Hello Sinan, You could create a “master” account that creates all the buckets, and “sub” accounts with max_buckets set to 0; then, using the master account, attach bucket policies that allow the sub accounts to interact with their buckets. One downside to this solution is that the
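A hedged sketch of what such a bucket policy could look like (the user ID and bucket name are placeholders; RGW accepts AWS-style principals of the form arn:aws:iam:::user/<uid>, and bucket creation for the sub user would stay blocked via `radosgw-admin user modify --uid=<uid> --max-buckets=0`):

```python
import json

# Illustrative S3 bucket policy: grant one "sub" user read/write access to a
# single bucket owned by the "master" account. All identifiers are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ["arn:aws:iam:::user/subuser1"]},
            "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
            "Resource": [
                "arn:aws:s3:::bucket1",    # bucket-level actions (ListBucket)
                "arn:aws:s3:::bucket1/*",  # object-level actions (Get/Put)
            ],
        }
    ],
}

# The JSON would then be applied by the master account against the RGW
# endpoint, e.g. with s3cmd setpolicy or boto3 put_bucket_policy.
print(json.dumps(policy, indent=2))
```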