[ceph-users] Re: v16.2.10 Pacific released

2022-07-23 Thread Konstantin Shalygin
Hi, is this a hotfix-only release? Were no other patches targeted for 16.2.10 included here? Thanks, k Sent from my iPhone > On 22 Jul 2022, at 03:38, David Galloway wrote: > > This is a hotfix release addressing two security vulnerabilities. We > recommend all users update to this release. >

[ceph-users] creating OSD partition on blockdb ssd

2022-07-23 Thread Boris Behrens
Hi, I would like to use some of the block.db SSD space for OSDs. We run some radosgw clusters with 8TB and 16TB rotational OSDs. We added 2TB SSDs and use one SSD for every five 8TB OSDs or three 16TB OSDs. Now there is still space left on the devices and I thought I could just create another LV of 100GB o
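A minimal sketch of the idea above, using Python's subprocess module to carve a spare logical volume out of the shared block.db SSD and hand it to ceph-volume as an extra OSD. The volume group name `ceph-block-dbs` and the LV name/size are placeholders for illustration; the real VG name comes from `vgs` on the host.

```python
# Sketch: create a 100 GB LV on the shared block.db SSD and deploy it as an OSD.
# Assumes the SSD is already managed by LVM; the VG name below is a placeholder.
import subprocess

VG = "ceph-block-dbs"   # hypothetical volume group backing the block.db SSD
LV = "osd-data-small"   # name for the new data LV
SIZE = "100G"           # leftover space we want to use

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Carve a new logical volume out of the free space on the SSD.
run(["lvcreate", "-L", SIZE, "-n", LV, VG])

# 2. Hand the LV to ceph-volume so it becomes a standalone OSD
#    (its data then shares the SSD with the other OSDs' block.db volumes).
run(["ceph-volume", "lvm", "create", "--data", f"{VG}/{LV}"])
```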

[ceph-users] Re: [Ceph-maintainers] Re: v16.2.10 Pacific released

2022-07-23 Thread Ilya Dryomov
On Sat, Jul 23, 2022 at 12:16 PM Konstantin Shalygin wrote: > > Hi, > > Is this a hotfix-only release? Were no other patches targeted for 16.2.10 > included here? Hi Konstantin, Correct, just fixes for CVE-2022-0670 and a potential s3website denial-of-service bug. Thanks, Ilya

[ceph-users] PySpark write data to Ceph returns 400 Bad Request

2022-07-23 Thread Luigi Cerone
I have a problem with my PySpark configuration when writing data to a Ceph bucket. With the following Python code snippet I can read data from the Ceph bucket, but when I try to write to the bucket I get the following error: ``` 22/07/22 10:00:58 DEBUG S3ErrorResponseHandler: Failed in parsin
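For S3-compatible stores like Ceph RGW, a 400 on writes is often down to the S3A endpoint, bucket-addressing, or TLS settings rather than the write itself. Below is a hedged sketch of how those hadoop-aws (S3A) options are typically set from PySpark; the endpoint, credentials, and bucket name are placeholders, not values from the original post.

```python
# Sketch: pointing Spark's S3A connector at a Ceph RGW endpoint from PySpark.
# Endpoint, credentials and bucket name are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("ceph-rgw-write-test")
    # RGW endpoint instead of AWS; placeholder host and port
    .config("spark.hadoop.fs.s3a.endpoint", "http://rgw.example.com:8080")
    .config("spark.hadoop.fs.s3a.access.key", "ACCESS_KEY")
    .config("spark.hadoop.fs.s3a.secret.key", "SECRET_KEY")
    # RGW buckets are usually addressed path-style, not virtual-host style
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    # plain HTTP for this placeholder endpoint; drop if RGW terminates TLS
    .config("spark.hadoop.fs.s3a.connection.ssl.enabled", "false")
    .getOrCreate()
)

# Reads working while writes fail can point at addressing or signing settings,
# so a tiny write like this makes a useful minimal reproducer.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.write.mode("overwrite").parquet("s3a://my-bucket/pyspark-write-test/")
```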

[ceph-users] Quincy full osd(s)

2022-07-23 Thread Nigel Williams
With current 17.2.1 (cephadm) I am seeing an unusual HEALTH_ERR. While adding files to a new, empty cluster (replica 3, CRUSH failure domain is host), OSDs became 95% full, and reweighting them to any value does not cause backfill to start. If I reweight the three overfull OSDs to 0.0 I get a large number of misplaced
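Not an answer to the report above, just a small diagnostic sketch: when OSDs sit near 95% and backfill refuses to start, one of the first things to compare is per-OSD utilization against the cluster's full/backfillfull ratios, since data is not backfilled onto OSDs above backfillfull_ratio. The snippet assumes the `ceph` CLI and an admin keyring are available on the host where it runs.

```python
# Sketch: compare per-OSD utilization against the cluster full ratios.
# Assumes the `ceph` CLI and an admin keyring are available on this host.
import json
import subprocess

def ceph_json(*args):
    out = subprocess.run(["ceph", *args, "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

osdmap = ceph_json("osd", "dump")
ratios = {k: osdmap[k] for k in ("full_ratio", "backfillfull_ratio", "nearfull_ratio")}
print("cluster ratios:", ratios)

# Per-OSD utilization and reweight values from `ceph osd df`.
for node in ceph_json("osd", "df")["nodes"]:
    util = node["utilization"] / 100.0
    flag = ""
    if util >= ratios["full_ratio"]:
        flag = "FULL"
    elif util >= ratios["backfillfull_ratio"]:
        flag = "backfillfull (no backfill onto this OSD)"
    print(f'{node["name"]:>8}  util={util:.2%}  reweight={node["reweight"]:.2f}  {flag}')
```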