[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-29 Thread Yuri Weinstein
The release has been approved, and the gibba cluster has been upgraded. We are awaiting the LRC upgrade and will then, or in parallel, publish an RC for testing. ETA for publishing the release is 04/07/23. On Tue, Mar 28, 2023 at 2:59 PM Neha Ojha wrote: > upgrades approved! > > Thanks, > Neha > > On Tue

[ceph-users] Re: cephadm automatic sizing of WAL/DB on SSD

2023-03-29 Thread Calhoun, Patrick
I think that the backported fix for this issue made it into ceph v16.2.11. https://ceph.io/en/news/blog/2023/v16-2-11-pacific-released/ "ceph-volume: Pacific backports (pr#47413, Guillaume Abrioux, Zack Cerza, Arthur Outhenin-Chalandre)" https://github.com/ceph/ceph/pull/47413/commits/4252cc44
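
For reference, rather than relying on the automatic WAL/DB sizing, the DB size can also be pinned explicitly in a cephadm OSD service spec. A minimal sketch, assuming rotational data devices and a solid-state DB device; the host pattern and the 60G size are placeholders, not values from this thread:

    service_type: osd
    service_id: hdd_osds_with_ssd_db
    placement:
      host_pattern: 'osd-*'        # placeholder host pattern
    spec:
      data_devices:
        rotational: 1              # HDDs carry the data
      db_devices:
        rotational: 0              # SSD/NVMe carries block.db
      block_db_size: 60G           # explicit DB size instead of automatic sizing

Applied with something like: ceph orch apply -i osd-spec.yaml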

[ceph-users] RGW can't create bucket

2023-03-29 Thread Kamil Madac
Hi, One of my customers has an RGW cluster with two zones in one zonegroup that had been working correctly, but since a few days ago users have not been able to create buckets and always get Access Denied. Operations on existing buckets still work (such as listing and putting objects into an existing bucket). The only operatio
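
In a multisite setup, bucket creation is forwarded to the metadata master zone, so a stalled metadata sync or an outdated period is a common cause of sudden Access Denied on bucket creation while existing buckets keep working. A few hedged starting points for diagnosis (generic radosgw-admin calls, not taken from this thread; the uid is a placeholder):

    radosgw-admin sync status                  # check metadata/data sync between zones
    radosgw-admin period get                   # confirm both zones agree on the current period
    radosgw-admin user info --uid=SOME_USER    # verify the affected user's caps and quota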

[ceph-users] Re: 5 host setup with NVMe's and HDDs

2023-03-29 Thread Alex Gorbachev
Hi Tino, Proxmox has a good wiki for this: https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster You will run their internal deployment process, which is easy and painless. I recommend starting with 3x replication and setting up your NVMe root with either the UI or scripts. Here is an e
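
Since the quoted example is cut off above, here is a generic sketch of the usual device-class approach to separating NVMe and HDD pools (the pool name is a placeholder; these are standard ceph CLI calls, not the example from the truncated message):

    ceph osd crush rule create-replicated replicated_nvme default host nvme   # rule constrained to NVMe OSDs
    ceph osd crush rule create-replicated replicated_hdd  default host hdd    # rule constrained to HDD OSDs
    ceph osd pool set my_fast_pool crush_rule replicated_nvme                 # pin a pool to the NVMe rule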

[ceph-users] 5 host setup with NVMe's and HDDs

2023-03-29 Thread Tino Todino
Hi folks. Just looking for some up-to-date advice from the collective on how best to set up Ceph on 5 Proxmox hosts, each configured with the following: AMD Ryzen 7 5800X CPU, 64GB RAM, 2x SSD (as ZFS boot disk for Proxmox), 1x 500GB NVMe for DB/WAL, 1x 1TB NVMe as an OSD, 1x 16TB SATA HDD as a

[ceph-users] Re: orphan multipart objects in Ceph cluster

2023-03-29 Thread Jonas Nemeikšis
Hi, I think it's similar to this bug: https://tracker.ceph.com/issues/16767 On Tue, Mar 28, 2023 at 12:35 AM Ramin Najjarbashi <ramin.najarba...@gmail.com> wrote: > I hope this email finds you well. I wanted to share a recent experience I > had with our Ceph cluster and get your feedback on a
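
Incomplete multipart uploads can usually be listed and expired from the S3 side; a hedged sketch with the AWS CLI, where the bucket name and endpoint are placeholders:

    # list multipart uploads that were started but never completed or aborted
    aws s3api list-multipart-uploads --bucket my-bucket --endpoint-url http://rgw.example.com

    # lifecycle rule that aborts incomplete multipart uploads after one day
    aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
      --endpoint-url http://rgw.example.com \
      --lifecycle-configuration '{"Rules":[{"ID":"abort-mpu","Status":"Enabled","Filter":{"Prefix":""},"AbortIncompleteMultipartUpload":{"DaysAfterInitiation":1}}]}'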

[ceph-users] Re: upgrading from 15.2.17 to 16.2.11 - Health ERROR

2023-03-29 Thread xadhoom76
The main error now is: [ERR] MGR_MODULE_ERROR: Module 'cephadm' has failed: Expecting value: line 1 column 1 (char 0) Module 'cephadm' has failed: Expecting value: line 1 column 1 (char 0) If we disable cephadm, the health becomes OK. So is there a way to change the cephadm version? Mar 27
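
"Expecting value: line 1 column 1 (char 0)" is a JSON decode error, which with cephadm usually points at a stored value that is empty or not valid JSON. The cephadm module ships inside ceph-mgr itself, so its version only changes by upgrading the mgr daemons. A few hedged diagnostic steps (generic commands, not a confirmed fix for this cluster):

    ceph health detail                          # full text of the module error
    ceph config-key dump | grep mgr/cephadm     # cephadm keeps its state under mgr/cephadm/* keys
    ceph mgr module disable cephadm
    ceph mgr module enable cephadm              # re-enable after inspecting the offending key
    ceph mgr fail                               # restart the active mgr so the module reloads cleanly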

[ceph-users] Re: Adding new server to existing ceph cluster - with separate block.db on NVME

2023-03-29 Thread Robert Sander
On 29.03.23 01:09, Robert W. Eckert wrote: I did miss seeing the db_devices part. For ceph orch apply - that would have saved a lot of effort. Does osds_per_device create the partitions on the db device? No, osds_per_device creates multiple OSDs on one data device, which could be useful for
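
To make the distinction concrete: db_devices is what places block.db on the separate NVMe, while osds_per_device only splits a single data device into several OSDs. A hedged spec sketch, with the hostname as a placeholder:

    service_type: osd
    service_id: new_server_osds
    placement:
      hosts:
        - new-host               # placeholder hostname
    spec:
      data_devices:
        rotational: 1            # HDDs become the data portion of each OSD
      db_devices:
        rotational: 0            # block.db partitions are carved out of the NVMe automatically
      # osds_per_device: 2       # only for splitting one fast data device into multiple OSDs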