The release has been approved, and the gibba cluster has been upgraded.
We are awaiting the LRC upgrade and will then, or in parallel, publish an RC
for testing.
ETA for publishing the release is 04/07/23.
On Tue, Mar 28, 2023 at 2:59 PM Neha Ojha wrote:
> upgrades approved!
>
> Thanks,
> Neha
>
> On Tue
I think that the backported fix for this issue made it into ceph v16.2.11.
https://ceph.io/en/news/blog/2023/v16-2-11-pacific-released/
"ceph-volume: Pacific backports (pr#47413, Guillaume Abrioux, Zack Cerza,
Arthur Outhenin-Chalandre)"
https://github.com/ceph/ceph/pull/47413/commits/4252cc44
Hi,
One of my customers has an RGW cluster with two zones in one zonegroup that
was working correctly, but since a few days ago users are unable to create
buckets and always get "Access Denied". Working with existing buckets (like
listing objects or putting objects into an existing bucket) works. The only operatio
Hi Tino,
Proxmox has a good wiki for this:
https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
You will run their internal deployment process, which is easy and
painless. I recommend starting with 3x replication and setting up your
NVMe root with either the UI or scripts. Here is an e
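For the scripted route, a rough sketch of what I mean (the pool name "vm-pool"
is just a placeholder, and it assumes your NVMe OSDs carry the nvme device
class):

# Pool with 3 replicas, 2 required to serve I/O (pool name is a placeholder)
ceph osd pool set vm-pool size 3
ceph osd pool set vm-pool min_size 2

# CRUSH rule limited to NVMe-class OSDs, then assigned to the pool
ceph osd crush rule create-replicated nvme-only default host nvme
ceph osd pool set vm-pool crush_rule nvme-only

If I remember right, the Proxmox GUI exposes size/min_size and the CRUSH rule
in the pool creation dialog as well, so the same can be done by clicking.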
Hi folks.
Just looking for some up-to-date advice, please, from the collective on how best
to set up Ceph on 5 Proxmox hosts, each configured with the following:
AMD Ryzen 7 5800X CPU
64GB RAM
2x SSD (as ZFS boot disk for Proxmox)
1x 500GB NVMe for DB/WAL
1x 1TB NVMe as an OSD
1x 16TB SATA HDD as a
Hi,
I think it's similar to this bug.
https://tracker.ceph.com/issues/16767
On Tue, Mar 28, 2023 at 12:35 AM Ramin Najjarbashi <
ramin.najarba...@gmail.com> wrote:
> I hope this email finds you well. I wanted to share a recent experience I
> had with our Ceph cluster and get your feedback on a
The main error now is:
[ERR] MGR_MODULE_ERROR: Module 'cephadm' has failed: Expecting value: line 1
column 1 (char 0)
Module 'cephadm' has failed: Expecting value: line 1 column 1 (char 0)
If we disable cephadm, the cluster health becomes OK again.
So is there a way to change cephadm's version?
Mar 27
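For reference, what we did is roughly this (just a sketch of the commands
involved, nothing more):

# The failed module shows up in the health output
ceph health detail

# Disabling the cephadm mgr module clears the MGR_MODULE_ERROR warning
ceph mgr module disable cephadm

# Re-enable it once the underlying issue is resolved
ceph mgr module enable cephadm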
On 29.03.23 01:09, Robert W. Eckert wrote:
> I did miss seeing the db_devices part for ceph orch apply - that would have
> saved a lot of effort. Does osds_per_device create the partitions on the
> db device?

No, osds_per_device creates multiple OSDs on one data device, which could be
useful for
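To make the distinction concrete, a minimal, untested spec sketch - the
service_id, host_pattern, and rotational filters are placeholders, not taken
from your setup:

cat > osd-spec.yaml <<'EOF'
service_type: osd
service_id: hdd_with_nvme_db      # placeholder name
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1                 # HDDs become the data devices
  db_devices:
    rotational: 0                 # DB/WAL is placed on the non-rotational device(s)
  # osds_per_device: 2            # would split each *data* device into 2 OSDs;
  #                               # it does not partition the db device
EOF
ceph orch apply -i osd-spec.yaml --dry-run

The --dry-run gives a preview of which devices would be consumed before
anything is actually created.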