[ceph-users] Issues during Nautilus Pacific upgrade

2022-11-23 Thread Ana Aviles
Hi, We would like to share our experience upgrading one of our clusters from Nautilus (14.2.22-1bionic) to Pacific (16.2.10-1bionic) a few weeks ago. To start with, we had to convert our monitors' databases to RocksDB in order to continue with the upgrade. Also, we had to migrate all our OSDs to
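
For anyone hitting the same prerequisite, a minimal sketch of how the monitor backend can be checked and, if it is still on LevelDB, converted by redeploying that monitor (the paths and the use of hostname -s as the mon id are assumptions; do one monitor at a time and keep quorum):

    # Which key-value backend is this monitor using? (prints "leveldb" or "rocksdb")
    cat /var/lib/ceph/mon/ceph-$(hostname -s)/kv_backend

    # If it is leveldb, recreate the monitor so it gets a fresh rocksdb store.
    systemctl stop ceph-mon@$(hostname -s)
    ceph mon remove $(hostname -s)
    mv /var/lib/ceph/mon/ceph-$(hostname -s) /var/lib/ceph/mon/ceph-$(hostname -s).bak
    ceph mon getmap -o /tmp/monmap
    ceph auth get mon. -o /tmp/mon-keyring
    ceph-mon --mkfs -i $(hostname -s) --monmap /tmp/monmap --keyring /tmp/mon-keyring
    chown -R ceph:ceph /var/lib/ceph/mon/ceph-$(hostname -s)
    systemctl start ceph-mon@$(hostname -s)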

[ceph-users] Re: Issues during Nautilus Pacific upgrade

2022-11-24 Thread Ana Aviles
On 11/23/22 19:49, Marc wrote: We would like to share our experience upgrading one of our clusters from Nautilus (14.2.22-1bionic) to Pacific (16.2.10-1bionic) a few weeks ago. To start with, we had to convert our monitors' databases to RocksDB in Weirdly I have just one monitor db in LevelDB st

[ceph-users] Re: Very slow snaptrim operations blocking client I/O

2023-01-30 Thread Ana Aviles
Hi, Josh already suggested this, but I will say it one more time. We had similar behaviour upgrading from Nautilus to Pacific. In our case compacting the OSDs did the trick. For us there was no performance impact running the compaction (ceph daemon osd.0 compact), although we ran them in batches and
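
In case it helps, a small sketch of running the compaction in batches via the admin socket as mentioned above (the batch of OSD ids is just an example; run it on the host that owns those OSDs):

    # Compact a batch of OSDs in parallel through their admin sockets,
    # then wait for the whole batch to finish before starting the next one.
    for osd in 0 1 2 3; do
        ceph daemon osd.$osd compact &
    done
    wait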

[ceph-users] Read errors on NVME disks

2022-05-03 Thread Ana Aviles
Hi, Recently we migrated from SSDs to NVMe disks (Samsung PM983) in our cluster. Since then we have noticed infrequent read errors on the NVMe disks, resulting in scrub errors on PGs. We did not have this with our previous (SSD) disks. Do you discard them automatically or look at internal paramete
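
For context, a hedged sketch of commands that can be used to narrow such errors down to a specific drive and PG (the device name and PG id are placeholders):

    # Drive-side: error counters and media wear reported by the NVMe itself
    nvme smart-log /dev/nvme0n1
    nvme error-log /dev/nvme0n1

    # Cluster-side: which PGs are inconsistent and which object copies failed
    ceph health detail
    rados list-inconsistent-obj <pgid> --format=json-pretty

    # Once the bad replica is confirmed, the PG can be repaired
    ceph pg repair <pgid>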