[ceph-users] Very slow backfilling/remapping of EC pool PGs

2023-03-20 Thread Gauvain Pocentek
Hello all, We have an EC (4+2) pool for RGW data, with HDDs + SSDs for WAL/DB. This pool spans 9 servers, each with 12 disks of 16 TB. About 10 days ago we lost a server and we've removed its OSDs from the cluster. Ceph has started to remap and backfill as expected, but the process has been getting s
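
For reference, a rough sketch of how the backfill throttles can be inspected and loosened while a rebalance like this runs. The option names are the stock ones, but defaults and mclock behaviour differ between releases, and the values below are only illustrative:

  # how many PGs are still backfilling, and on which OSDs
  ceph status
  ceph pg dump pgs_brief | grep backfill

  # current throttle values on one OSD
  ceph config show osd.0 osd_max_backfills
  ceph config show osd.0 osd_recovery_sleep_hdd

  # allow more concurrent backfills and drop the HDD recovery sleep
  ceph config set osd osd_max_backfills 3
  ceph config set osd osd_recovery_sleep_hdd 0

  # on Quincy and later, the mclock profile gates recovery speed instead
  ceph config set osd osd_mclock_profile high_recovery_ops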

[ceph-users] Re: s3 compatible interface

2023-03-20 Thread Matt Benjamin
Hi Chris, This looks useful. Note for this thread: this *looks like* it's using the zipper dbstore backend? Yes, that's coming in Reef. We think of dbstore as mostly the zipper reference driver, but it can be useful as a standalone setup, potentially. But there's now a prototype of a posix fi
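
A minimal sketch of what a standalone dbstore-backed radosgw configuration might look like; the option name below is an assumption based on the experimental zipper/dbstore work and should be verified against the build in use:

  # ceph.conf fragment -- experimental, option name assumed, verify for your release
  [client.rgw.dbstore-test]
      rgw backend store = dbstore
      debug rgw = 20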

[ceph-users] Re: Upgrade 16.2.10 --> 16.2.11 OSD "UPGRADE_REDEPLOY_DAEMON" failed

2023-03-20 Thread Marco Pizzolo
Is this by chance corrected in 17.2.5 already? If so, how can we pivot mid-upgrade to 17.2.5? Thanks, On Mon, Mar 20, 2023 at 6:14 PM Marco Pizzolo wrote: > Hello Everyone, We made the mistake of trying to patch to 16.2.11 from 16.2.10 which has been stable as we felt that 16.2.11 had be
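
If the cluster is cephadm-managed, retargeting a running upgrade is roughly a stop followed by a new start against the other release (check the upgrade notes for supported jumps first); a sketch:

  ceph orch upgrade status     # where the 16.2.11 upgrade currently stands
  ceph orch upgrade stop       # halt it
  ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.5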

[ceph-users] Upgrade 16.2.10 --> 16.2.11 OSD "UPGRADE_REDEPLOY_DAEMON" failed

2023-03-20 Thread Marco Pizzolo
Hello Everyone, We made the mistake of trying to patch from 16.2.10, which had been stable for us, to 16.2.11, as we felt that 16.2.11 had been out for a while already. As luck would have it, we are having failure after failure with OSDs not upgrading successfully, and have 355 more OSDs to go. I'm pretty
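
For a cephadm-managed cluster, the daemon behind an UPGRADE_REDEPLOY_DAEMON warning can usually be narrowed down with something like the following (the OSD id is a placeholder):

  ceph health detail                 # names the daemon whose redeploy failed
  ceph orch upgrade status
  ceph orch ps --daemon-type osd     # per-daemon version and state
  ceph log last 50 info cephadm      # recent orchestrator/cephadm errors
  # and on the affected host:
  cephadm logs --name osd.<id>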

[ceph-users] Re: s3 compatible interface

2023-03-20 Thread Chris MacNaughton
On 3/20/23 12:02, Frank Schilder wrote: Hi Marc, I'm also interested in an S3 service that uses a file system as a back-end. I looked at the documentation of https://github.com/aquarist-labs/s3gw and have to say that it doesn't make much sense to me. I don't see this kind of gateway anywhere

[ceph-users] The release time of v16.2.12 is?

2023-03-20 Thread Louis Koo
https://github.com/ceph/ceph/pull/47702?notification_referrer_id=NT_kwDOANWUT7M0MjI5MDgwMzE3OjEzOTk3MTM1&notifications_query=repo%3Aceph%2Fceph#issuecomment-1423732280 This issue has been backported to Pacific; when will v16.2.12 be released?

[ceph-users] Multiple instance_id and services for rbd-mirror daemon

2023-03-20 Thread Aielli, Elia
Hi, I'm facing a strange issue and Google doesn't seem to help me. I have a couple of clusters on Octopus v15.2.17, recently upgraded from 15.2.13. I had an rbd-mirror service working correctly between the two clusters; I then updated, and after some days where all was OK, I've come to the situatio
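
A few commands that might help show which rbd-mirror instances the cluster still considers registered (the pool name is a placeholder, and the restart line assumes a cephadm-managed deployment):

  rbd mirror pool status <pool> --verbose    # per-daemon instance ids and state
  ceph service dump | grep -A 10 rbd-mirror  # services the mons have registered
  # restart the daemon so it re-registers (cephadm-managed example):
  ceph orch restart rbd-mirror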

[ceph-users] Re: s3 compatible interface

2023-03-20 Thread Frank Schilder
Hi Marc, I'm also interested in an S3 service that uses a file system as a back-end. I looked at the documentation of https://github.com/aquarist-labs/s3gw and have to say that it doesn't make much sense to me. I don't see this kind of gateway anywhere there. What I see is a build of a rados ga

[ceph-users] Re: Unexpected slow read for HDD cluster (good write speed)

2023-03-20 Thread Janne Johansson
On Mon, 20 Mar 2023 at 09:45, Marc wrote: > > While reading, we barely hit the mark of 100MB/s; we would expect at least something similar to the write speed. These tests are being performed in a pool with a replication factor of 3. > You don't even describe how you tes
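
For reference, one way to describe and reproduce such a read/write test with the built-in benchmark, assuming a throw-away pool called testpool:

  # write objects first and keep them around for the read passes
  rados bench -p testpool 60 write --no-cleanup
  # sequential and random read passes
  rados bench -p testpool 60 seq
  rados bench -p testpool 60 rand
  # remove the benchmark objects afterwards
  rados -p testpool cleanup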

[ceph-users] Re: Almalinux 9

2023-03-20 Thread Michael Lipp
Has anyone used AlmaLinux 9 to install Ceph? Have you encountered problems? Other tips on this installation are also welcome. I have installed Ceph on AlmaLinux 9.1 (both plain Ceph and, later, Ceph/Rook) on a three-node VM cluster and then a three-node bare-metal cluster (with 4 OSDs each) without
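
For what it's worth, a minimal cephadm bootstrap on an EL9 host (AlmaLinux 9 included) looks roughly like the below; the download path and the monitor IP are placeholders to adjust for the release you actually want:

  curl -O https://download.ceph.com/rpm-17.2.5/el9/noarch/cephadm   # path assumed, check download.ceph.com
  chmod +x cephadm
  ./cephadm add-repo --release quincy
  ./cephadm install                       # installs cephadm and ceph-common via dnf
  cephadm bootstrap --mon-ip 10.0.0.1     # placeholder monitor IP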

[ceph-users] Almalinux 9

2023-03-20 Thread Sere Gerrit
Hello, Has anyone used AlmaLinux 9 to install Ceph? Have you encountered problems? Other tips on this installation are also welcome. Regards, Gerrit

[ceph-users] Re: Unexpected slow read for HDD cluster (good write speed)

2023-03-20 Thread Marc
> While reading, we barely hit the mark of 100MB/s; we would expect at least something similar to the write speed. These tests are being performed in a pool with a replication factor of 3. You don't even describe how you test? And why would you expect something like the write speed,