[ceph-users] Data recovery after resharding mishap

2024-08-18 Thread Gauvain Pocentek
Hello list, We have made a mistake and dynamically resharded a bucket in a multi-site RGW setup running Quincy (support for this was only added in Reef). So we now have ~200 million objects still stored in the rados cluster, but completely removed from the bucket index (basically Ceph has created …
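A note on the commands involved (not from the original message): reshard and bucket-index state can be inspected with radosgw-admin. A minimal sketch, with "mybucket" as a placeholder bucket name:

  # List pending/ongoing reshard operations and per-bucket status
  $ radosgw-admin reshard list
  $ radosgw-admin reshard status --bucket=mybucket
  # Consistency check of the bucket index against stored objects
  $ radosgw-admin bucket check --bucket=mybucket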

[ceph-users] RGW requests piling up

2023-12-21 Thread Gauvain Pocentek
Hello Ceph users, We've been having an issue with RGW for a couple of days and we would appreciate some help, ideas, or guidance to figure out the issue. We run a multi-site setup which has been working pretty well so far. We don't actually have data replication enabled yet, only metadata replication …
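A first check in a multi-site setup like this (an assumption on my part, not from the thread) is the sync and gateway request state; a minimal sketch:

  # Metadata sync state between the zones
  $ radosgw-admin sync status
  # Request counters on one gateway; the daemon name depends on
  # the deployment
  $ ceph daemon client.rgw.<name> perf dump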

[ceph-users] Re: RGW requests piling up

2023-12-22 Thread Gauvain Pocentek
…figure out what's happening there. Gauvain On Thu, Dec 21, 2023 at 1:40 PM Gauvain Pocentek wrote: > Hello Ceph users, > > We've been having an issue with RGW for a couple of days and we would > appreciate some help, ideas, or guidance to figure out the issue. > > We run a multi-site …

[ceph-users] Re: RGW requests piling up

2023-12-22 Thread Gauvain Pocentek
We're going to look into adding CPU/RAM monitoring for all the OSDs next. Gauvain On Fri, Dec 22, 2023 at 2:58 PM Drew Weaver wrote: > Can you say how you determined that this was a problem? > > -----Original Message----- > From: Gauvain Pocentek > Sent: Friday, December 22 …

[ceph-users] Re: RGW requests piling up

2023-12-28 Thread Gauvain Pocentek
…in the index pool, and managed to kill the RGWs (14 of them) after a few hours. I hope this can help someone in the future. Gauvain On Fri, Dec 22, 2023 at 3:09 PM Gauvain Pocentek wrote: > I'd like to say that it was something smart, but it was a bit of luck. > > I logged in on a …
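For the record, slow or stuck ops hammering index-pool OSDs can usually be spotted like this (a sketch; osd.7 is a placeholder id):

  $ ceph health detail | grep -i slow
  # Recent slow ops recorded by one OSD, via its admin socket
  $ ceph daemon osd.7 dump_historic_slow_ops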

[ceph-users] Slow OSD startup and slow ops

2022-09-21 Thread Gauvain Pocentek
Hello all, We are running several Ceph clusters and are facing an issue on one of them; we would appreciate some input on the problems we're seeing. We run Ceph in containers on CentOS Stream 8, and we deploy using ceph-ansible. While upgrading Ceph from 16.2.7 to 16.2.10, we noticed that OSDs were …
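Slow OSD startup after a minor upgrade is often RocksDB work done at boot. Not from the thread, but two places to look, as a sketch (the OSD id and data path are placeholders):

  # What is the OSD doing while it starts?
  $ ceph daemon osd.12 status
  # Offline RocksDB compaction for a *stopped* OSD
  $ ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 compact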

[ceph-users] Re: Slow OSD startup and slow ops

2022-09-26 Thread Gauvain Pocentek
Hello Stefan, Thank you for your answers. On Thu, Sep 22, 2022 at 5:54 PM Stefan Kooman wrote: > Hi, > > On 9/21/22 18:00, Gauvain Pocentek wrote: > > Hello all, > > > > We are running several Ceph clusters and are facing an issue on one of > > them; we would …

[ceph-users] Re: Slow OSD startup and slow ops

2022-09-29 Thread Gauvain Pocentek
Hi Stefan, Thanks for your feedback! On Thu, Sep 29, 2022 at 10:28 AM Stefan Kooman wrote: > On 9/26/22 18:04, Gauvain Pocentek wrote: > > We are running a Ceph Octopus (15.2.16) cluster with similar > > configuration. We have *a lot* of slow …

[ceph-users] Re: Slow OSD startup and slow ops

2022-10-17 Thread Gauvain Pocentek
Hello, On Fri, Sep 30, 2022 at 8:12 AM Gauvain Pocentek wrote: > Hi Stefan, > > Thanks for your feedback! > > On Thu, Sep 29, 2022 at 10:28 AM Stefan Kooman wrote: >> On 9/26/22 18:04, Gauvain Pocentek wrote: …

[ceph-users] Limited set of permissions for an RGW user (S3)

2023-02-13 Thread Gauvain Pocentek
Hi list, A little bit of background: we provide S3 buckets using RGW (running Quincy), but users are not allowed to manage their buckets, just read and write objects in them. Buckets are created by an admin user, and read/write permissions are granted to end users using S3 bucket policies. We set the …
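A minimal sketch of such a policy, granting an end user object read/write but no bucket management; the bucket name and user are placeholders:

  $ cat policy.json
  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam:::user/enduser"]},
      "Action": ["s3:ListBucket", "s3:GetObject",
                 "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
    }]
  }
  # Applied by the admin user:
  $ aws s3api put-bucket-policy --bucket mybucket --policy file://policy.json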

[ceph-users] Very slow backfilling/remapping of EC pool PGs

2023-03-20 Thread Gauvain Pocentek
Hello all, We have an EC (4+2) pool for RGW data, with HDDs plus SSDs for WAL/DB. This pool spans 9 servers, each with 12 disks of 16 TB. About 10 days ago we lost a server and we've removed its OSDs from the cluster. Ceph has started to remap and backfill as expected, but the process has been getting slower …
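Not from the thread, but the knobs usually involved when backfill crawls, as a sketch (values are illustrative; on Quincy's mClock scheduler these settings may be ignored in favor of the scheduler's own limits):

  # Concurrent backfills and recovery ops per OSD
  $ ceph config set osd osd_max_backfills 2
  $ ceph config set osd osd_recovery_max_active 3
  # Watch which PGs are still backfilling
  $ ceph pg dump pgs_brief | grep -i backfill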

[ceph-users] Re: Very slow backfilling/remapping of EC pool PGs

2023-03-21 Thread Gauvain Pocentek
…On 21.03.23 at 11:14, Gauvain Pocentek wrote: > > Hi Joachim, > > On Tue, Mar 21, 2023 at 10:13 AM Joachim Kraftmayer <joachim.kraftma...@clyso.com> wrote: …

[ceph-users] Re: Very slow backfilling/remapping of EC pool PGs

2023-03-21 Thread Gauvain Pocentek
…I will change the osd_op_queue value once the cluster is stable. Thanks for the help, it's been really useful, and I now know a little bit more about Ceph :) Gauvain > Clyso GmbH - Ceph Foundation Member > On 21.03.23 at 12:51, Gauvain …
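For reference, the queue switch mentioned here is a config change plus an OSD restart; a minimal sketch:

  # Switch the op scheduler back to wpq; only takes effect once the
  # OSDs are restarted (via ceph-ansible, cephadm, or systemd)
  $ ceph config set osd osd_op_queue wpq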