[ceph-users] Re: ceph-ansible LARGE OMAP in RGW pool

2025-03-24 Thread Frédéric Nass
Hi Danish, While reviewing the backports for the upcoming v18.2.5, I came across this [1]. It could be your issue. Can you try the suggested workaround (--marker=9) and report back? Regards, Frédéric. [1] https://tracker.ceph.com/issues/62845 From: Danish Khan Sent
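For anyone else chasing a LARGE_OMAP warning in an RGW pool, it is worth first confirming which objects raised it (a sketch; the pool and object names below are placeholders for your own):

  # Shows which pool/PG raised LARGE_OMAP_OBJECTS
  ceph health detail

  # Count the OMAP keys on a suspect object in the RGW log pool
  rados -p default.rgw.log listomapkeys <object-name> | wc -l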

[ceph-users] Re: [ceph-users] Re: Experience with 100G Ceph in Proxmox

2025-03-24 Thread Giovanna Ratini
Hello Eneko,  Yes, I did.  No significant changes.  :-(  Cheers,  Gio On Wednesday, March 19, 2025 13:09 CET, Eneko Lacunza wrote:   Hi Giovanna, Have you tried increasing the iothreads option for the VM? Cheers On 18/3/25 at 19:13, Giovanna Ratini wrote: > Hello Antony, > > no, no QoS ap
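For reference, enabling a dedicated IO thread per disk on a Proxmox VM looks roughly like this (a sketch; VM ID 100 and the disk volume are assumptions to adapt to your setup):

  # The single-queue SCSI controller gives each disk its own IO thread
  qm set 100 --scsihw virtio-scsi-single

  # Re-attach the disk with iothread enabled
  qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1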

[ceph-users] Re: Question about cluster expansion

2025-03-24 Thread Anthony D'Atri
> > AD> What use case(s)? Are your pools R3, EC? A mix? > > My use case is storage for virtual machines (Proxmox). So probably all small-block RBD? > AD> I like to solve first for at least 9-10 nodes, but assuming that you’re > using replicated size=3 pools, 5 is okay. > > Yes, I am using rep
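If it helps the discussion, the pool layout (replicated vs. EC, size) is easy to confirm (standard commands; the pool name is a placeholder):

  # List all pools with their type, size and min_size
  ceph osd pool ls detail

  # Or query a single pool
  ceph osd pool get <pool-name> size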

[ceph-users] reef 18.2.5 QE validation status

2025-03-24 Thread Yuri Weinstein
Details of this release are summarized here: https://tracker.ceph.com/issues/70563#note-1 Release Notes - TBD LRC upgrade - TBD Seeking approvals/reviews for: smoke - Laura approved? rados - Radek, Laura approved? Travis? Nizamudeen? Adam King approved? rgw - Adam E approved? fs - Venky is f

[ceph-users] Re: Downgrading the osdmap

2025-03-24 Thread Laura Flores
Great! I have raised an Enhancement tracker for a patch to make the balancer smarter about alerting users on this. You are welcome to follow it or add any comments: https://tracker.ceph.com/issues/70615 Thanks, Laura On Sat, Mar 22, 2025 at 6:53 AM Marek Szuba wrote: > On 2025-03-21 18:37, Lau
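Until then, the balancer state can be checked by hand (standard commands):

  # Is the balancer enabled, and in which mode?
  ceph balancer status

  # How evenly is data spread across OSDs right now?
  ceph osd df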

[ceph-users] Re: Question about cluster expansion

2025-03-24 Thread Alan Murrell
Hello, AD> What use case(s)? Are your pools R3, EC? A mix? My use case is storage for virtual machines (Proxmox). AD> I like to solve first for at least 9-10 nodes, but assuming that you’re using replicated size=3 pools, 5 is okay. Yes, I am using replication=3 AD> Conventional wisdom is that

[ceph-users] Re: OSD failed: still recovering

2025-03-24 Thread Alan Murrell
Hello, Thanks for the response. OK, good to know about the 5% misplaced objects report 😊 I just checked 'ceph -s' and the misplaced objects figure is showing 1.948%, but I suspect I will see this go up to 5% or so later on 😊 It does finally look like there is progress being made, as my "active+clean" is
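For watching recovery without re-running things by hand, a couple of standard commands help (nothing cluster-specific assumed):

  # One-line PG summary, refreshed every 5 seconds
  watch -n 5 'ceph pg stat'

  # Progress events for recovery/backfill, with completion estimates
  ceph progress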

[ceph-users] Repair Ceph cluster: only OSDs intact

2025-03-24 Thread filip Mutterer
Playing around with Ceph, I made a big mistake. My root filesystem uses snapshots, excluding only /home & /root. When I switched back to an older snapshot, I noticed Ceph complaining that all OSDs are not working. How can I fix this? Rebuild the cluster and then add the OSDs back in? Is there a gu
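If the OSD data devices themselves are untouched, a full rebuild is often unnecessary; on an LVM-based ceph-volume deployment the OSDs can usually be re-activated in place (a sketch, assuming intact OSD volumes and keyrings):

  # Scan for existing OSD logical volumes and start them
  ceph-volume lvm activate --all

  # Verify they rejoin the cluster
  ceph osd tree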

[ceph-users] Re: CephFS Snapshot Mirroring

2025-03-24 Thread Alexander Patrakov
On Mon, Mar 24, 2025 at 3:17 PM Venky Shankar wrote: > > [cc Jos] > > Hi Alexander, > > I have a few questions/concerns about the list mentioned below: > > On Fri, Mar 21, 2025 at 11:09 AM Alexander Patrakov > wrote: > > > > Hello Vladimir, > > > > Please contact croit via https://www.croit.io/c

[ceph-users] Re: OSD creation from service spec fails to check all db_devices for available space

2025-03-24 Thread Eugen Block
Hi Torkil, I feel like this is some kind of corner case with DB devices of different sizes. I'm not really surprised that ceph-volume can't handle that the way you would expect. Maybe one of the devs can chime in here. Did you eventually manage to deploy all the OSDs? Quoting Torkil Svensgaa
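For context, a service spec that routes DBs to non-rotational devices looks like this (a sketch; the service_id, host pattern and device criteria are assumptions):

  cat > osd_spec.yaml <<'EOF'
  service_type: osd
  service_id: hdd_with_ssd_db
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0
  EOF

  # Preview what cephadm/ceph-volume would create before applying
  ceph orch apply -i osd_spec.yaml --dry-run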

[ceph-users] Re: CephFS Snapshot Mirroring

2025-03-24 Thread Venky Shankar
[cc Jos] Hi Alexander, I have a few questions/concerns about the list mentioned below: On Fri, Mar 21, 2025 at 11:09 AM Alexander Patrakov wrote: > > Hello Vladimir, > > Please contact croit via https://www.croit.io/contact for unofficial > (not yet fully reviewed) patches and mention me. They
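For anyone following along, the basic setup on the primary cluster per the cephfs-mirror documentation is (the fs name and path below are placeholders):

  # Enable the mirroring mgr module and turn on mirroring for the fs
  ceph mgr module enable mirroring
  ceph fs snapshot mirror enable cephfs

  # Mirror snapshots of a specific directory
  ceph fs snapshot mirror add cephfs /some/path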

[ceph-users] Re: Rogue EXDEV errors when hardlinking

2025-03-24 Thread Frédéric Nass
Hi, So for the record, any release at or above v16.2.14, v17.2.7, or v18.1.2 has the fix. Regards, Frédéric. - On 21 Mar 25, at 18:55, Gregory Farnum wrote: Sounds like the scenario addressed in this PR: https://github.com/ceph/ceph/pull/47399
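To check whether a cluster is already on a fixed release (standard command):

  # Reports the exact version each daemon type is running
  ceph versions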