Hi Danish,
While reviewing the backports for the upcoming v18.2.5, I came across this [1].
It could be your issue.
Can you try the suggested workaround (--marker=9) and report back?
Regards,
Frédéric.
[1] https://tracker.ceph.com/issues/62845
From: Danish Khan
Sent:
Hello Eneko,
Yes, I did. No significant changes. :-(
Cheers,
Gio
On Wednesday, March 19, 2025, 13:09 CET, Eneko Lacunza wrote:
Hi Giovanna,
Have you tried increasing the iothread option for the VM? (Sketch below.)
Cheers
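A rough sketch of how that looks on Proxmox, assuming a VM ID of 100 and an RBD-backed disk named rbd-pool:vm-100-disk-0 (both made up here, adjust to the actual VM):

  # use the VirtIO SCSI single controller so each disk gets its own IO thread
  qm set 100 --scsihw virtio-scsi-single
  # enable iothread on the disk (keep any other disk options you already have)
  qm set 100 --scsi0 rbd-pool:vm-100-disk-0,iothread=1
  # a full stop/start is needed for the controller change to take effect
  qm stop 100 && qm start 100

Whether this helps depends on whether the guest IO is actually queueing in QEMU rather than in the RBD layer.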
On 18/3/25 at 19:13, Giovanna Ratini wrote:
> Hello Antony,
>
> no, no QoS applied.
>
> AD> What use-case(s)? Are your pools R3, EC? Mix?
>
> My use case is storage for virtual machines (Proxmox).
So probably all small-block RBD?
> AD> I like to solve first for at least 9-10 nodes, but assuming that you're
> using replicated size=3 pools, 5 is okay.
>
> Yes, I am using replication=3
Details of this release are summarized here:
https://tracker.ceph.com/issues/70563#note-1
Release Notes - TBD
LRC upgrade - TBD
Seeking approvals/reviews for:
smoke - Laura approved?
rados - Radek, Laura approved? Travis? Nizamudeen? Adam King approved?
rgw - Adam E approved?
fs - Venky is f
Great!
I have raised an Enhancement tracker for a patch to make the balancer
smarter about alerting users on this. You are welcome to follow it or add
any comments: https://tracker.ceph.com/issues/70615
Thanks,
Laura
On Sat, Mar 22, 2025 at 6:53 AM Marek Szuba wrote:
> On 2025-03-21 18:37, Laura wrote:
Hello,
AD> What use-case(s)? Are your pools R3, EC? Mix?
My use case is storage for virtual machines (Proxmox).
AD> I like to solve first for at least 9-10 nodes, but assuming that you're
using replicated size=3 pools, 5 is okay.
Yes, I am using replication=3
AD> Conventional wisdom is that
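A quick sketch of how one can confirm the replication setup being discussed here; the pool name 'rbd' and the rule name 'replicated_rule' are only examples:

  # replica count and minimum replicas required for IO
  ceph osd pool get rbd size
  ceph osd pool get rbd min_size
  # check which CRUSH rule the pool uses and its failure domain (rule name may differ)
  ceph osd pool get rbd crush_rule
  ceph osd crush rule dump replicated_rule

With size=3, min_size=2 and a host failure domain, five nodes leave headroom to re-replicate after a single host failure.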
Hello,
Thanks for the response.
OK, good to know about the 5% misplaced objects report 😊
I just checked 'ceph -s' and misplaced objects are showing 1.948%, but I
suspect I will see this go up to 5% or so later on 😊
It does finally look like there is progress being made, as my "active+clean"
count is increasing.
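For watching that progress from the command line, a small sketch (the grep pattern is only illustrative):

  # one-shot status, includes the misplaced percentage
  ceph -s
  # or stream status/log updates continuously
  ceph -w
  # or poll just the relevant lines every 30 seconds
  watch -n 30 "ceph -s | grep -E 'misplaced|active\+clean'"

The percentage will typically stay below the balancer's 5% target while backfill works through the queue.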
Playing around with Ceph, I made a big mistake. My root filesystem uses
snapshots and only excludes /home and /root.
When I switched back to an older snapshot, I noticed Ceph complaining that
all OSDs are not working.
How can I fix this? Rebuild the cluster and then add the OSDs back in? Is
there a guide for this?
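Before rebuilding anything, a sketch of what is usually enough when only the root filesystem rolled back and the OSD data devices themselves are intact, assuming LVM-based OSDs deployed with ceph-volume (not cephadm containers):

  # the OSD metadata is kept in LVM tags on the devices, so it is still there
  ceph-volume lvm list
  # recreate the /var/lib/ceph/osd tmpfs mounts and systemd units, then start the OSDs
  ceph-volume lvm activate --all

For a cephadm/containerized cluster the recovery path is different, so it depends on how the OSDs were deployed.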
On Mon, Mar 24, 2025 at 3:17 PM Venky Shankar wrote:
>
> [cc Jos]
>
> Hi Alexander,
>
> I have a few questions/concerns about the list mentioned below:
>
> On Fri, Mar 21, 2025 at 11:09 AM Alexander Patrakov wrote:
> >
> > Hello Vladimir,
> >
> > Please contact croit via https://www.croit.io/contact
Hi Torkil,
I feel like this is some kind of corner case with DB devices of
different sizes. I'm not really surprised that ceph-volume doesn't
handle that the way you would expect. Maybe one of the devs can chime in
here. Did you eventually manage to deploy all the OSDs?
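One workaround for mixed-size DB devices is to carve the DB logical volumes by hand so ceph-volume doesn't have to decide how to split them; a sketch with made-up device and VG/LV names:

  # one DB LV per OSD, sized explicitly
  vgcreate ceph-db-0 /dev/nvme0n1
  lvcreate -L 120G -n db-osd0 ceph-db-0
  lvcreate -L 120G -n db-osd1 ceph-db-0
  # hand each data device its pre-made DB LV
  ceph-volume lvm create --data /dev/sda --block.db ceph-db-0/db-osd0
  ceph-volume lvm create --data /dev/sdb --block.db ceph-db-0/db-osd1

With cephadm/drive groups the equivalent would be pinning block_db_size in the OSD service spec, but the idea is the same: take the size decision away from the tool.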
Quoting Torkil Svensgaard:
[cc Jos]
Hi Alexander,
I have a few questions/concerns about the list mentioned below:
On Fri, Mar 21, 2025 at 11:09 AM Alexander Patrakov wrote:
>
> Hello Vladimir,
>
> Please contact croit via https://www.croit.io/contact for unofficial
> (not yet fully reviewed) patches and mention me. They
Hi,
So for the record, any version at or above v16.2.14, v17.2.7 or v18.1.2 has
the fix.
Regards,
Frédéric.
- On 21 Mar 25, at 18:55, Gregory Farnum wrote:
Sounds like the scenario addressed in this PR:
https://github.com/ceph/ceph/pull/47399
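A quick way to check which versions the daemons in a cluster are actually running, to know whether that fix applies:

  # version summary per daemon type across the cluster
  ceph versions
  # per-daemon detail when the orchestrator is in use
  ceph orch ps

Mixed versions during a rolling upgrade show up in the same output.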