26/23 18:47, Jeremy Austin wrote:
> > Are there alternatives to TheJJ balancer? I have a (temporary) rebalance
> > problem, and that code chokes[1].
>
>
> https://github.com/digitalocean/pgremapper
>
>
> Gr. Stefan
>
--
Jeremy Austin
jhaus...@gmail.com
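pgremapper drives Ceph's pg-upmap machinery directly from the command line rather than waiting on the mgr balancer. A rough sketch of its cancel-backfill workflow follows; the subcommand and flag are spelled from memory rather than verified, so check `pgremapper --help` on the installed version first.

  # Sketch only: confirm subcommands/flags against your pgremapper build.
  # cancel-backfill rewrites pending backfill back to the current acting set
  # using pg-upmap entries, so the remaining data movement can be paced by hand.
  pgremapper cancel-backfill --yes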
> >> "); for (i in poollist) printf("%s\t",sumpool[i]);
> >> printf("|\n");
> >> }'"
> >>
> >> On 11/15/2022 at 14:35 UTC there is a talk about this: "New workload
> >> balancer in Ceph" (Ceph Virtual 2022).
> >>
> >> The balancer made by Jonas Jelten works very well for us (though it does
> >> not balance primary PGs): https://github.com/TheJJ/ceph-balancer. It
> >> outperforms the built-in ceph balancer module by far and converges
> >> faster. This is true up to and including the Octopus release.
> >>
> >> Gr. Stefan
--
Jeremy Austin
jhaus...@gmail.com
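The head of Stefan's one-liner is cut off above. As a self-contained stand-in (not the original script), something like the following tallies how many PGs of each pool are mapped to each OSD, assuming the plain `ceph pg dump pgs_brief` output format:

  # Rough sketch: count PGs per pool per OSD from the "up" set.
  ceph pg dump pgs_brief 2>/dev/null | awk '
    $1 ~ /^[0-9]+\.[0-9a-f]+$/ {          # data lines start with a pgid like 15.6a
      split($1, id, ".")                  # pool id is the part before the dot
      gsub(/[\[\]]/, "", $3)              # $3 is the up set, e.g. [4,11,7]
      n = split($3, osd, ",")
      for (i = 1; i <= n; i++)
        count["pool=" id[1] " osd=" osd[i]]++
    }
    END { for (k in count) print k, "pgs=" count[k] }
  ' | sort -V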
> benchmarking of clay vs erasure (either
> in normal write and read, or in recovery scenarios)?
>
> Ngā mihi,
>
> Sean Matheny
> HPC Cloud Platform DevOps Lead
> New Zealand eScience Infrastructure (NeSI)
>
> e: sean.math...@nesi.org.nz
>
> On 12/11/2022, at 9:43 AM, Jeremy A
On Thu, May 5, 2022 at 11:15 AM Anthony D'Atri wrote:
>
>
> This calculator can help when you have multiple pools:
>
> https://old.ceph.com/pgcalc/
Did an EC-aware version of this calculator ever escape the Red Hat paywall?
Thanks,
--
Jeremy Austin
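The arithmetic behind that calculator is simple enough to sketch by hand, and the EC-aware part is just a matter of dividing by k+m instead of the replica count. A sketch with made-up inputs (a target of ~100 PGs per OSD is the usual rule of thumb):

  # Rough pgcalc-style estimate for one pool; all inputs below are illustrative.
  target_pgs_per_osd=100
  num_osds=48
  data_share=0.35          # fraction of cluster data expected in this pool
  k=4; m=2                 # EC profile; for a replicated pool use its size instead

  awk -v t="$target_pgs_per_osd" -v n="$num_osds" -v s="$data_share" -v w=$((k + m)) '
    BEGIN {
      raw = t * n * s / w            # each PG shard occupies one OSD "slot"
      p = 1; while (p < raw) p *= 2  # round up to a power of two, as pgcalc does
      printf "raw=%.1f  suggested pg_num=%d\n", raw, p
    }'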
Grafana integration is more or less
supported, I thought I'd try here first among active mgr/Prometheus users.
--
Jeremy Austin
jhaus...@gmail.com
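For context, the mgr side of the Prometheus integration is a single module switch: the exporter listens on the active mgr (port 9283 by default) and Grafana reads from that Prometheus datasource. A minimal sketch, with an illustrative hostname:

  # Enable the exporter on the mgr and sanity-check that metrics are served.
  ceph mgr module enable prometheus
  ceph mgr services                                        # lists the exporter URL once active
  curl -s http://mgr-host.example:9283/metrics | head -n 5 # hostname is illustrative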
> What do you think of this setup - or is there any information /
> recommendation for an optimized setup of a 10G storage network?
>
> Best Regards,
> Hermann
>
> --
> herm...@qwer.tk
> PGP/GPG: 299893C7 (on keyservers)
> the PG -- is your MGR running correctly now?
>
> -- Dan
>
>
>
>
> On Fri, Feb 5, 2021 at 4:49 PM Jeremy Austin wrote:
> >
> > I was in the middle of a rebalance on a small test cluster with about 1% of
> > pgs degraded, and shut the cluster entirely down
't even moving to an inactive state?
I'm not concerned about data loss due to the shutdown (all activity to the
cluster had been stopped), so should I be setting some or all OSDs "
osd_find_best_info_ignore_history_les = true"?
Thank you,
--
Jeremy
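For context on the flag being asked about: osd_find_best_info_ignore_history_les makes peering ignore the last_epoch_started history and can silently discard writes, so it is normally set on as few OSDs as possible and reverted immediately afterwards. A hedged sketch only, with an illustrative OSD id, not a recommendation:

  # DANGEROUS: only worth considering when no client writes could have been lost,
  # e.g. after a clean full-cluster stop. The OSD id is illustrative.
  ceph config set osd.3 osd_find_best_info_ignore_history_les true
  systemctl restart ceph-osd@3      # let the stuck PG peer
  ceph config rm osd.3 osd_find_best_info_ignore_history_les   # revert right away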
>
> If not, please do that first - you might still run into issues along the
> way since the DB is still corrupted. Disabling compaction just bypassed the
> reads from the broken files - but this might happen again during other
> operations.
>
>
> On 12/22/2020 7:17 AM, Jeremy
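The step referred to as "do that first" is cut off above. For orientation only, and not necessarily what was meant, a generic BlueStore consistency check on a stopped OSD looks roughly like this (OSD id and data path are illustrative defaults):

  # Run only against a stopped OSD; id and path are illustrative.
  systemctl stop ceph-osd@2
  ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-2
  # ceph-bluestore-tool also has a repair command, but with a corrupted RocksDB
  # it is safer to image/back up the device before attempting it.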
>
>
> Please note that the BlueStore-specific defaults for the rocksdb settings are
> re-provided to make sure they aren't reset to rocksdb's own defaults.
>
> Hope this helps.
>
> Thanks,
>
> Igor
>
> On 12/21/2020 2:56 AM, Jeremy Austin
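The full option string is truncated in the quote above. Its general shape is: restate the BlueStore-specific RocksDB defaults and append only the compaction switch, either under [osd] in ceph.conf or via the mon config database. The default value below is written from memory of the Nautilus-era setting, so confirm it with `ceph config help bluestore_rocksdb_options` on the affected release before copying anything:

  # Sketch only: restate the defaults (verify for your version!) and append
  # disable_auto_compactions so nothing else silently changes. Applied here via
  # the mon config db; the same value can go into ceph.conf under [osd].
  ceph config set osd bluestore_rocksdb_options 'compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,disable_auto_compactions=true'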
On Sun, Dec 20, 2020 at 6:56 PM Alexander E. Patrakov wrote:
> On Mon, Dec 21, 2020 at 4:57 AM Jeremy Austin wrote:
> >
> > On Sun, Dec 20, 2020 at 2:25 PM Jeremy Austin wrote:
> >
> > > Will attempt to disable compaction and report.
> > >
>
On Sun, Dec 20, 2020 at 2:25 PM Jeremy Austin wrote:
> Will attempt to disable compaction and report.
>
Not sure I'm doing this right. In the [osd] section of ceph.conf, I added
periodic_compaction_seconds=0
and attempted to start the OSDs in question. Same error as before. Am I
setting
> output for any disk errors as well?
>
I had, which is why I became aware of the SATA failure.
> 4) Have you performed a Ceph upgrade recently? Or more generally - was
> the cluster deployed with the current Ceph version, or was it an earlier one?
>
Cluster deployed with 14.2.9, IIRC;
> db/000348.sst offset 47935290 size 4704 code = 2 Rocksdb transaction:
>
> ?
>
> Thanks,
> Igor
> On 12/13/2020 8:48 AM, Jeremy Austin wrote:
>
> I could use some input from more experienced folks…
>
> First time seeing this behavior. I've been running cep
pg 15.6a not scrubbed since 2020-10-24 15:03:09.189964
pg 15.10 not scrubbed since 2020-10-24 16:25:08.826981
pg 15.1e not scrubbed since 2020-10-24 16:05:03.080127
pg 15.40 not scrubbed since 2020-10-24 11:58:04.290488
pg 15.4a not scrubbed since 2020-10-24 11:32:44.573551
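Each PG flagged like this can be scrubbed by hand, or the whole set queued straight from the health output. A rough sketch, reusing a pgid from the list above:

  # Kick one PG from the warning list:
  ceph pg deep-scrub 15.6a

  # Or queue every PG that health currently flags (pattern match is approximate):
  ceph health detail | awk '/not (deep-)?scrubbed since/ {print $2}' |
    while read -r pg; do ceph pg deep-scrub "$pg"; done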