[ceph-users] Re: OSDs are not utilized evenly

2023-01-26 Thread Jeremy Austin
26/23 18:47, Jeremy Austin wrote:
>> Are there alternatives to TheJJ balancer? I have a (temporary) rebalance
>> problem, and that code chokes[1].
>
> https://github.com/digitalocean/pgremapper
>
> Gr. Stefan

-- Jeremy Austin jhaus...@gmail.com
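Not from the thread itself, but a minimal sketch of how the imbalance is usually confirmed before reaching for pgremapper or an external balancer, assuming a reasonably recent Ceph release:

    # Per-OSD utilization and PG counts; the MIN/MAX VAR and STDDEV summary at the
    # bottom of the output shows how uneven the distribution is.
    ceph osd df tree
    # For comparison, what the built-in balancer thinks it is doing:
    ceph balancer status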

[ceph-users] Re: OSDs are not utilized evenly

2023-01-26 Thread Jeremy Austin
>> "); for (i in poollist) printf("%s\t",sumpool[i]);
>> printf("|\n");
>> }'"
>>
>> 11/15/2022 14:35 UTC there is a talk about this: New workload
>> balancer in Ceph (Ceph virtual 2022).
>>
>> The balancer made by Jonas Jelten works very well for us (though does
>> not balance primary PGs): https://github.com/TheJJ/ceph-balancer. It
>> outperforms the ceph-balancer module by far. And had faster
>> convergence. This is true up to and including octopus release.
>>
>> Gr. Stefan

-- Jeremy Austin jhaus...@gmail.com
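The quoted one-liner is cut off above; a self-contained sketch in the same spirit — tallying how many PGs of each pool land on each OSD from 'ceph pg dump pgs_brief' — might look like the following. The column position of the ACTING set and the exact output format are assumptions and can differ between releases:

    # Count PGs per (OSD, pool) pair from the ACTING column of 'ceph pg dump pgs_brief'.
    ceph pg dump pgs_brief 2>/dev/null | awk '
      $1 ~ /^[0-9]+\.[0-9a-f]+$/ {            # rows starting with a PG id such as 15.6a
        split($1, id, "."); pool = id[1]      # pool id is the part before the dot
        gsub(/[\[\]]/, "", $5)                # ACTING set, e.g. [3,7,12] -> 3,7,12
        n = split($5, osds, ",")
        for (i = 1; i <= n; i++) count[osds[i] ":" pool]++
      }
      END {
        for (k in count) {
          split(k, p, ":")
          printf "osd.%-4s pool %-4s %d PGs\n", p[1], p[2], count[k]
        }
      }' | sort -V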

[ceph-users] Re: Any concerns using EC with CLAY in Quincy (or Pacific)?

2022-11-18 Thread Jeremy Austin
> marking of clay vs erasure (either
> in normal write and read, or in recovery scenarios)?
>
> Ngā mihi,
>
> Sean Matheny
> HPC Cloud Platform DevOps Lead
> New Zealand eScience Infrastructure (NeSI)
>
> e: sean.math...@nesi.org.nz
>
> On 12/11/2022, at 9:43 AM, Jeremy A
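Not from the thread, but a rough sketch of how such a comparison between a CLAY pool and a jerasure pool is often run; the pool names are hypothetical and recovery behaviour would be measured separately:

    # Write, then sequential-read, against each EC pool. --no-cleanup keeps the
    # benchmark objects around so the read phase has data to fetch.
    rados bench -p ec-clay-test 60 write --no-cleanup
    rados bench -p ec-clay-test 60 seq
    rados bench -p ec-jerasure-test 60 write --no-cleanup
    rados bench -p ec-jerasure-test 60 seq
    # Remove the benchmark objects afterwards.
    rados -p ec-clay-test cleanup
    rados -p ec-jerasure-test cleanup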

[ceph-users] Re: Any concerns using EC with CLAY in Quincy (or Pacific)?

2022-11-11 Thread Jeremy Austin
> mihi,
>
> Sean Matheny
> HPC Cloud Platform DevOps Lead
> New Zealand eScience Infrastructure (NeSI)
>
> e: sean.math...@nesi.org.nz

[ceph-users] Re: Unbalanced Cluster

2022-05-05 Thread Jeremy Austin
On Thu, May 5, 2022 at 11:15 AM Anthony D'Atri wrote:
> This calculator can help when you have multiple pools:
> https://old.ceph.com/pgcalc/

Did an EC-aware version of this calculator ever escape the Red Hat paywall?

Thanks,
-- Jeremy Austi
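For reference, the arithmetic behind pgcalc extends to EC pools by treating k+m the way a replicated pool's size is treated; a toy sketch with made-up numbers (this is only the common rule of thumb, not the Red Hat tool):

    # Target roughly 100 PGs per OSD; for an EC pool the effective "size" is k+m.
    osds=24; target_per_osd=100; k=4; m=2
    raw=$(( osds * target_per_osd / (k + m) ))   # 400 for these example values
    pg_num=1
    while [ $(( pg_num * 2 )) -le $raw ]; do pg_num=$(( pg_num * 2 )); done
    echo "suggested pg_num: $pg_num"             # 256: nearest power of two, rounded down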

[ceph-users] Re: EC CLAY production-ready or technology preview in Pacific?

2021-08-19 Thread Jeremy Austin
-- Jeremy Austin jhaus...@gmail.com
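For context, a minimal sketch of how a CLAY profile and pool are created; the k/m/d values and names are placeholders rather than anything from the thread:

    # CLAY erasure-code profile. d defaults to k+m-1 (5 here), which is the setting
    # that minimizes the amount of data read during recovery.
    ceph osd erasure-code-profile set clay-test plugin=clay k=4 m=2 d=5 crush-failure-domain=host
    ceph osd erasure-code-profile get clay-test
    ceph osd pool create ec-clay-test 64 64 erasure clay-test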

[ceph-users] mgr+Prometheus, grafana, consul

2021-05-21 Thread Jeremy Austin
na integration is more or less supported, I thought I'd try here first among active mgr/Prometheus users.

-- Jeremy Austin jhaus...@gmail.com
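A minimal sketch of the mgr side of this setup, assuming a recent release; the consul registration itself is site-specific and 'mgr-host.example' is a placeholder:

    # Enable the prometheus exporter in ceph-mgr and confirm it answers.
    ceph mgr module enable prometheus
    ceph mgr services                                       # lists the URL the module listens on
    curl -s http://mgr-host.example:9283/metrics | head     # 9283 is the module's default port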

[ceph-users] mgr+Prometheus/grafana (+consul)

2021-05-20 Thread Jeremy Austin
n is more or less supported, I thought I'd try here first.

-- Jeremy Austin jhaus...@gmail.com

[ceph-users] Re: Suitable 10G Switches for ceph storage - any recommendations?

2021-05-19 Thread Jeremy Austin
> What do you think of this setup - or is there any information /
> recommendation for an optimized setup of a 10G storage network?
>
> Best Regards,
> Hermann
>
> --
> herm...@qwer.tk
> PGP/GPG: 299893C7 (on keyservers)

[ceph-users] Re: can't query most pgs after restart

2021-02-05 Thread Jeremy Austin
> the PG -- is your MGR running correctly now?
>
> -- Dan
>
> On Fri, Feb 5, 2021 at 4:49 PM Jeremy Austin wrote:
>> I was in the middle of a rebalance on a small test cluster with about 1%
>> of pgs degraded, and shut the cluster entirely down
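A quick sketch of the kind of check being discussed, once the mgr is answering again; the PG id is an arbitrary example:

    # Overall cluster state first, then query one problematic placement group directly.
    ceph -s
    ceph pg 15.6a query | head -40    # may hang or error out if the acting OSDs are unreachable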

[ceph-users] can't query most pgs after restart

2021-02-05 Thread Jeremy Austin
't even moving to an inactive state? I'm not concerned about data loss due to the shutdown (all activity to the cluster had been stopped), so should I be setting some or all OSDs "osd_find_best_info_ignore_history_les = true"?

Thank you,
-- Jeremy
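For reference, a sketch of how that option is typically injected for one OSD and then removed again, assuming a non-cephadm systemd deployment and a placeholder OSD id. It is a last-resort flag that can silently discard writes, so this only shows the mechanism, not a recommendation:

    # Set the flag for a single OSD, restart it so peering re-runs, then drop the override.
    ceph config set osd.12 osd_find_best_info_ignore_history_les true
    systemctl restart ceph-osd@12
    ceph config rm osd.12 osd_find_best_info_ignore_history_les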

[ceph-users] Re: is unknown pg going to be active after osds are fixed?

2021-02-02 Thread Jeremy Austin
> Tony

[ceph-users] Re: PGs down

2020-12-22 Thread Jeremy Austin
> If not please do that first - you might still hit into issues along the
> process since DB is still corrupted. Disabling compaction just bypassed the
> reads from broken files - but this might happen again during different
> operations.
>
> On 12/22/2020 7:17 AM, Jeremy

[ceph-users] Re: PGs down

2020-12-21 Thread Jeremy Austin
> 1824"
>
> Please note bluestore specific defaults for rocksdb settings are
> re-provided to make sure they aren't reset to rocksdb's ones.
>
> Hope this helps.
>
> Thanks,
> Igor
>
> On 12/21/2020 2:56 AM, Jeremy Austin
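A sketch of what "re-providing the BlueStore defaults" looks like in practice. The default option string below is approximate and differs between releases, so the value for a given cluster should be taken from 'ceph config help bluestore_rocksdb_options' rather than copied from here:

    # ceph.conf, [osd] section: restate BlueStore's default rocksdb options and
    # append disable_auto_compactions at the end.
    [osd]
    bluestore_rocksdb_options = compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152,max_background_compactions=2,disable_auto_compactions=true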

[ceph-users] Re: PGs down

2020-12-21 Thread Jeremy Austin
On Sun, Dec 20, 2020 at 6:56 PM Alexander E. Patrakov wrote:
> On Mon, Dec 21, 2020 at 4:57 AM Jeremy Austin wrote:
>> On Sun, Dec 20, 2020 at 2:25 PM Jeremy Austin wrote:
>>> Will attempt to disable compaction and report.

[ceph-users] Re: PGs down

2020-12-20 Thread Jeremy Austin
On Sun, Dec 20, 2020 at 2:25 PM Jeremy Austin wrote:
> Will attempt to disable compaction and report.

Not sure I'm doing this right. In [osd] section of ceph.conf, I added periodic_compaction_seconds=0 and attempted to start the OSDs in question. Same error as before. Am I setting

[ceph-users] Re: PGs down

2020-12-20 Thread Jeremy Austin
> put for any disk errors as well?

I had, which is why I became aware of the SATA failure.

> 4) Haven't you performed Ceph upgrade recently. Or more generally - was
> the cluster deployed with the current Ceph version or it was an earlier one?

Cluster deployed with 14.2.9, IIRC;

[ceph-users] Re: PGs down

2020-12-13 Thread Jeremy Austin
> n
> db/000348.sst offset 47935290 size 4704 code = 2 Rocksdb transaction:
>
> ?
>
> Thanks,
> Igor
>
> On 12/13/2020 8:48 AM, Jeremy Austin wrote:
>> I could use some input from more experienced folks…
>>
>> First time seeing this behavior. I've been running cep
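A sketch of the usual first diagnostic step when an OSD's RocksDB reports a corrupt .sst like the one above; run it with the OSD stopped, and treat the OSD id and path as placeholders:

    # With the OSD down, check BlueStore and its embedded RocksDB.
    systemctl stop ceph-osd@12
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-12
    # A read-only walk of the key-value store will also trip over broken SST files.
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 stats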

[ceph-users] PGs down

2020-12-12 Thread Jeremy Austin
572
pg 15.6a not scrubbed since 2020-10-24 15:03:09.189964
pg 15.10 not scrubbed since 2020-10-24 16:25:08.826981
pg 15.1e not scrubbed since 2020-10-24 16:05:03.080127
pg 15.40 not scrubbed since 2020-10-24 11:58:04.290488
pg 15.4a not scrubbed since 2020-10-24 11:32:44.573551
-
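Lines like these come from 'ceph health detail'; a quick sketch of how stale scrubs are usually kicked off by hand once the PGs are active again, using the PG ids from the listing above:

    # Ask the primary OSDs to deep-scrub the stale PGs; the scrubs only run once
    # the PGs are back to active.
    for pg in 15.6a 15.10 15.1e 15.40 15.4a; do
        ceph pg deep-scrub "$pg"
    done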