[ceph-users] RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-09 Thread Christian Theune
“cache-flush-evict-all” approach was infeasible here, as it only yielded around 50 MiB/s. Using cache limits and targeting the cache sizes to 0 caused proper parallelization and flushed/evicted at an almost constant 1 GiB/s in the cluster. -- Christian Theune · c...@flyingcircus.io · +49 345
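A minimal sketch of what draining a cache tier via its limits (rather than “cache-flush-evict-all”) can look like; the pool name is a placeholder and the exact parameters used above are not shown in the thread, so treat this as an illustration of the general approach, not the poster’s literal commands:

    # Placeholder cache-tier pool name; substitute the real one.
    POOL=hot-cache
    # Serial baseline mentioned above (slow, roughly 50 MiB/s in this report):
    #   rados -p "$POOL" cache-flush-evict-all
    # Parallel approach: push the agent's ratios to 0 so the OSD tiering
    # agents flush and evict concurrently instead of one object at a time.
    ceph osd pool set "$POOL" cache_target_dirty_ratio 0.0
    ceph osd pool set "$POOL" cache_target_dirty_high_ratio 0.0
    ceph osd pool set "$POOL" cache_target_full_ratio 0.0
    # If "cache sizes" refers to the absolute limits, lowering them has the
    # same effect (0 would disable the agent, so use a tiny non-zero value):
    ceph osd pool set "$POOL" target_max_bytes 1
    ceph osd pool set "$POOL" target_max_objects 1
    # Watch the cache pool drain:
    ceph df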

[ceph-users] Re: RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-13 Thread Christian Theune
still 2.4 hours … Cheers, Christian > On 9. Jun 2023, at 11:16, Christian Theune wrote: > > Hi, > > we are running a cluster that has been alive for a long time and we tread > carefully regarding updates. We are still a bit lagging and our cluster (that > started around

[ceph-users] Re: RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-14 Thread Christian Theune
few very large buckets (200T+) that will take a while to copy. We can pre-sync them of course, so the downtime will only be during the second copy. Christian > On 13. Jun 2023, at 14:52, Christian Theune wrote: > > Following up to myself and for posterity: > > I’m going to t
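One generic way to realize the pre-sync/second-copy pattern described above is a bulk copy tool pointed at both S3 endpoints; rclone is used here purely as an illustration (the thread does not say which tool was used), and the remote and bucket names are placeholders:

    # First pass while the source bucket is still live; this is the long copy.
    rclone sync oldrgw:big-bucket newrgw:big-bucket --transfers 32 --checksum
    # ...cut over writers / start the downtime window...
    # Second pass only transfers objects that changed since the first pass,
    # so the downtime stays short.
    rclone sync oldrgw:big-bucket newrgw:big-bucket --transfers 32 --checksum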

[ceph-users] Re: RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-16 Thread Christian Theune
Did I get something wrong? > > > > > Kind regards, > Nino > > > On Wed, Jun 14, 2023 at 5:44 PM Christian Theune wrote: > Hi, > > further note to self and for posterity … ;) > > This turned out to be a no-go as well, because you can’t silently switch the

[ceph-users] Re: RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-21 Thread Christian Theune
zonegroups referring to the same pools and this should only run through proper abstractions … o_O Cheers, Christian > On 14. Jun 2023, at 17:42, Christian Theune wrote: > > Hi, > > further note to self and for posterity … ;) > > This turned out to be a no-go as well, becau

[ceph-users] Continuous spurious repairs without cause?

2023-09-05 Thread Christian Theune
any relevant issue either. Any ideas? Kind regards, Christian Theune -- Christian Theune · c...@flyingcircus.io · +49 345 219401 0 Flying Circus Internet Operations GmbH · https://flyingcircus.io Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland HR Stendal HRB 21169 · Managing Director

[ceph-users] Re: Continuous spurious repairs without cause?

2023-09-06 Thread Christian Theune
updated all daemons to the same minor version those > errors were gone. > > Regards, > Eugen > > Quoting Christian Theune: > >> Hi, >> >> this is a bit older cluster (Nautilus, bluestore only). >> >> We’ve noticed that the cluster is almost conti

[ceph-users] Re: Continuous spurious repairs without cause?

2023-09-06 Thread Christian Theune
a repair fixed them every time. After they updated all >> daemons to the same minor version those errors were gone. >> >> Regards, >> Eugen >> >> Quoting Christian Theune: >> >>> Hi, >>> >>> this is a bit older cluster (Nautilus, bl
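For reference, a quick way to check whether daemons in a cluster are running mixed versions (the situation Eugen describes); these are standard Ceph commands, and the output naturally depends on the cluster:

    # Summary of which version each daemon class reports:
    ceph versions
    # Per-daemon detail, e.g. to find individual OSDs lagging behind:
    ceph tell osd.* version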

[ceph-users] Re: ceph's replicas question

2019-09-03 Thread Christian Theune
y shutting down your whole cluster and starting it up again, including your network equipment. It’s normal for cluster activity to be quite flaky during such a period, and that has caused multiple instances of data loss for us on clusters with min_size 1. Cheers, Christian --
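A hedged sketch of checking and raising min_size on a replicated pool, since the report above ties the data loss to pools running with min_size 1; the pool name is a placeholder:

    # Inspect the current replication settings (pool name is an example):
    ceph osd pool get mypool size
    ceph osd pool get mypool min_size
    # With size=3, min_size=2 refuses I/O when only a single copy is
    # available instead of acknowledging writes it may later lose:
    ceph osd pool set mypool min_size 2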

[ceph-users] Re: Ceph Tentacle release timeline — when?

2025-02-06 Thread Christian Theune
well-tested releases that provide a smooth upgrade path. Taking care of the testing infrastructure is a big part of that IMHO, so I’d applaud you for taking the time to do it with sufficient attention to detail rather than trying to push out a release while juggling that. Christian -- Christian Theune · c..