The “cache-flush-evict-all” approach was not feasible here, as it only yielded around
50 MiB/s. Using cache limits and targeting the cache sizes to 0 caused proper
parallelization and allowed us to flush/evict at an almost constant 1 GiB/s across the
cluster.
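(For anyone finding this later, a rough sketch of the two approaches in CLI terms; the pool name and the exact threshold values are placeholders, not taken from this thread.)

  # slow path: a single client-driven flush/evict of the cache tier
  rados -p hot-cache cache-flush-evict-all

  # faster path: give the tiering agents limits and drive the targets towards 0,
  # so the OSDs flush/evict in parallel instead of one rados client doing the work
  ceph osd pool set hot-cache cache_target_dirty_ratio 0.0
  ceph osd pool set hot-cache cache_target_full_ratio 0.0
  ceph osd pool set hot-cache target_max_objects 1
  ceph osd pool set hot-cache target_max_bytes 1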
--
Christian Theune · c...@flyingcircus.io · +49 345
still 2.4 hours …
Cheers,
Christian
> On 9. Jun 2023, at 11:16, Christian Theune wrote:
>
> Hi,
>
> we are running a cluster that has been alive for a long time and we tread
> carefully regarding updates. We are still a bit lagging and our cluster (that
> started around
few very large buckets (200T+) that will take a
while to copy. We can pre-sync them of course, so the downtime will only be
during the second copy.
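(The thread doesn’t name the copy tool; purely as an illustration, a two-pass copy with rclone could look like the following. The remote names "old" and "new" and the bucket name are made up.)

  # first pass, well before the migration window, while the old cluster still serves traffic
  rclone sync old:big-bucket new:big-bucket --transfers 32 --checksum

  # second pass during the downtime: only objects changed since the first pass get copied
  rclone sync old:big-bucket new:big-bucket --transfers 32 --checksum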
Christian
> On 13. Jun 2023, at 14:52, Christian Theune wrote:
>
> Following up to myself and for posterity:
>
> I’m going to t
> Did I get something wrong?
>
>
>
>
> Kind regards,
> Nino
>
>
> On Wed, Jun 14, 2023 at 5:44 PM Christian Theune wrote:
> Hi,
>
> further note to self and for posterity … ;)
>
> This turned out to be a no-go as well, because you can’t silently switch the
zonegroups referring to the same pools and this
should only run through proper abstractions … o_O
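(Not from the thread, just for orientation: the pool wiring behind a zone/zonegroup can be inspected like this; "default" is a placeholder name.)

  # which pools (index, data, log, ...) the zone points at
  radosgw-admin zone get --rgw-zone=default

  # which zones a zonegroup contains and which one is the master
  radosgw-admin zonegroup get --rgw-zonegroup=default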
Cheers,
Christian
> On 14. Jun 2023, at 17:42, Christian Theune wrote:
>
> Hi,
>
> further note to self and for posterity … ;)
>
> This turned out to be a no-go as well, becau
any relevant issue either.
Any ideas?
Kind regards,
Christian Theune
--
Christian Theune · c...@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer
> After they updated all daemons to the same minor version those
> errors were gone.
>
> Regards,
> Eugen
>
> Quoting Christian Theune:
>
>> Hi,
>>
>> this is a bit older cluster (Nautilus, bluestore only).
>>
>> We’ve noticed that the cluster is almost conti
>> a repair fixed them every time. After they updated all
>> daemons to the same minor version those errors were gone.
>>
>> Regards,
>> Eugen
>>
>> Quoting Christian Theune:
>>
>>> Hi,
>>>
>>> this is a bit older cluster (Nautilus, bluestore only).
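(Aside, not from the quoted messages: the scrub errors mentioned above are typically located and repaired roughly like this; the pool name and PG id are placeholders.)

  # list PGs that scrubbing flagged as inconsistent
  ceph health detail
  rados list-inconsistent-pg rbd-pool

  # inspect one PG in detail, then ask its primary OSD to repair it
  rados list-inconsistent-obj 2.1f --format=json-pretty
  ceph pg repair 2.1f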
y shutting down your whole cluster and starting it up again, including
your network equipment. It’s normal that this is a period where cluster
activity is quite flaky and this has caused multiple instances of data loss for
us when we had clusters with min_size 1.
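(Also not from the thread: checking and raising min_size is straightforward; the pool name is a placeholder.)

  # show size/min_size for every pool
  ceph osd pool ls detail

  # require at least 2 complete replicas for a PG to accept I/O
  ceph osd pool set rbd-pool min_size 2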
Cheers,
Christian
--
well-tested releases that
provide a smooth upgrade path.
Taking care of the testing infrastructure is a big part of that IMHO, so I’d
encourage you to take the time to do it with sufficient attention to detail
rather than trying to push out a release while juggling that.
Christian