> - more LTS branches to maintain
> - more upgrade paths to consider
>
> Other options we should consider? Other thoughts?
>
> Thanks!
> sage
has been
identical with previous host restarts.
Also, thanks in advance for any light that can be shed on this,
Christian
ters aren’t always able to predict all
performance/stability issues that we only encountered later in production.
Cheers,
Christian
availability issues for our customer services due to slow requests.
Kind regards,
Christian
34 osd.85 up 1.00
90 3.49219 osd.90 up 1.0 1.0
92 3.49219 osd.92 up 1.0 1.0
Kind regards,
Christian Theune
Hi,
> On Sep 18, 2017, at 10:06 AM, Christian Theune wrote:
>
> We’re doing the typical SSD/non-SSD pool separation. Currently we effectively
> only use 2 pools: rbd.hdd and rbd.ssd. The ~4TB OSDs in the rbd.hdd pool are
> “capacity endurance” SSDs (Micron S610DC). We have 10
=6000 --time_based
Kind regards,
Christian Theune
is if your cluster consists of 2 pools
where each runs on a completely disjoint set of OSDs: I guess it’s
accidental (not intentional) behaviour that one pool would affect the
other, right?
Thoughts?
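A minimal sketch of how that kind of hard separation is usually expressed,
assuming a Luminous-or-later cluster where the OSDs already carry the device
classes "hdd" and "ssd" (the rule names below are placeholders; older releases
would use separate CRUSH roots and the crush_ruleset pool setting instead):

    # one CRUSH rule per device class (rule names are placeholders)
    ceph osd crush rule create-replicated rbd-hdd-rule default host hdd
    ceph osd crush rule create-replicated rbd-ssd-rule default host ssd
    # point each pool at its rule so the two pools map to disjoint sets of OSDs
    ceph osd pool set rbd.hdd crush_rule rbd-hdd-rule
    ceph osd pool set rbd.ssd crush_rule rbd-ssd-rule

With placement separated like this, any cross-pool impact would have to come
from shared hosts, daemons or the network rather than from CRUSH itself.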
Hugs,
Christian
that it means given the message above?
Cheers,
Christian
itch, I’d be happy to help future travellers.
> On 6 Dec 2016, at 00:59, Christian Balzer wrote:
>
> Hello,
>
> On Mon, 5 Dec 2016 15:25:37 +0100 Christian Theune wrote:
>
>> Hi,
>>
>> we’re currently expanding our cluster to grow the number of IOPS we can
, this
indicates it’s a bit more about which pattern is happening than about the
hardware not performing what I’m expecting from it. To me this means I still
don’t have an indicator of a specific bottleneck.
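For narrowing that down, the usual first stops are per-OSD latency as seen by
the cluster versus per-device utilisation on the OSD hosts; a rough sketch
(the interval is a placeholder, and the second command runs on a suspect OSD host):

    # commit/apply latency per OSD as reported by the cluster
    ceph osd perf
    # per-device service times and utilisation, refreshed every second
    iostat -x 1

If ceph osd perf shows high latency on OSDs whose devices look mostly idle in
iostat, the bottleneck is more likely in the OSD/journal path than in the
hardware itself.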
Cheers and good night, :)
Christian
> On 6 Dec 2016, at 03:37, Christian Theune wrote:
>
Hi,
> On 7 Dec 2016, at 05:14, Christian Balzer wrote:
>
> Hello,
>
> On Tue, 6 Dec 2016 20:58:52 +0100 Christian Theune wrote:
>
>> Alright. We’re postponing this for now. Is it actually a widespread
>> assumption that Jewel has “prime time” issues?
Hi,
> On 7 Dec 2016, at 09:04, Christian Theune wrote:
>
> I guess you’re running XFS? I’m going through code and reading up on the
> specific sync behaviour of the journal. I noticed in an XFS comment that
> various levels of SYNC might behave differently depending on whether you’re going
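As a quick way to see the difference between those sync levels on an XFS-backed
journal file, something like the following can be run against a scratch mount;
the path is a placeholder and the writes are destructive:

    # O_DSYNC: each write flushes the data plus only the metadata needed to read it back
    dd if=/dev/zero of=/mnt/xfs-scratch/syncfile bs=4k count=1000 oflag=direct,dsync
    # O_SYNC: each write additionally flushes the remaining file metadata (e.g. timestamps)
    dd if=/dev/zero of=/mnt/xfs-scratch/syncfile bs=4k count=1000 oflag=direct,sync

Comparing the two rates gives a rough sense of what the stricter sync level
costs on a given device.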
t of (some very small)
things that can go wrong and screw everything up at the VM level …
Cheers,
Christian
Hi,
> On 7 Dec 2016, at 14:39, Peter Maloney wrote:
>
> On 12/07/16 13:52, Christian Balzer wrote:
>> On Wed, 7 Dec 2016 12:39:11 +0100 Christian Theune wrote:
>>
>> | cartman06 ~ # fio --filename=/dev/sdl --direct=1 --sync=1 --rw=write
>> --bs=128k
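A self-contained variant of this kind of sync-write test looks roughly like the
following (job name and runtime are placeholders, and pointing fio at /dev/sdl
overwrites the device, so only ever use a scratch disk):

    fio --name=journal-test --filename=/dev/sdl --direct=1 --sync=1 \
        --rw=write --bs=128k --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based --group_reporting

--direct=1 bypasses the page cache and --sync=1 opens the device with O_SYNC, so
the reported bandwidth/IOPS reflect what the device sustains for journal-style
synchronous writes.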
a 100% success,
> but the problems were relatively small and the cluster stayed online and
> there were only a few virtual OpenStack instances that did not like the
> blocked I/O and had to be restarted.
>
> --
> With regards,
>
> Richard Arends.