; if only read, maybe a read ahead bug could explain this.
>
> ..., could give you more logs ?
>
>
> - Original Message -
> From: "Olivier Bonvalet"
> To: "aderumier"
> Cc: "ceph-users"
> Sent: Wednesday 4 March 2015 16:42:13
> Subject: Re: [ceph-users] Perf problem after upgrade from dumpling to firefly
>
> - Original Message -
> From: "Olivier Bonvalet"
> To: "aderumier"
> Cc: "ceph-users"
> Sent: Wednesday 4 March 2015 15:13:30
> Subject: Re: [ceph-users] Perf problem after upgrade from dumpling to firefly
>
> Ceph health is OK yes.
>
> The «firefly-
>
> do you also see twice the ios/ops in "ceph -w" stats ?
>
> is the ceph health ok ?
>
>
>
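For reference, the health and throughput checks mentioned above are plain Ceph CLI calls that can be run from any admin or monitor node; the "client io" sample wording below only illustrates the output format and is not taken from this cluster:

    # overall cluster state; should report HEALTH_OK
    ceph health detail

    # one-shot summary, including a "client io" line when there is activity
    ceph status

    # live stream of cluster events and stats; compare the
    # "client io ... kB/s rd, ... kB/s wr, ... op/s" figures
    # with the load seen before the upgrade
    ceph -w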
>
> - Original Message -
> From: "Olivier Bonvalet"
> To: "aderumier"
> Cc: "ceph-users"
> Sent: Wednesday 4 March 2015 14:49:41
> Subject: Re: [ceph-users] Perf problem after upgrade from dumpling to firefly
>
> Thanks Alexandre.
> The load problem is permanent : I have twice IO/s on
>
> kport in dumpling, not sure it's already done for
> firefly
>
>
> Alexandre
>
>
>
> - Original Message -
> From: "Olivier Bonvalet"
> To: "ceph-users"
> Sent: Wednesday 4 March 2015 12:10:30
> Subject: [ceph-users] Perf problem after upgrade from dumpling to firefly
>
> Hi,
> last saturday I upgraded my production cluster from dumpling to emperor
> (since we were successfully using it on a test cluster).
> A couple of hours later, we had falling OSD : some of them were marked
> as down by Ceph, probably because of IO starvation. I marked the cluster
> in «noout», start dow