I'm hitting this same issue on 12.2.5. Upgraded one node to 12.2.10 and it
didn't clear.
6 OSDs flapping with this error. I know this is an older issue but are
traces still needed? I don't see a resolution available.
Thanks,
Dan
On Wed, Sep 6, 2017 at 10:30 PM Brad Hubbard wrote:
These error logs look like they are being generated here,
https://github.com/ceph/ceph/blob/master/src/os/bluestore/BlueStore.cc#L8987-L8993
or possibly here,
https://github.com/ceph/ceph/blob/master/src/os/bluestore/BlueStore.cc#L9230-L9236.
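To get a richer trace out of a flapping OSD, one option is to raise BlueStore debug verbosity before reproducing the crash. This is a generic sketch, not taken from the thread: `osd.12` is a placeholder id, and the commands assume a running Luminous cluster with an admin keyring (and, for the socket variant, shell access to the OSD host):

```shell
# Raise BlueStore/OSD debug verbosity at runtime (placeholder id: osd.12).
ceph tell osd.12 injectargs '--debug_bluestore 20 --debug_osd 20'

# If the OSD crashes too fast for 'tell' to reach it, use the local
# admin socket on the host running the daemon instead:
ceph daemon osd.12 config set debug_bluestore 20

# After the crash recurs, the assert and backtrace are in the OSD log:
less /var/log/ceph/ceph-osd.12.log
```

Remember to lower the debug levels again afterwards, since level 20 logging is very verbose.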
Hi,
I have the same problem. A bug [1] was reported months ago, but
unfortunately it has not been fixed yet. I hope that if more people are
hitting this problem, the developers can reproduce and fix it.
I was using Kernel-RBD with a Cache Tier.
so long
Thomas Coelho
[1] http://tracker.ceph.com/issues/20
On 17-09-06 16:24, Jean-Francois Nadeau wrote:
Hi,
On a 4-node / 48-OSD Luminous cluster I'm giving RBD on EC pools +
BlueStore a try.
Setup went fine, but after a few bench runs several OSDs are failing and
many won't even restart.
ceph osd erasure-code-profile set myprofile \
k=2 \
m=1 \
crush-failure-domain=host
ceph osd pool crea
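The last command above is cut off in the quote. For reference, a typical way to finish an RBD-on-EC setup on Luminous looks like the following; the pool names, PG counts, and image name here are hypothetical, not taken from the original message:

```shell
# Data pool backed by the EC profile defined above (hypothetical name/PG count).
ceph osd pool create ec_data 128 128 erasure myprofile

# RBD on EC pools requires partial overwrites, which requires BlueStore OSDs.
ceph osd pool set ec_data allow_ec_overwrites true

# Replicated pool for RBD metadata; the image lives here, its data in ec_data.
ceph osd pool create rbd_meta 64

# Create the image with its data objects placed on the EC pool.
rbd create --size 10G --data-pool ec_data rbd_meta/testimage
```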