Re: [ceph-users] OSD crashes on EC recovery

2016-08-10 Thread Brian Felton
Roeland, we're seeing the same problem in our cluster. I can't offer you a solution that gets the OSD back, but I can tell you what I did to work around it. We're running five 0.94.6 clusters, each with 9 nodes / 648 HDD OSDs and a k=7, m=2 erasure-coded .rgw.buckets pool. During the backfilling after

[ceph-users] OSD crashes on EC recovery

2016-08-10 Thread Roeland Mertens
Hi, we run a Ceph 10.2.1 cluster across 35 nodes with a total of 595 OSDs. We have a mixture of normally replicated volumes and EC volumes using the following erasure-code profile:

# ceph osd erasure-code-profile get rsk8m5
jerasure-per-chunk-alignment=false
k=8
m=5
plugin=jerasure
ruleset-fa
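For context on the two pool layouts mentioned in this thread: a jerasure profile with k data chunks and m coding chunks stores k+m chunks per object and survives the loss of any m of them. A quick sketch of the resulting raw-space overhead (the helper function here is illustrative, not from the thread):

```python
def ec_overhead(k: int, m: int) -> float:
    """Raw bytes stored per logical byte in a k+m erasure-coded pool."""
    return (k + m) / k

# Roeland's rsk8m5 profile: 8 data + 5 coding chunks,
# 1.625x raw overhead, tolerates the loss of any 5 chunks.
print(ec_overhead(8, 5))

# Brian's pool for comparison: k=7, m=2,
# roughly 1.29x overhead, tolerates any 2 chunk losses.
print(ec_overhead(7, 2))
```

Both layouts are far cheaper in raw space than 3x replication, which is the usual reason .rgw.buckets pools are erasure coded.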