Hello Greg!

Thank you for your advice, first of all!

I have tried to adjust the Ceph tunables detailed on the
<http://docs.ceph.com/docs/master/rados/operations/crush-map/> page, but
without success. I tried both 'ceph osd crush tunables optimal' and
'ceph osd crush tunables hammer', but both lead to the same 'feature set
mismatch' issue whenever I try to create a new RBD image afterwards.
The only way I could restore the cluster to proper working order was to
revert to the default tunables ('ceph osd crush tunables default'), i.e.
the values a new cluster starts with.
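
For reference, this is roughly the sequence I used (the image name and
size are just placeholders, and 'ceph osd crush show-tunables' is only
there to confirm which profile is actually active):

    ceph osd crush tunables hammer        # or 'optimal'
    ceph osd crush show-tunables          # verify the active profile

    rbd create test-image --size 1024 --pool rbd
    rbd map test-image --pool rbd         # creating/mapping fails again at this point

    ceph osd crush tunables default       # revert, to make the cluster usable again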

So... either I'm missing a step or I'm doing something wrong. Any further
advice on how to get EC pools working is very welcome.
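
If I read the kernel compatibility notes on that page correctly, a 3.13
kernel client should only support tunables up to the 'bobtail' profile
(the 'firefly' and 'hammer' profiles seem to need 3.15+ and 4.1+,
respectively), so perhaps what I should be trying instead is:

    ceph osd crush tunables bobtail

Please correct me if I have misread that table.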

Thank you!

Regards,
Bogdan


On Mon, Nov 9, 2015 at 12:20 AM, Gregory Farnum <gfar...@redhat.com> wrote:

> With that release it shouldn't be the EC pool causing trouble; it's the
> CRUSH tunables also mentioned in that thread. Instructions should be
> available in the docs for using older tunables that are compatible with
> kernel 3.13.
> -Greg
>
>
> On Saturday, November 7, 2015, Bogdan SOLGA <bogdan.so...@gmail.com>
> wrote:
>
>> Hello, everyone!
>>
>> I have recently set up a Ceph cluster (v 0.94.5) on Ubuntu 14.04.3 and
>> created an erasure coded pool, which has a caching pool in front of it.
>>
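>> For reference, the pools were set up roughly like this (the pool names
>> and PG counts are just placeholders for the actual values, and the
>> default erasure-code profile is assumed):
>>
>>    ceph osd pool create ec-pool 128 128 erasure
>>    ceph osd pool create cache-pool 128
>>    ceph osd tier add ec-pool cache-pool
>>    ceph osd tier cache-mode cache-pool writeback
>>    ceph osd tier set-overlay ec-pool cache-pool
>>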
>> When trying to map RBD images, regardless of whether they were created in
>> the rbd pool or in the erasure coded pool, the operation fails with 'rbd:
>> map failed: (5) Input/output error'. While searching the internet for a
>> solution, I came across this
>> <http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040493.html>
>> page, which seems to describe exactly the same issue - a 'misunderstanding'
>> between erasure coded pools and the 3.13 kernel (used by Ubuntu 14.04).
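>>
>> In case it helps, this is roughly how I reproduce it (the image name and
>> size are just examples):
>>
>>    rbd create test --size 1024 --pool rbd
>>    rbd map test --pool rbd       # fails with the I/O error above
>>    dmesg | tail                  # shows a 'feature set mismatch' line here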
>>
>> Can you please advise on a fix for this issue? As we would prefer to use
>> erasure coded pools, the only solutions that came to my mind were:
>>
>>    - upgrade to the Infernalis Ceph release, although I'm not sure the
>>    issue is fixed in that version;
>>    - upgrade the kernel (on all the OSDs and Ceph clients) to the 3.14+
>>    kernel (a possible approach is sketched below);
>>
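>> For the kernel upgrade option, the sketch I had in mind (assuming the
>> Ubuntu 14.04 HWE / lts-utopic packages are the right way to get a newer
>> kernel) is something like:
>>
>>    # on every OSD host and every Ceph client
>>    sudo apt-get update
>>    sudo apt-get install linux-generic-lts-utopic
>>    sudo reboot
>>    uname -r     # confirm the new kernel version afterwards
>>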
>> Any better or easier solution would be highly appreciated.
>>
>> Regards,
>>
>> Bogdan
>>
>