Hello Adam!

Thank you very much for your advice, I will try setting the tunables to
'firefly'.
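
In case it helps anyone else reading later, the commands I plan to run are roughly the following (the `<pool>/<image>` names are placeholders for my own pool and image; note that changing tunables triggers data rebalancing on the cluster):

```shell
# Switch the CRUSH tunables to the 'firefly' profile, which avoids the
# straw2 buckets that the pre-4.1 kernel client cannot understand.
ceph osd crush tunables firefly

# Verify which tunables profile is now in effect.
ceph osd crush show-tunables

# Retry mapping the image once the cluster has settled.
rbd map <pool>/<image>
```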

As there seem to be a few features which would require the 4.1 kernel... is
there any 'advised' Linux distribution on which Ceph is known to work best?
According to this
<http://docs.ceph.com/docs/master/start/os-recommendations/> page... it
seems to be most tested on Ubuntu 14.04 and CentOS 7, although both are on
the 3.1x kernel.
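
For anyone checking their own setup, this is how I have been inspecting the client side (the 4.1 threshold is from Adam's note about straw2; the `dmesg` line only shows something after a failed map attempt):

```shell
# Show the running kernel; hammer tunables (straw2) need 4.1+, while the
# stock Ubuntu 14.04 kernel (3.13) needs the firefly profile.
uname -r

# When 'rbd map' fails, the kernel logs which feature bits are missing.
dmesg | grep -i 'feature set mismatch'
```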

Sorry if this topic was previously discussed, I might have missed it.

Thank you!

Regards,
Bogdan


On Mon, Nov 9, 2015 at 7:55 AM, Adam Tygart <mo...@ksu.edu> wrote:

> The problem is that "hammer" tunables (i.e. "optimal" in v0.94.x) are
> incompatible with the kernel interfaces before Linux 4.1 (namely due
> to straw2 buckets). To make use of the kernel interfaces in 3.13, I
> believe you'll need "firefly" tunables.
>
> --
> Adam
>
> On Sun, Nov 8, 2015 at 11:48 PM, Bogdan SOLGA <bogdan.so...@gmail.com>
> wrote:
> > Hello Greg!
> >
> > Thank you for your advice, first of all!
> >
> > I have tried to adjust the Ceph tunables detailed in this page, but
> without
> > success. I have tried both 'ceph osd crush tunables optimal' and 'ceph
> osd
> > crush tunables hammer', but both lead to the same 'feature set mismatch'
> > issue whenever I tried to create a new RBD image afterwards. The only
> way
> > I could restore the proper functioning of the cluster was to set the
> > tunables to default ('ceph osd crush tunables default'), which are the
> > default values for a new cluster.
> >
> > So... either I'm doing something incomplete, or I'm doing something
> wrong.
> > Any further advice on how to use EC pools is highly welcome.
> >
> > Thank you!
> >
> > Regards,
> > Bogdan
> >
> >
> > On Mon, Nov 9, 2015 at 12:20 AM, Gregory Farnum <gfar...@redhat.com>
> wrote:
> >>
> >> With that release it shouldn't be the EC pool causing trouble; it's the
> >> CRUSH tunables also mentioned in that thread. Instructions should be
> >> available in the docs for using older tunables that are compatible with
> >> kernel 3.13.
> >> -Greg
> >>
> >>
> >> On Saturday, November 7, 2015, Bogdan SOLGA <bogdan.so...@gmail.com>
> >> wrote:
> >>>
> >>> Hello, everyone!
> >>>
> >>> I have recently created a Ceph cluster (v 0.94.5) on Ubuntu 14.04.3
> and I
> >>> have created an erasure coded pool, which has a caching pool in front
> of it.
> >>>
> >>> When trying to map RBD images, regardless if they are created in the
> rbd
> >>> or in the erasure coded pool, the operation fails with 'rbd: map
> failed: (5)
> >>> Input/output error'. Searching the internet for a solution... I came
> across
> >>> this page, which seems to detail exactly the same issue - a
> >>> 'misunderstanding' between erasure coded pools and the 3.13 kernel
> (used by
> >>> Ubuntu).
> >>>
> >>> Can you please advise on a fix for that issue? As we would prefer to
> use
> >>> erasure coded pools, the only solutions which came into my mind were:
> >>>
> >>> - upgrade to the Infernalis Ceph release, although I'm not sure the
> >>>   issue is fixed in that version;
> >>>
> >>> - upgrade the kernel (on all the OSDs and Ceph clients) to the 3.14+
> >>>   kernel;
> >>>
> >>> Any better / easier solution is highly appreciated.
> >>>
> >>> Regards,
> >>>
> >>> Bogdan
> >
> >
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
