Thanks for that link. It would be nice to have that interface supported by the .ko. Regardless, I raised this: http://tracker.ceph.com/issues/19095
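For anyone else hitting this, here is a rough sketch of how the "image uses unsupported features: 0x38" bitmask in the dmesg output quoted below decodes into feature names. The bit values are my reading of the usual RBD_FEATURE_* flags, so treat them as an assumption and double-check against your Ceph release:

    # Sketch (Python): decode an rbd feature bitmask into names.
    # Bit values assumed from the RBD_FEATURE_* flags -- verify against
    # your Ceph release before relying on them.
    RBD_FEATURES = {
        1 << 0: "layering",
        1 << 1: "striping",
        1 << 2: "exclusive-lock",
        1 << 3: "object-map",
        1 << 4: "fast-diff",
        1 << 5: "deep-flatten",
        1 << 6: "journaling",
    }

    def decode_features(mask):
        # Return the names of the feature bits set in the mask.
        return [name for bit, name in sorted(RBD_FEATURES.items()) if mask & bit]

    print(decode_features(0x38))  # ['object-map', 'fast-diff', 'deep-flatten']
    print(decode_features(3))     # ['layering', 'striping']
    print(decode_features(5))     # ['layering', 'exclusive-lock']

If those bit values are right, that is also why the "rbd default features = 3" and "= 5" values in the quoted reply correspond to layering (plus striping) and layering plus exclusive-lock.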
On Mon, Feb 27, 2017 at 9:47 AM, Shinobu Kinjo <ski...@redhat.com> wrote:
> We already discussed this:
>
> https://www.spinics.net/lists/ceph-devel/msg34559.html
>
> What do you think of the comment posted in that ML?
> Would that make sense to you as well?
>
>
> On Tue, Feb 28, 2017 at 2:41 AM, Vasu Kulkarni <vakul...@redhat.com> wrote:
> > Ilya,
> >
> > Many folks hit this and it's quite difficult since the error is not
> > properly printed out (unless one scans syslogs). Is it possible to
> > default the feature to the one that the kernel supports, or is it not
> > possible to handle that case?
> >
> > Thanks
> >
> > On Mon, Feb 27, 2017 at 5:59 AM, Ilya Dryomov <idryo...@gmail.com> wrote:
> >>
> >> On Mon, Feb 27, 2017 at 2:37 PM, Simon Weald <si...@simonweald.com> wrote:
> >> > I'm currently having some issues making some Jessie-based Xen hosts
> >> > talk to a Trusty-based cluster due to feature mismatch errors. Our
> >> > Trusty hosts are using 3.19.0-80 (the Vivid LTS kernel), and our Jessie
> >> > hosts were using the standard Jessie kernel (3.16). Volumes wouldn't
> >> > map, so I tried the kernel from jessie-backports (4.9.2-2~bpo8+1); still
> >> > no joy. I then tried compiling the latest kernel in the 4.9 branch
> >> > (4.9.12) from source with the Debian kernel config - still no joy. As I
> >> > understand it there have been a lot of changes in krbd which I should
> >> > have pulled in when building from source - am I missing something? Some
> >> > info about the Xen hosts:
> >> >
> >> > root@xen-host:~# uname -r
> >> > 4.9.12-internal
> >> >
> >> > root@xen-host:~# ceph -v
> >> > ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
> >> >
> >> > root@xen-host:~# rbd map -p cinder volume-88188973-0f40-48a3-8a88-302d1cb5e093
> >> > rbd: sysfs write failed
> >> > RBD image feature set mismatch. You can disable features unsupported by
> >> > the kernel with "rbd feature disable".
> >> > In some cases useful info is found in syslog - try "dmesg | tail" or so.
> >> > rbd: map failed: (6) No such device or address
> >> >
> >> > root@xen-host:~# dmesg | grep 'unsupported'
> >> > [252723.885948] rbd: image volume-88188973-0f40-48a3-8a88-302d1cb5e093:
> >> > image uses unsupported features: 0x38
> >> >
> >> > root@xen-host:~# rbd info -p cinder volume-88188973-0f40-48a3-8a88-302d1cb5e093
> >> > rbd image 'volume-88188973-0f40-48a3-8a88-302d1cb5e093':
> >> >     size 1024 MB in 256 objects
> >> >     order 22 (4096 kB objects)
> >> >     block_name_prefix: rbd_data.c6bd3c5f705426
> >> >     format: 2
> >> >     features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
> >> >     flags:
> >>
> >> object-map, fast-diff, deep-flatten are still unsupported. Do
> >>
> >>     $ rbd feature disable <image-name> deep-flatten,fast-diff,object-map,exclusive-lock
> >>
> >> to disable features unsupported by the kernel client. If you are using the
> >> kernel client, you should create your images with
> >>
> >>     $ rbd create --size <size> --image-feature layering <image-name>
> >>
> >> or add
> >>
> >>     rbd default features = 3
> >>
> >> to ceph.conf on the client side. (Setting rbd default features on the
> >> OSDs will have no effect.)
> >>
> >> exclusive-lock is supported starting with 4.9.
> >> The above becomes
> >>
> >>     $ rbd feature disable <image-name> deep-flatten,fast-diff,object-map
> >>     $ rbd create --size <size> --image-feature layering,exclusive-lock <image-name>
> >>     rbd default features = 5
> >>
> >> if you want it.
> >>
> >> Thanks,
> >>
> >> Ilya
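In case it helps, for anyone creating these volumes programmatically rather than via the CLI, here is a minimal sketch of the equivalent of "rbd create --size <size> --image-feature layering <image-name>" using the python-rbd bindings. This assumes python-rados/python-rbd are installed; the pool name, image name and size are placeholders:

    import rados
    import rbd

    # Minimal sketch: create a format-2 image with only the layering
    # feature enabled, so the kernel client can map it.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('cinder')        # placeholder pool name
        try:
            rbd.RBD().create(ioctx, 'test-volume',  # placeholder image name
                             1024 * 1024 * 1024,    # 1 GB, placeholder size
                             old_format=False,
                             features=rbd.RBD_FEATURE_LAYERING)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

On a 4.9 kernel you could presumably OR in rbd.RBD_FEATURE_EXCLUSIVE_LOCK as well, matching the "rbd default features = 5" variant above.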
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com