"Status:
This code is now being ported to the upstream linux kernel reservation API
added in this commit:

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/block/ioctl.c?id=bbd3e064362e5057cc4799ba2e4d68c7593e490b

When this is completed, LIO will call into the iblock backend which will
then call rbd's pr_ops."
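
For anyone who hasn't opened that commit: it adds persistent-reservation
ioctls to the block layer, so a userspace cluster stack can register a key
and take a reservation directly on a block device. Below is a rough C sketch
against <linux/pr.h> (the header that commit series introduces); the
/dev/rbd0 path and the key value are placeholders, and whether rbd itself
wires up pr_ops is exactly the part that is still being ported.

/*
 * Rough userspace sketch of the block-layer persistent reservation
 * API from <linux/pr.h>.  The device path and reservation key are
 * placeholders, not anything rbd-specific.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/pr.h>

int main(void)
{
    struct pr_registration reg;
    struct pr_reservation rsv;
    int fd;

    fd = open("/dev/rbd0", O_RDWR);        /* placeholder device */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Register our reservation key with the device. */
    memset(&reg, 0, sizeof(reg));
    reg.new_key = 0x123abc;                /* placeholder key */
    if (ioctl(fd, IOC_PR_REGISTER, &reg) < 0) {
        perror("IOC_PR_REGISTER");
        return 1;
    }

    /* Take a write-exclusive reservation with that key. */
    memset(&rsv, 0, sizeof(rsv));
    rsv.key = 0x123abc;
    rsv.type = PR_WRITE_EXCLUSIVE;
    if (ioctl(fd, IOC_PR_RESERVE, &rsv) < 0) {
        perror("IOC_PR_RESERVE");
        return 1;
    }

    close(fd);
    return 0;
}

Once that porting is done, LIO's iblock backend should be able to pass SCSI
persistent reservations from initiators down through the same pr_ops hooks,
which is what the status text above describes.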


Does anyone know how up to date this page is?
http://tracker.ceph.com/projects/ceph/wiki/Clustered_SCSI_target_using_RBD


Is SUSE currently the only vendor supporting active/active multipath for RBD
over iSCSI?  https://www.susecon.com/doc/2015/sessions/TUT16512.pdf


I'm trying to configure an active/passive iSCSI gateway on OSD nodes serving
an RBD image. Clustering is done with pacemaker/corosync. Does anyone have a
similar working setup? Anything I should be aware of?


Thanks

Dominik

On Mon, Jan 18, 2016 at 11:35 AM, Dominik Zalewski <dzalew...@optlink.co.uk>
wrote:

> Hi,
>
> I'm looking into implementing an iSCSI gateway with MPIO using lrbd -
> https://github.com/swiftgist/lrb
>
>
>
> https://www.suse.com/docrep/documents/kgu61iyowz/suse_enterprise_storage_2_and_iscsi.pdf
>
> https://www.susecon.com/doc/2015/sessions/TUT16512.pdf
>
> From the above documents:
>
> "For iSCSI failover and load-balancing, these servers must run a kernel
> supporting the target_core_rbd module. This also requires that the target
> servers run at least version 3.12.48-52.27.1 of the kernel-default package.
> Update packages are available from the SUSE Linux Enterprise Server
> maintenance channel."
>
>
> I understand that lrbd is basically a nice way to configure LIO and rbd
> across Ceph OSD nodes/iSCSI gateways. Does CentOS 7 have the same
> target_core_rbd module in its kernel, or is this something specific to
> SUSE Enterprise Storage?
>
>
> Basically, will LIO+rbd work the same way on CentOS 7? Has anyone used it
> with CentOS?
>
>
> Thanks
>
>
> Dominik
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com