To answer the first question: yes, you can mount RBDs on the existing nodes. 
However, there have been reported problems with RBD clients running on the same 
server as the OSDs; from memory these have mainly been crashes and hangs. 
Whether you will run into these problems is something you will have to test.
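
If you do test it, the basic steps on one of the OSD nodes would look roughly 
like the sketch below (the pool name, image name, network and export path are 
made-up examples, not anything from your setup):

rbd map rbd/testimage          # kernel RBD client; gives e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir -p /export/testimage
mount /dev/rbd0 /export/testimage
echo "/export/testimage 192.168.0.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra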


However, potentially more of a concern is if you are using pacemaker. If you 
are configuring pacemaker correctly you will need to set up STONITH, which 
brings the possibility that it will forcibly restart a node, and with it your 
OSDs or monitors, as part of fencing.
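
For reference, a minimal STONITH device in pcs looks something like the sketch 
below; fence_ipmilan, the address and the credentials are just placeholders for 
whatever fencing hardware you actually have:

pcs stonith create fence-node1 fence_ipmilan \
    pcmk_host_list="node1" ipaddr=10.0.0.101 login=admin passwd=secret \
    op monitor interval=60s

The point being that if node1 is also carrying OSDs and a monitor, any fencing 
action takes those down with it.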


Regarding iSCSI multipath: currently you can configure ALUA active/passive but 
not active/active. Mike Christie from Red Hat is working towards getting 
active/active working with Ceph, but it is not yet ready for use. If you just 
try to use active/active in its current state you will likely end up with 
corruption.
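
On the initiator side, active/passive ALUA mostly comes down to grouping paths 
by priority in /etc/multipath.conf, roughly like the sketch below (the vendor 
and product strings are whatever your target reports; LIO-ORG is just an 
example):

devices {
    device {
        vendor                "LIO-ORG"
        product               ".*"
        path_grouping_policy  group_by_prio
        prio                  alua
        failback              immediate
        no_path_retry         12
    }
}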


If you want to use ALUA active/passive you will still need something like 
pacemaker to manage the ALUA states for each node. There are also outstanding 
problems with LIO+Ceph causing kernel panics and hangs. If you intend to use 
LIO+Ceph with ESXi there is another problem that seems to happen frequently, 
where ESXi and LIO get stuck in a loop and the LUNs go offline. I'm currently 
using tgt with the RBD backend as the best solution for exporting RBDs over 
iSCSI.
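
In case it helps, a tgt target with the RBD backing store is only a few lines 
in /etc/tgt/targets.conf; a rough sketch (the IQN, pool and image names are 
examples, and it needs a tgt build with bs_rbd support):

<target iqn.2015-05.com.example:rbd-test>
    driver iscsi
    bs-type rbd
    backing-store rbd/testimage
    initiator-address 192.168.0.0/24
</target>

Because the rbd backend goes through librbd in userspace, it avoids the kernel 
RBD and LIO code paths mentioned above.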


Generally, if you are happy with NFS sync performance and don't require iSCSI, 
I would stick with NFS for the meantime.


Nick


From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
Vasiliy Angapov
Sent: 22 May 2015 13:06
To: Gerson Ariel
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] iSCSI ceph rbd


Hi, Ariel, gentlemen,


I have the same question, but with regard to multipath. Is it possible to just 
export an iSCSI target on each Ceph node and use multipath on the client side?

Can it possibly lead to data inconsistency?


Regards, Vasily.


On Fri, May 22, 2015 at 12:59 PM, Gerson Ariel <ar...@bisnis2030.com> wrote:

I apologize beforehand for not using a more descriptive subject for my question.


On Fri, May 22, 2015 at 4:55 PM, Gerson Ariel <ar...@bisnis2030.com> wrote:

Our hardware is like this: three identical servers, each with 8 OSD disks, 1 SSD
disk as journal, 1 disk for the OS, 32 GB of ECC RAM and 4 Gbit copper Ethernet.
We have been running this cluster since February 2015 and most of the time the
system load is not too great, with lots of idle time.

Right now we have a node that mounts RBD images and exports them as NFS. It
works quite well, but at the cost of one extra node acting as a bridge between
the storage clients (VMs) and the storage cluster (Ceph OSDs and MONs).

What I want to know is: is there any reason why I shouldn't mount RBD images on
one of the servers that also run the OSD and MON daemons, and export them as
NFS or iSCSI? Assuming I have already done my homework to make the setup highly
available with pacemaker (e.g. a floating IP and an iSCSI/NFS resource), wouldn't
something like this be better, as it is more reliable? I.e. I remove the
middle-man node(s), so I only have to look after the Ceph nodes and the VM
hosts.
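
For concreteness, the pacemaker side I have in mind is roughly the sketch below
(the IP, network and export path are placeholders, and the resources that map
and mount the RBD image are left out):

pcs resource create rbd-export ocf:heartbeat:exportfs \
    clientspec=192.168.0.0/24 options="rw,sync,no_root_squash" \
    directory=/export/testimage fsid=1 op monitor interval=30s
pcs resource create storage-vip ocf:heartbeat:IPaddr2 \
    ip=192.168.0.200 cidr_netmask=24 op monitor interval=30s
pcs resource group add nfs-ha rbd-export storage-vip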

Thank you

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
