Hi Max,

I have a working XenServer pool using Ceph RBD as a backend. I got it
working by using the RBDSR plugin here:
https://github.com/mstarikov/rbdsr

I don't have much time, but I just wanted to respond in case it's
helpful... Here is how I got it working:

On Ceph

Set tunables to legacy

# ceph osd crush tunables legacy
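
You can confirm the profile took effect by printing the active tunables
(this subcommand has been around since Firefly, so it should be available):

# ceph osd crush show-tunables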
Disable chooseleaf_vary_r

# ceph osd getcrushmap -o /tmp/mycrushmap

# crushtool -d /tmp/mycrushmap > /tmp/mycrushmap.txt

Edit /tmp/mycrushmap.txt so the tunable reads:

chooseleaf_vary_r 0

# crushtool -c /tmp/mycrushmap.txt -o /tmp/mycrushmap.new

# ceph osd setcrushmap -i /tmp/mycrushmap.new
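
If you'd rather not edit the file by hand, a one-liner like this should do
the same thing (assuming the decompiled map already contains a
chooseleaf_vary_r line set to 1 -- check the file first):

# sed -i 's/chooseleaf_vary_r 1/chooseleaf_vary_r 0/' /tmp/mycrushmap.txt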
Create RBD image

# rbd create *pool*/*image* --size *512G* --image-shared
Remove Deep-flatten feature

# rbd feature disable *pool*/*image* deep-flatten
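
To double-check that the image has the right size and that deep-flatten is
gone from the feature list:

# rbd info *pool*/*image*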


On XenServer

(Install the RBDSR plugin on every node in the pool)
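
For reference, fetching the plugin looks something like this; the actual
install steps are in the repo's README, so follow that on each node:

# git clone https://github.com/mstarikov/rbdsr
# cd rbdsr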

To create the SR on XenServer

# modprobe rbd (make sure there are no errors)
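
You can verify the module actually loaded with:

# lsmod | grep rbd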

# xe sr-create type=lvmoiscsi name-label=*RBD IMAGE NAME* shared=true \
  device-config:target=*CEPH MONITOR IP* device-config:port=6789 \
  device-config:targetIQN=*RBD POOL NAME* device-config:SCSIid=*MOUNT NAME* \
  device-config:chapuser=*CEPHUSER* device-config:chappassword=*PASSWORD*
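
As a concrete (made-up) example, with a monitor at 192.168.1.10, a pool
named rbd, and an image you want mounted as rbd1, it would look something
like this -- substitute your own values:

# xe sr-create type=lvmoiscsi name-label="ceph-rbd-image" shared=true \
  device-config:target=192.168.1.10 device-config:port=6789 \
  device-config:targetIQN=rbd device-config:SCSIid=rbd1 \
  device-config:chapuser=ceph device-config:chappassword=secret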


NOTES:

- The GUI option for creating the SR doesn't work, but the above command
works every time.

- If you have more than one XenServer and they are in a pool, you have to
run the sr-create command on one node, and then the other pool member(s)
will need to rejoin the pool for the SR to be shared across the pool. I
couldn't get the mounts working on the other members of an existing pool;
see the rough sketch below.
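
Rejoining looks roughly like this: eject the member from the pool, then
from that (now standalone) host join back to the master. The UUID, IP, and
password here are placeholders, and careful: pool-eject erases any local
SRs on the ejected host, so check the XenServer docs before running it.

# xe pool-eject host-uuid=*MEMBER UUID*
# xe pool-join master-address=*MASTER IP* master-username=root master-password=*PASSWORD*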


Let me know if you have any questions.


Cheers,
Mike