Hi Stephen,

Thanks for the response and information.

We would like to avoid having to use iSCSI to interface with the Ceph environment.

In which ACS version was the bug fixed?
The last time I checked the bug report, it had not been fixed.

Link to bug: https://issues.apache.org/jira/browse/CLOUDSTACK-1302

Regards,
Enzo

On 20/10/2016 17:41, Stephan Seitz wrote:
Hi,

TL;DR: you can't. XenServer 6.5 just doesn't support RBD backends.

There are projects to integrate an RBD SR into XenServer 7, but I don't
know whether projects like https://github.com/rposudnevskiy/RBDSR will
make it into XenServer upstream.

A while ago, I closed that gap by using iSCSI in between. The slides
are in German, but I think the code snippets are easy to understand.

https://www.heinlein-support.de/sites/default/files/ceph-iscsi-host-failover-multipath.pdf

That works, and there are some XenServer pools out there running on top
of Ceph clusters.
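
To sketch the idea (illustrative only; the pool, image, and IQN names
below are made up, and the slides use their own setup): kernel-map an
RBD image and export the resulting block device as an iSCSI LUN, e.g.
with targetcli:

    # map the image through the kernel RBD client -> appears as /dev/rbd0
    rbd map rbd/xen-sr --id admin
    # export /dev/rbd0 as an iSCSI LUN via the LIO target
    targetcli /backstores/block create name=xen-sr dev=/dev/rbd0
    targetcli /iscsi create iqn.2016-10.example.com:xen-sr
    targetcli /iscsi/iqn.2016-10.example.com:xen-sr/tpg1/luns create /backstores/block/xen-sr

XenServer can then attach that LUN as an ordinary LVM-over-iSCSI SR.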

Nowadays, I'd try scst with replicated persistent reservations
(https://github.com/bvanassche/scst/blob/master/scst/README.dlm),
backed by kernel-mapped RBDs.
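
Roughly, the scst side would then point at the kernel-mapped device; an
untested sketch of /etc/scst.conf (device and IQN names are
placeholders, and the DLM-backed persistent-reservation setup is what
the README above describes):

    HANDLER vdisk_blockio {
            # export the kernel-mapped RBD as a block-backed virtual disk
            DEVICE rbd0 {
                    filename /dev/rbd0
            }
    }
    TARGET_DRIVER iscsi {
            enabled 1
            TARGET iqn.2016-10.example.com:rbd0 {
                    enabled 1
                    LUN 0 rbd0
            }
    }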

Anyway, if you're already running a Ceph cluster, I'd vote for KVM.

The bug you ran into is a pure ACS bug and, AFAIK, already fixed.

- Stephan


On Thursday, 20.10.2016 at 15:22 +0200, (IMIS) Enzo Bettini wrote:
Hi,

I have been doing a lot of searching to work out how to get ACS 4.8
working with XenServer 6.5 and a Ceph backend.
So far I have only been able to find outdated articles about
experimental code being tested.

A little background:

We currently have a number of servers running CentOS 7 as KVM
hypervisors, with ACS 4.8 installed on a separate server (also CentOS 7).
The issue we have is that there is a bug in ACS where the disk cache
mode for virtual disks is not pushed through to the database or to the
virtual machine.
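
(For context: on KVM the chosen cache mode should end up as the cache
attribute on the disk's driver element in the libvirt domain XML. A
hand-written illustration, not output from our setup, with placeholder
pool/image names:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='pool/image'/>
      <target dev='vda' bus='virtio'/>
    </disk>

Whatever cache mode is selected in ACS never reaches that attribute.)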

Each host has four NICs: 2x 1 Gbps and 2x 10 Gbps.
Each pair of NICs is bonded together. Public, guest, and Ceph traffic
runs on the 10 Gbps bond.
Management and ACS storage traffic runs on the 1 Gbps bond.
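
(For concreteness, one way such a bond is created on CentOS 7 with
NetworkManager; the interface names and bond mode below are
placeholders, not our actual config:

    nmcli con add type bond con-name bond10g ifname bond10g mode 802.3ad
    nmcli con add type bond-slave ifname ens1f0 master bond10g
    nmcli con add type bond-slave ifname ens1f1 master bond10g
)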

We have been informed that this works with XenServer; however, we have
been unable to get XenServer to play nice.
After installing XenServer and updating the network so that we have
access to it, we are able to add the server to the XenServer cluster
that was created.
But we are unable to deploy a VM on the newly added XenServer.

We have two primary storage options available: 1) NFS and 2) RBD.
Neither of the two is working.
We would prefer to use the RBD primary storage, though.

The error we receive is that the host is unable to connect to primary
storage [ID].
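
Some basic checks that should narrow this down (hostnames and paths
below are placeholders):

    # on the XenServer host: is the SR's PBD actually plugged?
    xe sr-list
    xe pbd-list params=uuid,host-uuid,currently-attached
    # for the NFS option: can the host see the export at all?
    showmount -e nfs-server.example.com
    # on the ACS management server: watch for the exact failure
    tail -f /var/log/cloudstack/management/management-server.log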

Are there guides or steps available to get this working?

We have also attempted to use the Xen hypervisor installed on CentOS 7
instead of XenServer, but only until we read that ACS does not support
the Xen hypervisor on its own and only supports XenServer 5.6 SP2 to
6.5 SP1.

Is XenServer the correct hypervisor to choose for an ACS and Ceph
setup, or is there a preferred alternative?

Regards,
Enzo
