Hi Iban,
you are running Xen (just the hypervisor software), not XenServer (a
purpose-built Linux distribution).
XenServer is a Linux distribution based on CentOS.
You cannot recompile the kernel on your own (... well, you can, but
it's not a good idea).
And you should not install RPMs on your own (... but, I'm
Hi Andrei,
I don't think so.
The future way to support Ceph in XenServer is the kernel.
XenServer is based on CentOS, and CentOS is downstream of RHEL.
This means that some day in the future the RHEL kernel will already be
compiled to completely support RADOS.
At that time having Ceph
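In the meantime, a quick and generic way to see what a given host's running
kernel already ships (nothing XenServer-specific is assumed here) is:

# uname -r
# modinfo rbd | head -n 3

If modinfo finds the rbd module, the kernel RBD client is present; how old
that kernel is still determines which Ceph/CRUSH features it can talk to.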
Hi Max,
Have you considered Proxmox at all? It integrates nicely with Ceph storage. I
moved from XenServer a long time ago and have no regrets.
Thanks
Brians
On Sat, Feb 25, 2017 at 12:47 PM, Massimiliano Cuttini
wrote:
> Hi Iban,
>
> you are running Xen (just the hypervisor software), not XenServer (a purpose-built Lin
> On 24 February 2017 at 19:48, Adam Carheden wrote:
>
>
> From the docs for each project:
>
> "When a primary storage outage occurs the hypervisor immediately stops
> all VMs stored on that storage
> device"
> http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.8/reliabili
Hi Brian,
I'd never heard of it before.
However, it seems nice and fully featured.
The pity is that it is based on KVM, which, as far as I know, is a lightweight
hypervisor that is not able to isolate virtual machines properly.
Because of this it is possible to freeze the hypervisor kernel from a guest
virtual machine
Just to give my 50 cents: Proxmox uses full KVM virtualization; they offer their own GUI
and storage management on top of standard QEMU/KVM.
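For context, "full" KVM virtualization relies on the CPU's hardware extensions
(Intel VT-x / AMD-V). A generic way to check a host, not specific to Proxmox:

# egrep -c '(vmx|svm)' /proc/cpuinfo
# lsmod | grep kvm

A non-zero count from the first command, plus loaded kvm_intel or kvm_amd
modules, means the host can run fully virtualized guests.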
Ashley
Sent from my iPhone
On 25 Feb 2017, at 10:12 PM, Massimiliano Cuttini
<m...@phoenixweb.it> wrote:
Hi Brian,
I'd never heard of it before.
However, it seems nic
I spoke with the CloudStack guys on IRC yesterday and the only risk is
when libvirtd starts. Ceph is supported only with libvirt. CloudStack can
only pass one monitor to libvirt even though libvirt can use more. Libvirt
uses that info when it boots, but after that it gets all the monitors from
tha
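To illustrate what that single monitor ends up looking like (pool, image and
address below are made up), the RBD disk in the libvirt domain XML carries
just one <host> entry:

<disk type='network' device='disk'>
  <source protocol='rbd' name='cloudstack/vm-disk-1'>
    <host name='10.0.0.11' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>

librados then learns the rest of the monitor map from that first monitor once
the connection is established.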
> On 25 February 2017 at 15:45, Adam Carheden wrote:
>
>
> I spoke with the CloudStack guys on IRC yesterday and the only risk is
> when libvirtd starts. Ceph is supported only with libvirt. CloudStack can
> only pass one monitor to libvirt even though libvirt can use more. Libvirt
> uses tha
Hi Max,
I have a working XenServer pool using Ceph RBD as a backend. I got it
working by using the RBDSR plugin here:
https://github.com/mstarikov/rbdsr
I don't have much time, but I just wanted to respond in case it's
helpful... Here is how I got it working:
On CEPH
Set tunables to legacy
#
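Presumably that refers to the standard CRUSH tunables command; roughly:

# ceph osd crush tunables legacy
# ceph osd crush show-tunables

The first switches the cluster to the legacy tunables profile (typically needed
because the kernel RBD client in the XenServer dom0 is too old for the newer
profiles); the second shows which profile is now in effect.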
Hi Mike,
Have you considered creating an SR that, instead of making one huge RBD
volume and putting LVM on top of it, creates a separate RBD volume for
each VDI?
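To make the contrast concrete (pool, image names and sizes below are made up),
the two layouts differ only in how images are cut on the Ceph side:

# rbd create rbd/xen-sr --size 10485760
# rbd create rbd/vdi-3f2ad9 --size 51200

The first makes one 10 TB image for the whole SR, which LVM then carves into
VDIs inside dom0; the second makes a 50 GB image for a single VDI (rbd sizes
are in MB by default). Per-VDI images would let you use RBD-native snapshots
and clones per disk instead of LVM metadata on a single shared image.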
On 25.02.2017 at 22:14, Mike Jacobacci wrote:
Hi Max,
I have a working XenServer pool using Ceph RBD as a backend. I got it
This fix is now merged into the kraken branch.
On Sat, Feb 25, 2017 at 12:00 AM, David Disseldorp wrote:
> Hi,
>
> On Thu, 23 Feb 2017 21:07:41 -0800, Schlacta, Christ wrote:
>
>> So hopefully when the SUSE Ceph team gets 11.2 released it should fix this,
>> yes?
>
> Please raise a bug at bugzilla
You are correct again. I forgot that round-robin DNS returns all addresses, just in
different orders. So it doesn't matter that CloudStack can't pass libvirt
multiple addresses even though libvirt can pass those to qemu and librados.
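As a made-up illustration, a round-robin A record for the monitors hands every
client the full list, just rotated:

# dig +short mons.ceph.example.com
10.0.0.12
10.0.0.13
10.0.0.11

so whichever single address gets passed along, librados still ends up knowing
about all the monitors once it has pulled the monitor map.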
On Feb 25, 2017 8:16 AM, "Wido den Hollander" wrote:
>
> > On 25 February
On 26/02/2017 12:12 AM, Massimiliano Cuttini wrote:
The pity is that it is based on KVM, which, as far as I know, is a lightweight
hypervisor that is not able to isolate virtual machines properly.
Because of this it is possible to freeze the hypervisor kernel from a guest
virtual machine, allowing somebody to