FYI: I'm using OCFS2 the way you plan to (on /var/lib/nova/instances/). It is stable,
but performance isn't great.

--
Sent from my mobile device

On 12.07.2013, at 14:21, "Tom Verdaat" <t...@server.biz> wrote:

Hi Darryl,

Would love to do that too but only if we can configure nova to do this 
automatically. Any chance you could dig up and share how you guys accomplished 
this?

From everything I've read so far, Grizzly is not up to the task yet. If I can't set it 
in nova.conf then it probably won't work with third-party tools like Hostbill, and it 
would break the user self-service functionality that we're aiming for with a public 
cloud concept. I think we'll need this blueprint 
<https://blueprints.launchpad.net/nova/+spec/improve-boot-from-volume> and this one 
<https://blueprints.launchpad.net/nova/+spec/bring-rbd-support-libvirt-images-type> 
implemented to be able to achieve this, and of course this one 
<https://blueprints.launchpad.net/horizon/+spec/improved-boot-from-volume> for the 
dashboard would be nice too.
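For what it's worth, the nova.conf stanza that second blueprint is aiming at would probably look something like the lines below; these option names are only illustrative and may well differ in whatever finally lands:

  # nova.conf on each compute node (hypothetical until the blueprint is merged)
  libvirt_images_type = rbd
  libvirt_images_rbd_pool = vms
  libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf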

I'll do some more digging into OpenStack and see how far we can get with this.

In the meantime I've done some more research and found that:

  *   There are a number of other cluster file systems, but GFS2 and OCFS2 are the only open source ones I could find, and I believe the only ones integrated into the mainline Linux kernel.
  *   OCFS2 seems to have a lot more public information available than GFS2: more documentation and a living, though not very active, mailing list.
  *   OCFS2 seems to be in active use by its sponsor Oracle, while I can't find much on GFS2 from its sponsor Red Hat.
  *   OCFS2 documentation indicates a soft limit of 256 nodes versus 16 for GFS2, and there are actual deployments of stable 45 TB+ production clusters.
  *   Performance tests from 2010 indicate OCFS2 clearly beating GFS2, though of course newer versions of both have been released since.
  *   GFS2 has more fencing options than OCFS2.

There is not much info from the last 12 months, so it's hard to get an accurate 
picture. If we have to go with the shared-storage approach, though, OCFS2 looks like 
the preferred option based on the info I've gathered so far.
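
For concreteness, here is a rough sketch of what that shared mount might look like on each compute node. I haven't tested this myself, the pool, image and mount point names are just placeholders, and the o2cb cluster.conf would also have to list every compute node:

  # create one shared RBD image (run once) and map it on every node
  rbd create rbd/nova-instances --size 1048576
  rbd map rbd/nova-instances

  # format it once, with enough node slots for all compute nodes
  mkfs.ocfs2 -L nova-instances -N 16 /dev/rbd/rbd/nova-instances

  # mount it on every compute node
  mount -t ocfs2 /dev/rbd/rbd/nova-instances /var/lib/nova/instances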

Tom



Darryl Bond wrote on Fri 12-07-2013 at 10:04 [+1000]:
Tom,
I'm no expert as I didn't set it up, but we are using OpenStack Grizzly with 
KVM/QEMU and RBD volumes for the VMs.
We boot the VMs from the RBD volumes and it all seems to work just fine.
Migration works perfectly, although live (no-interruption) migration only works from 
the command-line tools; the GUI uses the pause, migrate, then un-pause mode.
Layered snapshot/cloning works just fine through the GUI. I would say Grizzly 
has pretty good integration with Ceph.
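Roughly, the command-line path looks like this; the instance, volume and host names below are just placeholders, and the exact flags may differ between novaclient versions:

  # boot an instance from an existing RBD-backed Cinder volume
  nova boot --flavor m1.small --block-device-mapping vda=<volume-id>:::0 my-vm

  # live-migrate it to another compute node
  nova live-migration my-vm compute-02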

Regards
Darryl

On 07/12/13 09:41, Tom Verdaat wrote:

Hi Alex,


We're planning to deploy OpenStack Grizzly using KVM. I agree that running 
every VM directly from RBD devices would be preferable, but booting from 
volumes is not one of OpenStack's strengths, and configuring nova to make boot 
from volume the default method that works automatically is not really feasible 
yet.


So the alternative is to mount a shared filesystem on /var/lib/nova/instances 
of every compute node. Hence the RBD + OCFS2/GFS2 question.


Tom


P.S. Yes, I've read the rbd-openstack page 
<http://ceph.com/docs/master/rbd/rbd-openstack/>, which covers images and persistent 
volumes, not running instances, which is what my question is about.


2013/7/12 Alex Bligh <a...@alex.org.uk>
Tom,

On 11 Jul 2013, at 22:28, Tom Verdaat wrote:

> Actually I want my running VMs to all be stored on the same file system, so 
> we can use live migration to move them between hosts.
>
> QEMU is not going to help because we're not using it in our virtualization 
> solution.


Out of interest, what are you using in your virtualization solution? Most 
things (including modern Xen) seem to use Qemu for the back end. If your 
virtualization solution does not use qemu as a back end, you can use kernel RBD 
devices directly, which I think will give you better performance than OCFS2 on 
RBD devices.
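For example, something along these lines (pool and image names are just placeholders):

  rbd create data/squeeze --size 10240
  rbd map data/squeeze
  # the image then shows up as /dev/rbd/data/squeeze (or /dev/rbd0),
  # which you can hand to the hypervisor as an ordinary raw block device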

A

>
> 2013/7/11 Alex Bligh <a...@alex.org.uk>
>
> On 11 Jul 2013, at 19:25, Gilles Mocellin wrote:
>
> > Hello,
> >
> > Yes, you missed that qemu can use RBD volumes directly.
> > Look here :
> > http://ceph.com/docs/master/rbd/qemu-rbd/
> >
> > Create:
> > qemu-img create -f rbd rbd:data/squeeze 10G
> >
> > Use:
> >
> > qemu -m 1024 -drive format=raw,file=rbd:data/squeeze
>
> I don't think he did. As I read it he wants his VMs to all access the same 
> file system, and doesn't want to use CephFS.
>
> OCFS2 on RBD I suppose is a reasonable choice for that.
>
> --
> Alex Bligh
>
>
>
>


--
Alex Bligh

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
