Thanks for that clarification Josh, I had a small doubt you just alleviated :)
Razique Mahroua - Nuage & Co / razique.mahr...@gmail.com / Tel : +33 9 72 37 94 15
On 1 Feb 2013, at 00:36, Josh Durgin wrote:
Ceph has been officially production ready for block (rbd) and object
storage (radosgw) for a while. It's just the file system that isn't
ready yet:
http://ceph.com/docs/master/faq/#is-ceph-production-quality
Josh
On 01/31/2013 01:23 PM, Razique Mahroua wrote:
Speaking of which guys, anything particular stability-wise regarding Ceph within OpenStack? It's officially not production-ready, yet it's often the solution that comes up when we are looking for data clustering. GlusterFS... yea or nay?
+1 John
Since CephFS is not production ready, what you can do is map an RBD device
to each of your compute nodes and then mount it in
/var/lib/nova/instances. The downside of this is that you put way more
IOPS on a single device, since you only have one RBD per compute node
serving EVERY VM that ends up on that compute node.
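For illustration only, a minimal sketch of that setup on one compute node, assuming the kernel rbd module is loaded; the pool name "nova", image name "instances-compute01" and size are made-up examples:

    # create one RBD image per compute node (pool/image names and size are assumptions)
    rbd create nova/instances-compute01 --size 204800
    # map it through the kernel rbd module; the device appears as /dev/rbdX,
    # usually also symlinked under /dev/rbd/<pool>/<image> by the udev rules
    rbd map nova/instances-compute01
    # format it and mount it where nova-compute keeps its instance files
    mkfs.xfs /dev/rbd/nova/instances-compute01
    mount /dev/rbd/nova/instances-compute01 /var/lib/nova/instances

Because each node formats and mounts its own image, no shared-filesystem coordination is needed, but it also means all of a node's VM I/O funnels through that single device.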
On Thu, Jan 31, 2013 at 11:43 AM, Sébastien Han wrote:
Try to have a look at the boot-from-volume feature. Basically, the disk
base of your instance is an RBD volume from Ceph. Something will
remain in /var/lib/nova/instances, but it's only the KVM XML file.
http://ceph.com/docs/master/rbd/rbd-openstack/?highlight=openstack
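As a rough sketch of that flow (the exact client flags vary by OpenStack release, and the image ID, size and names below are placeholders, assuming cinder is already configured with the Ceph rbd volume driver):

    # create a bootable Cinder volume, backed by Ceph, from an existing Glance image
    cinder create --image-id <IMAGE_ID> --display-name boot-vol 20
    # boot from that volume; only the libvirt/KVM XML lands in /var/lib/nova/instances
    nova boot --flavor m1.small \
        --block-device-mapping vda=<VOLUME_ID>:::0 \
        rbd-backed-instance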
Cheers!
--
Regards,
Sébastien Han