Hey :)
So what was happening is that the FS worked fine on basic operations (those 
that don't involve much I/O load), but when it came to migrating instances, 
the FS drove the load on the server up and, all of a sudden, the server hung 
(kernel panic), and every time I had to reboot.
I've been able to reproduce that exact behaviour on three different servers 
(that had passed the iozone/bonnie++/vdbench tests without hanging).

Regarding Ceph RBD, if I'm not mistaken, you cannot use it as a shared disk 
by itself, since you still need to put a FS on top of it.
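
For what it's worth, here's a rough sketch of what I mean (image name, pool, 
and mount point are made up for illustration):

```shell
# Create a 10 GB image in the default "rbd" pool (name is illustrative)
rbd create mydisk --size 10240
# Map it to a local block device, e.g. /dev/rbd0
rbd map mydisk
# RBD hands you a raw block device, so you still need a filesystem on top...
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt/mydisk
# ...and ext4 is not cluster-aware, so mounting the same image from two
# hosts at once would corrupt it -- hence RBD alone is not a shared disk.
```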


On November 4, 2013 at 2:02:48, Maciej Gałkiewicz (mac...@shellycloud.com) 
wrote:

On 4 November 2013 10:46, Julien De Freitas <bada.b...@outlook.com> wrote:
Hi Razique,

Thanks for the link!
I read the full discussion and, as I thought, there is no real perfect 
solution so far.
I think I'll continue to use Nexenta because it's a great solution, and I'll 
set up multi-backend storage for Cinder in order to test Ceph block storage.
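
(For reference, a multi-backend cinder.conf looks roughly like this -- the 
backend names, the Nexenta driver path, and the pool name are illustrative, 
so check them against your own setup:)

```ini
[DEFAULT]
enabled_backends = nexenta-1,ceph-rbd

[nexenta-1]
volume_driver = cinder.volume.drivers.nexenta.volume.NexentaDriver
volume_backend_name = NEXENTA

[ceph-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
volume_backend_name = CEPH_RBD
```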
For metadata storage I'll do some tests with CephFS, because "not production 
ready" can mean a lot and nothing at the same time.
In your previous mail you said "the FS kept hanging on high load, so I 
considered it to be pretty unstable for OpenStack", but if it only hung under 
high load, shouldn't it be considered pretty stable otherwise? What was the 
load? Can you share more details with us?
It's a pity that we could not find any neutral heavy-load test out there.

If you consider Ceph you should take a look at Ceph RBD not CephFS 
(http://ceph.com/docs/master/rbd/rbd/). It is stable and works great for me.

--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 0000440358 REGON: 101504426
-- 
Razique Mahroua

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
