>>Thank you for your quick response! Okay, I see. Is there any preferred
>>clustered FS in this case? OCFS2 or GFS?

Hi, I'm using OCFS2 on top of RBD in production and it works fine. (You need
to disable writeback caching, i.e. rbd_cache.)
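
For reference, a minimal sketch of what disabling the cache can look like in
ceph.conf on the client side (note that rbd_cache only applies to librbd
clients; the kernel rbd driver does not use this cache):

    [client]
    rbd cache = false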

----- Original Message -----

From: "Mihály Árva-Tóth" <mihaly.arva-t...@virtual-call-center.eu>
To: "Sean Redmond" <sean.redm...@ukfast.co.uk>
Cc: ceph-users@lists.ceph.com
Sent: Monday, 20 October 2014 11:12:30
Subject: Re: [ceph-users] Same rbd mount from multiple servers

Hi Sean,

Thank you for your quick response! Okay, I see. Is there any preferred
clustered FS in this case? OCFS2 or GFS?

Thanks, 
Mihaly 


2014-10-20 10:36 GMT+02:00 Sean Redmond <sean.redm...@ukfast.co.uk>:

Hi Mihaly,

To my understanding, you cannot mount an ext4 file system on more than one
server at the same time: ext4 is not cluster-aware, so each node caches data
and metadata independently and never sees the others' writes (worse,
concurrent writers will corrupt the filesystem). You would need to use a
clustered file system.
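
As a rough sketch, the clustered-FS route with OCFS2 on top of RBD might look
like the following (hypothetical image and mount point names; assumes the
o2cb cluster stack is already configured and running on every node):

    # on every node: map the same RBD image
    rbd map rbd/shared-img

    # on ONE node only: format with enough node slots for all mounters
    mkfs.ocfs2 -N 3 -L shared /dev/rbd0

    # on every node: mount the shared device
    mount -t ocfs2 /dev/rbd0 /mnt/shared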

Thanks 

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Mihály Árva-Tóth
Sent: 20 October 2014 09:34 
To: ceph-users@lists.ceph.com 
Subject: [ceph-users] Same rbd mount from multiple servers

Hello,
I made a 2 GB RBD on Ceph and mounted it on three separate servers. I
followed this:

http://ceph.com/docs/master/start/quick-rbd/

Setup, mkfs (ext4) and mount all finished successfully, but each node sees
what looks like a different rbd volume. :-o If I copy a 100 MB file onto the
test1 node, I don't see the file on the test2 and test3 nodes. I'm using
Ubuntu 14.04 x64 with the latest stable Ceph (0.80.7).
What's wrong?
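
For context, the steps described above presumably amount to something like
this (hypothetical image name, following the quick-rbd guide linked above):

    rbd create test-img --size 2048    # 2 GB image (size is in MB)
    rbd map test-img                   # exposes e.g. /dev/rbd0
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /mnt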

Thank you, 
Mihaly 


_______________________________________________ 
ceph-users mailing list 
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
