Thank you very much for your prompt response…

So basically I can't use a cluster-aware tool like Microsoft CSV on an RBD, is
that correct?

What I am trying to understand is this: can I have two physical hosts (maybe
Dell PowerEdge 2950s)

* host1 with VM #0-10
* host2 with VM #10-20

with both of these hosts accessing one big LUN or, in this case, a Ceph RBD?

Can host1 fail all of its VMs over to host2 in case that machine has trouble, and
still make its resources available to my users? This is very important to us if
we really want to explore this new avenue with Ceph.

Thank you,

Yao Mensah
Systems Administrator II
OLS Servers
yao.men...@usdoj.gov
(202) 307 0354
MCITP
MCSE NT4.0 / 2000-2003
A+

From: Dave Spano [mailto:dsp...@optogenics.com]
Sent: Thursday, May 23, 2013 1:19 PM
To: Mensah, Yao (CIV)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] FW: About RBD

Unless something has changed, each RBD needs to be attached to one host at a time,
like an iSCSI LUN.
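For context, the one-host-at-a-time workflow with the `rbd` CLI looks roughly like the transcript below (the pool and image names are hypothetical, and this assumes a working cluster and the rbd kernel module on the host):

```console
# On host1: map the image as a local block device
$ rbd map mypool/vm-disk-01      # exposes the image, e.g. as /dev/rbd0
$ rbd showmapped                 # list images currently mapped on this host

# ...use /dev/rbd0 locally (format it, mount it, back a VM with it)...

# Before host2 maps the same image, unmap it here first:
$ rbd unmap /dev/rbd0
```

As with a shared iSCSI LUN, mounting a non-cluster-aware filesystem on the same image from two hosts at once will corrupt it, which is why the unmap-then-map handoff matters.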
Dave Spano
Optogenics


________________________________
From: "Yao Mensah (CIV)" <yao.men...@usdoj.gov>
To: ceph-users@lists.ceph.com
Sent: Thursday, May 23, 2013 1:10:53 PM
Subject: [ceph-users] FW: About RBD

FYI

From: Mensah, Yao (CIV)
Sent: Wednesday, May 22, 2013 5:59 PM
To: 'i...@inktank.com'
Subject: About RBD

Hello,

I was doing some reading on your web site about Ceph and what it is capable of. I
have one question, and maybe you can help me with this:

Can a Ceph RBD be used by two physical hosts at the same time? Or, is Ceph RBD
CSV (Clustered Shared Volumes) aware?

Thank you,

Yao Mensah
Systems Administrator II
OLS Servers


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
