Hello Igor,

Thanks for getting back to me.

>> You can map it to multiple hosts, but before doing dd if=/dev/zero
>> of=/media/tmp/test you have created file system, right?

Correct, I can't mount /dev/rbd/rbd/test-device without first creating a
file system on the device.  I'm creating an ext4 filesystem with the -m0
flag (0% of blocks reserved for the superuser).  I did not create a
partition on the device, instead opting to format the whole device (this
is how the rbd guide does it).
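
For reference, this is roughly the sequence I ran, following the rbd
quick start (exact flags from memory; the format step only on the first
host):

    rbd create test-device --size 20480       # 20GB image in the default 'rbd' pool
    rbd map test-device                       # shows up as /dev/rbd/rbd/test-device
    mkfs.ext4 -m0 /dev/rbd/rbd/test-device    # first host only; 0% reserved blocks
    mkdir -p /media/tmp
    mount /dev/rbd/rbd/test-device /media/tmp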

>> This file system MUST be distributed, thus multiple hosts can read and
>> write files on it.

I'm not sure I understand.  Are you saying I need to use some OTHER
network filesystem (e.g. GlusterFS) -on top- of Ceph?  Or are you saying
my ext4 file system should be distributed, so that multiple hosts should
be able to read and write to/from the ext4 filesystem?

If it's the former, this seems counterintuitive, but if that's what needs
to happen I guess I'll make it so.  If it's the latter, then something is
not right, as my hosts are not all able to read and write to the ext4
filesystem.  I'm not sure how else I can test / prove it other than
writing a file from each of my hosts and noting that the file is not
visible from all of my hosts; can you provide some further
troubleshooting steps?  Could it be the filesystem type?  Do I need to
use xfs or btrfs if I want to map the block device to multiple hosts?
Does Ceph not work as a client where it is running as a service?
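
For what it's worth, this is roughly how I've been driving the test from
one node (assumes passwordless ssh to the other hosts):

    # write a 1MB marker file from every host...
    for h in kitt red6 shepard; do
        ssh $h 'dd if=/dev/zero of=/media/tmp/test-$(hostname) bs=1024 count=1024'
    done
    # ...then compare what each host sees in /media/tmp
    for h in kitt red6 shepard; do
        echo "--- $h ---"; ssh $h 'ls -l /media/tmp'
    done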

Thanks for your help,
Jon A
On May 29, 2013 12:47 AM, "Igor Laskovy" <igor.lask...@gmail.com> wrote:

> Hi Jon, I already mentioned multiple times here - RBD just a block device.
> You can map it to multiple hosts, but before doing  dd if=/dev/zero
> of=/media/tmp/test you have created file system, right? This file system
> MUST be distributed, thus multiple hosts can read and write files on it.
>
>
> On Wed, May 29, 2013 at 4:24 AM, Jon <three1...@gmail.com> wrote:
>
>> Hello,
>>
>> I would like to mount a single RBD on multiple hosts to be able to share
>> the block device.
>> Is this possible?  I understand that it's not possible to share data
>> between the different interfaces, e.g. CephFS and RBDs, but I don't see
>> anywhere it's declared that sharing an RBD between hosts is or is not
>> possible.
>>
>> I have followed the instructions on the github page of ceph-deploy (I was
>> following the 5 minute quick start
>> http://ceph.com/docs/next/start/quick-start/ but when I got to the step
>> with mkcephfs it erred out and pointed me to the github page), as I only
>> have three servers I am running the osds and monitors on all of the hosts,
>> I realize this isn't ideal but I'm hoping it will work for testing purposes.
>>
>> This is what my cluster looks like:
>>
>> >> root@red6:~# ceph -s
>> >>    health HEALTH_OK
>> >>    monmap e2: 3 mons at {kitt=192.168.0.35:6789/0,red6=192.168.0.40:6789/0,shepard=192.168.0.2:6789/0}, election epoch 10, quorum 0,1,2 kitt,red6,shepard
>> >>    osdmap e29: 5 osds: 5 up, 5 in
>> >>    pgmap v1692: 192 pgs: 192 active+clean; 19935 MB data, 40441 MB used, 2581 GB / 2620 GB avail; 73B/s rd, 0op/s
>> >>    mdsmap e1: 0/0/1 up
>>
>> To test, what I have done is created a 20GB RBD mapped it and mounted it
>> to /media/tmp on all the hosts in my cluster, so all of the hosts are also
>> clients.
>>
>> Then I use dd to create a 1MB file named test-$hostname
>>
>> >> dd if=/dev/zero of=/media/tmp/test-`hostname` bs=1024 count=1024;
>>
>> after the file is created, I wait for the writes to finish in `ceph -w`,
>> then on each host when I list /media/tmp I see the results of
>> /media/tmp/test-`hostname`, if I unmount then remount the RBD, I get mixed
>> results.  Typically, I see the file that was created on the host that is at
>> the front of the line in the quorum. e.g. the test I did while typing this
>> e-mail "kitt" is listed first quorum 0,1,2 kitt,red6,shepard, this is the
>> file I see created when I unmount then mount the rbd on shepard.
>>
>> Where this is going is, I would like to use CEPH as my back end storage
>> solution for my virtualization cluster.  The general idea is the
>> hypervisors will all have a shared mountpoint that holds images and vms so
>> vms can easily be migrated between hypervisors.  Actually, I was thinking I
>> would create one mountpoint each for images and vms for performance
>> reasons, am I likely to see performance gains using more smaller RBDs vs
>> fewer larger RBDs?
>>
>> Thanks for any feedback,
>> Jon A
>>
>
>
> --
> Igor Laskovy
> facebook.com/igor.laskovy
> studiogrizzly.com
>