Chad,

I'm sure others can speak to performance better than I can. However, the
kernel RBD client was only vulnerable to deadlocks when you tried to map
and mount a block device on the same host that was running Ceph server
daemons like monitors or OSDs. It was a kernel issue, not a Ceph issue.
Otherwise, the reason to use the kernel module is that you intend to map
a block device on your local host and use it directly. There's nothing
particularly special about that, as you are just mounting and using a
block device. The cool thing about Ceph block devices is that they are
thin-provisioned and striped across the cluster, so you could do
something like create and mount a 100TB block device and get good
performance, even though no 100TB physical hard drive exists at this
point.
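
Just to make the thin provisioning concrete, here's a rough sketch using
the librbd Python bindings (the pool name 'rbd', the image name
'bigimage', and the /etc/ceph/ceph.conf path are just placeholders for
illustration). You could then map the image with the kernel client and
format and mount it like any other block device:

    import rados
    import rbd

    # Connect to the cluster; the conf path and pool name are placeholders.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    # Create a 100TB image. It is thin-provisioned, so no space is
    # consumed until data is actually written.
    rbd.RBD().create(ioctx, 'bigimage', 100 * 1024 ** 4)

    ioctx.close()
    cluster.shutdown()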

Using librbd, as you've pointed out, doesn't run afoul of potential Linux
kernel deadlocks. However, you normally wouldn't encounter that situation
in a production cluster anyway, since you'd likely never use the same
host for both client and server components. The benefit of using librbd,
among other things, is that you can use it with virtual machines. That's
actually a big part of how we provide block devices to cloud computing
platforms like OpenStack.
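
As a rough illustration of the difference, a librbd client talks to the
cluster directly and never creates a /dev/rbdX device on the host, which
is essentially what QEMU does on a VM's behalf. A small sketch with the
Python bindings (pool and image names are placeholders, and the image is
assumed to already exist):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')      # placeholder pool name

    # Read and write the image directly over the network; no kernel
    # block device or local mount is involved.
    image = rbd.Image(ioctx, 'vm-disk')    # placeholder image name
    image.write(b'hello from librbd', 0)
    print(image.read(0, 17))
    image.close()

    ioctx.close()
    cluster.shutdown()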

Virtualization enables lots of scenarios. You can run multiple virtual
machines on a host, and mount block devices within those virtual machines.
A compelling scenario for cloud computing, however, is to use RBD-based
images to spin up virtual machines. In other words, you create a "golden
image," snapshot it, and then use copy-on-write cloning of that snapshot
to bring up new VMs quickly.

OS images are often quite large, so downloading one each time you launch
a VM would be slow. If you import the image once, snapshot it, and then
clone the snapshot for each new VM, that's dramatically faster.
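
Here's a rough sketch of that workflow with the librbd Python bindings
(assuming a format 2 image named 'golden' in the 'rbd' pool that already
contains your installed OS; all names are placeholders):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    # Snapshot the golden image and protect the snapshot so it can be
    # cloned. Cloning requires a format 2 parent image.
    golden = rbd.Image(ioctx, 'golden')
    golden.create_snap('base')
    golden.protect_snap('base')
    golden.close()

    # Copy-on-write clone: the new VM disk is available immediately and
    # only stores the blocks it changes relative to the snapshot.
    rbd.RBD().clone(ioctx, 'golden', 'base', ioctx, 'vm-disk-1',
                    features=rbd.RBD_FEATURE_LAYERING)

    ioctx.close()
    cluster.shutdown()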

See: http://ceph.com/docs/master/rbd/rbd-snapshot/ for details on
snapshotting.

See: http://ceph.com/docs/master/rbd/rbd-openstack/ and notice that Ceph
block devices are generally fed to the cloud computing platform via QEMU
and libvirt.

I hope this helps.


John





On Fri, Jun 20, 2014 at 6:58 AM, Chad Seys <cws...@physics.wisc.edu> wrote:

> Hi All,
>   What are the pros and cons of running a virtual machine (with qemu-kvm)
> whose image is accessed via librbd or by mounting /dev/rbdX ?
>   I've heard that the librbd method has the advantage of not being
> vulnerable
> to deadlocks due to memory allocation problems. ?
>   Would one also benefit if using backported librbd to older kernels?  E.g.
> 0.80 ceph with running on a 3.2.51 kernel should have bug fixes that the
> rbd
> module would not. ?
>   Would one expect performance differences between librbd and module rbd?
>
> Thanks!
> Chad.
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
John Wilkins
Senior Technical Writer
Inktank
john.wilk...@inktank.com
(415) 425-9599
http://inktank.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
