Re: [ceph-users] How to avoid kernel conflicts

2016-05-09 Thread K.C. Wong
The systems on which the `rbd map` hangs problem occurred are definitely not under memory stress. I don't believe they are doing a lot of disk I/O either. Here's the basic set-up:
* all nodes in the "data-plane" are identical
* they each host an OSD instance, sharing one of the drives
* I'm runni

Re: [ceph-users] How to avoid kernel conflicts

2016-05-09 Thread Ilya Dryomov
On Mon, May 9, 2016 at 12:19 AM, K.C. Wong wrote:
>
>> As the tip said, you should not use rbd via kernel module on an OSD host
>>
>> However, using it with userspace code (librbd etc, as in kvm) is fine
>>
>> Generally, you should not have both:
>> - "server" in userspace
>> - "client" in kernels

Re: [ceph-users] How to avoid kernel conflicts

2016-05-08 Thread K.C. Wong
> As the tip said, you should not use rbd via kernel module on an OSD host
>
> However, using it with userspace code (librbd etc, as in kvm) is fine
>
> Generally, you should not have both:
> - "server" in userspace
> - "client" in kernelspace

If `librbd` would help avoid this problem, then swi

Re: [ceph-users] How to avoid kernel conflicts

2016-05-07 Thread ceph
As the tip said, you should not use rbd via kernel module on an OSD host

However, using it with userspace code (librbd etc, as in kvm) is fine

Generally, you should not have both:
- "server" in userspace
- "client" in kernelspace

On 07/05/2016 22:13, K.C. Wong wrote:
> Hi,
>
> I saw this tip i
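The server/client split described above can be sketched with two invocations (a hedged sketch, not runnable without a live cluster; the pool/image name `rbd/myimage` is illustrative, and a working ceph.conf plus keyring is assumed):

```shell
# Kernel-space client: maps the image through the krbd kernel module.
# Avoid this on a host that also runs OSD daemons: under memory
# pressure, page writeback to the rbd block device can block waiting
# on the local OSD, which itself needs memory to make progress,
# risking a deadlock.
rbd map rbd/myimage            # exposes the image as /dev/rbd0

# Userspace client: QEMU/KVM opens the same image through librbd,
# entirely in userspace, which is safe to run alongside OSD daemons.
qemu-system-x86_64 -drive format=raw,file=rbd:rbd/myimage
```

The key point is that `rbd map` puts the client inside the kernel of the OSD host, while the QEMU `rbd:` drive keeps both client and server in userspace.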

[ceph-users] How to avoid kernel conflicts

2016-05-07 Thread K.C. Wong
Hi,

I saw this tip in the troubleshooting section:

DO NOT mount kernel clients directly on the same node as your Ceph Storage Cluster, because kernel conflicts can arise. However, you can mount kernel clients within virtual machines (VMs) on a single node.

Does this mean having a converged depl
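One common way to follow the quoted tip on a converged node is to attach RBD images to VMs through libvirt's network-disk support, which uses librbd in userspace rather than the kernel client. A minimal sketch of such a libvirt disk definition (pool/image name, monitor hostname, auth username, and secret UUID are all illustrative placeholders, assuming cephx authentication is enabled):

```xml
<!-- Hypothetical libvirt <disk> element: QEMU opens the image via
     librbd (userspace), so no kernel rbd client runs on the host. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd/vm-disk-1'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='35a5ec53-5bb1-4f7b-aa1b-b27a90a0e2a3'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```

Inside the guest, the disk appears as an ordinary virtio block device, so the guest kernel's block layer never touches Ceph directly on the host.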