The systems on which the `rbd map` hang occurred are
definitely not under memory stress. I don't believe they
are doing a lot of disk I/O either. Here's the basic set-up:
* all nodes in the "data-plane" are identical
* they each host an OSD instance, sharing one of the drives
* I'm runni
On Mon, May 9, 2016 at 12:19 AM, K.C. Wong wrote:
>
>> As the tip said, you should not use rbd via kernel module on an OSD host
>>
>> However, using it with userspace code (librbd etc, as in kvm) is fine
>>
>> Generally, you should not have both:
>> - "server" in userspace
>> - "client" in kernels
>
> If `librbd` would help avoid this problem, then swi
As the tip said, you should not use rbd via kernel module on an OSD host
However, using it with userspace code (librbd etc, as in kvm) is fine
Generally, you should not have both:
- "server" in userspace
- "client" in kernelspace
On 07/05/2016 22:13, K.C. Wong wrote:
> Hi,
>
> I saw this tip in the troubleshooting section:
Hi,
I saw this tip in the troubleshooting section:
DO NOT mount kernel clients directly on the same node as your Ceph Storage
Cluster, because kernel conflicts can arise. However, you can mount kernel
clients within virtual machines (VMs) on a single node.
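If I'm reading that right, the supported pattern is to let qemu talk to
the cluster through librbd and do any mounting inside the guest. As a
sketch with the libvirt Python bindings (the domain name, monitor host,
and image name here are made up, and I've left out the cephx auth block):

    import libvirt

    # Attach an RBD image to a running guest via qemu's userspace
    # rbd driver; the host kernel never maps the image.
    disk_xml = """
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='rbd/testimage'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('guest1')
    dom.attachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    conn.close()

The guest then sees /dev/vdb and can mkfs/mount it normally.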
Does this mean having a converged deployment is not supported?