Hi Somnath,
We are not using krbd. We installed Ceph RBD version 0.94.5 and will
update to the latest release and let you know. I have mentioned the version
details in the thread below.
Regards
Prabu
On Mon, 04 Jan 2016 12:28:37 +0530 Somnath Roy
wrote
I doubt that the rbd driver supports the SCSI reservations needed to mount the
same rbd across multiple clients with OCFS.
Generally, the underlying device (here, rbd) should have SCSI reservation
support for a cluster file system.
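(For reference, a rough way to check whether a block device accepts SCSI
persistent reservation commands is to probe it with sg_persist from sg3_utils.
This is only a sketch; /dev/rbd0 is an example path and the tool must be
installed.)

# Sketch: probe a block device for SCSI persistent reservation support.
# Assumes sg3_utils is installed; /dev/rbd0 is only an example path.
import subprocess

def reads_reservation_keys(device="/dev/rbd0"):
    # "sg_persist --in --read-keys <dev>" issues PERSISTENT RESERVE IN;
    # devices without PR support normally fail the command.
    result = subprocess.run(["sg_persist", "--in", "--read-keys", device],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)
    return result.returncode == 0

if __name__ == "__main__":
    print("PR commands accepted:", reads_reservation_keys())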
Thanks,
Srinivas
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On B
Hi Prabu,
Check the krbd (and libceph) version running in the kernel. You can try
building the latest krbd source for the 7.1 kernel if this is an option for you.
As I mentioned in my earlier mail, please isolate the problem the way I
suggested, if that seems reasonable to you.
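(For what it's worth, a quick sketch of that version check: it just prints the
running kernel release and the rbd/libceph module details via modinfo, assuming
modinfo is in PATH.)

# Sketch: report the running kernel and the rbd/libceph module details.
# Assumes modinfo is available in PATH (it is on stock RHEL/CentOS 7.x).
import platform
import subprocess

def show_krbd_info():
    print("kernel:", platform.release())
    for module in ("rbd", "libceph"):
        out = subprocess.run(["modinfo", module],
                             capture_output=True, text=True)
        print("--- %s ---" % module)
        print(out.stdout or out.stderr)

if __name__ == "__main__":
    show_krbd_info()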
Thanks & Regards
Somnath
Hi Somnath,
Please check the details below and let us know if you need any
other information.
Regards
Prabu
On Sat, 02 Jan 2016 08:47:05 +0530 gjprabu
wrote
Hi Somnath,
Please check the details and help me with this issue.
I could see that happening if you have size and min_size equal. Can you provide
some details about your setup? Peering should be pretty fast, and if min_size
< size then writes can happen without recovery.
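(As a concrete illustration, both values can be read with the standard
"ceph osd pool get" command; this is only a sketch and the pool name "rbd" is a
placeholder.)

# Sketch: compare a pool's size and min_size (placeholder pool name "rbd").
# Uses the standard "ceph osd pool get <pool> size|min_size" CLI.
import json
import subprocess

def pool_setting(pool, key):
    out = subprocess.check_output(
        ["ceph", "osd", "pool", "get", pool, key, "--format=json"])
    return json.loads(out)[key]

if __name__ == "__main__":
    pool = "rbd"  # placeholder; substitute your pool name
    size = pool_setting(pool, "size")
    min_size = pool_setting(pool, "min_size")
    print("%s: size=%d min_size=%d" % (pool, size, min_size))
    if min_size < size:
        print("writes can continue with up to", size - min_size, "replicas down")
    else:
        print("min_size == size: losing any replica blocks writes until recovery")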
Also, if you are using KVM, I suggest using librbd instead of krbd. If
something funky happens with
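(As an aside, the librbd path can also be exercised directly through the Python
bindings shipped with Ceph, python-rados / python-rbd; a minimal sketch with
placeholder pool and image names follows.)

# Sketch: open an RBD image through librbd (user space) instead of krbd.
# Needs the python-rados / python-rbd bindings; pool "rbd" and image
# "test-img" are placeholders.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")
    try:
        image = rbd.Image(ioctx, "test-img")
        try:
            print("image size (bytes):", image.size())
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()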
As far as I have read, as soon as an OSD is marked down, writes won't complete
because the PGs have to be peered and the objects have to be recovered before
they can be written. We got kernel hung-task timeouts on a bunch of VMs when a
Ceph node was taken down.
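(One way to watch that behaviour, only a sketch using the standard
"ceph pg stat" command; the JSON layout differs a bit between releases, is to
poll the PG state summary while an OSD is down.)

# Sketch: poll the PG state summary while an OSD is down, via
# "ceph pg stat --format=json" (the JSON layout varies by release).
import json
import subprocess
import time

def pg_state_summary():
    out = subprocess.check_output(["ceph", "pg", "stat", "--format=json"])
    data = json.loads(out)
    # Newer releases nest the counts under "num_pg_by_state".
    return data.get("num_pg_by_state", data)

if __name__ == "__main__":
    for _ in range(10):  # watch for roughly 50 seconds
        print(pg_state_summary())
        time.sleep(5)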
On Jan 4, 2016 11:04 AM, "Robert LeBlanc" wrote:
> I'm no
I'm not sure what you mean by transparent. Does the I/O hang forever when a
node goes down? If an OSD is taken down gracefully, there should be minimal
disruption of traffic. If you yank the network or power cables, it can take
about 30 seconds before the cluster considers the OSD down and marks it as such.
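(For planned maintenance the usual approach is to set the noout flag first so
nothing gets rebalanced while the node is offline; the detection window above
is governed by the OSD heartbeat settings, with osd_heartbeat_grace defaulting
to 20s. A rough sketch; the flag and option names are the standard ones, the
rest is illustrative.)

# Sketch: wrap planned node maintenance with the "noout" flag and show the
# heartbeat grace that controls how quickly an unresponsive OSD is marked down.
import subprocess

def run(*cmd):
    print("$", " ".join(cmd))
    subprocess.check_call(cmd)

def begin_maintenance():
    # Stop the cluster from marking OSDs "out" (and rebalancing) while
    # the node is deliberately offline.
    run("ceph", "osd", "set", "noout")

def end_maintenance():
    run("ceph", "osd", "unset", "noout")

def show_heartbeat_grace(osd_id=0):
    # osd_heartbeat_grace (default 20s) is roughly the window before peers
    # report an unresponsive OSD and the monitors mark it down.
    # Must be run on the host where osd.<id> lives (admin socket).
    run("ceph", "daemon", "osd.%d" % osd_id, "config", "get",
        "osd_heartbeat_grace")

if __name__ == "__main__":
    show_heartbeat_grace()
    begin_maintenance()
    # ... perform the node maintenance here ...
    end_maintenance()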
Robert
Hi
We are building our private cloud. We have decided to use Ceph to provide
features like EBS. How can we make it transparent for the VMs when one Ceph
node goes down? When one Ceph node goes down we will lose a set of OSDs, and
thereby a set of PGs has to be recovered. Clients' read and write
m