Re: [ceph-users] Luminous radosgw hangs after a few hours

2017-07-24 Thread Vaibhav Bhembre
I am seeing the same issue after upgrading from Jewel to Luminous v12.1.0. I am not using Keystone or OpenStack either, and my radosgw daemon hangs as well; I have to restart it to resume processing. 2017-07-24 00:23:33.057401 7f196096a700 0 ERROR: keystone revocation processing returned error r=-22 …
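A quick way to confirm what the gateway actually thinks its Keystone settings are is to query its admin socket; the socket path and client name below are placeholders for your deployment:

  # Placeholder .asok path; point this at the radosgw client's actual admin socket.
  ceph --admin-daemon /var/run/ceph/ceph-client.rgw.gateway1.asok config show | grep -i keystone
  # rgw_keystone_url should be empty on a non-Keystone setup, and
  # rgw_keystone_revocation_interval shows how often the revocation thread wakes up.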

Re: [ceph-users] removing cluster name support

2017-06-08 Thread Vaibhav Bhembre
We have an internal management service that sits a layer above multiple Ceph clusters. It needs a way to differentiate between, and connect separately to, each of those clusters. Presently, making that distinction is relatively easy, since we create those connections based on /etc/conf/…
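For illustration, this is roughly what that per-cluster distinction looks like from a client today; the cluster names here are made up:

  # Hypothetical cluster names; each one resolves to /etc/ceph/<name>.conf and its keyring.
  ceph --cluster east status
  ceph --cluster west status
  # The same connection can be made without cluster-name support by pointing at the files explicitly:
  ceph --conf /etc/ceph/east.conf --keyring /etc/ceph/east.client.admin.keyring status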

Re: [ceph-users] RBD Mirror: Unable to re-bootstrap mirror daemons

2017-03-19 Thread Vaibhav Bhembre
er "ceph3" using the id "rbd-mirror-remote". If you have > changed the name of the id, you can update it via the rbd CLI: "rbd > --cluster --pool mirror pool peer set > 6a98a0eb-869d-4b4f-8bc7-da4bbe66e5aa client " > > On Sat, Mar 18, 2017 at 9:29 AM, Va

[ceph-users] RBD Mirror: Unable to re-bootstrap mirror daemons

2017-03-18 Thread Vaibhav Bhembre
I had a working setup initially in my test clusters, with 2 daemons running on the MON nodes of each cluster. I took them down, and uninstalled and purged rbd-mirror (apt-get remove and apt-get purge) before installing them again on the respective clusters. They now refuse to come back up or talk to each other…
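When the daemons refuse to reconnect after a purge and reinstall, it is worth checking what peer and auth state actually survived before re-bootstrapping; the pool, user, and unit names below are assumptions:

  # Assumed pool "rbd" and a systemd-managed daemon; adjust to your deployment.
  rbd mirror pool info rbd                  # mirroring mode and configured peers
  rbd mirror pool status rbd --verbose      # per-image replication state
  ceph auth list | grep rbd-mirror          # did the daemon's cephx user survive the purge?
  systemctl status ceph-rbd-mirror@admin    # daemon state after the reinstall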

Re: [ceph-users] Ceph RBD object-map and discard in VM

2016-07-18 Thread Vaibhav Bhembre
…appears that the rbd-replay-prep tool doesn't record/translate discard events. The change sounds good to me -- but it would also need to be made in librados and ceph-osd, since I'm sure they would have the same issue. On Sat, Jul 16, 2016 at 8:48 PM, Vaibhav Bhembre wrote: I was finally …

Re: [ceph-users] Ceph RBD object-map and discard in VM

2016-07-16 Thread Vaibhav Bhembre
…be fine. Thanks! On 07/15, Vaibhav Bhembre wrote: > I enabled rbd_tracing on the HV and restarted the guest so as to pick up the new configuration. The change in the value of *rbd_tracing* was confirmed from the admin socket. I am still unable to see any trace. lsof -p does not show…
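For reference, the two checks mentioned there can be done roughly like this; the socket path and QEMU pid are placeholders:

  # Placeholder .asok path and pid; use the guest's librbd admin socket and the
  # QEMU process backing that VM.
  ceph --admin-daemon /var/run/ceph/guests/ceph-client.admin.12345.asok config get rbd_tracing
  lsof -p 12345 | grep librbd_tp     # the tracepoint module should appear once tracing is active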

Re: [ceph-users] Ceph RBD object-map and discard in VM

2016-07-15 Thread Vaibhav Bhembre
…causes librbd.so to dynamically load the tracing module librbd_tp.so (which has linkage to LTTng-UST). On Fri, Jul 15, 2016 at 1:47 PM, Vaibhav Bhembre wrote: I followed the steps mentioned in [1] but somehow I am unable to see any traces to continue with its step 2. There are no errors seen when…
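Because the tracepoint provider is loaded at runtime rather than linked in, it will not show up as a dependency of librbd.so itself; a rough way to see the difference (library paths are assumptions for a typical Ubuntu layout):

  # librbd.so carries no direct LTTng-UST dependency...
  ldd /usr/lib/x86_64-linux-gnu/librbd.so.1 | grep -i lttng || echo "no direct linkage"
  # ...while the separately loaded tracepoint module does.
  ldd /usr/lib/x86_64-linux-gnu/librbd_tp.so* | grep -i lttng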

Re: [ceph-users] Ceph RBD object-map and discard in VM

2016-07-15 Thread Vaibhav Bhembre
…instructions here [2] but would only need to perform steps 1 and 2 (attaching the output from step 2 to the ticket). Thanks, [1] http://tracker.ceph.com/issues/16689 [2] http://docs.ceph.com/docs/master/rbd/rbd-replay/ On Thu, Jul 14, 2016 at 2:55 PM, Vaibhav Bhembre wrote: We have been observing this similar behavior…
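For completeness, steps 1 and 2 from [2] amount to capturing an LTTng trace of the librbd workload and converting it into a replay file; the trace directory name is arbitrary:

  # Step 1: record librbd tracepoints while the workload (e.g. the discard) runs.
  mkdir -p traces
  lttng create -o traces librbd
  lttng enable-event -u 'librbd:*'
  lttng add-context -u -t pthread_id
  lttng start
  #   ... run the workload in the guest ...
  lttng stop

  # Step 2: convert the raw trace into a replay file to attach to the ticket.
  rbd-replay-prep traces/ust/uid/*/* replay.bin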

Re: [ceph-users] Ceph RBD object-map and discard in VM

2016-07-14 Thread Vaibhav Bhembre
We have been observing this similar behavior. Usually it is the case where we create a new rbd image, expose it to the guest, and perform any operation that issues a discard to the device. A typical command that's first run on a given device is mkfs, usually with discard on. # time mkfs.xfs -s siz…
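When reproducing this, it can help to separate the discard cost from the rest of the format, since mkfs.xfs can skip its initial discard pass; the device and image names below are placeholders:

  # /dev/vdb and rbd/test-image are placeholders for the RBD-backed device and image.
  time mkfs.xfs -f /dev/vdb         # default behaviour: discards the whole device first
  time mkfs.xfs -f -K /dev/vdb      # -K skips the discard pass, for comparison
  rbd info rbd/test-image | grep features    # confirm whether object-map is enabled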