[ceph-users] rados_aio_cancel

2015-11-15 Thread min fang
Is this function used to detach the rx buffer and complete the IO back to the caller? From the code, I think this function does not interact with the OSD or MON side, which means we just cancel the IO on the client side. Am I right? Thanks.

Re: [ceph-users] OSD:s failing out after upgrade to 9.2.0 on Ubuntu 14.04

2015-11-15 Thread Josef Johansson
CC the list as well. > On 15 Nov 2015, at 23:41, Josef Johansson wrote: > > Hi, > > So it’s just frozen at that point? > > You should definitely increase the logging and restart the OSD. I believe it’s > debug osd 20 and debug mon 20. > > A quick Google search brings up a case where UUID was crashing
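For reference, a minimal sketch of raising those debug levels, assuming osd.11 is the affected daemon and an Ubuntu 14.04 (upstart) install; injectargs takes effect immediately, while ceph.conf entries persist across the restart:

  # bump logging on the live daemons (20 is the maximum verbosity)
  ceph tell osd.11 injectargs '--debug-osd 20 --debug-ms 1'
  ceph tell mon.* injectargs '--debug-mon 20'
  # or persist the settings in /etc/ceph/ceph.conf under [osd] / [mon]:
  #   debug osd = 20
  #   debug mon = 20
  # then restart the OSD (upstart syntax on Ubuntu 14.04)
  restart ceph-osd id=11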

Re: [ceph-users] OSD:s failing out after upgrade to 9.2.0 on Ubuntu 14.04

2015-11-15 Thread Josef Johansson
Hi, Could you catch any segmentation faults in /var/log/ceph/ceph-osd.11.log? Regards, Josef > On 15 Nov 2015, at 23:06, Claes Sahlström wrote: > > Sorry to almost double post, I noticed that it seems like one mon is down, > but they do actually seem to be ok, the 11 that are in falls out an
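A quick way to check that log for a crash signature (a sketch; the log path is the one mentioned above):

  # look for segfaults, fatal signals or failed asserts in the OSD log
  grep -iE 'segv|signal|abort|assert' /var/log/ceph/ceph-osd.11.log | tail -n 20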

Re: [ceph-users] OSD:s failing out after upgrade to 9.2.0 on Ubuntu 14.04

2015-11-15 Thread Claes Sahlström
Sorry to almost double post. I noticed that it seems like one mon is down, but they do actually seem to be OK; the 11 OSDs that are in fall out and I am back at 7 healthy OSDs again: root@black:/var/lib/ceph/mon# ceph -s cluster ee8eae7a-5994-48bc-bd43-aa07639a543b health HEALTH_WARN
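To separate monitor problems from OSD flapping, something along these lines helps (a sketch, not from the original message):

  # summary of up/in OSDs and monitor quorum
  ceph osd stat
  ceph mon stat
  # per-OSD view showing exactly which daemons are down or out
  ceph osd tree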

[ceph-users] OSD:s failing out after upgrade to 9.2.0 on Ubuntu 14.04

2015-11-15 Thread Claes Sahlström
Hi, I have a problem that I hope is possible to solve... I upgraded to 9.2.0 a couple of days back and I missed this part: "If your systems already have a ceph user, upgrading the package will cause problems. We suggest you first remove or rename the existing 'ceph' user and 'ceph' group before upgr
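The release-note suggestion amounts to renaming (or removing) the pre-existing account before the 9.2.0 packages are installed; a minimal sketch, where 'ceph-old' is just a placeholder name:

  # rename the existing user and group so the package can create its own 'ceph'
  usermod  --login    ceph-old ceph
  groupmod --new-name ceph-old ceph
  # alternatively, remove them entirely with deluser / delgroup before upgrading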

Re: [ceph-users] pg stuck in remapped+peering for a long time

2015-11-15 Thread Peter Theobald
I spotted a section in the pg query about dirty objects, so I've looked into that. The Ceph documentation is very light on this, but I found the osd and pg repair commands. I issued the osd repair command and I have now reduced the number of unclean PGs. This is the output of ceph -s now cluste
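For completeness, the repair commands referred to are along these lines (a sketch; the pg id and osd id are placeholders):

  # scrub and repair a single placement group
  ceph pg repair 2.3f
  # ask an entire OSD to repair the PGs it holds
  ceph osd repair 11
  # watch the cluster log while the repair runs
  ceph -w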

Re: [ceph-users] pg stuck in remapped+peering for a long time

2015-11-15 Thread Peter Theobald
I still have the PGs stuck peering. I ran ceph pg n.nn query on a few of the PGs that are stuck. The ones that are just peering have a few entries in recovery_state -> past_intervals (example at end of message) and the ones that say remapped+peering have a long entry here. I don't know what the con
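A sketch of how the stuck PGs can be listed and then queried (the pg id below is a placeholder):

  # list PGs stuck in inactive states such as peering
  ceph pg dump_stuck inactive
  ceph health detail
  # dump the full state of one stuck PG, including recovery_state and past_intervals
  ceph pg 2.3f query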