I can confirm the latest packages upgrade fixes this issue.
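In case it helps anyone double-check that the fixed build is actually running (a minimal sketch, assuming a Debian/Ubuntu host and that osd.0 is a local OSD whose admin socket is reachable; adjust the id for your host):

    ceph --version                # version of the locally installed ceph packages/CLI
    ceph daemon osd.0 version     # version the running osd.0 daemon reports

The daemons only pick up the new binaries after a restart, so the second command is the one worth checking on each OSD host.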
On Dec 9, 2016, at 7:48 PM, "Reed Dier" <reed.d...@focusvq.com> wrote:

> I don’t think there is a graceful path to downgrade.
>
> There is a hot fix upstream, I believe. My understanding is that the build is being tested for release.
>
> Francois Lafont posted in the other thread:
>
> Begin forwarded message:
>
> From: Francois Lafont <francois.lafont.1...@gmail.com>
> Subject: Re: [ceph-users] 10.2.4 Jewel released
> Date: December 9, 2016 at 11:54:06 AM CST
> To: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
>
> On 12/09/2016 06:39 PM, Alex Evonosky wrote:
>> Sounds great. May I ask what procedure you did to upgrade?
>
> Of course. ;)
>
> It's here: https://shaman.ceph.com/repos/ceph/wip-msgr-jewel-fix2/
> (I think this link was pointed out by Greg Farnum or Sage Weil in a previous message).
>
> Personally I use Ubuntu Trusty, so the page above leads me to use this line in my "sources.list":
>
> deb http://3.chacra.ceph.com/r/ceph/wip-msgr-jewel-fix2/5d3c76c1c6e991649f0beedb80e6823606176d9e/ubuntu/trusty/flavors/default/ trusty main
>
> And after that, "apt-get update && apt-get upgrade" etc.
>
> This is obviously geared towards Ubuntu/Debian, though I'd assume there's an RPM of the same build accessible.
>
> Reed
>
> On Dec 9, 2016, at 4:43 PM, lewis.geo...@innoscale.net wrote:
>
> Hi Reed,
> Yes, this was just installed yesterday and that is the version. I just retested, and the load starts to climb after exactly 15 minutes.
>
> So, just like Diego, do you know if there is a fix for this yet and when it might be available on the repo? Should I try to install the prior minor release version for now?
>
> Thank you for the information.
>
> Have a good day,
>
> Lewis George
>
> From: "Diego Castro" <diego.cas...@getupcloud.com>
> Sent: Friday, December 9, 2016 2:26 PM
> To: "Reed Dier" <reed.d...@focusvq.com>
> Cc: lewis.geo...@innoscale.net, ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] High load on OSD processes
>
> Same here, is there any ETA to publish CentOS packages?
>
> ---
> Diego Castro / The CloudFather
> GetupCloud.com <http://getupcloud.com> - Eliminamos a Gravidade
>
> 2016-12-09 18:59 GMT-03:00 Reed Dier <reed.d...@focusvq.com>:
>>
>> Assuming you deployed within the last 48 hours, I'm going to bet you are using v10.2.4, which has an issue that causes high CPU utilization.
>>
>> You should see a large ramp-up in load average after exactly 15 minutes.
>>
>> See the mailing list thread here:
>> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg34390.html
>>
>> Reed
>>
>> On Dec 9, 2016, at 3:25 PM, lewis.geo...@innoscale.net wrote:
>>
>> Hello,
>> I am testing out a new node setup for us and I have configured a node as a single-node cluster. It has 24 OSDs. Everything looked okay during the initial build and I was able to run 'rados bench' on it just fine. However, if I just let the cluster sit and run for a few minutes without anything happening, the load starts to go up quickly. Each OSD ends up using 130% CPU, with the load on the box hitting 550.00. No operations are going on, and nothing shows up in the logs as happening or wrong. If I restart the OSD processes, the load stays down for a few minutes (almost at nothing) and then just jumps back up again.
>>
>> Any idea what could cause this, or a direction I can look to check it?
>>
>> Have a good day,
>>
>> Lewis George
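To pull the pieces of the thread together, the Trusty upgrade path Francois describes boils down to roughly the following (a sketch only; the repo line is the one from the shaman page quoted above, the sources.list.d filename is just my own choice, and writing the line straight into /etc/apt/sources.list, as Francois did, works the same):

    # add the wip-msgr-jewel-fix2 dev repo from shaman/chacra (line taken from the page above)
    echo "deb http://3.chacra.ceph.com/r/ceph/wip-msgr-jewel-fix2/5d3c76c1c6e991649f0beedb80e6823606176d9e/ubuntu/trusty/flavors/default/ trusty main" \
        | sudo tee /etc/apt/sources.list.d/ceph-wip-msgr-jewel-fix2.list

    # pull and install the fixed packages
    sudo apt-get update && sudo apt-get upgrade

    # then restart the ceph daemons so they run the new binaries
    # (the exact command depends on your init system, e.g. upstart on Trusty)

CentOS users would presumably grab the matching RPM build from the same shaman page once it is published, per Reed's note above.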
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com