Hi,

maybe you want to have a look at the following thread:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/005368.html

It could be that you are suffering from the same problem.
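If it is the same issue, the backfill/recovery tunables Kyle mentions further down this thread can be tried either in ceph.conf or injected into the running OSDs. A minimal sketch (the values are just the ones suggested below; double-check the option names against the documentation for your Ceph release):

    # ceph.conf, takes effect on OSD restart
    [osd]
        osd recovery op priority = 2
        osd recovery max chunk   = 8388608

    # or inject into all running OSDs without a restart
    ceph tell osd.* injectargs '--osd-recovery-op-priority 2 --osd-recovery-max-chunk 8388608'

osd max backfills is another knob that is often lowered for the same reason, but I would start with the two above and watch client latency while a node recovers.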
best regards,
Kurt

Rzk wrote:
> Hi all,
>
> I have the same problem, just curious:
> could it be caused by poor HDD performance?
> Read/write speed doesn't match the network speed?
>
> Currently I'm using desktop HDDs in my cluster.
>
> Rgrds,
> Rzk
>
> On Tue, Oct 29, 2013 at 6:22 AM, Kyle Bader <kyle.ba...@gmail.com> wrote:
>
> You can change some OSD tunables to lower the priority of backfills:
>
>     osd recovery max chunk: 8388608
>     osd recovery op priority: 2
>
> In general, a lower op priority means it will take longer for your
> placement groups to go from degraded to active+clean; the idea is to
> balance recovery time against not starving client requests. I've found 2
> to work well on our clusters, YMMV.
>
> On Mon, Oct 28, 2013 at 10:16 AM, Kevin Weiler
> <kevin.wei...@imc-chicago.com> wrote:
> > Hi all,
> >
> > We have a Ceph cluster that is being used as a backing store for several
> > VMs (Windows and Linux). We notice that when we reboot a node, the
> > cluster enters a degraded state (which is expected), but when it begins
> > to recover, it starts backfilling and kills the performance of our VMs.
> > The VMs run slowly, or not at all, and also seem to switch their Ceph
> > mounts to read-only. I was wondering two things:
> >
> > Shouldn't we be recovering instead of backfilling? It seems like
> > backfilling is a much more intensive operation.
> > Can we improve the recovery/backfill performance so that our VMs don't
> > go down when there is a problem with the cluster?
> >
> > --
> > Kevin Weiler
> > IT
> >
> > IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606
> > | http://imc-chicago.com/
> > Phone: +1 312-204-7439 | Fax: +1 312-244-3301 | E-Mail:
> > kevin.wei...@imc-chicago.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com