From: Kurt Bauer <kurt.ba...@univie.ac.at>
Date: Tuesday, November 5, 2013 2:52 PM
To: Kevin Weiler <kevin.wei...@imc-chicago.com>
Cc: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] ceph recovery killing vms
Kevin Weiler wrote:
> Thanks Kyle,
>
> What's the unit for osd recovery max chunk?
Have a look at
http://ceph.com/docs/master/rados/configuration/osd-config-ref/ where
all the possible OSD config options are described, especially have a
look at the backfilling and recovery sections.
>
> Also, how do I find out what my current values are for these osd options?
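A quick way to check the values a running OSD actually uses is its admin
socket; a minimal sketch, assuming the default socket path and an OSD with
id 0 on the local host:

    # dump every config value the daemon is currently running with
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show

    # or query a single option by name
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_recovery_max_chunk

As for the unit: osd recovery max chunk is a size in bytes, so the default
8388608 is 8 MB.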
Thanks Kyle,
What's the unit for osd recovery max chunk?
Also, how do I find out what my current values are for these osd options?
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL
60606 | http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax: +1 312-24
[ceph-users] ceph recovery killing vms
Thanks Guys,
After testing it in a dev server, I have implemented the new config in the prod
system.
Next I will upgrade the hard drives. :)
Thanks again, all.
On Tue, Oct 29, 2013 at 11:32 PM, Kyle Bader <kyle.ba...@gmail.com> wrote:
> Recovering from a degraded state by copying existing replicas to other
> OSDs …
Recovering from a degraded state by copying existing replicas to other OSDs
is going to cause reads on existing replicas and writes to the new
locations. If you have slow media then this is going to be felt more
acutely. Tuning the backfill options I posted is one way to lessen the
impact, another …
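If you want to see that read/write churn while you tune, the standard
status commands should be enough (nothing cluster-specific assumed here):

    ceph -w              # stream cluster events, including recovery progress
    ceph health detail   # list the PGs that are currently degraded or backfilling
    ceph -s              # one-shot summary of PG states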
Hi,
Maybe you want to have a look at the following thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/005368.html
It could be that you suffer from the same problems.
Best regards,
Kurt
Rzk wrote:
> Hi all,
>
> I have the same problem, just curious.
> Could it be caused by poor HDD performance?
Hi all,
I have the same problem, just curious.
Could it be caused by poor HDD performance?
The read/write speed doesn't match the network speed.
Currently I'm using desktop HDDs in my cluster.
Rgrds,
Rzk
On Tue, Oct 29, 2013 at 6:22 AM, Kyle Bader wrote:
> You can change some OSD tunables to lower the priority of backfills: …
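One way to test that theory is to benchmark the disks directly and compare
against the network; a rough sketch, where the target file and the pool name
testpool are placeholders for your setup (remove the test file afterwards):

    # sequential write speed of one OSD's data disk, bypassing the page cache
    dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/ddtest bs=1M count=1024 oflag=direct

    # aggregate write throughput through RADOS for 60 seconds
    rados bench -p testpool 60 write

A desktop HDD typically manages on the order of 100 MB/s sequential, so a
handful of them can easily be outrun by the network, and the random I/O of
recovery will be far slower still.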
You can change some OSD tunables to lower the priority of backfills:
osd recovery max chunk: 8388608
osd recovery op priority: 2
In general a lower op priority means it will take longer for your
placement groups to go from degraded to active+clean; the idea is to
balance recovery …
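For reference, these can be set persistently in ceph.conf or injected into
running daemons without a restart; a sketch with the values above
(injectargs accepts the dashed form of the option names):

    # ceph.conf, picked up when the OSDs (re)start
    [osd]
    osd recovery max chunk = 8388608   ; bytes per recovery op, i.e. 8 MB
    osd recovery op priority = 2       ; relative to client ops; higher number = higher priority

    # or push the same values into all running OSDs at once
    ceph tell osd.* injectargs '--osd-recovery-max-chunk 8388608 --osd-recovery-op-priority 2'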
Hi all,
We have a ceph cluster that is being used as a backing store for several VMs
(Windows and Linux). We notice that when we reboot a node, the cluster enters a
degraded state (which is expected), but when it begins to recover, it starts
backfilling and it kills the performance of our VMs. The …