Based on [1] and my experience with Hammer, it is seconds. After
adjusting this back to the defaults and doing recovery in our
production cluster, I saw batches of recovery start every 64 seconds.
It initially started out nicely distributed, but over
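If you want to put these back to the defaults yourself, something like the
following should work online (64 and 512 are the defaults as I remember
them in Hammer; worth double-checking against config show on your version):

ceph tell osd.\* injectargs '--osd_backfill_scan_min 64'
ceph tell osd.\* injectargs '--osd_backfill_scan_max 512'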
This has nothing to do with the number of seconds between backfills. It is
actually the number of objects from a PG scanned during a single op
when the PG is backfilled. From what I can tell by looking at the source code,
the impact on performance comes from the fact that during this scanning the PG
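You can confirm what your OSDs are actually running with via the admin
socket, e.g. (osd.0 is just an example id, substitute one of yours):

ceph daemon osd.0 config show | grep backfill_scan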
I don't think this does what you think it does.
This will almost certainly starve the client of IO. This is the number
of seconds between backfills, not the number of objects being scanned
during a backfill. Setting these to higher values will m
To ease the load on clients you can change osd_backfill_scan_min and
osd_backfill_scan_max to 1. It's possible to change these online:
ceph tell osd.\* injectargs '--osd_backfill_scan_min 1'
ceph tell osd.\* injectargs '--osd_backfill_scan_max 1'
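Note that injectargs only changes the running daemons; if the new values
help, you will probably also want them in ceph.conf so they survive
restarts. A minimal sketch, assuming the stock /etc/ceph/ceph.conf layout:

[osd]
osd backfill scan min = 1
osd backfill scan max = 1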
2015-11-24 16:52 GMT+01:00 Joe Ryner :
> Hi,
>
> Last night
You upgraded (and restarted as appropriate) all the clients first, right?
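One quick way to check is to look at what is still connected to a monitor
via its admin socket (the mon id below is a placeholder, use one of yours):

ceph daemon mon.<id> sessions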
Warren Wang
On 11/24/15, 10:52 AM, "Joe Ryner" wrote:
>Hi,
>
>Last night I upgraded my cluster from CentOS 6.5 -> CentOS 7.1 and in the
>process upgraded from Emperor -> Firefly -> Hammer
>
>When I finished I changed
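(If it helps anyone following the thread: after a restart round like this,
a quick way to confirm what each daemon is actually running is to ask it
directly. The osd.\* wildcard works with ceph tell; for mons you may have
to ask each one by id, depending on release, and <id> below is a
placeholder for one of your mon names:

ceph tell osd.\* version
ceph tell mon.<id> version
)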
I think you can take a look at [ceph-ansible][1], and particularly the
[rolling_update][2] playbook.
In our upgrade everything went smoothly, except the [dirty data
issue][3], which bugged me a lot.
[1]: https://github.com/ceph/ceph-ansible
[2]: https://github.com/ceph/ceph-ansible/blob/master/rolling_
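For what it's worth, driving the rolling update is just an ordinary
ansible-playbook run; the inventory file name here is an assumption for
illustration:

ansible-playbook -i hosts rolling_update.yml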