Hi,
On 07/21/13 09:05, Dan van der Ster wrote:
This is with a 10Gb network -- and we can readily get 2-3 GB/s in
"normal" rados bench tests across many hosts in the cluster. I wasn't
too concerned with the overall MB/s throughput in my question, but
rather with the objects/s recovery rate --
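For anyone trying to reproduce the distinction between bandwidth and
per-object rate: a write bench with a small object size makes per-object
overhead, rather than raw bandwidth, the bottleneck. A minimal sketch --
the pool name "test" is an assumption, and exact flags can vary by release:

```shell
# Write 4 KB objects for 30 seconds with 16 concurrent ops.
# A small -b means throughput is dominated by object count, not bytes,
# which is the regime that matters for small-object recovery.
# --no-cleanup keeps the written objects around for a follow-up read bench.
rados bench -p test 30 write -b 4096 -t 16 --no-cleanup
```

This needs a running cluster and a scratch pool, so treat it as a sketch
rather than something to paste into a production session.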
On Sat, Jul 20, 2013 at 7:28 AM, Mikaël Cluseau wrote:
> Hi,
>
>
> On 07/19/13 07:16, Dan van der Ster wrote:
>>
>> and that gives me something like this:
>>
>> 2013-07-18 21:22:56.546094 mon.0 128.142.142.156:6789/0 27984 : [INF]
>> pgmap v112308: 9464 pgs: 8129 active+clean, 398
>> active+remapped+wait_backfill, 3 active+recovery_wait, 933
>> active+remapped+backfilling, 1 a
Hi,
On 07/19/13 07:16, Dan van der Ster wrote:
> and that gives me something like this:
>
> 2013-07-18 21:22:56.546094 mon.0 128.142.142.156:6789/0 27984 : [INF]
> pgmap v112308: 9464 pgs: 8129 active+clean, 398
> active+remapped+wait_backfill, 3 active+recovery_wait, 933
> active+remapped+backfilling, 1 a
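As an aside, the number of PGs still involved in backfill can be summed
straight out of a pgmap summary line like the one above with a little
shell; a minimal sketch (the sample line is abridged from the log above):

```shell
# Sum the PG counts for every state containing "backfill"
# (here: 398 wait_backfill + 933 backfilling).
line='pgmap v112308: 9464 pgs: 8129 active+clean, 398 active+remapped+wait_backfill, 3 active+recovery_wait, 933 active+remapped+backfilling'
echo "$line" | tr ',' '\n' | awk '/backfill/ {sum += $1} END {print sum}'
```

Fed the abridged line above, this prints 1331.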
Hi,
We added some new OSDs today, and since we've recently written very
many (small/tiny) objects to a test pool, backfilling those new disks
is going to take something like 24 hours. I'm therefore curious whether
we can speed up the recovery at all, or whether the default settings in
cuttlefish already bring us
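For what it's worth, the settings usually raised for faster backfill are
osd_max_backfills and osd_recovery_max_active, which can be changed at
runtime with injectargs. A sketch -- the values are illustrative only
(higher concurrency trades client latency for recovery speed), and the
exact tell syntax differs between releases:

```shell
# Raise per-OSD backfill/recovery concurrency at runtime (illustrative values).
# injectargs changes are not persistent: they revert on OSD restart unless
# the same options are also set in ceph.conf.
ceph tell osd.\* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 8'
```

These commands need a live cluster, so this is a sketch of the mechanism,
not a tuning recommendation.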