Hi,
> Hmm. Can you generate a log with 'debug mon = 20', 'debug paxos = 20',
> 'debug ms = 1' for a few minutes over which you see a high data rate and
> send it my way? It sounds like there is something wrong with the
> stash_full logic.
Mm, actually I may have been fooled by the instrumentation.
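For anyone wanting to capture the same thing, those levels can go in the [mon]
section of ceph.conf, or be pushed into a running monitor with injectargs; a
rough sketch, assuming the monitor id is mon.a:

  [mon]
      debug mon = 20
      debug paxos = 20
      debug ms = 1

  # or, without restarting:
  ceph tell mon.a injectargs '--debug_mon 20 --debug_paxos 20 --debug_ms 1'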
On Tue, 21 May 2013, Sylvain Munaut wrote:
> Hi,
>
> >> So, AFAICT, the bulk of the write would be writing out the pgmap to
> >> disk every second or so.
> >
> > It should be writing out the full map only every N commits... see 'paxos
> > stash full interval', which defaults to 25.
>
> But doesn't it also write it in full when there is a new pgmap?
Hi,
>> So, AFAICT, the bulk of the write would be writing out the pgmap to
>> disk every second or so.
>
> It should be writing out the full map only every N commits... see 'paxos
> stash full interval', which defaults to 25.
But doesn't it also write it in full when there is a new pgmap?
I hav
On Tue, 21 May 2013, Sylvain Munaut wrote:
> So, AFAICT, the bulk of the write would be writing out the pgmap to
> disk every second or so.
It should be writing out the full map only every N commits... see 'paxos
stash full interval', which defaults to 25.
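For anyone who wants to experiment with that knob, it goes in the [mon] section
of ceph.conf; the value shown here is just the default:

  [mon]
      paxos stash full interval = 25

Raising it should make the monitor stash the full map less often; I haven't
measured what that does elsewhere, so treat it as a sketch rather than a tuned
value.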
> Is it really needed to write it in full?
On Tue, May 21, 2013 at 8:52 AM, Sylvain Munaut wrote:
> So, AFAICT, the bulk of the write would be writing out the pgmap to
> disk every second or so.
>
> Is it really needed to write it in full? It doesn't change all that
> much AFAICT, so writing incremental changes with only periodic flush
> might be a better option?
So, AFAICT, the bulk of the write would be writing out the pgmap to
disk every second or so.
Is it really needed to write it in full? It doesn't change all that
much AFAICT, so writing incremental changes with only periodic flush
might be a better option?
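In rough Python, purely as an illustration of the idea (not the mon's actual
code; the map objects and the key/value store here are stand-ins):

  FULL_FLUSH_INTERVAL = 25           # persist a full copy every N commits

  class MapStore:
      """Toy model: write the small incremental every commit,
      the big full map only periodically."""
      def __init__(self):
          self.kv = {}               # stand-in for the on-disk key/value store
          self.commits_since_full = 0

      def commit(self, version, incremental, full_map):
          # the delta is small and cheap, so it goes out every time
          self.kv[("inc", version)] = incremental
          self.commits_since_full += 1
          # the full map is the expensive part, so only flush it periodically
          if self.commits_since_full >= FULL_FLUSH_INTERVAL:
              self.kv[("full", version)] = full_map
              self.commits_since_full = 0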
Cheers,
Sylvain
Thanks for the correction on IRC. I should have written that this issue
started with 0.59 (when the monitor changes hit).
http://ceph.com/dev-notes/cephs-new-monitor-changes/
The writeup and release notes sometimes say they went in for 0.58, but I
believe they were actually released in 0.59.
Sylvain,
I can confirm I see a similar traffic pattern.
Any time I have lots of writes going to my cluster (like heavy writes
from RBD or remapping/backfilling after losing an OSD), I see all sorts
of monitor issues.
If my monitor leveldb store.db directories grow past some unknown point
(m
Hi,
I've just added some monitoring to the IO usage of mon (trying to
track down that growing mon issue), and I'm kind of surprised by the
amount of IO generated by the monitor process.
I get a continuous 4 MB/s / 75 iops, with big spikes on top at each
compaction every 3 min or so.
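In case anyone wants to compare numbers, per-process figures like these can be
pulled with pidstat (from sysstat) or straight from /proc; process name assumed
to be ceph-mon:

  # disk I/O of the monitor, 1-second samples
  pidstat -d -p $(pidof ceph-mon) 1

  # or the raw kernel counters
  cat /proc/$(pidof ceph-mon)/io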
Is there a desc