I've checked; all the disks are fine and the cluster is healthy except for the
inconsistent objects.
How would I go about manually repairing them?
On May 21, 2013, at 3:26 PM, David Zafman wrote:
>
> I can't reproduce this on v0.61.2. Could the disks for osd.13 & osd.22 be
> unwritable?
>
> In your case it looks like the 3rd replica is probably the bad one, since
> osd.13 and osd.22 are the same. You probably want to manually repair the 3rd
> replica.
I can't reproduce this on v0.61.2. Could the disks for osd.13 & osd.22 be
unwritable?
In your case it looks like the 3rd replica is probably the bad one, since
osd.13 and osd.22 are the same. You probably want to manually repair the 3rd
replica.
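Assuming the primary's copy is the good one (repair re-replicates from the
primary), something like this should do it; the pg id below is only a
placeholder, take the real one from the health output:

  ceph health detail | grep inconsistent
  ceph pg repair 2.3f

Double-check which osd is primary for that pg before kicking off the repair.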
David Zafman
Senior Developer
http://www.inktank.com
Hi,
> Hmm. Can you generate a log with 'debug mon = 20', 'debug paxos = 20',
> 'debug ms = 1' for a few minutes over which you see a high data rate and
> send it my way? It sounds like there is something wrong with the
> stash_full logic.
Mm, actually I may have been fooled by the instrumentation.
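(FWIW, those levels can be bumped on a running mon via the admin socket,
e.g., assuming mon.a and the default asok path:

  ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config set debug_mon 20
  ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config set debug_paxos 20
  ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config set debug_ms 1

or put 'debug mon = 20' etc. in the [mon] section of ceph.conf and restart.)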
On Tue, 21 May 2013, Sylvain Munaut wrote:
> Hi,
>
> >> So, AFAICT, the bulk of the write would be writing out the pgmap to
> >> disk every second or so.
> >
> > It should be writing out the full map only every N commits... see 'paxos
> > stash full interval', which defaults to 25.
>
> But doesn't it also write it in full when there is a new pgmap?
Hi,
>> So, AFAICT, the bulk of the write would be writing out the pgmap to
>> disk every second or so.
>
> It should be writing out the full map only every N commits... see 'paxos
> stash full interval', which defaults to 25.
But doesn't it also write it in full when there is a new pgmap?
I hav
On Tue, 21 May 2013, Sylvain Munaut wrote:
> So, AFAICT, the bulk of the write would be writing out the pgmap to
> disk every second or so.
It should be writing out the full map only every N commits... see 'paxos
stash full interval', which defaults to 25.
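It's tunable in ceph.conf if you want to stash full maps less often, e.g.
(100 here is only an illustrative value, not a recommendation):

  [mon]
      paxos stash full interval = 100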
> Is it really needed to write it in full? It doesn't change all that
> much AFAICT, so writing incremental changes with only periodic flush
> might be a better option?
On Tue, May 21, 2013 at 8:52 AM, Sylvain Munaut wrote:
> So, AFAICT, the bulk of the write would be writing out the pgmap to
> disk every second or so.
>
> Is it really needed to write it in full? It doesn't change all that
> much AFAICT, so writing incremental changes with only periodic flush
> might be a better option?
So, AFAICT, the bulk of the write would be writing out the pgmap to
disk every second or so.
Is it really needed to write it in full? It doesn't change all that
much AFAICT, so writing incremental changes with only periodic flush
might be a better option?
Cheers,
Sylvain
Cuttlefish on CentOS 6, ceph-0.61.2-0.el6.x86_64.
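(That string is straight from the package query, e.g.:

  rpm -q ceph

or 'ceph --version' on any node.)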
On May 21, 2013, at 12:13 AM, David Zafman wrote:
>
> What version of ceph are you running?
>
> David Zafman
> Senior Developer
> http://www.inktank.com
>
> On May 20, 2013, at 9:14 AM, John Nielsen wrote:
>
>> Some scrub errors showed up on
Thanks for the correction on IRC. I should have written that this issue
started with 0.59 (when the monitor changes hit).
http://ceph.com/dev-notes/cephs-new-monitor-changes/
The writeup and release notes sometimes say they went in for 0.58, but I
believe they were actually released in 0.59.
Sylvain,
I can confirm I see a similar traffic pattern.
Any time I have lots of writes going to my cluster (like heavy writes
from RBD or remapping/backfilling after losing an OSD), I see all sorts
of monitor issues.
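FWIW, store.db growth is easy to watch with something like the following
(default mon data path assumed):

  du -sh /var/lib/ceph/mon/ceph-*/store.db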
If my monitor leveldb store.db directories grow past some unknown point
(m
Hi,
I've just added some monitoring of the mon's IO usage (trying to
track down that growing-mon issue), and I'm kind of surprised by the
amount of IO the monitor process generates.
I get a continuous 4 MB/s / 75 iops, with big spikes at each
compaction every 3 min or so.
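(For reference, per-process IO like this can be sampled with pidstat from
sysstat, e.g.:

  pidstat -d -p $(pidof ceph-mon) 1

which assumes a single ceph-mon on the host.)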
Is there a desc
On 21 May 2013, at 07:17, Dan Mick wrote:
> Yes, with the proviso that you really mean "kill the osd" when clean.
> Marking out is step 1.
Thanks
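For the archive, the full sequence then looks something like this (osd.12
is a placeholder id):

  ceph osd out 12                    # step 1: start draining data
  # wait until 'ceph -s' reports all PGs active+clean, then:
  service ceph stop osd.12           # now actually kill the osd
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12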
--
Alex Bligh
-- Forwarded message --
From: Gandalf Corvotempesta
Date: 2013/5/20
Subject: RGW
To: "ceph-users@lists.ceph.com"
Hi,
I'm receiving an EntityTooLarge error when trying to upload a 100 MB object.
I've already set LimitRequestBody to 0 in Apache. Anything else to check?
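P.S. One more thing I'll check: if the RGW frontend is mod_fcgid rather
than mod_fastcgi, it has its own request-body cap (128 KB by default),
e.g.:

  <IfModule mod_fcgid.c>
      FcgidMaxRequestLen 1073741824
  </IfModule>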