Hi,
After finally resolving the remapped PGs [0] I'm running into a problem where
the MON stores are not trimming.
health HEALTH_WARN
noscrub,nodeep-scrub flag(s) set
1 mons down, quorum 0,1 1,2
mon.1 store is getting too big! 37115 MB >= 15360 MB
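For reference, the on-disk size of each store can be checked directly on the
monitor hosts, something like this (assuming the default /var/lib/ceph layout
and cluster name "ceph"):

  du -sh /var/lib/ceph/mon/ceph-*/store.db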
Hi Wido,
AFAIK mon's won't trim while a cluster is in HEALTH_WARN. Unset
noscrub,nodeep-scrub, get that 3rd mon up, then it should trim.
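Something along these lines (plain ceph CLI; the mon id in the last command is
only an example, adjust it and the init system to your setup):

  ceph osd unset noscrub
  ceph osd unset nodeep-scrub
  # then start the down monitor on its host, e.g. with systemd:
  systemctl start ceph-mon@2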
-- Dan
> On 3 November 2016 at 10:42, Dan van der Ster wrote:
>
> AFAIK mon's won't trim while a cluster is in HEALTH_WARN. Unset
> noscrub,nodeep-scrub, get that 3rd mon up, then it should trim.
>
The 3rd MON is back, but afaik the MONs trim when all PGs are active+clean. A
cluster ...
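Whether all PGs really are active+clean is quick to verify with the plain ceph
CLI:

  ceph pg stat        # should report every PG as active+clean
  ceph health detail  # lists whatever is still keeping the cluster in WARN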
Hi Lucas!
On 11/02/2016 09:07 PM, bobobo1...@gmail.com wrote:
I'm running Kraken built from Git right now and I've found that my OSDs eat as
much memory as they can until they're killed by the OOM killer. I understand
that BlueStore is experimental, but I thought this behaviour should be known.
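To see where that memory is going, a first step is usually the heap statistics
that tcmalloc-linked OSDs expose through the admin socket (osd.0 below is just
an example id):

  ceph tell osd.0 heap stats     # print tcmalloc heap usage for this OSD
  ceph tell osd.0 heap release   # ask tcmalloc to return freed memory to the OS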
On 11/03/2016 09:40 AM, Wido den Hollander wrote:
root@mon3:/var/lib/ceph/mon# ceph-monstore-tool ceph-mon3 dump-keys | awk '{print $1}' | uniq -c
     96 auth
   1143 logm
      3 mdsmap
      1 mkfs
      1 mon_sync
      6 monitor
      3 monmap
   1158 osdmap
 358364 paxos
    656 pgmap
      6 ...
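The paxos versions clearly dominate the key count here. Once the mon is healthy
enough to trim again, compaction should shrink the store; it can be triggered
at runtime or on startup (assuming the mon on this host has id "3"):

  ceph tell mon.3 compact
  # or, in ceph.conf on the monitor, compact every time it starts:
  # [mon]
  # mon compact on start = true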
On 11/03/2016 06:52 AM, Tim Serong wrote:
> I thought I should make a little noise about a project some of us at
> SUSE have been working on, called DeepSea. It's a collection of Salt
> states, runners and modules for orchestrating deployment of Ceph
> clusters. To help everyone get a feel for it ...
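For a feel of what driving it looks like: deployment is split into Salt
orchestration stages, run roughly like this (stage names as in the DeepSea
README; details may vary between versions):

  salt-run state.orch ceph.stage.0   # prep: prepare the nodes
  salt-run state.orch ceph.stage.1   # discovery: collect hardware profiles
  salt-run state.orch ceph.stage.2   # configure: generate the cluster config
  salt-run state.orch ceph.stage.3   # deploy: create mons and OSDs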
On 11/03/2016 06:18 PM, w...@42on.com wrote:
Personally, I don't like this solution one bit, but I can't see any other way
without a patched monitor, or maybe ceph_monstore_tool.
If you are willing to wait till tomorrow, I'll be happy to kludge a sanitation
feature onto ceph_monstore_tool that ...
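Hypothetically, the workflow would be to back up the store first and then run
the new sanitation pass offline; note that the "sanitize" subcommand below is
made up to illustrate the idea, it does not exist (yet):

  # with the mon stopped, keep a full copy of the store first
  cp -a /var/lib/ceph/mon/ceph-mon3 /root/ceph-mon3-backup
  # hypothetical new subcommand to drop the stale entries
  ceph-monstore-tool ceph-mon3 sanitize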
After a lot of messing about I have manually created a monmap and got the two
new monitors working, for a total of three. But to do that I had to delete the
first monitor, which for some reason was coming up with a bogus fsid after I
manipulated the monmap, even though I checked the monmap and it had the
correct fsid. S...
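For the record, the usual hand-editing sequence looks roughly like this
(standard monmaptool/ceph-mon options; the mon name and address below are
examples):

  ceph mon getmap -o /tmp/monmap                      # fetch the current monmap
  monmaptool --print /tmp/monmap                      # verify fsid and listed mons
  monmaptool --rm mon1 /tmp/monmap                    # drop the broken monitor
  monmaptool --add mon1 192.0.2.10:6789 /tmp/monmap   # re-add with the right address
  ceph-mon -i mon1 --inject-monmap /tmp/monmap        # inject with that daemon stopped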