Hi Fulvio,
The symptom of several OSDs all asserting at the same time in
OSDMap::get_map really sounds like this bug:
https://tracker.ceph.com/issues/39525
lz4 compression is buggy on CentOS 7 and Ubuntu 18.04 -- you need to
disable compression or use a different algorithm. Mimic and nautilus
wi
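For reference, a hedged sketch of the usual workaround (assuming compression was enabled cluster-wide via the BlueStore config; adjust if it was set per pool -- the pool name below is a placeholder):

  # either switch BlueStore away from lz4...
  ceph config set osd bluestore_compression_algorithm snappy
  # ...or disable compression entirely
  ceph config set osd bluestore_compression_mode none
  # if compression was enabled per pool instead, the per-pool knob looks like:
  ceph osd pool set <pool-name> compression_algorithm snappy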
Hi all,
I'm going to take my secondary zone offline.
How do I remove the secondary zone from a multisite setup?
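A hedged sketch of how a secondary zone is typically detached (run against the master zone; the "default" zonegroup name and the zone name are placeholders, adjust to your setup):

  # pull the secondary zone out of the zonegroup on the master side
  radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=<secondary-zone>
  # push the updated period to the remaining sites
  radosgw-admin period update --commit
  # optionally drop the detached zone's metadata afterwards
  radosgw-admin zone delete --rgw-zone=<secondary-zone>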
Hallo all,
I hope you can help me with some very strange problems which arose
suddenly today. I tried to search, including in this mailing list, but could
not find anything relevant.
At some point today, without any action from my side, I noticed some
OSDs in my production cluster would go down and never
Hi Sebastian,
I did not get your reply via e-mail. I am very sorry for this. I hope you can
see this message...
I've re-run the upgrade and attached the log.
Thanks,
Gencer.
Hello,
Yes I did, but I wasn't able to suggest anything further to get around it. However:
1/ There is currently an issue with 15.2.2, so I would advise holding off any upgrade.
2/ Another mailing list user replied to one of your older emails in
the thread asking for some manager logs, not sure if you have
Hi Ashley,
Have you seen my previous reply? If so, and there is no solution, does anyone
have any idea how this can be done with 2 nodes?
Thanks,
Gencer.
On 20.05.2020 16:33:53, Gencer W. Genç wrote:
This is a 2-node setup. I have no third node :(
I am planning to add more in the future but currently 2
Hi,
I strongly suggest you read the Ceph documentation at
https://docs.ceph.com/docs/master
On 21/5/20 at 15:06, CodingSpiderFox wrote:
Hello everyone :)
When I try to create an OSD, Proxmox UI asks for
* Data disk
* DB disk
* WAL disk
What disk will be the limiting factor in terms of st
Hi Sam,
I saw your comment in the other thread but wanted to reply here since
you provided the mempool and perf counters. It looks like the priority
cache is (like in Harald's case) shrinking all of the caches to their
smallest values trying to compensate for all of the stuff collecting in
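For anyone following along, those numbers typically come from the OSD admin socket; a sketch with a placeholder OSD id:

  # on the host running the affected OSD
  ceph daemon osd.0 dump_mempools
  ceph daemon osd.0 perf dump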
Hello,
here it is; I usually set just a space quota, not an object quota.
NAME   ID   QUOTA OBJECTS   QUOTA BYTES   USED   %USED   MAX AVAIL   OBJECTS   DIRTY   READ   WRITE   RAW USED
k8s    8    N/A              200GiB
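For context, a hedged sketch of how a space-only quota like the one shown above is usually set and checked (pool name taken from the output; the exact byte value is an assumption):

  # 214748364800 bytes = 200 GiB; no object quota is set
  ceph osd pool set-quota k8s max_bytes 214748364800
  ceph osd pool get-quota k8s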
Hello everyone :)
When I try to create an OSD, Proxmox UI asks for
* Data disk
* DB disk
* WAL disk
What disk will be the limiting factor in terms of storage size for my
OSD - the data disk?
How large do I need to make the other two?
Is there a risk of them running over capacity before the c
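Roughly speaking, the data disk is what bounds the OSD's usable capacity; the DB and WAL devices only hold RocksDB metadata and the write-ahead log. A hedged sketch of what such a layout looks like at the ceph-volume level underneath Proxmox (device names are placeholders):

  # the --data device determines the OSD's capacity;
  # DB and WAL live on the (usually faster) other two devices
  ceph-volume lvm create --data /dev/sdb --block.db /dev/sdc --block.wal /dev/sdd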
Out of curiosity, do you have compression enabled? FWIW Deepika has
been working on splitting the mempool assignments into much better
categories for better tracking. I suspect we are going to find a bug
where something isn't being cleaned up properly in buffer_anon. Adam's
been taking up
I should note that these OSDs also drop out of the pool as part of their
symptoms - it's not clear to me at the moment if they drop out *because* of the
memory, or if the buffer_pool is growing large because it's buffering
communications that aren't getting to the cluster [and hence they drop ou
Short update on the issue:
We're finally able to reproduce the issue in master (not octopus) and are
investigating further...
@Chris - to make sure you're facing the same issue could you please
check the content of the broken file. To do so:
1) run "ceph-bluestore-tool --path --our-dir
--command
So, to jump into this thread - we seem to see the same problem as Harald on our
cluster here in Glasgow, except our "worst case" OSDs are much worse than his
[we get up to ~tens of GB in buffer_anon].
Activity is a mix of reads and writes against a single EC (8+2) encoded pool,
with 8MB object
Hi,
Following on from various woes, we see an odd and unhelpful behaviour with some
OSDs on our cluster currently.
A minority of OSDs seem to have runaway memory usage, rising to 10s of GB,
whilst other OSDs on the same host behave sensibly. This started when we moved
from Mimic -> Nautilus,
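In case it helps others comparing notes, a hedged sketch of checking what the autotuner is aiming for on one of the misbehaving OSDs (osd.12 is a placeholder id):

  # the memory target the priority cache autotuner works against
  ceph config get osd.12 osd_memory_target
  # tcmalloc's view of that daemon's heap, for comparison
  ceph tell osd.12 heap stats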
Do you have quotas enabled on that pool?
Can you also show
ceph df detail
Quoting "Szabo, Istvan (Agoda)":
Restarted mgr and mon services, nothing helped :/
-----Original Message-----
From: Eugen Block
Sent: Wednesday, May 20, 2020 3:05 PM
To: Szabo, Istvan (Agoda)
Cc: ceph-users@ceph.io