Hi,
we're seeing the active MDS fail over to a standby every few weeks, causing a few
minutes of CephFS downtime. It's not crashing; all the log says is:
2020-02-25 08:30:53.313 7f9a457ae700 1 mds.m2-1045557 Updating MDS map to
version 10132 from mon.1
2020-02-25 08:30:53.313 7f9a457ae700 1 mds.m2-1045
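Something along these lines might help gather more context the next time it happens (the MDS name is taken from the log line above; adjust to your daemon names):

ceph fs status
ceph mds stat
# on the MDS host, check the beacon grace the daemon is actually running with
ceph daemon mds.m2-1045557 config get mds_beacon_grace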
Hi Fabian,
On 24/2/20 at 19:01, Fabian Zimmermann wrote:
we are currently creating a new cluster. This cluster is (as far as we can
tell) a config copy (Ansible) of our existing cluster, just 5 years later,
with new hardware (NVMe instead of SSD, bigger disks, ...)
The setup:
* NVMe for Jo
Hi Kristof,
just some thoughts/insights on the issue.
First of all, it's not clear whether you're going to migrate to EC 6+3 only or
reduce the allocation size as well.
Anyway, I'd suggest postponing these modifications for a while if
possible. The Ceph core team is aware of both the space overhead caused
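If it is the allocation size you are looking at, you can at least check the configured defaults (OSD id is just an example; this shows the value used when an OSD is created, not necessarily what existing OSDs were built with):

ceph daemon osd.0 config get bluestore_min_alloc_size_ssd
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd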
Possible without downtime: configure multi-site, create a new zone for
the new pool, let the cluster sync to itself, do a failover to the new
zone, and delete the old zone.
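A rough outline (zone names and the endpoint are placeholders; this assumes a realm/zonegroup is already in place, see the multisite docs for the full procedure):

# new zone with its own set of pools, served by a separate RGW instance
radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=new --endpoints=http://rgw-new:8080
radosgw-admin period update --commit

# let it sync, then check progress
radosgw-admin sync status --rgw-zone=new

# fail over: make the new zone master/default, then drop the old zone
radosgw-admin zone modify --rgw-zone=new --master --default
radosgw-admin period update --commit
radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=old
radosgw-admin zone delete --rgw-zone=old
radosgw-admin period update --commit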
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247
I believe you are encountering https://tracker.ceph.com/issues/39570
You should run "ceph versions" on a mon and ensure all your OSDs are on
nautilus, and if so set "ceph osd require-osd-release nautilus", then try to
increase pg_num. Upgrading to a more recent Nautilus release is also
probably a good
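For reference, something like this (pool name and target pg count are placeholders):

ceph versions
ceph osd require-osd-release nautilus
ceph osd pool set <pool> pg_num <target>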
That's right!! I will try to update, but now I have the desired PG numbers.
Thank you.
On 25/2/20 at 15:01, Wesley Dillingham wrote:
> I believe you are encountering https://tracker.ceph.com/issues/39570
>
> You should do a "ceph versions" on a mon and ensure all your OSDs are
> nautilus an
Fabian said:
> The output of "ceph osd pool stats" shows ~100 op/s, but our disks are doing:
What does the iostat output look like on the old cluster?
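Something like this on a couple of OSD hosts from each cluster would make a useful comparison (interval is just an example):

iostat -xm 5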
Thanks,
Mark
On Mon, Feb 24, 2020 at 11:02 AM Fabian Zimmermann wrote:
>
> Hi,
>
> we are currently creating a new cluster. This cluster is (as far
Hello Casper,
did you find an answer on this topic?
My guess is that with "ceph pg repair" the copy from the primary OSD will
overwrite the 2nd and 3rd, in case it is readable.. but what happens when it
is not readable? :thinking:
Would be nice to know if there is a way to tell ceph to repair a pg with
c
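For seeing which copy is actually bad before repairing, something like this might help (the pg id is just an example):

rados list-inconsistent-obj 2.1f --format=json-pretty
ceph pg repair 2.1f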
Hi,
is it possible to run the MDS on a newer version than the monitor nodes?
I mean, we run the monitor nodes on 12.2.10 and would like to upgrade
the MDS to 12.2.13; is this possible?
Best,
Martin
On 12/2/19 11:16 AM, Jan Kasprzak wrote:
> Hello, Ceph users,
>
> does anybody use Ceph on the recently released CentOS 8? Apparently there are
> no el8 packages either at download.ceph.com or in the native CentOS package
> tree. I am thinking about upgrading my cluster to C8 (because of othe
Hi all,
I'm running a Ceph Mimic 13.2.6 cluster and we use the ceph-balancer
in upmap mode. This cluster is fairly old, and pre-Mimic we used to set
OSD reweights to balance the standard deviation of the cluster. Since
moving to Mimic about 9 months ago I have enabled the ceph-balancer with
upmap mode a
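For reference, the relevant balancer commands are roughly (upmap mode also requires luminous-or-newer clients):

ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on
ceph balancer status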
On Mon, Feb 24, 2020 at 2:28 PM Uday Bhaskar jalagam
wrote:
>
> Thanks Patrick,
>
> is this the bug you are referring to https://tracker.ceph.com/issues/42515 ?
Yes
> We also see performance issues mainly on metadata operations like finding
> file stats operations, however mds perf dump shows
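Presumably the dump in question is something like this (daemon name is a placeholder):

ceph daemon mds.<name> perf dump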
The OOM-killer is on the rampage and striking down hapless OSDs when
the cluster is under heavy client IO.
The memory target does not seem to be much of a limit; is this intentional?
root@cnx-11:~# ceph-conf --show-config|fgrep osd_memory_target
osd_memory_target = 4294967296
osd_memory_target_cg
more examples of rampant OSD memory consumption:
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
1326773 ceph      20   0 11.585g 0.011t  34728 S 110.3  8.6  14:26.87 ceph-osd
 204622 ceph      20   0 16.414g 0.015t  34808 S 100.3 12.5  17:53.36 ceph-osd
   5706 ceph
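To see where the memory is actually going on one of the big ones, something like this (OSD id is just an example):

ceph daemon osd.12 dump_mempools
ceph daemon osd.12 config get osd_memory_target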