. July 2022 17:01:34
To: Nicolas FONTAINE
Cc: Kilian Ries; ceph-users
Subject: Re: [ceph-users] Re: Ceph Stretch Cluster - df pool size (Max Avail)
https://tracker.ceph.com/issues/56650
There's a PR in progress to resolve this issue now. (Thanks, Prashant!)
-Greg
On Thu, Jul 28, 2022 at 7:
Hi,
we run a ceph cluster in stretch mode with one pool. We know about this bug:
https://tracker.ceph.com/issues/56650
https://github.com/ceph/ceph/pull/47189
Can anyone tell me what happens when a pool gets to 100% full? At the moment
raw OSD usage is about 54% but Ceph throws me a "POOL_BACKFILLFULL" [...]
adjust the PGs manually to get a better distribution.
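For reference, a minimal sketch of how the fill state can be checked and the
PG distribution evened out (the pool name "stretch_pool" and the pg_num value
below are placeholders, not taken from this thread):

  # per-pool usage / MAX AVAIL plus the configured full ratios
  ceph df detail
  ceph osd dump | grep ratio

  # per-OSD utilisation and PG counts - a skewed %USE usually means skewed PGs
  ceph osd df tree

  # either let the balancer even things out ...
  ceph balancer mode upmap
  ceph balancer on

  # ... or raise pg_num manually for a better distribution
  ceph osd pool set stretch_pool pg_num 256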
Regards,
Kilian
From: Kilian Ries
Sent: Wednesday, 19 April 2023 12:18:06
To: ceph-users
Subject: [ceph-users] Ceph stretch mode / POOL_BACKFILLFULL
Hi,
we run a ceph cluster in stretch mode with one pool.
Hi,
it seems that after reboot / OS update my disk labels / device paths may have
changed. Since then I get an error like this:
CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): osd.osd-12-22_hdd-2
###
RuntimeError: cephadm exited with an error code: 1, stderr:Non-zero exit code 1
need device paths in your configuration? You could use
other criteria like disk sizes, vendors, rotational flag etc. If you
really want device paths you'll probably need to ensure they're
persistent across reboots via udev rules.
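A sketch of what such a spec could look like (the service_id is taken from the
error above; the host pattern, rotational flag and size filter are placeholders
to adapt to your hardware):

  service_type: osd
  service_id: osd-12-22_hdd-2
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1      # pick HDDs by rotational flag ...
      size: '1TB:'       # ... and/or by minimum size, instead of /dev/... paths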
Quoting Kilian Ries:
> Hi,
>
>
> it seems that after reboot / OS update my disk labels / device paths
> may have changed. Since then I get an error like this:
>
OK, just tried it, it works as expected ... just dump the YAML, edit it and
apply it again!
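The round trip is roughly the following (the file name is a placeholder;
"osd" exports all OSD specs, so trim the file down to the affected service if
needed):

  # export the current OSD spec(s), edit the device selection, re-apply
  ceph orch ls osd --export > osd-spec.yaml
  vi osd-spec.yaml
  ceph orch apply -i osd-spec.yaml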
Regards ;)
From: Eugen Block
Sent: Wednesday, 2 August 2023 12:53:20
To: Kilian Ries
Cc: ceph-users@ceph.io
Subject: Re: AW: [ceph-users] Re: Disk device path
@Stefan
Did you find any solution to your problem? I just got the same error ... I have
a running Pacific cluster with 4x monitor servers and wanted to join a fifth
monitor via docker container. My container start command is:
docker run --name ceph-mon-arbiter01 --net=host -v /etc/ceph:/etc/cep
This was related to an older post back from 2020:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/GBUBEGTZTAMNEQNKUGS47M6W6B4AEVVS/
From: Kilian Ries
Sent: Friday, 17 June 2022 17:23:03
To: ceph-users@ceph.io; ste...@bit.nl
Subject: [ceph
it
works without deleting the keyring under /var/lib/ceph/mon... and forcing a new
bootstrap on every container start.
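A rough sketch of such a start command with the mon data kept on the host
(the image tag and the MON_IP / CEPH_PUBLIC_NETWORK values follow the
ceph/daemon container conventions and are assumptions, not the exact command
from this thread; the important part is the /var/lib/ceph bind mount so the
mon store and keyring survive container restarts):

  docker run -d --name ceph-mon-arbiter01 --net=host \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph:/var/lib/ceph \
    -e MON_IP=192.0.2.10 \
    -e CEPH_PUBLIC_NETWORK=192.0.2.0/24 \
    ceph/daemon:latest-pacific mon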
Hope that helps someone ;)
Regards,
Kilian
From: Stefan Kooman
Sent: Monday, 20 June 2022 16:29:22
To: Kilian Ries; ceph-users@ceph.io
Subject:
Hi,
I'm running a ceph stretch cluster with two datacenters. Each of the
datacenters has 3x OSD nodes (6x in total) and 2x monitors. A third monitor is
deployed as an arbiter node in a third datacenter.
Each OSD node has 6x SSDs with 1.8 TB storage - that gives me a total of about
63 TB storage
[...] displayed half the size available?
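For reference, with the usual stretch-mode replication (size=4, i.e. two
copies per datacenter) the numbers would work out roughly like this:

  raw:              6 nodes x 6 SSDs x 1.8 TB  = ~65 TB (the ~63 TB above)
  usable at size=4: ~63 TB / 4                 = ~16 TB
  a MAX AVAIL of roughly half of that (~8 TB) would match the halving
  tracked in issue 56650 above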
Regards,
Kilian
From: Clyso GmbH - Ceph Foundation Member
Sent: Wednesday, 22 June 2022 18:20:59
To: Kilian Ries; ceph-users@ceph.io
Subject: Re: [ceph-users] Ceph Stretch Cluster - df pool size (Max Avail)
Hi Kilian,
we do not
Hi,
I'm running a Ceph v18.2.4 cluster. I'm trying to build some latency monitoring
with the
ceph daemon osd.4 perf dump
CLI command. On most of the OSDs I get all the metrics I need. On some OSDs I
only get zero values:
osd.op_latency.avgcount: 0
I already tried restarting the OSD proc
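For comparison between a "good" and a "bad" OSD, the counters can be pulled
out directly (the jq path assumes the default perf dump layout, where
op_latency sits under the "osd" section):

  # dump only the op_latency counters of a given OSD
  ceph daemon osd.4 perf dump | jq '.osd.op_latency'

  # average latency in seconds = sum / avgcount (null while avgcount is 0)
  ceph daemon osd.4 perf dump | \
    jq '.osd.op_latency | if .avgcount > 0 then .sum / .avgcount else null end'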
s" or
"ceph_osd_apply_latency_ms" but none of the "op_r / op_w" metrics. Do the lable
names have switched and my grafana dashboard is outdated? Or are they missing
at the exporter level?
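To see what actually gets published, the exporter endpoints can be queried
directly (host names are placeholders; 9283 is the default port of the mgr
prometheus module, 9926 the default of the separate ceph-exporter daemon, if
one is deployed):

  curl -s http://mgr-host:9283/metrics | grep -E 'ceph_osd_op_(r|w)_latency'
  curl -s http://osd-host:9926/metrics | grep -i 'op_latency'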
Thanks
____
From: Kilian Ries
Sent: Monday, 25 N
Any ideas? Still facing the problem ...
From: Kilian Ries
Sent: Wednesday, 23 October 2024 13:59:06
To: ceph-users@ceph.io
Subject: Ceph OSD perf metrics missing
Hi,
I'm running a Ceph v18.2.4 cluster. I'm trying to build some latency monitoring