[ceph-users] Ceph Upgrade 18.2.2 -> 18.2.4 fails | Fatal glibc error: CPU does not support x86-64-v2 on virtualized hosts

2024-07-25 Thread Björn Lässig
-device kvm64-x86_64-cpu,id=cpu2,socket-id=0,core-id=1,thread-id=0 \
-device kvm64-x86_64-cpu,id=cpu3,socket-id=0,core-id=2,thread-id=0 \
-device kvm64-x86_64-cpu,id=cpu4,socket-id=0,core-id=3,thread-id=0

Which CPU type should I choose for my VMs for this libc?

regards
Björn Lässig
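The x86-64-v2 microarchitecture level that newer glibc builds require adds CMPXCHG16B, LAHF/SAHF, POPCNT, SSE3, SSSE3, SSE4.1, and SSE4.2 on top of baseline x86-64; QEMU's old kvm64 model predates several of these. A minimal sketch of checking a CPU flag set against that level (flag names follow /proc/cpuinfo conventions; the kvm64 flag list in the demo is an illustrative assumption, not an authoritative QEMU feature list):

```python
# Sketch: check whether a CPU's flag set satisfies the x86-64-v2
# microarchitecture level. Flag names follow /proc/cpuinfo conventions.

X86_64_V2_FLAGS = {
    "cx16",     # CMPXCHG16B
    "lahf_lm",  # LAHF/SAHF in 64-bit mode
    "popcnt",
    "pni",      # SSE3
    "ssse3",
    "sse4_1",
    "sse4_2",
}

def missing_v2_flags(cpu_flags):
    """Return the x86-64-v2 flags absent from cpu_flags (empty set = OK)."""
    return X86_64_V2_FLAGS - set(cpu_flags)

def host_flags():
    """Read the flag set of the first CPU from /proc/cpuinfo (Linux only)."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    # Assumed (abbreviated) flag set of QEMU's kvm64 model for illustration:
    kvm64 = {"fpu", "cx16", "lahf_lm", "pni", "ssse3"}
    print("kvm64 missing:", sorted(missing_v2_flags(kvm64)))
```

Running `missing_v2_flags(host_flags())` inside the guest would show which features the chosen CPU model fails to expose.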

[ceph-users] Re: Ceph Upgrade 18.2.2 -> 18.2.4 fails | Fatal glibc error: CPU does not support x86-64-v2 on virtualized hosts

2024-07-25 Thread Björn Lässig
On Thu, 2024-07-25 at 10:03 +0200, Björn Lässig wrote:
> Jul 25 09:16:18 cephmgr1 reverent_hypatia[1171489]: Fatal glibc
> error: CPU does not support x86-64-v2
>
> When starting the container for 18.2.4, a glibc error occurs.
> 18.2.2 to 18.2.4 is a minor upgrade and sho

[ceph-users] Re: Ceph Upgrade 18.2.2 -> 18.2.4 fails | Fatal glibc error: CPU does not support x86-64-v2 on virtualized hosts

2024-07-25 Thread Björn Lässig
rogress:
  … Upgrade to 18.2.4 (14m) [===.] (remaining: 11m)

Thanks and greetings from Hildesheim to Greifswald
Björn Lässig
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Solved: Ceph Upgrade 18.2.2 -> 18.2.4 fails | Fatal glibc error: CPU does not support x86-64-v2 on virtualized hosts

2024-07-25 Thread Björn Lässig
On Thu, 2024-07-25 at 12:04 +0200, Björn Lässig wrote:
> On Thu, 2024-07-25 at 10:03 +0200, Björn Lässig wrote:
> > Jul 25 09:16:18 cephmgr1 reverent_hypatia[1171489]: Fatal glibc
> > error: CPU does not support x86-64-v2
> >
> > When starting the container for 1

[ceph-users] Re: Grafana dashboards is missing data

2024-09-06 Thread Björn Lässig
On Wednesday, 2024-09-04 at 20:01 +0200, Sake Ceph wrote:
> After the upgrade from 17.2.7 to 18.2.4 a lot of graphs are empty. For
> example the Osd latency under OSD device details or the Osd Overview
> has a lot of No data messages.

is the ceph-exporter listening on port 9926 (on every ho
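One way to check whether the exporter on port 9926 is actually serving metrics is to fetch its endpoint and list the metric names found. A minimal sketch, assuming a Prometheus-style exposition-format page; the hostname and the `ceph_osd_up` metric name in the sample are illustrative assumptions, not guaranteed ceph-exporter output:

```python
# Sketch: list metric names served by a Prometheus-style exporter,
# e.g. ceph-exporter on port 9926. The sample text below is an
# illustrative assumption, not captured ceph-exporter output.
from urllib.request import urlopen

def fetch_metric_names(url, timeout=5):
    """Fetch an exposition-format page and return the metric names seen."""
    with urlopen(url, timeout=timeout) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    return parse_metric_names(text)

def parse_metric_names(text):
    """Extract metric names from exposition-format text."""
    names = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comments
        # Metric lines look like: name{labels} value   or   name value
        names.add(line.split("{", 1)[0].split()[0])
    return names

SAMPLE = """\
# HELP ceph_osd_up OSD up/down state
ceph_osd_up{ceph_daemon="osd.0"} 1
ceph_osd_up{ceph_daemon="osd.1"} 1
"""
```

Pointing `fetch_metric_names("http://cephhost:9926/metrics")` (hypothetical hostname) at each host would show whether the OSD metrics Grafana expects are present at all.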

[ceph-users] All OSD_UNREACHABLE in IPv6 cluster - rollback or wait?

2025-04-15 Thread Björn Lässig
?

have a nice day
Björn Lässig

PS: attached is some output of the errors
--
  cluster:
    id:
    health: HEALTH_ERR
            44 osds(s) are not reachable

  services:
    mon: 5 daemons, quorum marvin06,marvin08,cephmon1,cephmon3,cephmon2 (age 2d)
    mgr: cephmgr2.zvtgjh(active, since 2d

[ceph-users] Re: All OSD_UNREACHABLE in IPv6 cluster - rollback or wait?

2025-04-15 Thread Björn Lässig
Hi Eugen,

On Tuesday, 2025-04-15 at 13:05, Eugen Block wrote:
> You could also just mute the warning until a fix is released.
>
> ceph health mute OSD_UNREACHABLE 100d

thanks a lot. This works for me. The cluster is now HEALTH_OK (and all automation based on 'do something and wait f
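The mute keeps health-gated automation working because muted checks no longer drag down the reported status. A minimal sketch of such a gate, computing the effective health from status output while ignoring muted checks; the JSON shape below mirrors `ceph status --format json` but is an illustrative assumption, not captured from a real cluster:

```python
# Sketch: gate automation on cluster health, treating muted checks as OK.
# The SAMPLE document is an assumed, illustrative approximation of
# `ceph status --format json` output, not real cluster output.
import json

def effective_health(status):
    """Return the health level after ignoring muted checks."""
    checks = status.get("health", {}).get("checks", {})
    unmuted = {n: c for n, c in checks.items() if not c.get("muted", False)}
    if any(c.get("severity") == "HEALTH_ERR" for c in unmuted.values()):
        return "HEALTH_ERR"
    if unmuted:
        return "HEALTH_WARN"
    return "HEALTH_OK"

SAMPLE = json.loads("""{
  "health": {
    "status": "HEALTH_ERR",
    "checks": {
      "OSD_UNREACHABLE": {"severity": "HEALTH_ERR", "muted": true}
    }
  }
}""")
```

With `OSD_UNREACHABLE` muted (as after `ceph health mute OSD_UNREACHABLE 100d`), `effective_health(SAMPLE)` evaluates to `HEALTH_OK`, so a do-something-and-wait-for-HEALTH_OK loop can proceed.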