Hi Alex,
Were you upgrading to 19.2.0?
There should be a fix available in 19.2.1 for the issue.
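If the cluster is deployed with cephadm, picking up the fix once 19.2.1 is out should be a matter of something like the following (image tag assumed, adjust to your registry):

    ceph orch upgrade start --image quay.io/ceph/ceph:v19.2.1
    ceph orch upgrade status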
Best,
Laimis J.
> On 4 Mar 2025, at 12:30, Alex from North wrote:
>
> the answer he
Hello everybody!
Running 19.2.0, I've run into an issue I still can't get past.
It is: Module 'devicehealth' has failed: Expecting value: line 1 column 2378 (char 2377)
In the MGR log I see:
Mar 04 12:48:07 node2.ec.mts ceph-mgr[3821449]: Traceback (most recent call
last):
It's a good question.
Any news?
From: wojiaowugen
Sent: Friday, December 6, 2024 8:07 AM
To: ceph-users@ceph.io
Subject: [ceph-users] When 18.2.5 will be released?
Hi everyone, it's an honor to ask questions here.
Can I ask when 18.2.5 will be released?
Do you have a pool named ".mgr"?
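A quick way to check, assuming a recent release where the mgr keeps its state in that pool:

    ceph osd pool ls detail | grep '\.mgr'
    ceph df | grep '\.mgr'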
Zitat von Alex from North :
> Hello everybody!
> Running 19.2.0, I've run into an issue I still can't get past.
> It is: Module 'devicehealth' has failed: Expecting value: line 1 column 2378 (char 2377)
> In the MGR log I see:
> Mar 04 12:48:07 node2.ec.mts ceph-mgr[3821449]
Yes, I do:

POOL  ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
.mgr  10  1    769 KiB  2        2.3 MiB  0      4.7 PiB
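In case it helps while waiting for 19.2.1, a rough sketch of how one might clear the failed-module state in the meantime (assuming the failure isn't persistent):

    ceph health detail    # shows the MGR_MODULE_ERROR details for devicehealth
    ceph mgr fail         # fail over to a standby mgr so the module reloads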
> it's not Ceph but the network
It's almost always the network ;-)
Ramin: This reminds me of an outage we had at CERN caused by routing /
ECMP / faulty line card.
One of the main symptoms of that is high tcp retransmits on the Ceph nodes.
Basically, OSDs keep many connections open with each other.
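For what it's worth, eyeballing retransmits on a node only needs plain Linux tooling, nothing Ceph-specific:

    nstat -az TcpRetransSegs
    netstat -s | grep -i retrans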
Hi,
I'm facing a critical issue with my Ceph cluster. It has become unable to
read/write data properly and cannot recover normally. What steps should I take
to resolve this?
[root@ceph-node1 ~]# ceph -s
cluster:
id: 76956086-25f5-445d-a49e-b7824393c17b
health: HEALTH_WARN
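The pasted status is cut off before the actual warnings; the usual first steps would be something along the lines of:

    ceph health detail
    ceph osd tree
    ceph pg dump_stuck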
Hi all!
I've run into a sad situation and don't know where to dig. That is why I am here again,
hoping for hints.
Situation:
All the "orch" commands are unavailable and produce this error in the mgr log:
mgr.server reply reply (95) Operation not supported Module 'orchestrator' is
not enabled/loaded (required by comm
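For reference, a rough sketch of getting the orchestrator back, assuming a cephadm-deployed cluster (for Rook the backend name differs):

    ceph mgr module enable cephadm
    ceph orch set backend cephadm
    ceph orch status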
A few years ago, one of our customers complained about latency issues.
We investigated, and the only real evidence we found was also high
retransmit values. So we recommended that they let their network team look
into it. For months they refused to do anything, until they hired
another company to
Found the answer here:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/QJHES7GKTI6O7BT6UBGCHK6WFTJRNJHE/
On Tue, Mar 04, 2025 at 06:46:20PM +, Eugen Block wrote:
> > It's almost always the network ;-)
>
> I know, I have memorized your famous tweet about Ceph being the best network
> monitor 😄
It seems to be ;-)
When I spun up my small cluster, I used a noname 10G switch. Ceph
complained bitterly.
Hi, we have a cluster in which we have lost the OSDs containing 5 PGs.
How do we proceed to get the cluster back working? Of course we will lose data.
We cannot upgrade or downgrade with PGs in an unknown state.
Best regards
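For the record, when the OSDs holding a PG are gone for good, the usual (data-destroying) last resort looks roughly like this; a sketch, not something to run blindly:

    ceph pg dump_stuck inactive
    ceph osd force-create-pg <pgid> --yes-i-really-mean-it    # recreates the PG empty; any data in it is lost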
It's almost always the network ;-)
I know, I have memorized your famous tweet about Ceph being the best
network monitor 😄
and there hasn’t been a single week that I haven’t thought about that. 🙂
Zitat von Dan van der Ster :
it's not Ceph but the network
It's almost always the network ;-)