Hi,
you could try to bind only to v1 [1] by setting
ms_bind_msgr2 = false
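A minimal sketch of how that option could be applied (the runtime command assumes a cluster managed via the centralized config; restart the daemons afterwards for it to take effect):

```shell
# Disable binding to the msgr2 protocol cluster-wide at runtime:
ceph config set global ms_bind_msgr2 false

# Or set it in ceph.conf before the daemons start:
# [global]
# ms_bind_msgr2 = false
```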
Regards,
Eugen
[1] https://docs.ceph.com/en/latest/rados/configuration/msgr2/
Quoting Void Star Nill:
Hello,
I am running a nautilus cluster. Is there a way to force the cluster to use
msgr-v1 instead of msgr-v2?
Hi,
Is there anybody tried to migrate data from Hadoop to Ceph?
If yes what is the right way?
Thank you
Hello,
I am running a nautilus cluster. Is there a way to force the cluster to use
msgr-v1 instead of msgr-v2?
I am debugging an issue and it seems like it could be related to the msgr
layer, so I want to test it by using msgr-v1.
Thanks,
Shridhar
Hello,
I am running version 14.2.13-1xenial and I am seeing a lot of logs from the
msgr2 layer on the OSDs. Attached are some of the logs. It looks like these
logs are not controlled by the standard log level configuration, so I
couldn't find a way to disable them.
I am concerned that these logs
Hi,
this seems to be a known issue [1], apparently it could be resolved by
executing:
ceph osd require-osd-release mimic
Before you do that, you should check what the current value is:
ceph01:~ # ceph osd dump | grep require_osd
require_osd_release nautilus
Regards,
Eugen
[1]
https://w
Hi,
I am trying to read a file from my ceph kernel mount; the read stalls for a
very long time and I am getting the below error msg in dmesg.
[ 167.591095] ceph: loaded (mds proto 32)
[ 167.600010] libceph: mon0 10.0.103.1:6789 session established
[ 167.601167] libceph: client144519 fsid f8bc768
Hi,
hmm,
ceph osd require-osd-release mimic
fixed the issue
Regards
Ingo
On 05.11.20 at 15:28, Ingo Ebel wrote:
Hi,
we upgraded our ceph cluster from 14.2.9 to 15.2.5 but osds with 15.2.5
are not joining the cluster after restart. They hang with "1234 tick
checking mon for new map"
Hi
We had a rack down for 2 hours for maintenance. 5 storage nodes were
involved. We had the noout and norebalance flags set before the start of
the maintenance.
When the systems were brought back online we noticed a lot of osds with
high latency (in the 20-second range). Mostly osds that are not on the
s
Hi,
we upgraded our ceph cluster from 14.2.9 to 15.2.5 but osds with 15.2.5
are not joining the cluster after restart. They hang with "1234 tick
checking mon for new map"
The systems are CentOS 7.8.
I tried everything I could think of, but nothing helped. The mons and
mgrs are 15.2.5.
Anyo
Hi Oliver
Review this "step by step" guide to see if you forgot something:
BR
NFS:
1. chmod +x cephadm
2. ./cephadm bootstrap
   - Record the dashboard user & password printed out at the end
3. ADD OTHER HOSTS (assuming 3+ total after adding)
4. ./cephadm shel
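The first steps of the guide above could look roughly like this (the monitor IP and host names are illustrative, and the original mail is truncated, so the later steps are only a hedged sketch):

```shell
# Make the cephadm binary executable and bootstrap the first monitor;
# note the dashboard user & password printed at the end
chmod +x cephadm
./cephadm bootstrap --mon-ip 10.0.0.1

# Add further hosts to the cluster (host names are placeholders):
ceph orch host add host2
ceph orch host add host3
```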
Hi,
I set up a small 3-node cluster as a POC. I bootstrapped the cluster with
separate networks for frontend (public network 192.168.30.0/24) and
backend (cluster network 192.168.41.0/24).
1st small question:
After the bootstrap, I noticed that I had mixed up the cluster and public
networks.
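One way to correct swapped networks after bootstrap (a hedged sketch; the subnets are the ones from the mail above, and the OSDs pick up the new values on restart):

```shell
# Reassign the networks to their intended roles via the centralized config:
ceph config set global public_network 192.168.30.0/24
ceph config set global cluster_network 192.168.41.0/24

# Verify the settings took:
ceph config get mon public_network
ceph config get osd cluster_network
```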