Hi,
we are currently testing ways to increase Ceph performance, because
what we experience so far is very close to unusable.
For the test cluster we are utilizing 4 nodes with the following hardware:
Dual 200GbE Mellanox Ethernet
2x EPYC Rome 7302
16x 32GB 3200MHz ECC
9x 15.36TB Micron 93…
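For a baseline number, something like rados bench could quantify "close to unusable" (a sketch; the pool name is just an example):
ceph osd pool create testbench 64 64
rados bench -p testbench 30 write --no-cleanup
rados bench -p testbench 30 seq
rados -p testbench cleanup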
…reason, but try it manually and restart the
OSD services. I haven't had to go through these steps in a while (with
cephadm only once in a test cluster), so I'm not sure what we could be
missing here.
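On a cephadm deployment that manual restart would be along these lines (a sketch, using the fsid and daemon name from this thread):
systemctl restart ceph-10489760-1723-11ec-8050-cb54d51756be@osd.28.service
or via the orchestrator:
ceph orch daemon restart osd.28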
Quoting Dominik Baack:
Hi,
OSDs (9 SSDs) and mons are currently on the same nodes with 2…
…the public network, you should modify both networks and restart the
OSD services.
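A minimal sketch of that, with the subnets that appear later in this thread:
ceph config set global public_network 129.217.31.176/29
ceph config set global cluster_network 129.217.31.184/29
ceph orch daemon restart osd.28   # repeat per OSD daemon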
Quoting Dominik Baack:
Hi,
Reconfiguration:
ceph orch daemon reconfig osd.28
Scheduled to reconfig osd.28 on host 'ml2rsn05'
cephadm ['--image', 'quay.io/ceph/ceph@sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a', 'deploy', '--fsid', '10489760-1723-11ec-8050-cb54d51756be', '--name', 'osd.28', …
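What the daemon actually picked up after the reconfig can be checked with (a sketch):
ceph config show osd.28 | grep -E 'public_network|cluster_network'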
On 19.12.2022 at 15:49, Dominik Baack wrote:
Hi,
ceph.conf on the host as well as in "cephadm shell" is the same and
has the correct IPs set. Entries for the OSDs are not present.
# minimal ceph.conf for 10489760-1723-11ec-8050-cb54d51756be
[global]
fsid = 10489760-1723-11ec-8050-cb54d51756be
mon_host = [v2:129.217.31.186…
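That minimal file can also be regenerated from the cluster's current state and redistributed (a sketch):
ceph config generate-minimal-conf > /etc/ceph/ceph.conf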
…effects in a test cluster.
Regards,
Eugen
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/OWSIXB5O3Y7LXC3D2JYETEAFMRC3K7OY/
Quoting Dominik Baack:
Hi,
min_mon_release 17 (quincy)
election_strategy: 1
0: [v2:129.217.31.189:3300/0,v1:129.217.31.189:6789/0] mon.ml2rsn0…
after restarting ceph.target.
Cheers
Dominik
On 19.12.2022 at 09:24, Eugen Block wrote:
If you did it "the right way" then 'ceph mon dump' should reflect the
changes. Does that output show the new IP addresses?
Quoting Dominik Baack:
Hi,
I removed/added the new mons following
…/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address-the-messy-way
Cheers
Dominik Baack
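For reference, the "messy way" boils down to editing the monmap offline while the mons are stopped (a sketch; mon ID and address are placeholders taken from this thread):
ceph mon getmap -o /tmp/monmap   # grab the map while quorum still exists
monmaptool --print /tmp/monmap
monmaptool --rm ml2rsn05 /tmp/monmap
monmaptool --add ml2rsn05 129.217.31.189:6789 /tmp/monmap
ceph-mon -i ml2rsn05 --inject-monmap /tmp/monmap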
On 17.12.2022 at 10:06, Eugen Block wrote:
Hi,
did you also change the monmap as described in the docs [1]? There
have been multiple threads on this list with the same topic. Simply
changing…
Hi,
we have switched our network from IP over IB to an Ethernet configuration.
I could move the mons over to the new address range and they connect
to the cluster network. The OSDs are more of a problem, even after
setting
public_network 129.217.31.176/29
cluster_network 129.217.31.184/29
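Whether the OSDs actually registered on the new ranges is visible in the OSD map (a sketch):
ceph osd dump | grep 'osd\.'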
…ked out. I hope this creates no further problems down the
line when I want to reintegrate a new sn01 node.
Thanks
Dominik
On 21.07.2022 at 12:01, Robert Gallop wrote:
Did you try ceph orch daemon rm already?
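E.g. (a sketch; the daemon name is an assumption):
ceph orch daemon rm osd.28 --force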
On Thu, Jul 21, 2022 at 3:58 AM Dominik Baack wrote:
Hi,
after removing a node fro…
4/4 8m ago 3M ml2rsn01;ml2rsn03;ml2rsn05;ml2rsn06;ml2rsn07
you can see that it's still present somewhere.
How can I remove the last error until we get a replacement?
Cheers
Dominik Baack
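If the dead host is still listed in the OSD service spec's placement, exporting the spec, dropping the host, and re-applying it should clear that (a sketch; assuming ml2rsn01 is the host in question):
ceph orch ls osd --export > osd-spec.yaml
# edit osd-spec.yaml: remove ml2rsn01 from the placement hosts, then:
ceph orch apply -i osd-spec.yaml
ceph orch host rm ml2rsn01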
Thank you very much for your response.
For now, we do not use IPoIB. If I remember correctly, bonding interfaces
over two distinct protocols could be problematic, but I will look into it.
Cheers
Dominik Baack
On 13.09.2021 at 09:38, Robert Sander wrote:
On 10.09.21 at 20:06, Dominik Baack wrote:
Hi,
we are currently trying to deploy CephFS on 7 storage nodes connected by
two InfiniBand ports and an Ethernet port for external communication.
For various reasons the network interfaces are mapped to the same IP
range, e.g. x.x.x.15y (eno1), x.x.x.17y (ib1), x.x.x.18y (ib2) with x
constant…
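One hedged sketch of how the roles could then be split by subnet in ceph.conf (Ceph binds each daemon to the interface whose address matches the configured network; the subnets here are placeholders):
[global]
public_network = <eno1 subnet>
cluster_network = <ib1/ib2 subnet>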