Hello list,
I'm having a serious issue: my Ceph cluster has become unresponsive. I was
upgrading my cluster (3 servers, 3 monitors) from 13.2.1 to 13.2.2, which
shouldn't be a problem.
However, on reboot my first host reported:
starting mon.ceph01 rank -1 at 192.168.200.197:6789/0 mon_data
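A note for context: rank -1 in a line like that typically means the monitor does not find itself in the monmap it loaded (for example because its address changed). A quick way to inspect the on-disk monmap, as a sketch — the output path is a placeholder and the mon daemon should be stopped first:

    # on the affected host, with the mon daemon stopped
    ceph-mon -i ceph01 --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap

    # if the rest of the quorum is still reachable, compare with:
    ceph mon dump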
Dear mailing list,
I've been struggling to find a working configuration of cluster network /
cluster addr, or even public addr.
* Does Ceph accept multiple values for these in ceph.conf (I wouldn't say so
based on my tests)?
* Shouldn't public network be your internet-facing range and cluster net […]
the network will be the bottleneck and no remarkable speed boost will be
noticed (see the config sketch after this list).
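For reference, a minimal sketch of the two settings in question in ceph.conf (the subnets below are placeholders; whether a comma-separated list of several subnets is accepted may depend on the release):

    [global]
    # client-facing traffic (clients and monitors talking to OSDs)
    public network = 192.168.200.0/24
    # OSD-to-OSD replication and recovery traffic
    cluster network = 10.10.10.0/24

Per-daemon overrides via public addr / cluster addr in the daemon sections should also be possible, but the subnet form above is the usual starting point.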
Back to the interwebz for research 😊
From: ceph-users on behalf of Nino Bosteels
Sent: 19 July 2018 16:01
To: ceph-users@lists.ceph.com
Subject: [ceph-users] [RBD]Replace
[…]comparisons? 1 disk per OSD I guess, but then, how many cores per disk or
stuff like that?
Thanks in advance.
Nino Bosteels
Hi,
Anyone got tips on how to best benchmark a Ceph block device (RBD)?
I've currently found the more traditional ways (dd, iostat, bonnie++, the
Phoronix Test Suite) and fio, which actually supports the rbd engine.
Though there's not a lot of information about it to be found online (contrary
to […]
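For anyone digging through the archives later, a minimal fio job using the rbd engine could look like the sketch below (pool and image names are placeholders, the image has to exist beforehand, and fio has to be built with rbd support):

    # create a throw-away test image first (names are placeholders)
    rbd create fio_test --size 2048 --pool rbd

Then the job file (rbd.fio):

    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=fio_test
    rw=randwrite
    bs=4k
    runtime=60
    time_based=1

    [rbd_iodepth32]
    iodepth=32

Run it with "fio rbd.fio". Since the rbd engine talks to librbd directly, it bypasses the kernel RBD mapping, which is why comparing it against dd on a mapped /dev/rbd device can give quite different numbers.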