Hi all,
We are running the Nautilus cluster. Today due to UPS work, we shut
down the whole cluster.
After we started the cluster, many OSDs went down, and they seem to be
doing the heartbeat_check over the public network. For example, we
see the following logs:
---
2023-05-16 19:35:29.254 7efcd
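For context: OSD heartbeats run over both the front (public) and back (cluster) interfaces, so both networks must be reachable when the OSDs come back up. A minimal ceph.conf sketch of the two settings involved; the subnets below are made-up examples, not our actual addressing:

```
[global]
public_network  = 192.168.1.0/24
cluster_network = 192.168.2.0/24
```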
On Wed, 2023-05-17 at 17:23 +, Marc wrote:
> >
> >
> > In fact, when we start up the cluster, we don't have DNS available to
> > resolve the IP addresses, and for a short while all OSDs are located
> > in a new host called "localhost.localdomain". At that point, I fixed
> > it b
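One way to avoid the DNS dependency is to pin the cluster hosts' names in /etc/hosts on every node, so name resolution works before DNS is up; OSDs that landed in the wrong host bucket can then be moved back with `ceph osd crush move`. A sketch, where the hostnames and addresses are made-up examples:

```
# /etc/hosts on every cluster node -- static entries so OSD startup
# does not depend on external DNS (names and addresses are examples)
192.168.1.11  ceph-osd01
192.168.1.12  ceph-osd02
```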
Hello,
We use purely cephfs in our Ceph cluster (version 14.2.7). The cephfs
data is an EC pool (k=4, m=2) with hdd OSDs using bluestore. The
default file layout (i.e. 4MB object size) is used.
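For a k=4, m=2 EC profile the raw-to-usable arithmetic is fixed: every logical byte occupies 1.5 raw bytes, and each 4 MiB object is striped into four 1 MiB data chunks plus two 1 MiB coding chunks. A small sketch of the numbers (the 600 TiB raw figure is just an example, not our cluster's size):

```python
# Capacity arithmetic for an EC pool with k=4 data and m=2 coding chunks.
k, m = 4, 2

raw_overhead = (k + m) / k        # raw bytes stored per logical byte
usable_fraction = k / (k + m)     # fraction of raw capacity that is usable

object_size = 4 * 2**20           # default 4 MiB file-layout object size
chunk_size = object_size // k     # size of each EC chunk on an OSD

print(raw_overhead)               # 1.5
print(chunk_size)                 # 1048576 bytes = 1 MiB per chunk

# Example: 600 TiB of raw capacity in this pool's device class
raw_tib = 600
print(raw_tib * usable_fraction)  # 400.0 TiB usable, before nearfull margins
```

This is why `ceph df` shows USED growing 1.5x faster than the logical data stored in the pool.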
We see the following output of ceph df:
---
RAW STORAGE:
CLASS    SIZE    AVAIL    USED
Hi,
We use the official Ceph RPM repository (http://download.ceph.com/rpm-
nautilus/el7) for installing packages on the client nodes running
CentOS7.
But we noticed today that the repo only provides the latest nautilus
version (2:14.2.10-0.el7), so we couldn't install an older one
(2:14.2.7-0
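One workaround is to keep a local mirror of the repo (e.g. synced with reposync and indexed with createrepo) so that older point releases stay installable after upstream removes them, and point yum at the mirror. A sketch of the repo file; the local path is an assumption:

```
# /etc/yum.repos.d/ceph-local.repo -- sketch; baseurl path is an example
[ceph-local]
name=Local mirror of Ceph nautilus packages
baseurl=file:///srv/mirror/ceph-nautilus/el7
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
```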
Hi,
On Thu, 2020-07-02 at 16:15 +0200, Janne Johansson wrote:
On Thu, 2 Jul 2020 at 14:42, Lee, H. (Hurng-Chun) <h@donders.ru.nl> wrote:
Hi,
We use the official Ceph RPM repository (http://download.ceph.com/rpm-
nautilus/el7) fo
Hi Tadas,
I also noticed the same issue a few days ago.
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/GDUSELT7B3NY7NBU2XHZP6CRHE3OSD6A/
I have reported it to the developers via the ceph-devel IRC. I was told
that it would be fixed by the coming Friday at the earliest.
Hong
On Wed, 2