Hi,
Update to 15.2.5. We had the same issue; the release notes don't mention
anything about multisite, but once we updated to 15.2.5 everything started
to work.
Best regards
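
For reference, a minimal sketch of how the update might be applied on a
cephadm-managed cluster (assuming cephadm; on package-based installs you
would upgrade the ceph packages and restart the daemons instead):

    # check which versions the daemons are currently running
    ceph versions

    # start a rolling upgrade to 15.2.5 and watch its progress
    ceph orch upgrade start --ceph-version 15.2.5
    ceph orch upgrade status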
From: Michael Breen
Sent: Friday, November 6, 2020 10:40 PM
To: ceph-users@ceph.io
Subject: [Suspicious ne
Hi Frank,
You said only one OSD is down, but ceph status shows more than 20 OSDs
down.
Regards,
Amudhan
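
As a rough sketch, these standard commands show exactly which OSDs the
cluster currently considers down:

    # overall health, including the up/in OSD counts
    ceph -s

    # list only the OSDs marked down, with their place in the CRUSH tree
    ceph osd tree down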
On Sun, 8 Nov 2020, 12:13 AM Frank Schilder wrote:
> Hi all,
>
> I moved the crush location of 8 OSDs and rebalancing went on happily
> (misplaced objects only). Today, osd.1 crashed, r
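
For context, changing an OSD's CRUSH location is typically done with
something like the following sketch (osd.1, the weight, and the host name
here are placeholder values, not taken from Frank's cluster):

    # re-place osd.1 under a different host bucket in the CRUSH map
    ceph osd crush set osd.1 1.0 root=default host=new-host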
Sorry for the confusion; what I meant to say is that "having all WAL/DB
on one SSD will result in a single point of failure". If that SSD goes
down, all OSDs depending on it will also stop working.
What I'd like to confirm is that there is no benefit to putting WAL/DB
on SSD when there is either cache tier
> On Nov 8, 2020, at 11:30, Tony Liu wrote:
>
> Is it FileStore or BlueStore? With this SSD-HDD solution, is the journal
> or WAL/DB on SSD or HDD? My understanding is that there is no benefit to
> putting the journal or WAL/DB on SSD with such a solution. It will
> also eliminate the single point of failure when hav
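
For what it's worth, sharing one SSD for the WAL/DB of several HDD OSDs is
normally set up at OSD creation time, roughly like this (device names are
examples only):

    # BlueStore OSD: data on an HDD, DB/WAL on an SSD partition or LV
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1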
Hi,
I have mounted my CephFS (Ceph Octopus) through the kernel client on Debian.
I get the following error in "dmesg" when I try to read any file from my mount:
"[ 236.429897] libceph: osd1 10.100.4.1:6891 socket closed (con state
CONNECTING)"
I use the public IP (10.100.3.1) and the cluster IP (10.100.4.1) in my
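
In case it helps: clients only talk to OSDs over the public network; the
cluster network is used purely for traffic between the OSDs themselves. A
sketch of how the two networks are normally declared in ceph.conf (the
subnets below are examples inferred from the addresses quoted above):

    [global]
    # network used by clients and monitors
    public_network  = 10.100.3.0/24
    # back-end network used only between OSDs
    cluster_network = 10.100.4.0/24

Since the kernel client is being pointed at 10.100.4.1 for osd1, checking
these settings and the addresses reported by "ceph osd dump" would be a
reasonable first step.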