Could you create a tracker for this?
Also, if you can reproduce this, could you gather a log with
debug_osd=20? That should show us the superblock it was trying to
decode, as well as additional details.
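A minimal sketch of one way to gather that, assuming a standard systemd
deployment with ceph.conf at /etc/ceph/ceph.conf; <id> is a placeholder
for the failing OSD:

    # In /etc/ceph/ceph.conf on the affected host:
    [osd.<id>]
        debug osd = 20

    # Then restart the failing OSD and collect its log:
    systemctl restart ceph-osd@<id>
    less /var/log/ceph/ceph-osd.<id>.log

Setting it in ceph.conf (rather than injectargs) matters here because the
daemon appears to crash at startup, before it could accept runtime changes.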
On Mon, Aug 12, 2019 at 6:29 AM huxia...@horebdata.cn wrote:
Dear folks,
I had an OSD go down, not because of a bad disk, but most likely because it
hit a bug in RocksDB. Has anyone had a similar issue?
I am using Luminous 12.2.12. The log is attached below.
thanks,
Samuel
[root@horeb72 ceph]#
Hi
I am building a 3-node Ceph cluster to store VM disk images.
We are running Ceph Nautilus with KVM.
Each node has:
  a Xeon 4116 CPU
  512 GB of RAM
  an Optane 905P NVMe disk (980 GB)
Previously, I was creating four OSDs per Optane disk, and using only Optane
disks for all storage.
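For reference, splitting one NVMe device into four OSDs can be done roughly
like this with ceph-volume; the lvm batch sub-command and the device path are
assumptions/examples, not necessarily what was used here:

    # Sketch: carve four OSDs out of a single device.
    # /dev/nvme0n1 is an example path for the Optane drive.
    ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1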
However, if I