Hi,
Doing some lab tests to understand why ceph isn't working for us,
and here's the first puzzle:
setup: a completely fresh Quincy cluster, 64-core EPYC 7713, 2 NVMe drives
> ceph osd crush rule create-replicated osd default osd ssd
> ceph osd pool create rbd replicated osd --size 2
> dd if=/d
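For reference, a few read-only checks to confirm the rule and replica count took effect (standard Quincy CLI; adjust the pool and rule names if yours differ):
> ceph osd crush rule dump osd       # rule should use class ssd with an osd failure domain
> ceph osd pool get rbd crush_rule   # pool should reference that rule
> ceph osd pool get rbd size         # should report size: 2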
Hi Frank,
BTW, what kernel version were you using? This is a bug, and I haven't
seen it with newer kernels.
You can try remounting the mountpoints and it should work.
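In case it helps, a rough sketch of the remount, assuming a kernel-client mount at /mnt/cephfs with the client.admin identity (adjust the path, options and credentials to your setup):
> umount /mnt/cephfs
> mount -t ceph :/ /mnt/cephfs -o name=admin
If the filesystem is in /etc/fstab, a plain "mount /mnt/cephfs" after the umount is enough.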
Thanks
- Xiubo
On 09/03/2023 17:49, Frank Schilder wrote:
Hi all,
we seem to have hit a bug in the ceph
Hello Baergen,
Thanks for your reply. Restarting the OSD is planned, but my version is 15.2.7,
so I may have encountered the problem you mentioned. Could you point me to the PR
that optimizes this mechanism? Besides that, if I don't want to upgrade
in the near term, would a good approach be to adjust
osd_pool_default
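For example, something along these lines at runtime (osd_pool_default_size is only a placeholder here, not necessarily the option meant above):
> ceph config get mon osd_pool_default_size         # check the current default
> ceph config set global osd_pool_default_size 3    # placeholder option and value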
Hi:
I encountered a problem when installing cephadm on Huawei Cloud EulerOS. When I
enter the following command, it raises an error. What should I do?
>> ./cephadm add-repo --release quincy
<< ERROR: Distro hce version 2.0 not supported
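A possible workaround, assuming the curl-based install from the Quincy docs also works on HCE, is to skip add-repo and fetch the standalone cephadm script directly (podman or docker still needs to be installed separately):
> curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
> chmod +x cephadm
> ./cephadm version   # quick smoke test before bootstrapping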