Hi Everybody (Hi Dr. Nick),
TL/DR: Is it possible to have a "2-Layer" CRUSH map?
I think it is (although I'm not sure about how to set it up).
My issue is that we're using 4-2 erasure coding on our OSDs, with 7 OSDs
per OSD node (yes, the cluster is handling things AOK - we're running at
abou
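For what it's worth, a "two-layer" placement like that is usually expressed
as a single CRUSH rule with two choose steps (hosts first, then OSDs within
each host) rather than a second map. A rough sketch, assuming the goal is a
4-2 EC profile spread as 2 shards on each of 3 hosts - the rule name and
counts are illustrative only:
---snip---
rule ec42_two_layer {
    id 2
    type erasure
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take default
    # first layer: pick 3 distinct hosts
    step choose indep 3 type host
    # second layer: pick 2 OSDs inside each chosen host
    step chooseleaf indep 2 type osd
    step emit
}
---snip---
Compiled with crushtool -c and injected with ceph osd setcrushmap -i as usual.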
Hi Daniel,
I also needed to add the mds_namespace in my definition...
But did you also forget to specify the fs-type = "ceph"?
This is my entry in fstab:
10.3.1.23:6789,10.3.1.26:6789,10.3.1.28:6789:/ /srv/poolVMS ceph name=admin,mds_namespace=poolVMS,noatime,_netdev 0
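If you want to test the options by hand before touching fstab, the same
thing should also work as a one-off mount (monitors and paths as in the
line above):
---snip---
mount -t ceph 10.3.1.23:6789,10.3.1.26:6789,10.3.1.28:6789:/ /srv/poolVMS \
    -o name=admin,mds_namespace=poolVMS,noatime
---snip---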
Hello guys.
I have a question regarding HA.
I set up two hosts with cephadm, created the pools and set up an NFS,
everything working so far. I turned off the second host and the first one
continued to work without problems, but if I turn off the first, the second
is totally unresponsive. What co
Hi,
do both nodes have the MON and OSD roles? If there's only one MON and
you shut it down, the cluster is down, of course. If the maps don't
change too quickly it's possible that your clients still communicate
with their respective OSDs so they don't immediately notice failed
MONs. This h
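On a cephadm cluster you can check where the daemons actually run and
whether the MON quorum would survive a host outage with something like:
---snip---
ceph orch ps --daemon-type mon    # which hosts run a MON
ceph orch ps --daemon-type osd    # which hosts carry OSDs
ceph mon stat                     # current quorum members
# hostnames below are placeholders - e.g. grow to 3 MONs:
ceph orch apply mon --placement="host1,host2,host3"
---snip---
Note that with only two MONs, losing either one already breaks quorum, so
for real HA you need a third MON on another box.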
No, I actually included the ceph fstype, just not in my example (the
initial post), but the key is really mds_namespace for specifying the
filesystem; this should be included in the documentation.
Thanks
Daniel
On Sun, 25 Sept 2022 at 02:13, Dominique Ramaekers wrote:
>
> Hi Daniel,
>
> I also
Hi,
the docs [1] show how to specify the rgw configuration via a yaml file
(similar to OSDs).
If you applied it with ceph orch you should see your changes in the
'ceph config dump' output, or like this:
---snip---
ses7-host1:~ # ceph orch ls | grep rgw
rgw.ebl-rgw   ?:80   2/2   33
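For reference, the yaml the docs describe looks roughly like this (the
service id and port are taken from the output above, the host list is just
an assumption for a 2/2 placement):
---snip---
service_type: rgw
service_id: ebl-rgw
placement:
  hosts:
    - ses7-host1
    - ses7-host2
spec:
  rgw_frontend_port: 80
---snip---
Applied with 'ceph orch apply -i rgw.yaml'; you can review the stored spec
afterwards with 'ceph orch ls rgw --export'.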
On Sat, 24 Sep 2022 at 23:38, Murilo Morais wrote:
> I'm relatively new to Ceph. I set up a small cluster with two hosts, 12
> disks per host, all 3 TB SAS 7500 RPM, and two 10 Gigabit interfaces. I
> created a pool in replicated mode and configured it to use two replicas.
>
> What I'm finding