Hi everybody,
I've set up a small Ceph cluster for a CephFS service.
- 5 nodes running AlmaLinux 9 and the Squid release of Ceph
- 4 HDDs per node (so 20 HDDs in the cluster)
- Erasure coding in k=6+m=2 (ceph osd erasure-code-profile set
  ec-62-profile-isa k=6 m=2 crush-failure-domain=host plugin=isa); see
  the sketch after this list
- two c
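
A minimal sketch of how such a profile is typically wired into CephFS,
assuming a hypothetical data pool name, an illustrative PG count, and an
existing filesystem named "cephfs" (none of these come from the post
above):

  # The erasure-code profile from the post
  ceph osd erasure-code-profile set ec-62-profile-isa \
      k=6 m=2 crush-failure-domain=host plugin=isa

  # Create an EC data pool from the profile (pool name and PG count
  # are illustrative assumptions)
  ceph osd pool create cephfs.data.ec 256 erasure ec-62-profile-isa

  # CephFS data on an erasure-coded pool requires overwrites
  ceph osd pool set cephfs.data.ec allow_ec_overwrites true

  # Attach the EC pool as an additional data pool (the default data
  # pool is usually kept replicated)
  ceph fs add_data_pool cephfs cephfs.data.ec

Directories can then be pointed at the EC pool with a file layout, e.g.
setfattr -n ceph.dir.layout.pool -v cephfs.data.ec on the target
directory.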
Hello everyone,
I have some considerations and questions I'd like to raise...
I work at an HPC center, and my questions stem from performance in this
environment. All of the clusters here were suffering from poor NFS
performance, as well as from the single point of failure it introduces.