Hi everybody,
I've set up a small Ceph cluster for a CephFS service.
- 5 nodes running AlmaLinux 9 and the Squid release of Ceph
- 4 HDDs per node (so 20 HDDs in the cluster)
- Erasure coding with k=6, m=2 (ceph osd erasure-code-profile set
ec-62-profile-isa k=6 m=2 crush-failure-domain=host plugin=isa)
- two chunks per physical node (I added a "virtual" host bucket on each
node and moved 2 HDDs into it)
- only one network interface (25 Gb/s Ethernet)
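For completeness, the virtual-bucket setup was roughly the following (shown
for whitaker01 only, OSD ids as in the osd tree below; commands written from
memory, so adapt names/ids — and note crush move syntax for OSDs can vary
slightly between releases):

```shell
# EC profile: 6 data + 2 coding chunks, host failure domain, ISA-L plugin.
ceph osd erasure-code-profile set ec-62-profile-isa \
    k=6 m=2 crush-failure-domain=host plugin=isa

# One "virtual" host bucket per physical node, so each physical node
# presents two host buckets and holds at most 2 of the 8 chunks.
ceph osd crush add-bucket whitaker01-virt host
ceph osd crush move whitaker01-virt root=default

# Move 2 of the 4 OSDs of the node into the virtual bucket.
ceph osd crush move osd.7 host=whitaker01-virt
ceph osd crush move osd.18 host=whitaker01-virt
```

Repeat the last three commands for each of the five nodes.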
I'm testing I/O from another cluster (used for computation) with a
workflow similar to our needs: several MPI processes, each writing to a
different file. The client node is connected to the same switch, 25 Gb
Ethernet, same VLAN (no router).
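Without MPI, the workload can be approximated with independent parallel
writers, one file each (paths and sizes here are illustrative; the real test
targets the CephFS mount with 5120 MB per process):

```shell
# N independent writers, each streaming to its own file, as a stand-in
# for the MPI job. dd reports per-writer throughput on completion.
NPROC=4
SIZE_MB=8          # small for illustration; 5120 in the actual test
DIR=$(mktemp -d)   # would be a directory on the CephFS mount
for i in $(seq 1 "$NPROC"); do
  dd if=/dev/zero of="$DIR/writer-$i.bin" bs=1M count="$SIZE_MB" oflag=dsync &
done
wait
ls -l "$DIR"
```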
1 process writing/reading 20 GB shows around *700 MB/s write and
600 MB/s read*, so far from the maximum network bandwidth.
4 processes writing/reading 5 GB each (20 GB aggregated) show around
*900 MB/s write* (225 MB/s per process) and *800 MB/s read* (200 MB/s
per process).
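As a back-of-envelope — assuming roughly 150 MB/s sustained per HDD, which
is an assumption, not a measurement — the cluster-side ceilings look like
this:

```shell
hdd_mb_s=150                 # assumed sustained sequential rate per HDD
n_osds=20
k=6; m=2

# Raw aggregate disk bandwidth across all OSDs.
raw=$((n_osds * hdd_mb_s))                 # 3000 MB/s

# EC writes k+m chunks for every k of client data, so the usable
# client write ceiling is raw * k / (k + m).
ec_write_ceiling=$((raw * k / (k + m)))    # 2250 MB/s

# A 25 Gb/s NIC is about 3125 MB/s.
nic_mb_s=$((25000 / 8))

echo "EC write ceiling: ${ec_write_ceiling} MB/s, NIC: ${nic_mb_s} MB/s"
```

If those assumptions are near the truth, neither the disks in aggregate nor
the client NIC explain the ~900 MB/s plateau, which is why I suspect
per-stream latency/striping or something in my setup rather than raw
bandwidth.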
Is this what I can expect from this setup and cluster architecture (I
can't find any reference figures), or is something wrong in my first
Ceph cluster setup?
Of course the goal is to add HDDs (for capacity storage) and servers
over the coming years; this is the starting point of a Ceph deployment
in the laboratory.
I'm comparing against my old NFS server (10 Gb Ethernet, two arrays of
8 HDDs each in hardware RAID 6, striped), and the throughput improvement
in my tests is small.
Thanks
Patrick
[ceph: root@whitaker01-ceph /]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 254.66797 root default
-11 25.46680 host whitaker01-ceph
4 hdd 12.73340 osd.4 up 1.00000 1.00000
13 hdd 12.73340 osd.13 up 1.00000 1.00000
-13 25.46680 host whitaker01-virt
7 hdd 12.73340 osd.7 up 1.00000 1.00000
18 hdd 12.73340 osd.18 up 1.00000 1.00000
-7 25.46680 host whitaker02-ceph
0 hdd 12.73340 osd.0 up 1.00000 1.00000
11 hdd 12.73340 osd.11 up 1.00000 1.00000
-15 25.46680 host whitaker02-virt
8 hdd 12.73340 osd.8 up 1.00000 1.00000
17 hdd 12.73340 osd.17 up 1.00000 1.00000
-5 25.46680 host whitaker03-ceph
2 hdd 12.73340 osd.2 up 1.00000 1.00000
12 hdd 12.73340 osd.12 up 1.00000 1.00000
-17 25.46680 host whitaker03-virt
5 hdd 12.73340 osd.5 up 1.00000 1.00000
15 hdd 12.73340 osd.15 up 1.00000 1.00000
-3 25.46680 host whitaker04-ceph
3 hdd 12.73340 osd.3 up 1.00000 1.00000
10 hdd 12.73340 osd.10 up 1.00000 1.00000
-19 25.46680 host whitaker04-virt
6 hdd 12.73340 osd.6 up 1.00000 1.00000
16 hdd 12.73340 osd.16 up 1.00000 1.00000
-9 25.46680 host whitaker05-ceph
1 hdd 12.73340 osd.1 up 1.00000 1.00000
14 hdd 12.73340 osd.14 up 1.00000 1.00000
-21 25.46680 host whitaker05-virt
9 hdd 12.73340 osd.9 up 1.00000 1.00000
19 hdd 12.73340 osd.19 up 1.00000 1.00000
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io