Hi all

We enabled CephFS on our Ceph Cluster consisting of:
- 3 Monitor servers
- 2 Metadata servers
- 24 OSDs (3 OSDs per server)
- Spinning disks, OSD journals on SSD
- Public and cluster network separated, both 1 Gbit/s (a ceph.conf sketch follows below)
- Release: Jewel 10.2.3
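
The network split is just the usual two-network setup in ceph.conf; a minimal sketch (the subnets below are placeholders, not our actual ranges):

[global]
    # subnets are placeholders, not our actual ranges
    public network  = 10.0.1.0/24
    cluster network = 10.0.2.0/24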

With CephFS we reach roughly one third of the write performance we get
with RBD. There have been other discussions on this list about RBD
outperforming CephFS, but it would be interesting to see more concrete
figures on the topic.
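
If you want to reproduce the setup, mounting the two test filesystems can look roughly like this (kernel clients assumed; monitor address, image name, size and the XFS choice are only examples, not necessarily what we use):

# mount -t ceph 10.0.1.11:6789:/ /data_cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# rbd create testvol --size 102400        # example image name and size
# rbd map rbd/testvol
# mkfs.xfs /dev/rbd0
# mount /dev/rbd0 /data_rbd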

*Writes on CephFS*:

# dd if=/dev/zero of=/data_cephfs/testfile.dd bs=50M count=1 oflag=direct
1+0 records in
1+0 records out
52428800 bytes (52 MB) copied, 1.40136 s, *37.4 MB/s*

# dd if=/dev/zero of=/data_cephfs/testfile.dd bs=500M count=1 oflag=direct
1+0 records in
1+0 records out
524288000 bytes (524 MB) copied, 13.9494 s, *37.6 MB/s*

# dd if=/dev/zero of=/data_cephfs/testfile.dd bs=1000M count=1 oflag=direct
1+0 records in
1+0 records out
1048576000 bytes (1.0 GB) copied, 27.7233 s, *37.8 MB/s*

*Writes on RBD*:

# dd if=/dev/zero of=/data_rbd/testfile.dd bs=50M count=1 oflag=direct
1+0 records in
1+0 records out
52428800 bytes (52 MB) copied, 0.558617 s, *93.9 MB/s*

# dd if=/dev/zero of=/data_rbd/testfile.dd bs=500M count=1 oflag=direct
1+0 records in
1+0 records out
524288000 bytes (524 MB) copied, 3.70657 s, *141 MB/s*

# dd if=/dev/zero of=/data_rbd/testfile.dd bs=1000M count=1 oflag=direct
1+0 records in
1+0 records out
1048576000 bytes (1.0 GB) copied, 7.75926 s, *135 MB/s*
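
To take both clients out of the picture, it is also worth checking raw RADOS write throughput with rados bench (30 seconds and 16 concurrent ops against the rbd pool are arbitrary example values):

# rados bench -p rbd 30 write -t 16 --no-cleanup
# rados -p rbd cleanup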

Are these measurements reproducible by others? Thanks for sharing your
experience!
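
If it helps with reproducing: a fio run along these lines should give more stable numbers than a single dd stream (block size, file size and queue depth are arbitrary examples):

# fio --name=cephfs-write --directory=/data_cephfs --rw=write --bs=4M --size=1G \
      --direct=1 --ioengine=libaio --iodepth=16 --numjobs=1

The same job pointed at --directory=/data_rbd gives the direct comparison.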

regards
martin