Hi,

Doing some lab tests to understand why Ceph isn't working for us,
and here's the first puzzle:

Setup: a completely fresh Quincy cluster, 64-core EPYC 7713, 2 NVMe drives.

> ceph osd crush rule create-replicated osd default osd ssd
> ceph osd pool create  rbd replicated osd --size 2
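
(In between I also created and mapped an image so that /dev/rbd0 exists,
roughly like this; the image name and size are just placeholders:)

> rbd create rbd/test --size 100G
> rbd map rbd/test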

> dd if=/dev/rbd0 of=/tmp/testfile   status=progress bs=4M count=1000
4194304000 bytes (4.2 GB, 3.9 GiB) copied, 7.0152 s, 598 MB/s

> dd of=/dev/rbd0 if=/tmp/testfile   status=progress bs=4M count=1000
4194304000 bytes (4.2 GB, 3.9 GiB) copied, 3.82156 s, 1.1 GB/s

Write performance is about 1/3 of raw NVMe, which I suppose is expected
(though not great), but why is read performance so bad?

top shows only one core being utilized, at about 40% CPU.
It can't be the network either, since this is all on localhost.
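
My guess is that a single synchronous dd stream simply can't keep the OSDs
busy for reads, while buffered writes get batched by the page cache. To
check, I plan to rerun the read test with parallel direct I/O, something
like this fio job (queue depth and job count are just a first guess):

> fio --name=rbd-read --filename=/dev/rbd0 --rw=read --bs=4M --direct=1 \
>     --ioengine=libaio --iodepth=16 --numjobs=4 --size=4G \
>     --offset_increment=4G --group_reporting

I should probably also look at the readahead on the mapped device
(blockdev --getra /dev/rbd0).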




thanks
Arvid




-- 
+4916093821054