Hi,

We have an all-SSD Ceph cluster (100+ OSDs on enterprise SSDs behind IT-mode 
HBAs) running Hammer 0.94.9 over 10G. It's really stable and we are very happy 
with the performance we are getting. But after a customer ran some tests, we 
noticed something quite strange. The customer ran some benchmarks with fio, 
and the odd thing is that the write tests behaved as expected, while some of 
the read tests did not.

The VM he used was artificially limited via QEMU to 3200 read and 3200 write 
IOPS. On the write side everything works more or less as expected: the results 
get close to 3200 IOPS. The read tests are the ones we don't really understand.
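
For reference, the limit is applied with QEMU's standard per-drive throttling, 
so the -drive definition is equivalent to something along these lines (the 
image path and cache setting here are illustrative; the IOPS values are the 
real ones):

-drive file=rbd:ssd-pool/vm-disk,format=raw,if=virtio,cache=writeback,iops_rd=3200,iops_wr=3200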

We ran the tests with three different fio I/O engines: sync, libaio and POSIX 
AIO. During the write tests the three of them perform quite similarly (which 
is something I did not really expect), but on the read side there is a huge 
difference, as the results below show.
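
The jobs were essentially this (my reconstruction from the parameters below; 
filename, runtime and iodepth are illustrative, and the ioengine / rw lines 
were switched per run):

[global]
direct=1            ; Buffered: No / Direct: Yes
bs=4k
rw=randread         ; rw=randwrite for the write runs
runtime=60
time_based
filename=/dev/vdb   ; illustrative test device inside the VM

[test]
ioengine=libaio     ; also run with ioengine=sync and ioengine=posixaio
iodepth=32          ; only meaningful for the async engines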

Read Results (Random Read - Buffered: No - Direct: Yes - Block Size: 4KB):

LibAIO    - Average: 3196 IOPS
POSIX AIO - Average:  878 IOPS
Sync      - Average:  929 IOPS

Write Results (Random Write - Buffered: No - Direct: Yes - Block Size: 4KB):

LibAIO    - Average: 2741 IOPS
POSIX AIO - Average: 2673 IOPS
Sync      - Average: 2795 IOPS

I would expect some difference between libaio and POSIX AIO, but I would 
expect it in both the read and the write results, not only during reads.
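
In case it helps frame the question: per I/O, my understanding is that the 
three engines boil down inside the guest to roughly the calls below (a 
simplified sketch, not fio's actual code; the device path is illustrative and 
error handling is mostly omitted):

/* Roughly what each fio ioengine issues per 4 KB read.
 * Simplified sketch, not fio's actual code.
 * Build with: gcc sketch.c -o sketch -lrt -laio
 */
#define _GNU_SOURCE
#include <aio.h>        /* POSIX AIO (glibc, serviced by user-space threads) */
#include <errno.h>
#include <fcntl.h>
#include <libaio.h>     /* Linux native AIO, used by ioengine=libaio */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* O_DIRECT needs an aligned buffer, like fio's direct=1 */
    int fd = open("/dev/vdb", O_RDONLY | O_DIRECT);  /* illustrative device */
    if (fd < 0) { perror("open"); return 1; }
    void *buf;
    if (posix_memalign(&buf, 4096, 4096)) return 1;

    /* ioengine=sync: one blocking pread() at a time (queue depth 1) */
    pread(fd, buf, 4096, 0);

    /* ioengine=posixaio: aio_read(); glibc implements this with
     * helper threads that end up doing blocking reads */
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = 4096;
    cb.aio_offset = 0;
    aio_read(&cb);
    while (aio_error(&cb) == EINPROGRESS)
        usleep(100);
    aio_return(&cb);

    /* ioengine=libaio: io_submit() can keep many reads in flight
     * in the kernel at once (fio's iodepth) */
    io_context_t ctx = 0;
    io_setup(32, &ctx);
    struct iocb io, *ios[1] = { &io };
    io_prep_pread(&io, fd, buf, 4096, 0);
    io_submit(ctx, 1, ios);
    struct io_event ev;
    io_getevents(ctx, 1, 1, &ev, NULL);
    io_destroy(ctx);

    free(buf);
    close(fd);
    return 0;
}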

So, I'm quite puzzled by this one... Does anyone have any idea what might be 
going on?

Thanks!

Best regards,
Xavier Trilla P.
Clouding.io <https://clouding.io/>
