[ceph-users] Optimizing terrible RBD performance

2019-10-04 Thread Petr Bena
Hello. If this is too long for you, there is a TL;DR section at the bottom. I created a Ceph cluster made of 3 SuperMicro servers, each with 2 OSDs (WD RED spinning drives), and I would like to optimize the performance of RBD, which I believe is held back by some wrong Ceph configuration, because from my ob…

Re: [ceph-users] Optimizing terrible RBD performance

2019-10-04 Thread Petr Bena
You are testing a single thread / iodepth=1 sequentially here, so only one disk is used at a time, and you have network latency on top. rados bench does 16 concurrent writes. Try testing with fio, for example, with a bigger iodepth, small block / big block, seq/rand. ----- Original Mail ----- From: "Petr Bena…
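The test matrix suggested above (bigger iodepth, small vs. big blocks, sequential vs. random) could be expressed as a fio job file along these lines. This is a sketch, not a job file from the thread: `/dev/rbd0` and all job parameters are assumptions you would adjust to your own mapped RBD image.

```ini
; Hypothetical fio job file, assuming the RBD image is mapped at /dev/rbd0.
; WARNING: write tests destroy data on the target device.
[global]
ioengine=libaio
filename=/dev/rbd0
direct=1
runtime=60
time_based=1

; big sequential writes at a moderate queue depth
[seq-write-4m]
rw=write
bs=4M
iodepth=16

; small random writes at a deeper queue depth; stonewall makes
; this job wait until the previous one has finished
[rand-write-4k]
stonewall
rw=randwrite
bs=4k
iodepth=32
```

Comparing the two jobs separates raw streaming throughput from random-IOPS behavior, which is where spinning-disk OSDs usually fall off.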

Re: [ceph-users] Optimizing terrible RBD performance

2019-10-04 Thread Petr Bena
gives 200 random IOPS per disk, which is acceptable. /Maged On 04/10/2019 17:28, Petr Bena wrote: Hello, I tried to use FIO on an RBD device I just created and writing is really terrible (around 1.5MB/s) [root@ceph3 tmp]# fio test.fio rbd_iodepth32: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W…
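The `rbd_iodepth32` job file Petr ran is not shown in the archive; a job producing that fio banner (randwrite, 4k blocks) typically looks like the sketch below, using fio's librbd engine so no kernel mapping is needed. Pool, image, and client names here are placeholders, not values from the thread.

```ini
; Hypothetical reconstruction of a job like rbd_iodepth32 —
; pool/rbdname/clientname are assumptions; adjust to your cluster.
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=testimage
direct=1

[rbd_iodepth32]
rw=randwrite
bs=4k
iodepth=32
runtime=60
time_based=1
```

With iodepth=32 the client keeps 32 writes in flight, so per-op network and replication latency overlaps instead of serializing, which is why results differ so much from an iodepth=1 run.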