Hi,

I have a problem on one of my Ceph clusters that I do not understand.
Ceph 17.2.5 on 17 servers, 400 HDD OSDs, 10 and 25 Gb/s NICs.

A 3 TB rbd image sits on an erasure-coded 8+3 pool with 128 PGs, with an xfs filesystem, 4 MB objects in the rbd image, mostly empty.
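
For completeness, this is how the layout can be double-checked (pool and image names below are just placeholders; for an EC-backed image the data objects live in the EC data pool):

  rbd info rbd/vm-disk                        # object size, block_name_prefix, data_pool
  ceph osd pool get rbd-ec erasure_code_profile
  ceph osd pool get rbd-ec pg_num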

I have created a bunch of 10 GB files; most of them were written at 1.5 GB/s, but a few of them were really slow, ~10 MB/s, a factor of 100 slower.
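
For illustration, the kind of write test I mean (mount point and file name are placeholders):

  # 10 GiB sequential write with direct I/O, so the page cache does not hide the RADOS write speed
  dd if=/dev/zero of=/mnt/rbd/test10g.1 bs=4M count=2560 oflag=direct status=progress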

When reading these files back, the fast-written ones are read fast, ~2-2.5 GB/s, while the slowly-written ones are also extremely slow to read; iotop shows between 1 and 30 MB/s read speed.
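
The read-back check is the mirror of that, e.g. (again with placeholder paths), with caches dropped first so the data really comes from the cluster:

  sync; echo 3 > /proc/sys/vm/drop_caches
  dd if=/mnt/rbd/test10g.1 of=/dev/null bs=4M iflag=direct status=progress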

This does not happen at all on replicated images. There are some OSDs with higher apply/commit latency, e.g. 200 ms, but there are no slow ops.
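
For reference, the per-OSD latencies can be listed and sorted like this, e.g.:

  # commit/apply latency per OSD, worst ones at the bottom
  ceph osd perf | sort -nk2 | tail -20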

The tests were actually done on a Proxmox VM with librbd, but the same happens with krbd, and on bare metal with a mounted krbd image as well.
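
The bare-metal krbd case is just the usual map and mount, e.g. (placeholder names):

  rbd map rbd/vm-disk && mount /dev/rbd0 /mnt/rbd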

I have tried to check all OSDs for laggy drives, but they all look about the same.
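
A sketch of such a per-OSD check (note this writes ~1 GiB to every OSD, so it adds some load on an already busy cluster):

  # 'ceph tell osd.N bench' writes 1 GiB by default and reports bytes_per_sec
  for osd in $(ceph osd ls); do
    echo -n "osd.$osd "
    ceph tell osd.$osd bench | grep bytes_per_sec
  done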

I have also copied the entire image with "rados get ...", object by object. The strange thing here is that most objects were copied within 0.1-0.2 s, but quite a few took more than 1 s. The cluster is quite busy with a base traffic of ~1-2 GB/s, so speeds can vary because of that, but I would not expect a factor of 100 slowdown for some rbd writes/reads.
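
The object-by-object copy was a loop of roughly this shape (pool and image names are placeholders); a slow object can then be mapped to its PG/OSD set with 'ceph osd map':

  prefix=$(rbd info rbd/vm-disk | awk '/block_name_prefix/{print $2}')
  rados -p rbd-ec ls | grep "$prefix" | while read obj; do
    /usr/bin/time -f "%e s  $obj" rados -p rbd-ec get "$obj" /dev/null
  done
  # for a slow one:  ceph osd map rbd-ec <object-name>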

Any clues as to what might be wrong, or what else to check? I have another, similar Ceph cluster where everything looks fine.

Best,
Andrej

--
_____________________________________________________________
   prof. dr. Andrej Filipcic,   E-mail: andrej.filip...@ijs.si
   Department of Experimental High Energy Physics - F9
   Jozef Stefan Institute, Jamova 39, P.o.Box 3000
   SI-1001 Ljubljana, Slovenia
   Tel.: +386-1-477-3674    Fax: +386-1-477-3166
-------------------------------------------------------------