Hi Xiubo,
Thanks very much for looking into this; that does sound like what might
be happening in our case.
Is this something that can be improved somehow - would disabling pinning or
some config change help? Or could this be addressed in a future release?
It seems somewhat excessive to write so much data per metadata operation.
I see Xiubo started discussing this on
https://tracker.ceph.com/issues/53542 as well.
So the large writes are going to the journal file, and sometimes it's
a single write of a full segment size, which is what I was curious
about.
At this point the next step is seeing what is actually taking up the
space in those journal segments.
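In case it's useful, here is a rough sketch of how one could tally what
the journal actually holds, using cephfs-journal-tool. The filesystem
name "cephfs" and rank 0 are assumptions for the example; adjust for
your cluster:

#!/usr/bin/env python3
# Sketch: summarize MDS journal contents via cephfs-journal-tool.
# The rank string "cephfs:0" (<fs_name>:<mds_rank>) is an assumption.
import subprocess

RANK = "cephfs:0"

def run(*args):
    # Run a cephfs-journal-tool subcommand and return its stdout.
    return subprocess.run(
        ["cephfs-journal-tool", f"--rank={RANK}", *args],
        capture_output=True, text=True, check=True).stdout

# Journal extent/integrity overview.
print(run("journal", "inspect"))
# Per-event-type counts, to see which event types dominate the journal.
print(run("event", "get", "summary"))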
Hi Greg,
As a follow-up, we see items similar to this pop up in the
objecter_requests output (when it's not empty). I'm not sure if I'm
reading it right, but some appear quite large (in the MB range?):
{
    "ops": [
        {
            "tid": 9532804,
            "pg": "3.f9c235d7",
            "osd": 2,
Hi Greg,
Many thanks for the reply; the image is also available at:
https://tracker.ceph.com/attachments/download/5808/Bytes_per_op.png
How the graph is generated: we back the CephFS metadata pool with Azure
ultra SSD disks. Azure reports, for each disk, per-minute averages of
read/write IOPS and throughput; dividing bytes/sec by ops/sec gives the
bytes-per-op values plotted.
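For concreteness, the per-minute arithmetic behind the graph is just
average throughput divided by average IOPS. A small sketch, with
made-up sample values standing in for the Azure metrics:

#!/usr/bin/env python3
# Sketch of the bytes-per-op computation behind the graph.
# The sample values are made up; Azure's per-minute disk metrics
# (average write IOPS and write bytes/sec) would go here instead.
samples = [
    # (avg_write_iops, avg_write_bytes_per_sec) for one-minute windows
    (150.0, 48_000_000.0),
    (200.0, 65_000_000.0),
]

for iops, bytes_per_sec in samples:
    # Both metrics are per-second averages over the same window,
    # so their ratio is the average size of a single write op.
    bytes_per_op = bytes_per_sec / iops
    print(f"{bytes_per_op / 1024:.0f} KiB per write op")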
Andras,
Unfortunately your attachment didn't come through the list. (It might
work if you embed it inline? Not sure.) I don't know if anybody's
looked too hard at this before, and without the image I don't know
exactly what metric you're using to say something's 320KB in size. Can
you explain more?