Hi,

My three-node lab cluster is similar to yours, but with 3x bluestore OSDs per node (4TB SATA spinning disks) and 1x shared DB/WAL device (240GB SATA SSD) per node. I'm only using gigabit networking (one interface public, one interface cluster), and I'm running Ceph 14.2.4 with 3x replicas.
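
If it helps for comparison, a rough sketch of how that kind of shared DB/WAL layout is typically created with ceph-volume (device names here are placeholders, not my actual devices):

# One bluestore OSD per spinning disk, DB/WAL on a partition of the shared SSD
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdf1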

I would have expected your dd commands to use the cache; try these instead inside your VM:

# Write test
dd if=/dev/zero of=/zero.file bs=32M oflag=direct status=progress

# Read test
dd if=/zero.file of=/dev/null bs=32M iflag=direct status=progress

You can obviously delete /zero.file when you're finished.

- bs=32M tells dd to read/write 32MB at a time; I think the default is something like 512 bytes, which slows things down significantly without a cache (see the quick comparison after this list).
- oflag/iflag=direct uses direct I/O, bypassing the page cache.
- status=progress just replaces the pv you were piping through to show the transfer rate.
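
If you want to see the block size effect for yourself, a quick comparison like this (the file path is just an example, adjust count to taste) makes it obvious:

# Small blocks + direct I/O: expect throughput to collapse
dd if=/dev/zero of=/zero.file bs=4k count=25600 oflag=direct status=progress

# Large blocks + direct I/O: should get much closer to the network limit
dd if=/dev/zero of=/zero.file bs=32M count=32 oflag=direct status=progress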

On my cluster I get 124MB/sec read (maxing out the network) and 74MB/sec write. Without bs=32M I get more like 1MB/sec read and write. The VM I'm using for this test has cache=writeback and a virtio-scsi disk (i.e. sda rather than vda).
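
In case it's useful, setting that up on the Proxmox side is roughly the following (the VM ID, storage and volume names are placeholders for whatever your VM actually uses):

# Writeback cache on an existing SCSI disk
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=writeback
# virtio-scsi controller, so the disk shows up as sda rather than vda
qm set 100 --scsihw virtio-scsi-pci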

Simon

On 05/11/2019 11:31, Hermann Himmelbauer wrote:
Hi,
Thank you for your quick reply. Proxmox offers me "writeback"
(cache=writeback) and "writeback unsafe" (cache=unsafe); however, for my
"dd" test this makes no difference at all.

I still have write speeds of ~4.5 MB/s.

Perhaps "dd" disables the write cache?

Would it perhaps help to put the journal or something else on an SSD?

Best Regards,
Hermann