Yes, cache=unsafe has no effect with RBD. Still, that's strange: with 6 HDDs 
and Bluestore you should get roughly 40 MB/s per disk, i.e. ~240 MB/s linear 
write in total.

Try creating a test image and benchmarking it from outside a VM with:

fio -ioengine=rbd -name=test -direct=1 -rw=write -bs=4M -iodepth=16 
-pool=<pool> -rbdname=<rbd>

If you still get ~4 MB/s, something is wrong with your Ceph cluster. If you 
get adequate performance, something is wrong with your VM settings.
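Also note that the in-guest numbers may partly be an artifact of the test itself: a bare `dd | pv | dd` pipeline writes in 512-byte blocks, which is the worst case for an HDD-backed RBD. A sketch of a fairer in-VM test follows (the TARGET default and the block count are my assumptions, not from the thread; writing to the device destroys data on it):

```shell
# Sequential write benchmark with a large block size. The piped
# "dd | pv | dd" form from the quoted message uses 512-byte blocks,
# which is pathological for HDDs; 4M blocks match the fio test above.
# TARGET=/dev/vdb is the guest disk from the quoted message --
# WARNING: writing to it destroys data. Point TARGET at a scratch
# file first if unsure.
TARGET=${TARGET:-/dev/vdb}

# conv=fsync forces a flush at the end, so the page cache (or
# cache=unsafe) cannot inflate the reported throughput.
dd if=/dev/zero of="$TARGET" bs=4M count=64 conv=fsync
```

With bs=4M the guest submits the same request size as the rbd-engine fio test, so the two results become directly comparable.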

On 5 November 2019 at 14:31:38 GMT+03:00, Hermann Himmelbauer <herm...@qwer.tk> 
wrote:
>Hi,
>Thank you for your quick reply. Proxmox offers me "writeback"
>(cache=writeback) and "writeback unsafe" (cache=unsafe); however, for my
>"dd" test this makes no difference at all.
>
>I still have write speeds of ~4.5 MB/s.
>
>Perhaps "dd" disables the write cache?
>
>Would it perhaps help to put the journal or something else on an SSD?
>
>Best Regards,
>Hermann
>
>Am 05.11.19 um 11:49 schrieb vita...@yourcmc.ru:
>> Use the `cache=writeback` QEMU option for HDD clusters; that should solve
>> your issue
>> 
>>> Hi,
>>> I recently upgraded my 3-node cluster to proxmox 6 / debian-10 and
>>> recreated my ceph cluster with a new release (14.2.4 bluestore) -
>>> basically hoping to gain some I/O speed.
>>>
>>> The installation went flawlessly, reading is faster than before (~80
>>> MB/s), however, the write speed is still really slow (~3.5 MB/s).
>>>
>>> I wonder if I can do anything to speed things up?
>>>
>>> My Hardware is as the following:
>>>
>>> 3 Nodes with Supermicro X8DTT-HIBQF Mainboard each,
>>> 2 OSD per node (2TB SATA harddisks, WDC WD2000F9YZ-0),
>>> interconnected via Infiniband 40
>>>
>>> The network should be reasonably fast; I measure ~16 GBit/s with iperf,
>>> so this seems fine.
>>>
>>> I use ceph for RBD only, so my measurement is simply a very simple
>>> "dd" read and write test within a virtual machine (Debian 8) like the
>>> following:
>>>
>>> read:
>>> dd if=/dev/vdb | pv | dd of=/dev/null
>>> -> 80 MB/s
>>>
>>>
>>> write:
>>> dd if=/dev/zero | pv | dd of=/dev/vdb
>>> -> 3.5 MB/s
>>>
>>> When I do the same on the virtual machine on a disk that is on a NFS
>>> storage, I get something about 30 MB/s for reading and writing.
>>>
>>> If I disable the write cache on all OSD disks via "hdparm -W 0
>>> /dev/sdX", I gain a little bit of performance; write speed is then 4.3
>>> MB/s.
>>>
>>> Thanks to your help from the list, I plan to install a second,
>>> SSD-based ceph cluster (Samsung PM1725b) which should be much
>>> faster; however, I still wonder if there is any way to speed up my
>>> harddisk-based cluster?
>>>
>>> Thank you in advance for any help,
>>>
>>> Best Regards,
>>> Hermann
>
>-- 
>herm...@qwer.tk
>PGP/GPG: 299893C7 (on keyservers)

-- 
With best regards,
  Vitaliy Filippov
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
