On Sun, Nov 08, 2020 at 12:21:33PM +0000, Harshavardhan Unnibhavi wrote:
> Thank you for the reply! Yes, I understand that GSoC is over for 2020,
> and projects for 2021 will come up next year. I was thinking of
> contributing outside of GSoC (for which I won't be eligible anyway
> next year). Anyway, I will work on some of the bite-sized tasks and
> get back to you about other concrete project ideas in QEMU that need
> somebody to work on them.
Hi Harsha,

Here is an idea you could explore:

The Linux AIO API was extended to support fsync(2)/fdatasync(2) in the
following commit from 2018:

  commit a3c0d439e4d92411c2b4b21a526a4de720d0806b
  Author: Christoph Hellwig <h...@lst.de>
  Date:   Tue Mar 27 19:18:57 2018 +0200

      aio: implement IOCB_CMD_FSYNC and IOCB_CMD_FDSYNC

QEMU's Linux AIO code does not take advantage of this feature yet.
Instead it invokes the traditional fdatasync(2) system call from a
thread pool because it assumes the Linux AIO API doesn't support the
operation. The function where this happens is
block/file-posix.c:raw_co_flush_to_disk().

The goal is to implement IO_CMD_FDSYNC support in block/linux-aio.c
using io_prep_fdsync() and to update
block/file-posix.c:raw_co_flush_to_disk() to use it when the feature is
available. See <libaio.h> for the Linux AIO library API; a rough sketch
of the libaio calls involved is included at the end of this email.

Keep in mind that old host kernels may not support IO_CMD_FDSYNC. In
that case QEMU should continue to use the thread pool.

Taking advantage of the Linux AIO API means QEMU will spawn fewer
worker threads, and disk flush performance may improve.

You can benchmark performance using the fio(1) tool. Configure it with
ioengine=pvsync2 rw=randwrite direct=1 fdatasync=1 bs=4k to measure the
performance of 4 KB writes, each followed by fdatasync(2). An example
job file is also included below. For more information about disk I/O
benchmarking, including example fio jobs, see:

https://blog.vmsplice.net/2017/11/common-disk-benchmarking-mistakes.html

Stefan
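
P.S. Here is a minimal standalone sketch of the libaio fdatasync path.
This is not QEMU code, just an illustration of the <libaio.h> calls the
task would use; the "test.img" filename is a made-up placeholder and
error handling is kept minimal. Build with "gcc demo.c -laio".

  #include <libaio.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      io_context_t ctx = 0;
      struct iocb iocb, *iocbs[1] = { &iocb };
      struct io_event event;
      int fd, ret;

      /* "test.img" is just an example scratch file */
      fd = open("test.img", O_RDWR | O_CREAT, 0644);
      if (fd < 0) {
          perror("open");
          return 1;
      }

      ret = io_setup(1, &ctx);        /* create the AIO context */
      if (ret < 0) {
          fprintf(stderr, "io_setup: %s\n", strerror(-ret));
          return 1;
      }

      io_prep_fdsync(&iocb, fd);      /* fills in IO_CMD_FDSYNC */

      ret = io_submit(ctx, 1, iocbs);
      if (ret < 0) {
          /*
           * Old host kernels without the 2018 commit above are
           * expected to reject IO_CMD_FDSYNC (EINVAL). This is the
           * point where QEMU would fall back to the fdatasync(2)
           * thread pool.
           */
          fprintf(stderr, "io_submit: %s\n", strerror(-ret));
      } else {
          /* wait for the flush to complete */
          ret = io_getevents(ctx, 1, 1, &event, NULL);
          if (ret == 1) {
              printf("fdsync completed, res=%lld\n",
                     (long long)event.res);
          }
      }

      io_destroy(ctx);
      close(fd);
      return 0;
  }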
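
And an example fio job file using the parameters above. The job name,
filename, size, and runtime are arbitrary placeholders, not part of the
task description:

  [global]
  ioengine=pvsync2
  rw=randwrite
  direct=1
  fdatasync=1
  bs=4k
  size=1G
  runtime=60
  time_based

  [flushtest]
  filename=/var/tmp/fio-test.img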