After .direct_IO is hooked, it becomes easy to handle direct I/O in the fileio mount case.
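
For reference, the path added here is exercised by a plain O_DIRECT read from userspace; a minimal sketch (not part of this patch; the mount path, file name and block size are illustrative assumptions) looks like:

```
/*
 * Minimal userspace sketch of an O_DIRECT read against a file on an
 * erofs mount, similar to what the fio jobs below exercise.
 */
#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	const size_t blksz = 4096;	/* assumed block size/alignment */
	void *buf;
	ssize_t n;
	int fd;

	/* O_DIRECT requires a suitably aligned buffer */
	if (posix_memalign(&buf, blksz, blksz))
		return 1;

	fd = open("/mnt/erofs/testfile", O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* reads bypass the page cache of the erofs mount */
	while ((n = read(fd, buf, blksz)) > 0)
		;

	close(fd);
	free(buf);
	return 0;
}
```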
We ran a basic test with fio comparing direct I/O and buffered I/O. The results show that direct I/O reduces the memory overhead: reading a 1GB test file costs about 1GB of page cache instead of about 2GB. Direct I/O is slower than buffered I/O for sequential reads since it bypasses the erofs page cache and readahead, but for random reads the two perform similarly. The results are reasonable.

```
- buffer io
              total        used        free      shared  buff/cache   available
Mem:           54Gi       2.4Gi        52Gi        11Mi       254Mi        51Gi
Swap:         4.0Gi          0B       4.0Gi

after read
              total        used        free      shared  buff/cache   available
Mem:           54Gi       2.5Gi        50Gi        11Mi       2.3Gi        51Gi
Swap:         4.0Gi          0B       4.0Gi

cost 2GB memory (the test file is 1GB)

- direct io
              total        used        free      shared  buff/cache   available
Mem:           54Gi       2.4Gi        52Gi        11Mi       280Mi        51Gi
Swap:         4.0Gi          0B       4.0Gi

after read
              total        used        free      shared  buff/cache   available
Mem:           54Gi       2.6Gi        51Gi        11Mi       1.2Gi        51Gi
Swap:         4.0Gi          0B       4.0Gi

only cost 1GB memory (the test file is 1GB)

buffer io: 96.6k (seq read), 4245 (rand read)
direct io: 21.6k (seq read), 4187 (rand read)
```

Signed-off-by: Hongbo Li <lihongb...@huawei.com>
---
 fs/erofs/data.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/fs/erofs/data.c b/fs/erofs/data.c
index 0cd6b5c4df98..d58496225381 100644
--- a/fs/erofs/data.c
+++ b/fs/erofs/data.c
@@ -395,9 +395,13 @@ static ssize_t erofs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 	if (IS_DAX(inode))
 		return dax_iomap_rw(iocb, to, &erofs_iomap_ops);
 #endif
-	if ((iocb->ki_flags & IOCB_DIRECT) && inode->i_sb->s_bdev)
-		return iomap_dio_rw(iocb, to, &erofs_iomap_ops,
-				    NULL, 0, NULL, 0);
+	if (iocb->ki_flags & IOCB_DIRECT) {
+		if (inode->i_sb->s_bdev)
+			return iomap_dio_rw(iocb, to, &erofs_iomap_ops,
+					    NULL, 0, NULL, 0);
+		if (erofs_is_fileio_mode(EROFS_SB(inode->i_sb)))
+			return generic_file_read_iter(iocb, to);
+	}
 	return filemap_read(iocb, to, 0);
 }
 
-- 
2.34.1