On 2025/1/20 11:46, Gao Xiang wrote:
> On 2025/1/20 11:43, Hongbo Li wrote:
>> On 2025/1/20 11:10, Gao Xiang wrote:
>>> On 2025/1/20 11:02, Hongbo Li wrote:
>>>>>> ...
>>>>>> }
>>>>>> +static int erofs_fileio_scan_iter(struct erofs_fileio *io, struct kiocb *iocb,
>>>>>> +				   struct iov_iter *iter)
>>>>> I wonder if it's possible to just extract a folio from
>>>>> `struct iov_iter` and reuse the erofs_fileio_scan_folio() logic.
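(For illustration only, a rough and untested sketch of what that reuse might look like: pin one page at a time from the iov_iter and hand its folio to the existing erofs_fileio_scan_folio() helper, which is assumed here to take (io, folio). Unpinning the extracted pages, sub-page offsets/lengths, and whether that helper can accept such folios at all are left open, and erofs_fileio_scan_iter_sketch() is not a real function.)

static int erofs_fileio_scan_iter_sketch(struct erofs_fileio *io,
					 struct kiocb *iocb,
					 struct iov_iter *iter)
{
	/* iocb is kept only to mirror the patch's signature */
	while (iov_iter_count(iter)) {
		struct page *page, **pages = &page;
		size_t offset;
		ssize_t len;
		int err;

		/* pin the next user page backing the iterator */
		len = iov_iter_extract_pages(iter, &pages, PAGE_SIZE, 1, 0,
					     &offset);
		if (len <= 0)
			return len ? len : -EFAULT;

		/* reuse the existing folio-based scan path (assumed API) */
		err = erofs_fileio_scan_folio(io, page_folio(page));
		if (err)
			return err;
	}
	return 0;
}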
>>>> Thanks for reviewing. OK, I'll think about reusing the
>>>> erofs_fileio_scan_folio() logic in a later version.
>>> Thanks.
>>>> Additionally, for the file-backed mount case, can we consider
>>>> removing EROFS's own page cache and just using the backing file's
>>>> page cache? That way, reads of the files on the mounted image would
>>>> go through buffered I/O by default, and it would also reduce the
>>>> memory overhead.
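(Again purely a sketch of the idea above in its simplest form: an uncompressed inode stored as a single contiguous extent in the backing file, whose ->read_iter just redirects to the backing file so that only the backing file's page cache is populated. erofs_backing_file() and erofs_pos_in_backing() are invented placeholders, and multi-extent layouts, compression and mmap are not handled at all.)

static ssize_t erofs_passthrough_read_iter(struct kiocb *iocb,
					   struct iov_iter *to)
{
	struct inode *inode = file_inode(iocb->ki_filp);
	/* both helpers below are hypothetical placeholders */
	struct file *backing = erofs_backing_file(inode);
	loff_t pos = erofs_pos_in_backing(inode, iocb->ki_pos);
	ssize_t ret;

	/* buffered read through the backing file's page cache */
	ret = vfs_iter_read(backing, to, &pos, 0);
	if (ret > 0)
		iocb->ki_pos += ret;
	return ret;
}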
>>> I think it's too hacky for upstreaming, since EROFS can only
>>> operate on its own page cache; otherwise it could only support
>>> overlayfs-like per-inode sharing.
>>> Per-extent sharing among different filesystems is too hacky
>> It's just like the DAX mode of EROFS (but with a backing file
>> instead of a DAX device). It does not share pages among different
>> filesystems, because there is only the backing file's page cache. I
>> found that the whole I/O path is similar to this direct I/O mode.
> How do you handle a VMA which is contiguous as an
> EROFS file, but actually maps to different parts
> of the underlying inode, or even to different underlying
> inodes?
mmap is indeed a problem; I'm still trying to figure out how to
solve it. :)
Thanks,
Hongbo
> It would just break the current MM layer, but FSDAX
> mode is a completely different story.
> Thanks,
> Gao Xiang
>> Thanks,
>> Hongbo
>>> on the MM side, but if you have some specific internal
>>> requirement, you could implement it downstream.
>>> Thanks,
>>> Gao Xiang
>>>> This is just my initial idea; for the uncompressed mode it should
>>>> make sense, but for the compressed layout it needs to be verified.
>>>> Thanks,
>>>> Hongbo
>>>>> It simplifies the codebase a lot, and I think the performance
>>>>> is almost the same.
>>>>> Otherwise it currently looks good to me.
>>>>> Thanks,
>>>>> Gao Xiang