On 29.08.2016 at 19:09, Pavel Butsykin wrote:
> The prefetch cache aims to improve the performance of sequential read data.
> Of most interest here are the requests of a small size of data for sequential
> read, such requests can be optimized by extending them and moving into
> the prefetch cache.
> [...]
Before I start actually looking into your code, I read both this cover letter and your KVM Forum slides, and as far as I can tell, the fundamental idea and your design look sound to me. It was a good read, too, so thanks for writing all the explanations!

One thing that came to mind is that we have more caches elsewhere, most notably the qcow2 metadata cache, and I still have that private branch that adds a qcow2 data cache, too (for merging small allocating writes, if you remember my talk from last year). However, the existing Qcow2Cache has a few problems, like being tied to the cluster size.

So I wondered how hard you think it would be to split pcache into a reusable cache core that just manages the contents based on calls like "allocate/drop/get cached memory for bytes x...y", and the actual pcache code that implements the read-ahead policy. Then qcow2 could reuse the core and apply its own policy about what metadata to cache etc. A rough sketch of the kind of interface I have in mind is appended below my signature.

Of course, this can be done incrementally on top and should by no means block the inclusion of your code, but if it's possible, it might be an interesting thought to keep in mind.

Kevin
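
To make the "reusable cache core" idea a bit more concrete, here is a very rough sketch. None of these names exist in QEMU, they are made up for this mail only, and a real implementation would of course follow the block layer's coroutine and error handling conventions. The point is just the split: the core manages cached memory purely by byte range, while the policy (read-ahead in pcache, metadata selection in qcow2) lives on top of it.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Cache core: knows nothing about read-ahead or qcow2, it only hands out
 * and looks up cached memory by byte range. */

typedef struct CacheNode {
    uint64_t offset;
    uint64_t bytes;
    uint8_t *data;
    struct CacheNode *next;
} CacheNode;

typedef struct BlockCache {
    CacheNode *nodes;           /* toy list; real code would want an RB-tree */
    size_t used;
    size_t max_bytes;
} BlockCache;

static BlockCache *block_cache_new(size_t max_bytes)
{
    BlockCache *c = calloc(1, sizeof(*c));
    c->max_bytes = max_bytes;
    return c;
}

/* "allocate cached memory for bytes x...y" */
static uint8_t *block_cache_alloc(BlockCache *c, uint64_t offset, uint64_t bytes)
{
    CacheNode *n;

    if (c->used + bytes > c->max_bytes) {
        return NULL;            /* the policy on top decides what to evict */
    }
    n = calloc(1, sizeof(*n));
    n->offset = offset;
    n->bytes = bytes;
    n->data = malloc(bytes);
    n->next = c->nodes;
    c->nodes = n;
    c->used += bytes;
    return n->data;
}

/* "get cached memory for bytes x...y" (exact match only, to keep it short) */
static uint8_t *block_cache_get(BlockCache *c, uint64_t offset, uint64_t bytes)
{
    CacheNode *n;

    for (n = c->nodes; n; n = n->next) {
        if (n->offset == offset && n->bytes == bytes) {
            return n->data;
        }
    }
    return NULL;
}

/* "drop cached memory for bytes x...y" */
static void block_cache_drop(BlockCache *c, uint64_t offset, uint64_t bytes)
{
    CacheNode **p;

    for (p = &c->nodes; *p; p = &(*p)->next) {
        if ((*p)->offset == offset && (*p)->bytes == bytes) {
            CacheNode *n = *p;
            *p = n->next;
            c->used -= n->bytes;
            free(n->data);
            free(n);
            return;
        }
    }
}

/* Policy side, pcache-style: read ahead and serve a later read from cache */
int main(void)
{
    BlockCache *c = block_cache_new(1024 * 1024);
    uint8_t *buf = block_cache_alloc(c, 0, 65536);

    memset(buf, 0, 65536);      /* stands in for the actual backend read */
    printf("cache hit: %s\n", block_cache_get(c, 0, 65536) ? "yes" : "no");
    block_cache_drop(c, 0, 65536);
    free(c);
    return 0;
}

The core stays completely ignorant of why a range is cached: pcache would call the alloc function from its read-ahead path, while qcow2 could use the same calls for L2 tables and refcount blocks with its own decisions about what to keep and drop.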