On Thu, Oct 24, 2019 at 05:21:04PM +0200, Daniel Kiper wrote:
> On Thu, Oct 24, 2019 at 04:43:58AM +0000, Michael Chang wrote:
> > On Wed, Oct 23, 2019 at 12:48:23PM +0200, Daniel Kiper wrote:
> > > On Wed, Oct 16, 2019 at 06:15:30AM +0000, Michael Chang wrote:
> > > > The lvm cache logical volume is a logical volume consisting of
> > > > the original LV and the cache pool LV. The original LV is usually
> > > > on a larger and slower storage device, while the cache pool is on
> > > > a smaller and faster one. The performance of the original volume
> > > > can be improved by storing frequently used data on the cache pool
> > > > to take advantage of the faster device.
> > > >
> > > > The default cache mode, "writethrough", ensures that any data
> > > > written is stored both in the cache and on the origin LV, so grub
> > > > can read straight from the original LV, as it is guaranteed that
> > > > no data is lost.
> > > >
> > > > The second cache mode is "writeback", which delays writing from
> > > > the cache pool back to the origin LV for increased performance.
> > > > The drawback is potential data loss if the associated cache
> > > > device is lost.
> > > >
> > > > At boot time grub reads the LVM offline, i.e. the LVM volumes are
> > > > not activated and mounted, so IMHO it should be fine to read
> > > > directly from the original LV, since all cached data should have
> > > > been flushed back in the process of taking it offline.
> > >
> > > Is it possible to enforce all GRUB writes to the original device
> > > instead of the cache during the installation process?
> >
> > I can't give a concrete answer as to whether it is possible, but it
> > does not sound like a good idea to me, because in general bypassing
> > the cache could potentially break the consistency between the data
> > and the data being cached.
> >
> > Perhaps some system calls could help to sync the data out of the lvm
> > cache to the original LV during the installation process.
> > It seems fsync does it for us, but I don't have a good idea either
> > whether it is enough.
>
> May I ask you to investigate that and, if it is needed, add the
> required install code? I mean fsync(), etc.
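For readers following along, the kind of cache LV described in the quoted
message can be set up roughly as below. This is only a sketch for
illustration; the VG name "vg0", the LV names, the device paths, and the
sizes are made-up placeholders, not taken from this thread:

```shell
# Placeholder devices: /dev/sdb = large slow disk, /dev/sdc = small fast disk.
pvcreate /dev/sdb /dev/sdc
vgcreate vg0 /dev/sdb /dev/sdc

# Origin LV on the slow device, cache pool LV on the fast device.
lvcreate -n origin -L 100G vg0 /dev/sdb
lvcreate --type cache-pool -n cpool -L 10G vg0 /dev/sdc

# Attach the pool to the origin; --cachemode selects "writethrough"
# (the default, safe for grub) or "writeback" (faster, but dirty blocks
# may only exist on the cache device).
lvconvert --type cache --cachepool vg0/cpool --cachemode writethrough vg0/origin
```

These commands require root and real block devices, so treat them as a
reference for the terminology in the thread rather than something to run
as-is.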
From what I have learned, fsync does not write dirty cache back to the
original device; instead it updates the associated cache metadata to
complete the write transaction with the cache device, bringing the write
cache up to date. Without that operation, metadata is committed every
second, and data written in between would be completely lost if a power
failure happened.

As for writing back dirty cache: since lvm cache does not support
flushing dirty cache per block range, there is no way to do it for a
single file. Instead the "cleaner" policy is implemented; it can be used
to write back *all* dirty blocks in a cache, gradually draining the
dirty cache until it attains and stays in the "clean" state. That is
useful for shrinking or decommissioning a cache, but its result and
effect are not what we are looking for here.

In conclusion, there seems to be no way to enforce that all grub writes
go to the original device. That means grub may suffer from a power
failure that leaves dirty cache unflushed, and it is unable to read data
flagged as dirty in the cache. Even so, since that case only affects
writeback mode, which in general is hard to protect against data loss
whenever an accident happens, I'd still like to propose my (relatively
simple) patch and treat reading dirty data from the cache as a future
enhancement, to be more resilient against potential data loss.

Thanks,
Michael

> Daniel

_______________________________________________
Grub-devel mailing list
Grub-devel@gnu.org
https://lists.gnu.org/mailman/listinfo/grub-devel
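P.S. For completeness, the "cleaner" policy mentioned above is driven
from the command line roughly as follows. Again a sketch only, assuming
a cache LV named vg0/origin (a placeholder name, not from this thread):

```shell
# Switch the cache to the "cleaner" policy, which gradually writes back
# all dirty blocks (lvm cache offers no per-file or per-range flush).
lvchange --cachepolicy cleaner vg0/origin

# Watch the dirty-block count drain to zero.
lvs -o name,cache_dirty_blocks vg0/origin

# Once clean, restore the default policy.
lvchange --cachepolicy smq vg0/origin
```

This drains the whole cache, which is why it suits shrinking or
decommissioning a cache but not the per-write guarantee we would need
here.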