On 14.06.2011 20:18, Stefan Hajnoczi wrote:
> Overview
> --------
>
> This patch series adds image streaming support for QED image files. Other
> image formats can also be supported in the future.
>
> Image streaming populates the file in the background while the guest is
> running. This makes it possible to start the guest before its image file has
> been fully provisioned.
>
> Example use cases include:
>  * Providing small virtual appliances for download that can be launched
>    immediately but provision themselves in the background.
>  * Reducing guest provisioning time by creating local image files but
>    backing them with shared master images which will be streamed.
>
> When image streaming is enabled, the unallocated regions of the image file
> are populated with the data from the backing file. This occurs in the
> background and the guest can perform regular I/O in the meantime. Once the
> entire backing file has been streamed, the image no longer requires a
> backing file and will drop its reference.
Long CC list and Kevin wearing his upstream hat - this might be an unpleasant
email. :-)

So yesterday I had separate discussions with Stefan about image streaming and
with Marcelo, Avi and some other folks about live block copy. The conclusion
in both cases was that, yes, both things are pretty similar and, yes, the
current implementations don't reflect that but duplicate everything.

To summarise what both things are about:

* Image streaming is a normal image file plus copy-on-read plus a background
  task that copies data from the source image
* Live block copy is a block-mirror of two normal image files plus a
  background task that copies data from the source image

The right solution is probably to implement COR and the background copy task
in generic block layer code (there is no reason to restrict it to QED) and to
use it for both image streaming and live block copy. (This is a bit more
complicated than it may sound here because guest writes must always take
precedence over a copy - but doing complicated things is an even better
reason to do them in one common place instead of duplicating them. A rough
sketch of what I mean is appended after my signature.)

People seem to agree on this, and the reason I've heard why we should merge
the existing code instead is downstream time pressure. That may be a valid
reason for downstreams to add such code, but is taking such code really the
best option for upstream (and therefore, in the long term, for downstreams)?
If we take these patches as they are, I doubt that we'll ever get a rewrite
that implements the code as it should have been done in the first place.

So I'm tempted to reject the current versions of both the image streaming and
live block copy series and leave it to downstreams to use them as temporary
solutions if the time pressure is too high. I know that maintaining things
downstream is painful, but that's the whole point: I want to see the real
implementation one day, and I'm afraid this might be the only way to get it.

Kevin
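
For illustration, here is a rough sketch of the copy-on-read and background
streaming logic described above. Everything in it (Image, CorState, cor_read,
stream_run, the per-cluster write_pending flags, the toy in-memory image) is
made up for this example and is not the actual QEMU block layer API; a real
implementation would be asynchronous and would hook into the existing request
paths, but the control flow would be along these lines.

    /* Sketch only: a toy in-memory "image" stands in for a real block driver. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define CLUSTER   8           /* toy cluster size in bytes */
    #define CLUSTERS  4           /* toy image size: 4 clusters */

    typedef struct {
        uint8_t data[CLUSTERS * CLUSTER];
        bool    allocated[CLUSTERS];
    } Image;

    typedef struct {
        Image *top;                      /* image the guest sees */
        Image *backing;                  /* master image being streamed */
        bool   write_pending[CLUSTERS];  /* guest write in flight for a cluster */
    } CorState;

    /* Copy-on-read: serve the read, and populate the top image if the data
     * came from the backing file, unless a guest write raced with us. */
    static void cor_read(CorState *s, int cl, uint8_t *buf)
    {
        if (s->top->allocated[cl]) {
            memcpy(buf, &s->top->data[cl * CLUSTER], CLUSTER);
            return;
        }
        memcpy(buf, &s->backing->data[cl * CLUSTER], CLUSTER);

        /* Guest writes take precedence over the copy */
        if (!s->write_pending[cl]) {
            memcpy(&s->top->data[cl * CLUSTER], buf, CLUSTER);
            s->top->allocated[cl] = true;
        }
    }

    /* Guest write path: always goes to the top image and marks the cluster
     * so that a concurrent background copy does not overwrite it. */
    static void guest_write(CorState *s, int cl, const uint8_t *buf)
    {
        s->write_pending[cl] = true;
        memcpy(&s->top->data[cl * CLUSTER], buf, CLUSTER);
        s->top->allocated[cl] = true;
        s->write_pending[cl] = false;
    }

    /* Background streaming task: copy every still-unallocated cluster.
     * Once it finishes, the backing file reference can be dropped. */
    static void stream_run(CorState *s)
    {
        uint8_t buf[CLUSTER];
        for (int cl = 0; cl < CLUSTERS; cl++) {
            if (!s->top->allocated[cl]) {
                cor_read(s, cl, buf);
            }
            /* a real implementation would rate-limit and yield to guest I/O */
        }
    }

    int main(void)
    {
        Image backing = { .allocated = { true, true, true, true } };
        memset(backing.data, 0xaa, sizeof(backing.data));
        Image top = { 0 };
        CorState s = { .top = &top, .backing = &backing };

        guest_write(&s, 1, (const uint8_t[CLUSTER]){ 0x55 }); /* guest writes cluster 1 */
        stream_run(&s);                                        /* stream the rest */

        printf("cluster 1 byte: %02x (guest data preserved)\n", top.data[CLUSTER]);
        return 0;
    }

The point of the write_pending flag is the precedence rule mentioned above:
the guest write path marks the cluster before touching it, so a background
copy that raced with it drops its stale data instead of overwriting what the
guest just wrote.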