On Fri, Mar 28, 2014 at 10:07:22AM +0100, Stefan Hajnoczi wrote:
> On Thu, Mar 27, 2014 at 08:58:38PM +0100, Stefan Weil wrote:
> > Am 27.03.2014 19:52, schrieb Jeff Cody:
> > >> Do we need this patch for QEMU 2.0? For 32 bit systems, the image size
> > >> limit is 1000 TB, and that image would need 4 GB for the block cache in
> > >> memory. Are such image sizes used anywhere? For 64 bit systems, the
> > >> limit will be much higher.
> > >>
> > >
> > > I don't know what systems are in use in the wild. But since we
> > > allocate block cache to fit the entire cache in RAM currently, that
> > > does open us up to potentially allocating a lot of memory, based on
> > > what the image file header specifies.
> > >
> > > That is a reason to keep the maximum blocks_in_image size bounded to
> > > the size of 0x3fffffff. With an unbound blocks_in_image size (except
> > > to UINT32_MAX), the driver would potentially attempt to allocate 16GB
> > > of RAM, if qemu attempts to open a VDI image file with such a header.
> >
> > Either we crash because of the 0x3fffffff limit, or we might crash
> > because a memory allocation fails (but it might also be successful). I
> > prefer this 2nd variant.
>
> From a user perspective, hotplugging a disk should never kill the VM.
> Instead the hotplug command should fail for invalid image files.
>
> If a valid image can cause QEMU abort then the block driver
> implementation needs to use a metadata cache to avoid putting everything
> in RAM.
>
We also don't know what a valid image really is. Regardless, it is legitimate for QEMU to fail to open a valid image as 'unsupported' if the current code is not able to handle the image reasonably.

On that note, I wonder if the disk size / blocks_in_image limit in my original patch is too generous - at the upper limit it would still allow the allocation of 4G of data for the block cache.