On Tue, May 14, 2013 at 04:14:33PM +0200, Kevin Wolf wrote:
> This catches the situation that is described in the bug report at
> https://bugs.launchpad.net/qemu/+bug/865518 and goes like this:
> 
>     $ qemu-img create -f qcow2 huge.qcow2 $((1024*1024))T
>     Formatting 'huge.qcow2', fmt=qcow2 size=1152921504606846976 encryption=off cluster_size=65536 lazy_refcounts=off
>     $ qemu-io /tmp/huge.qcow2 -c "write $((1024*1024*1024*1024*1024*1024 - 1024)) 512"
>     Segmentation fault
> 
> With this patch applied the segfault will be avoided, however the case
> will still fail, though gracefully:
> 
>     $ qemu-img create -f qcow2 /tmp/huge.qcow2 $((1024*1024))T
>     Formatting 'huge.qcow2', fmt=qcow2 size=1152921504606846976 encryption=off cluster_size=65536 lazy_refcounts=off
>     qemu-img: The image size is too large for file format 'qcow2'
> 
> Note that even long before these overflow checks kick in, you get
> insanely high memory usage (up to INT_MAX * sizeof(uint64_t) = 16 GB for
> the L1 table), so with somewhat smaller image sizes you'll probably see
> qemu aborting for a failed g_malloc().
> 
> If you need huge image sizes, you should increase the cluster size to
> the maximum of 2 MB in order to get higher limits.
> 
> Signed-off-by: Kevin Wolf <kw...@redhat.com>
> ---
>  block/qcow2-cluster.c | 23 +++++++++++++++--------
>  block/qcow2.c         | 13 +++++++++++--
>  block/qcow2.h         |  5 +++--
>  3 files changed, 29 insertions(+), 12 deletions(-)

Thanks, applied to my block tree for 1.5:
https://github.com/stefanha/qemu/commits/block

Stefan
