On Wed, Jun 12, 2013 at 1:30 PM, Kevin Wolf <kw...@redhat.com> wrote:

> Am 12.06.2013 um 10:04 hat Evgeny Budilovsky geschrieben:
> > The hard-coded 2k buffer on the stack won't allow reading big descriptor
> > files, which can be generated when storing big images (for example, a 500G
> > vmdk split into 2G chunks).
> >
> > Signed-off-by: Evgeny Budilovsky <evgeny.budilov...@ravellosystems.com>
> > ---
> >  block/vmdk.c |   28 +++++++++++++++++++++-------
> >  1 file changed, 21 insertions(+), 7 deletions(-)
> >
> > diff --git a/block/vmdk.c b/block/vmdk.c
> > index 608daaf..1bc944b 100644
> > --- a/block/vmdk.c
> > +++ b/block/vmdk.c
> > @@ -719,27 +719,41 @@ static int vmdk_open_desc_file(BlockDriverState
> *bs, int flags,
> >                                 int64_t desc_offset)
> >  {
> >      int ret;
> > -    char buf[2048];
> > +    char *buf = NULL;
> >      char ct[128];
> >      BDRVVmdkState *s = bs->opaque;
> > +    int64_t size;
> >
> > -    ret = bdrv_pread(bs->file, desc_offset, buf, sizeof(buf));
> > +    size = bdrv_get_allocated_file_size(bs);
> > +    if (size < 0) {
> > +        return -EINVAL;
> > +    }
> > +
> > +    buf = g_malloc0(size + 1);
>
> This is an unbounded allocation. Not sure if this is a good idea. Can we
> restrict the maximum size to something reasonably small, like a megabyte?
>
> Kevin
>

Yes, good idea!
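
Something like this, perhaps? Untested sketch on top of the patch; the
1 MB cap and the -EFBIG error code are just placeholders for whatever
limit we agree on:

    size = bdrv_get_allocated_file_size(bs);
    if (size < 0) {
        return -EINVAL;
    }
    /* Reject descriptor files above an arbitrary 1 MB cap, so a
     * corrupted or malicious image can't trigger a huge allocation. */
    if (size > 1 << 20) {
        return -EFBIG;
    }
    buf = g_malloc0(size + 1);

Will respin the patch with something along these lines.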

-- 
Best Regards,
Evgeny
