Hi Jamie,

On 11/11/21 15:11, Jamie Iles wrote:
> On Linux, read() will only ever read a maximum of 0x7ffff000 bytes
> regardless of what is asked.  If the file is larger than 0x7ffff000
> bytes the read will need to be broken up into multiple chunks.
> 
> Cc: Luc Michel <lmic...@kalray.eu>
> Signed-off-by: Jamie Iles <ja...@nuviainc.com>
> ---
>  hw/core/loader.c | 40 ++++++++++++++++++++++++++++++++++------
>  1 file changed, 34 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/core/loader.c b/hw/core/loader.c
> index 348bbf535bd9..16ca9b99cf0f 100644
> --- a/hw/core/loader.c
> +++ b/hw/core/loader.c
> @@ -80,6 +80,34 @@ int64_t get_image_size(const char *filename)
>      return size;
>  }
>  
> +static ssize_t read_large(int fd, void *dst, size_t len)
> +{
> +    /*
> +     * man 2 read says:
> +     *
> +     * On Linux, read() (and similar system calls) will transfer at most
> +     * 0x7ffff000 (2,147,479,552) bytes, returning the number of bytes

Could you mention MAX_RW_COUNT from linux/fs.h?
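If I read include/linux/fs.h correctly, it is defined as:

    #define MAX_RW_COUNT (INT_MAX & PAGE_MASK)

which comes out to 0x7ffff000 with 4 KiB pages, so naming it here would make
the magic number easier to trace back to the kernel.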

> +     * actually transferred.  (This is true on both 32-bit and 64-bit
> +     * systems.)

Maybe "This is true for both ILP32 and LP64 data models used by Linux"?
(because that would not be the case for the ILP64 model).

Otherwise s/systems/Linux variants/?

> +     *
> +     * So read in chunks no larger than 0x7ffff000 bytes.
> +     */
> +    size_t max_chunk_size = 0x7ffff000;

We can declare it static const.
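Something like:

    static const size_t max_chunk_size = 0x7ffff000; /* kernel's MAX_RW_COUNT */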

> +    size_t offset = 0;
> +
> +    while (offset < len) {
> +        size_t chunk_len = MIN(max_chunk_size, len - offset);
> +        ssize_t br = read(fd, dst + offset, chunk_len);
> +
> +        if (br < 0) {
> +            return br;
> +        }
> +        offset += br;
> +    }
> +
> +    return (ssize_t)len;
> +}

I see other read()/pread() calls:

hw/9pfs/9p-local.c:472:            tsize = read(fd, (void *)buf, bufsz);
hw/vfio/common.c:269:    if (pread(vbasedev->fd, &buf, size, region->fd_offset + addr) != size) {
...

Maybe read_large() belongs in "sysemu/os-xxx.h"?
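Something along these lines, perhaps (name and header purely a sketch):

    /* Loop around read() so callers are not limited to MAX_RW_COUNT bytes. */
    ssize_t qemu_read_large(int fd, void *dst, size_t len);

Then the other call sites could be converted to it later.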

