We were advertising the maximum discard and transfer length based on UINT32_MAX. But since the rest of the block layer caps a transaction at signed int, nothing could ever reach that maximum, and we risk overflowing an int once limits are converted from sector-based to byte-based.

What's more, we DO have a much smaller limit: both the current kernel and qemu-nbd enforce a hard limit of 32M on a read or write transaction. While they may also permit up to a full 32 bits on a discard transaction, the upstream NBD protocol is proposing wording that, without any explicit advertisement otherwise, clients should limit ALL requests to the same limits as read and write, even though the other request types do not actually send as many bytes across the wire. So the better limit to tell the block layer is 32M for both values.
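For reference, the arithmetic behind the change looks roughly like the standalone sketch below. The macro names mirror the ones used by block/nbd.c, but the values are redefined locally for illustration rather than taken verbatim from the QEMU headers:

/* Sketch only: shows why UINT32_MAX-based sector limits overflow a
 * signed 32-bit byte count, while a 32M cap does not. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define BDRV_SECTOR_BITS    9
#define BDRV_SECTOR_SIZE    (1 << BDRV_SECTOR_BITS)        /* 512 bytes */
#define NBD_MAX_BUFFER_SIZE (32 * 1024 * 1024)             /* 32M read/write cap */
#define NBD_MAX_SECTORS     (NBD_MAX_BUFFER_SIZE / BDRV_SECTOR_SIZE)  /* 65536 */

int main(void)
{
    /* Old advertisement: ~8M sectors, i.e. just under 4G bytes, which
     * no longer fits in a signed 32-bit byte count. */
    uint32_t old_sectors = UINT32_MAX >> BDRV_SECTOR_BITS;
    uint64_t old_bytes = (uint64_t)old_sectors * BDRV_SECTOR_SIZE;

    printf("old: %u sectors = %" PRIu64 " bytes (overflows int: %s)\n",
           old_sectors, old_bytes, old_bytes > INT32_MAX ? "yes" : "no");
    /* New advertisement: 65536 sectors = 32M bytes, well within int range. */
    printf("new: %d sectors = %d bytes\n",
           NBD_MAX_SECTORS, NBD_MAX_SECTORS * BDRV_SECTOR_SIZE);
    return 0;
}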
Signed-off-by: Eric Blake <ebl...@redhat.com>
---
v2: new patch
---
 block/nbd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 6015e8b..bf67c8a 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -362,8 +362,8 @@ static int nbd_co_flush(BlockDriverState *bs)
 
 static void nbd_refresh_limits(BlockDriverState *bs, Error **errp)
 {
-    bs->bl.max_discard = UINT32_MAX >> BDRV_SECTOR_BITS;
-    bs->bl.max_transfer_length = UINT32_MAX >> BDRV_SECTOR_BITS;
+    bs->bl.max_discard = NBD_MAX_SECTORS;
+    bs->bl.max_transfer_length = NBD_MAX_SECTORS;
 }
 
 static int nbd_co_discard(BlockDriverState *bs, int64_t sector_num,
-- 
2.5.5