On 7/7/25 6:41 PM, John Garry wrote:
> The atomic write unit max value is limited by any stacked device stripe
> size.
> 
> It is required that the atomic write unit is a power-of-2 factor of the
> stripe size.
> 
> Currently we use the io_min limit to hold the stripe size, and check for
> io_min <= SECTOR_SIZE when deciding if we have a striped stacked device.
> 
> Nilay reports that this causes a problem when the physical block size is
> greater than SECTOR_SIZE [0].
> 
> Furthermore, io_min may be mutated when stacking devices, and this makes
> it a poor candidate to hold the stripe size. Such an example (of when
> io_min may change) would be when the io_min is less than the physical
> block size.
> 
> Use chunk_sectors to hold the stripe size, which is more appropriate.
> 
> [0] https://lore.kernel.org/linux-block/888f3b1d-7817-4007-b3b3-1a2ea04df...@linux.ibm.com/T/#mecca17129f72811137d3c2f1e477634e77f06781
> 
> Signed-off-by: John Garry <john.g.ga...@oracle.com>
> ---
>  block/blk-settings.c | 58 ++++++++++++++++++++++++++------------------
>  1 file changed, 35 insertions(+), 23 deletions(-)
> 
> diff --git a/block/blk-settings.c b/block/blk-settings.c
> index 761c6ccf5af7..3259cfac5d0d 100644
> --- a/block/blk-settings.c
> +++ b/block/blk-settings.c
> @@ -597,41 +597,52 @@ static bool blk_stack_atomic_writes_boundary_head(struct queue_limits *t,
>       return true;
>  }
>  
> -
> -/* Check stacking of first bottom device */
> -static bool blk_stack_atomic_writes_head(struct queue_limits *t,
> -                             struct queue_limits *b)
> +static void blk_stack_atomic_writes_chunk_sectors(struct queue_limits *t)
>  {
> -     if (b->atomic_write_hw_boundary &&
> -         !blk_stack_atomic_writes_boundary_head(t, b))
> -             return false;
> +     unsigned int chunk_sectors = t->chunk_sectors, chunk_bytes;
>  
> -     if (t->io_min <= SECTOR_SIZE) {
> -             /* No chunk sectors, so use bottom device values directly */
> -             t->atomic_write_hw_unit_max = b->atomic_write_hw_unit_max;
> -             t->atomic_write_hw_unit_min = b->atomic_write_hw_unit_min;
> -             t->atomic_write_hw_max = b->atomic_write_hw_max;
> -             return true;
> -     }
> +     if (!chunk_sectors)
> +             return;
> +
> +     /*
> +      * If chunk sectors is so large that its value in bytes overflows
> +      * UINT_MAX, then just shift it down so it definitely will fit.
> +      * We don't support atomic writes of such a large size anyway.
> +      */
> +     if ((unsigned long)chunk_sectors << SECTOR_SHIFT > UINT_MAX)
> +             chunk_bytes = chunk_sectors;
> +     else
> +             chunk_bytes = chunk_sectors << SECTOR_SHIFT;
>  

Can we use check_shl_overflow() here to check for the overflow? Otherwise,
the changes look good to me. I've also tested them using my NVMe disk,
which supports atomic writes of up to 256KB.

Reviewed-by: Nilay Shroff <ni...@linux.ibm.com>
Tested-by: Nilay Shroff <ni...@linux.ibm.com>

