Make blk_bio_segment_split() respect blk_crypto_bio_sectors_alignment()
when calling bio_split(). The number of sectors is rounded down to the
required alignment just before the call to bio_split(). As a result,
nsegs may be overestimated, but this is much simpler than calculating
the exact nsegs needed for the aligned number of sectors. A future
patch will attempt to calculate nsegs more accurately.
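
For illustration only (not part of the patch), here is a minimal
userspace sketch of the rounding applied before bio_split(). It
assumes the alignment returned by blk_crypto_bio_sectors_alignment()
is a power of two, as the kernel's round_down() requires; the values
8 and 123 below are hypothetical examples, not taken from the patch.

/*
 * Userspace sketch, not kernel code: mimics round_down() for a
 * power-of-two alignment. An alignment of 8 sectors (4096-byte data
 * units / 512-byte sectors) and a split point of 123 sectors are
 * hypothetical example values.
 */
#include <assert.h>
#include <stdio.h>

static unsigned int round_down_pow2(unsigned int x, unsigned int align)
{
	return x & ~(align - 1);	/* same arithmetic as round_down() */
}

int main(void)
{
	unsigned int align = 8;		/* hypothetical sectors per data unit */
	unsigned int sectors = 123;	/* hypothetical unaligned split point */

	sectors = round_down_pow2(sectors, align);
	assert(sectors % align == 0);
	printf("split at %u sectors\n", sectors);	/* prints 120 */
	return 0;
}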

Signed-off-by: Satya Tangirala <sat...@google.com>
---
 block/blk-merge.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index a23a91e12e24..45cda45c1066 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -236,6 +236,8 @@ static bool bvec_split_segs(const struct request_queue *q,
  * following is guaranteed for the cloned bio:
  * - That it has at most get_max_io_size(@q, @bio) sectors.
  * - That it has at most queue_max_segments(@q) segments.
+ * - That the number of sectors in the returned bio is aligned to
+ *   blk_crypto_bio_sectors_alignment(@bio).
  *
  * Except for discard requests the cloned bio will point at the bi_io_vec of
  * the original bio. It is the responsibility of the caller to ensure that the
@@ -292,6 +294,9 @@ static int blk_bio_segment_split(struct request_queue *q,
         */
        bio->bi_opf &= ~REQ_HIPRI;
 
+       sectors = round_down(sectors, blk_crypto_bio_sectors_alignment(bio));
+       if (WARN_ON(sectors == 0))
+               return -EIO;
        *split = bio_split(bio, sectors, GFP_NOIO, bs);
        return 0;
 }
-- 
2.30.0.284.gd98b1dd5eaa7-goog
