The branch main has been updated by eugen:

URL: https://cgit.FreeBSD.org/src/commit/?id=81092e92ea5184c4eeedad58044d72cfef72dd24

commit 81092e92ea5184c4eeedad58044d72cfef72dd24
Author:     Eugene Grosbein <eu...@freebsd.org>
AuthorDate: 2024-02-12 07:24:28 +0000
Commit:     Eugene Grosbein <eu...@freebsd.org>
CommitDate: 2024-02-12 07:33:43 +0000

    graid: unbreak Promise RAID1 with 4+ providers
    
    Fix a problem in the graid implementation of Promise RAID1 created with
    4+ disks. Such an array generally works only until reboot, due to a bug
    in the metadata writing code. Before the fix, the next taste erroneously
    created RAID1E (a kind of RAID10) instead of RAID1, so graid used wrong
    offsets for I/O operations.
    
    The bug did not affect Promise RAID1 arrays with only 2 or 3 disks.
    
    Reviewed by:    mav
    MFC after:      3 days
---
 sys/geom/raid/md_promise.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/sys/geom/raid/md_promise.c b/sys/geom/raid/md_promise.c
index ba7a4d2b1cc4..d0d041e027c2 100644
--- a/sys/geom/raid/md_promise.c
+++ b/sys/geom/raid/md_promise.c
@@ -1762,8 +1762,9 @@ g_raid_md_write_promise(struct g_raid_md_object *md, struct g_raid_volume *tvol,
                meta->total_disks = vol->v_disks_count;
                meta->stripe_shift = ffs(vol->v_strip_size / 1024);
                meta->array_width = vol->v_disks_count;
-               if (vol->v_raid_level == G_RAID_VOLUME_RL_RAID1 ||
-                   vol->v_raid_level == G_RAID_VOLUME_RL_RAID1E)
+               if (vol->v_raid_level == G_RAID_VOLUME_RL_RAID1)
+                       meta->array_width = 1;
+               else if (vol->v_raid_level == G_RAID_VOLUME_RL_RAID1E)
                        meta->array_width /= 2;
                meta->array_number = vol->v_global_id;
                meta->total_sectors = vol->v_mediasize / 512;
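
For illustration, here is a minimal userland sketch (not the actual geom_raid
code; the enum and helper functions below are simplified stand-ins) of how the
old and new metadata write paths compute array_width for a RAID1 volume. It
shows why only arrays with 4 or more disks were affected:

/*
 * Simplified stand-in for the array_width calculation in
 * g_raid_md_write_promise(); not the real g_raid structures.
 */
#include <stdio.h>

enum raid_level { RL_RAID1, RL_RAID1E };

/* Old behavior: array_width was halved for both RAID1 and RAID1E. */
static int
old_array_width(enum raid_level rl, int disks)
{
	int width = disks;

	if (rl == RL_RAID1 || rl == RL_RAID1E)
		width /= 2;
	return (width);
}

/* Fixed behavior: RAID1 always records width 1; only RAID1E is halved. */
static int
new_array_width(enum raid_level rl, int disks)
{
	int width = disks;

	if (rl == RL_RAID1)
		width = 1;
	else if (rl == RL_RAID1E)
		width /= 2;
	return (width);
}

int
main(void)
{
	/* Promise metadata with array_width > 1 is tasted back as RAID1E. */
	for (int disks = 2; disks <= 5; disks++)
		printf("RAID1, %d disks: old width %d, new width %d\n",
		    disks, old_array_width(RL_RAID1, disks),
		    new_array_width(RL_RAID1, disks));
	return (0);
}

With 2 or 3 disks the old code still records a width of 1 (integer division),
so the volume is tasted back as RAID1; with 4 or more disks the recorded width
becomes 2 or more, which the taste path interprets as RAID1E.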
