List, I'm bringing you into the middle of an off-list conversation where I'm setting up a RAID10 array -- that is, two RAID1 arrays used as the drives for a RAID0 array.

All relevant information follows. Any clue as to why I'm ending up with an array 1/4 the size I'm expecting?


On 10/18/13 23:16, Constantine A. Murenin wrote:
> No clue what you're talking about; I thought stacking works just fine
> since a few releases back.  Are you sure it panic'ed with the
> partitions partitioned and specified correctly?
>
> Another question is whether you'd want to have a huge 6TB partition in
> OpenBSD -- generally something that's not advised.
>
> C.

Hmm, I stand corrected; I must have done something wrong. Either way, I'm not quite getting the result I'd hoped for. Here are the details:

- the 3TB drives in dmesg look like this:

# dmesg|grep sd[0-9]
sd0 at scsibus0 targ 0 lun 0: <ATA, ST3000DM001-9YN1, CC4B> SCSI3 0/direct fixed naa.5000c500525bf426
sd0: 2861588MB, 512 bytes/sector, 5860533168 sectors
sd1 at scsibus0 targ 1 lun 0: <ATA, ST3000DM001-9YN1, CC4B> SCSI3 0/direct fixed naa.5000c5005265ff15
sd1: 2861588MB, 512 bytes/sector, 5860533168 sectors
sd2 at scsibus0 targ 2 lun 0: <ATA, ST3000DM001-9YN1, CC4B> SCSI3 0/direct fixed naa.5000c5004a5baa2e
sd2: 2861588MB, 512 bytes/sector, 5860533168 sectors
sd3 at scsibus0 targ 3 lun 0: <ATA, ST3000DM001-9YN1, CC4B> SCSI3 0/direct fixed naa.5000c5004a6e56f1
sd3: 2861588MB, 512 bytes/sector, 5860533168 sectors
sd4 at scsibus2 targ 0 lun 0: <ATA, ST3000DM001-1CH1, CC43> SCSI3 0/direct fixed naa.5000c5004e455146
sd4: 2861588MB, 512 bytes/sector, 5860533168 sectors
sd5 at scsibus2 targ 1 lun 0: <ATA, ST3000DM001-1CH1, CC43> SCSI3 0/direct fixed naa.5000c5004e4a8141
sd5: 2861588MB, 512 bytes/sector, 5860533168 sectors

[snip]

The three RAID1 arrays were created from the above six HDDs (commands reconstructed below) and show up like so:

sd9 at scsibus4 targ 1 lun 0: <OPENBSD, SR RAID 1, 005> SCSI2 0/direct fixed
sd9: 2861588MB, 512 bytes/sector, 5860532576 sectors
sd10 at scsibus4 targ 2 lun 0: <OPENBSD, SR RAID 1, 005> SCSI2 0/direct fixed
sd10: 2861588MB, 512 bytes/sector, 5860532576 sectors
sd11 at scsibus4 targ 3 lun 0: <OPENBSD, SR RAID 1, 005> SCSI2 0/direct fixed
sd11: 2861588MB, 512 bytes/sector, 5860532576 sectors

sd9 = sd0a + sd1a; sd10 = sd2a + sd3a; sd11 = sd4a + sd5a
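
For completeness, since I didn't capture the exact invocations: each RAID1 was created with a bioctl command along these lines (again, reconstructed from memory):

# bioctl -c 1 -l /dev/sd0a,/dev/sd1a softraid0    # became sd9
# bioctl -c 1 -l /dev/sd2a,/dev/sd3a softraid0    # became sd10
# bioctl -c 1 -l /dev/sd4a,/dev/sd5a softraid0    # became sd11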

Observe:

# bioctl -i sd9
Volume      Status               Size Device
softraid0 0 Online      3000592678912 sd9     RAID1
          0 Online      3000592678912 0:0.0   noencl <sd1a>
          1 Online      3000592678912 0:1.0   noencl <sd0a>
[ root@elminster:~ ]
# bioctl -i sd10
Volume      Status               Size Device
softraid0 1 Online      3000592678912 sd10    RAID1
          0 Online      3000592678912 1:0.0   noencl <sd2a>
          1 Online      3000592678912 1:1.0   noencl <sd3a>
[ root@elminster:~ ]
# bioctl -i sd11
Volume      Status               Size Device
softraid0 2 Online      3000592678912 sd11    RAID1
          0 Online      3000592678912 2:0.0   noencl <sd4a>
          1 Online      3000592678912 2:1.0   noencl <sd5a>

At this point, I have data on sd10, so I'll use only sd9 and sd11. Here are their disklabels, lightly snipped for brevity:

[ root@elminster:~ ]
# disklabel -pg sd9
# /dev/rsd9c:
label: SR RAID 1
duid: a7a8a62ef8e71b99
total sectors: 5860532576 # total bytes: 2794.5G
boundstart: 0
boundend: 5860532576

16 partitions:
#                size           offset  fstype [fsize bsize  cpg]
  a:          2794.5G                0    RAID
  c:          2794.5G                0  unused
[ root@elminster:~ ]
# disklabel -pg sd11
# /dev/rsd11c:
label: SR RAID 1
duid: 4b3e16399fbbbcf6
total sectors: 5860532576 # total bytes: 2794.5G
boundstart: 0
boundend: 5860532576

16 partitions:
#                size           offset  fstype [fsize bsize  cpg]
  a:          2794.5G                0    RAID
  c:          2794.5G                0  unused

As you can see above, all is looking good. sd10, which has data, was omitted.
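
In case anyone asks how those "a" partitions were made: every disk, the RAID1 volumes included, got the same treatment in disklabel -E. From memory, the session looked roughly like this:

# disklabel -E sd9
> b
Starting sector: [0] 0
Size ('*' for entire disk): *
> a a
offset: [0]
size: [5860532576] *
FS type: [4.2BSD] RAID
> w
> q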

Now, the moment of truth...  (I'm recreating this from memory...)

# bioctl -c 0 -l /dev/sd9a,/dev/sd11a softraid0

I forget exactly what was said (it was one reboot ago), but I ended up with this in dmesg:

sd13 at scsibus4 targ 5 lun 0: <OPENBSD, SR RAID 0, 005> SCSI2 0/direct fixed
sd13: 1528871MB, 512 bytes/sector, 3131129344 sectors

(BTW, I have a crypto volume in there as sd12, hence the jump from 11->13)

Do you see the problem with the above?  The disklabel makes it more obvious:

# disklabel -pg sd13
# /dev/rsd13c:
type: SCSI
disk: SCSI disk
label: SR RAID 0
duid: dcfed0a6c6b194e9
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 194903
total sectors: 3131129344 # total bytes: 1493.0G
boundstart: 0
boundend: 3131129344
drivedata: 0

16 partitions:
#                size           offset  fstype [fsize bsize  cpg]
  a:          1493.0G                0  4.2BSD   8192 65536    1
  c:          1493.0G                0  unused

# bioctl -i sd13
Volume      Status               Size Device
softraid0 4 Online      1603138224128 sd13    RAID0
          0 Online      3000592408576 4:0.0   noencl <sd9a>
          1 Online      3000592408576 4:1.0   noencl <sd11a>


This should be a 3TB RAID1 (sd9) + a 3TB RAID1 (sd11) = a 6TB RAID0 (sd13), but I'm only getting 1.5TB, one quarter of what I should have. And yes, in disklabel -E I used "b" to set the boundaries starting at zero and "*" to use the whole disk.
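
Staring at the numbers, here's a wild guess. The two chunks together are

    2 x 5,860,532,576 = 11,721,065,152 sectors

which doesn't fit in 32 bits. If something in the size calculation truncates to a 32-bit value:

    11,721,065,152 mod 2^32 = 11,721,065,152 - (2 x 4,294,967,296)
                            = 3,131,130,560 sectors

and that's within about 1,200 sectors (metadata and stripe rounding, maybe) of the 3,131,129,344 sectors I actually got. So it looks less like a wrong division and more like a 32-bit overflow somewhere in the size math. Pure speculation on my part, of course.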

# newfs sd13a
[snip]

# mount -o rw,noatime,softdep /dev/sd13a /storage/raid10

# df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
[snip]
/dev/sd13a     1.4T    8.0K    1.4T     0%    /storage/raid10

And that's how it stands. I guess RAID10, or stacking, or whatever you wish to call it, doesn't quite work just yet...

Fun experiment; too bad it didn't work out.

I'm all ears if anyone has a suggestion that can turn that 1.4T into a 5.6T. :D

--
Scott McEachern

https://www.blackstaff.ca

"Beware the Four Horsemen of the Information Apocalypse: terrorists, drug dealers, 
kidnappers, and child pornographers. Seems like you can scare any public into allowing 
the government to do anything with those four."  -- Bruce Schneier
