I am using a CentOS 6-based distro called ClearOS. I believe it is running a 
2.6.xx kernel, if memory serves.



-----Original message-----
From: Gustin Johnson <gus...@meganerd.ca>
Sent: Thursday 26th September 2013 11:38
To: CLUG General <clug-talk@clug.ca>
Subject: Re: [clug-talk] software raid question

Did you resize the file system?  I have two 5.5 TB arrays (one 4 disk RAID 5 
and one 5 disk RAID 6) both built via mdadm.  One of the arrays has been 
resized 4 times over the years.
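Growing the md device and growing the file system on it are two separate 
steps, which is why I ask. A minimal sketch, assuming the array is /dev/md0 
with an ext3/ext4 file system directly on it (swap in your actual device and 
fs tools):

  # let the array use all available space on each component device
  sudo mdadm --grow /dev/md0 --size=max
  # watch the resync finish before going further
  cat /proc/mdstat
  # then grow the file system to fill the array (ext only; use the
  # matching tool for xfs, etc.)
  sudo resize2fs /dev/md0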

What distro and kernel are you running?  I seem to remember there being a 
kernel config option for large block devices (CONFIG_LBD, IIRC) back in the 
earlier 2.6.xx days (that option does not seem to be present on my 3.5.xx+ 
kernels).
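If you want to check whether your kernel has it, something like this should 
work, assuming your distro drops the build config in /boot (a guess on my 
part):

  grep CONFIG_LBD /boot/config-$(uname -r)
  # CONFIG_LBD=y enables block devices larger than 2 TB on 32-bit kernels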


My RAID 6 array:
 sudo mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Sat Aug 25 23:02:27 2012
     Raid Level : raid6
     Array Size : 5860125696 (5588.65 GiB 6000.77 GB)
  Used Dev Size : 1953375232 (1862.88 GiB 2000.26 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Thu Sep 26 08:07:02 2013
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : cygnus:md-backup0  (local to host cygnus)
           UUID : bd5bb90e:853a22ab:1dcc9120:9f320b29
         Events : 65908

    Number   Major   Minor   RaidDevice State
       0       8       98        0      active sync   /dev/sdg2
       1       8       66        1      active sync   /dev/sde2
       2       8       50        2      active sync   /dev/sdd2
       3       8       82        3      active sync   /dev/sdf2
       5      65       18        4      active sync   /dev/sdr2

One quick note: RAID 5 is actually dangerous and not something I would 
recommend if you care about the data. If you lose one device, the rebuild has 
to read every remaining disk end to end, and while you are thrashing those 
disks a second failure (or a single unrecoverable read error) loses the whole 
array.
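Back of the envelope, assuming the 1-per-10^14-bits unrecoverable read error 
rate that consumer drives are commonly spec'd at (check your datasheet, this 
is an assumption): rebuilding a degraded 3x3TB RAID 5 reads both surviving 
disks in full, so

  2 x 3 TB read = 4.8 x 10^13 bits
  expected UREs = 4.8 x 10^13 / 10^14 = 0.48
  P(at least one URE during rebuild) = 1 - e^-0.48, roughly 38%

and a single URE during a RAID 5 rebuild can take out the whole array.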


On Thu, Sep 26, 2013 at 7:52 AM, Andrew Robinson <and...@boohahaonline.com> 
wrote:
Hey, has anyone run into issues with trying to grow a Linux software RAID? 
I've just finished syncing all of my data from 3x2TB (in RAID 5) to 3x3TB 
(still RAID 5). The array should show a capacity of approx. 5.5 TB, but 
instead shows only 4 TB. I've tried re-running the mdadm --grow command, but 
with no success. Is there a hard limit to the size of an array in Linux?
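One thing I did wonder about: 4 TB is suspiciously close to 2 TB per data 
disk, and I have read that the old 0.90 metadata format caps each component 
device at 2 TiB on these kernels (if I understand it correctly). I was going 
to verify the metadata version with something like this (/dev/md0 stands in 
for my actual array):

  sudo mdadm --detail /dev/md0 | grep -E 'Version|Dev Size'
  # Version 0.90 caps components near 2 TiB; the 1.x formats do not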

 
I should mention that each of these drives is formatted GPT, with one 
partition of type FD that consumes the entire 2.79 TB of space on each drive.
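(FD is really the MBR type code; under GPT the equivalent is the Linux RAID 
partition type, fd00 in gdisk.) To confirm each partition really spans the 
whole disk I have been checking with something like this, where sdb stands in 
for each member disk:

  sudo parted /dev/sdb unit TB print
  # or: sudo gdisk -l /dev/sdb   (RAID members show partition type fd00)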

 
I've been using the instructions located at 
https://raid.wiki.kernel.org/index.php/Growing#Expanding_existing_partitions 
if anyone is curious about what I've done so far.
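For each drive, the procedure there boils down to a cycle like the following 
(a sketch only; /dev/md0 and /dev/sdb1 stand in for the real names, and each 
rebuild has to finish before touching the next disk):

  sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
  # repartition so the partition covers the full 3 TB, then re-add it
  sudo mdadm /dev/md0 --add /dev/sdb1
  cat /proc/mdstat    # wait for the rebuild to complete
  # once all three disks are done, grow the array itself
  sudo mdadm --grow /dev/md0 --size=max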


_______________________________________________
clug-talk mailing list
clug-talk@clug.ca
http://clug.ca/mailman/listinfo/clug-talk_clug.ca
Mailing List Guidelines (http://clug.ca/ml_guidelines.php)
**Please remove these lines when replying
