Ross <myxiplx <at> googlemail.com> writes:
> Now this is risky if you don't have backups, but one possible approach
> might be:
> - Take one of the 1TB drives off your raid-z pool
> - Use your 3 1TB drives, plus two sparse 1TB files and create a 5 drive
>   raid-z2
> - disconnect the sparse files.  You now have a 3TB raid-z2 volume in a
>   degraded state
> - use zfs send / receive to migrate your data over
> - destroy your original pool and use zpool replace to add those drives
>   to the new pool in place of the sparse files

This would work but it would give the original poster a raidz2 with only 3TB 
of usable space when he really wants a 4TB raidz1.

Fortunately, Robert, a similar procedure exists to end up with exactly the 
pool config you want without requiring any other temporary drives. Before I 
go further, let me tell you there is a real risk of losing your data, 
because the procedure I describe below uses temporary striped pools 
(equivalent to raid0) to copy data around, and as you know raid0 is the 
least reliable raid mechanism. Also, the procedure involves a lot of manual 
steps.

So, let me first represent your current pool config in compact form using 
drive names describing their capacity:
  pool (2.6TB usable):  raidz a-1t b-1t c-1t  raidz d-320g e-400g f-400g

Export the 1st pool, create a 2nd temporary striped pool made of your 2 new 
drives plus f-400g, then reimport the 1st pool (f-400g should show up as 
missing in the 1st one):
  1st pool (2.6TB usable):  raidz a-1t b-1t c-1t  raidz d-320g e-400g <missing>
  2nd pool (2.4TB usable):  g-1t h-1t f-400g
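
Concretely, assuming the pools are named pool1/pool2 and the drives show 
up under device names like c1t0d0 (all names below are hypothetical, 
substitute your own), this step might look like:
  zpool export pool1
  # no "raidz" keyword = striped; -f because f-400g still carries the
  # 1st pool's label
  zpool create -f pool2 c5t0d0 c5t1d0 c2t2d0   # g-1t h-1t f-400g
  zpool import pool1
  zpool status pool1   # 2nd raidz vdev shows DEGRADED, f-400g missing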

Copy your data to the 2nd pool, destroy the 1st one and create a 3rd temporary 
striped pool made of the 2 smallest drives:
  1st pool (destroyed):     (unused drives: a-1t b-1t c-1t)
  2nd pool (2.4TB usable):  g-1t h-1t f-400g
  3rd pool (0.7TB usable):  d-320g e-400g
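
A sketch of the copy and re-shuffle, with the same hypothetical names 
(the snapshot name "move" is mine; -R replicates all datasets and their 
properties, -F lets receive overwrite the empty root dataset):
  zfs snapshot -r pool1@move
  zfs send -R pool1@move | zfs receive -dF pool2
  zpool destroy pool1
  zpool create -f pool3 c2t0d0 c2t1d0   # d-320g e-400g, striped again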

Create 2 sparse files x-1t and y-1t of 1 TB each on the 3rd pool ("mkfile -n 
932g x-1t y-1t", 1TB is about 932GiB), and recreate the 1st pool with a raidz 
vdev made of 3 physical 1TB drives and the 2 sparse files:
  1st pool (4.0TB usable(*)):  raidz a-1t b-1t c-1t x-1t y-1t
  2nd pool (2.4TB usable):     g-1t h-1t f-400g
  3rd pool (0.7TB usable):     d-320g e-400g

(*) 4.0TB virtually; in practice the sparse files won't be able to allocate 
a full 1TB of disk blocks each because they are backed by the 3rd pool, 
which is much smaller.
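
With the same hypothetical names, and given that the 3rd pool mounts at 
/pool3 by default (file vdevs must be given as absolute paths):
  mkfile -n 932g /pool3/x-1t /pool3/y-1t   # -n = sparse, no blocks yet
  zpool create -f pool1 raidz c1t0d0 c1t1d0 c1t2d0 /pool3/x-1t /pool3/y-1t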

Offline one of the sparse files in the 1st pool ("zpool offline") so that 
at least one of them never allocates disk blocks:
  1st pool (4.0TB usable(**)):  raidz a-1t b-1t c-1t x-1t <offlined>
  2nd pool (2.4TB usable):      g-1t h-1t f-400g
  3rd pool (0.7TB usable):      d-320g e-400g

(**) At that point x-1t can grow to at least 0.7 TB because it is the only 
consumer of disk blocks on the 3rd pool, which means the 1st pool can now 
hold at least 0.7*4 = 2.8 TB in practice.
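
Offlining the file vdev is a one-liner:
  zpool offline pool1 /pool3/y-1t
  zpool status pool1   # y-1t OFFLINE; the raidz is DEGRADED but writable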

Now you should be able to copy all your data from the 2nd pool back to the 1st 
one. When done, destroy the 2nd pool:
  1st pool (4.0TB usable):  raidz a-1t b-1t c-1t x-1t <offlined>
  2nd pool (destroyed):     (unused drives: g-1t h-1t f-400g)
  3rd pool (0.7TB usable):  d-320g e-400g
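
Same send/receive dance in the other direction, then drop the 2nd pool 
(again, the snapshot name is mine):
  zfs snapshot -r pool2@moveback
  zfs send -R pool2@moveback | zfs receive -dF pool1
  zpool destroy pool2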

Finally, replace x-1t and the offlined y-1t with g-1t and h-1t 
("zpool replace"):
  1st pool (4.0TB usable):  raidz a-1t b-1t c-1t g-1t h-1t
  2nd pool (destroyed):     (unused drives: f-400g)
  3rd pool (0.7TB usable):  d-320g e-400g
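
With my hypothetical device names the two replacements would be (add -f 
if zpool complains about the old 2nd-pool labels on g-1t and h-1t):
  zpool replace pool1 /pool3/x-1t c5t0d0   # x-1t -> g-1t
  zpool replace pool1 /pool3/y-1t c5t1d0   # y-1t -> h-1t
Both replacements kick off a resilver automatically.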

And destroy the 3rd pool.
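
One caveat before you do: let the resilvers finish first, because x-1t 
lives on the 3rd pool and still holds live data until the replace 
completes:
  zpool status pool1   # wait until the resilver is reported complete
  zpool destroy pool3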

-marc

