> > I installed OpenSolaris and set up rpool as my base install on a single
> > 1TB drive
> 
> If I understand correctly, you have rpool and the data pool configured all
> as one pool?
Correct 

> That's probably not what you'd really want. For one part, the bootable root
> pool should all be available to GRUB from a single hardware device, and this
> precludes any striping or raidz configurations for the root pool (only
> single drives and mirrors are supported).
Makes sense
> You should rather make a separate root pool (how big depends on your
> installation size, RAM -> swap, and the number of OS versions you want to
> be able to roll back); I'd say anything from 8 to 20GB should suffice.
I would like to use a 16GB SD card for this - if there is a post or a resource
on "how to" that you know of, please point me to it.
> The rest of the disk (as another slice) then becomes the data pool, which
> can later be expanded by adding stripes. Obviously, data already on the disk
> won't magically become striped to all drives unless you rewrite it.
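> 
> Very roughly - device names here are made up, and the root pool itself
> would normally be laid down by the installer onto whatever small device
> you pick (e.g. your SD card) - the data-pool side could look like:
> 
>   zpool create tank c1t1d0      # whole 1TB drive becomes the data pool
>   zfs create tank/media        # filesystems for your data
>   zpool add tank c1t2d0        # later: grow the pool with another stripe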
> 
> > a single 1TB drive
> 
> Minor detail: I thought you were moving 1.5TB disks? Or did you find a drive
> with sufficiently little data on it (1TB used)?
I have 2 x 1TB drives that are clean and 8 x 1.5TB drives with all my data on
them.
> > transferring data across till the drive was empty
> 
> I thought the NTFS driver for Solaris was read-only?
Nope, I copied (not moved) all the data - 800GB so far in three and a half
hours - successfully to my rpool.

> Not a good transactional approach. Delete the original data only after all
> copying has completed (and perhaps been cross-checked) and the disk can
> actually be reused in the ZFS pool.
> 
> For example, if you were to remake the pool (as suggested above for rpool
> and below for the raidz data pool) - where would you get the original data
> back from for copying over again?
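> 
> For instance (the paths here are hypothetical), a quick cross-check before
> wiping a source disk could be as simple as comparing checksums:
> 
>   digest -a md5 /mnt/ntfs/video/file.avi
>   digest -a md5 /tank/video/file.avi
> 
> Only when they match (ideally for a decent sample of files) would I wipe
> the NTFS disk and hand it over to the pool.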
> 
> > I haven't worked out if I can transform my zpool into a raidz after I
> > have copied all my data.
> 
> My guess would be - no, you can't (not directly at least). I think you can
> mirror the striped pool's component drives on the fly by buying new drives
> one at a time - which requires buying those drives.
I'm trying to spare myself the expense, as this is my home system, so budget
is a constraint.
> Or, if you buy and attach all 8-9 drives at once, you can build another pool
> with a raidz layout and migrate all data to it. Your old drives can then be
> attached to this pool as another raidz vdev stripe (or even a mirror, but
> that's probably not needed for your usecase). These scenarios are not unlike
> raid50 or raid51, respectively.
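> 
> As a rough sketch (dataset and device names are made up), the migration and
> the later raid50-like expansion would be along these lines:
> 
>   zpool create tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0   # new drives
>   zfs snapshot -r oldpool/data@move
>   zfs send -R oldpool/data@move | zfs receive -d tank
>   # once the old pool is emptied and destroyed, its drives can join
>   # the new pool as a second raidz1 vdev:
>   zpool add tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0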
> 
> In the case of striping, you can build and expand your pool with vdevs of
> different layout and size. As said before, there is currently the problem
> that you can't shrink the pool to remove devices (other than breaking
> mirrors into single drives).
> 
> Perhaps you can get away with buying, for now, only the "parity" drives for
> your future pool layout (which depends on the number of motherboard and
> controller connectors, power supply capacity, your computer case size, etc.)
> and following the ideas for the "best-case" scenario from my post.
The motherboard has 7 SATA connectors; in addition I have an Intel SATA RAID
controller with 6 connectors which I haven't put in yet, and I am using a
dual-PSU Coolermaster case which supports 16 drives.
 
> 
> Then you'd start the pool by making a raidz1 device of 3-5 drives total (new
> empty ones, possibly including the "missing" fake parity device), and then
> making and attaching more similar new raidz vdevs to the pool as you free up
> NTFS disks.
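> 
> The "missing parity" trick, sketched with made-up names (a sparse file
> stands in for the drive you don't have yet):
> 
>   # sparse file roughly the size of a real drive (not larger, so a real
>   # 1.5TB disk can replace it later); it takes no actual space
>   mkfile -n 1395g /var/tmp/fakeparity
>   zpool create tank raidz1 c2t0d0 c2t1d0 c2t2d0 /var/tmp/fakeparity
>   zpool offline tank /var/tmp/fakeparity   # pool runs degraded from day one
>   # later, when a real drive is freed up or bought:
>   zpool replace tank /var/tmp/fakeparity c2t3d0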
> 
> I did some calculations on this last evening.
> 
> For example, if your data fits on 8 "data" drives, you can make 1*8-Ddrive
> raidz1 set with 9 drives (8+1), 2*4-Ddrive sets with 10 drives (8+2), or
> 3*3-Ddrive sets with 12 drives (9+3).
> 
> I'd buy 4 new drives and stick with the latter 12-drive pool scenario:
> 1) build a complete 4-drive raidz1 set (3-Ddrive + 1*Pdrive),
> 2) move over 3 drives' worth of data,
> 3) build and attach a fake 4-drive raidz1 set (3-Ddrive + 1 missing Pdrive),
> 4) move over 3 drives' worth of data,
> 5) build and attach a fake 4-drive raidz1 set (3-Ddrive + 1 missing Pdrive),
> 6) move over 2 drives' worth of data,
> 7) complete the parities for the missing Pdrives of the two faked sets.
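> 
> At the command level, and with made-up device names, that sequence might
> look roughly like this (the sparse files play the "missing" Pdrives):
> 
>   # 1) first, a complete raidz1 set of 4 new drives
>   zpool create tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0
>   # 2) move ~3 drives' worth of data over, freeing 3 NTFS disks
>   # 3) second set: 3 freed drives + a sparse file faking the parity drive
>   mkfile -n 1395g /var/tmp/fake1
>   zpool add tank raidz1 c3t0d0 c3t1d0 c3t2d0 /var/tmp/fake1
>   zpool offline tank /var/tmp/fake1
>   # 4)-6) repeat the move/add cycle for the third set (/var/tmp/fake2)
>   # 7) finally give the faked sets their real parity drives
>   zpool replace tank /var/tmp/fake1 c4t0d0
>   zpool replace tank /var/tmp/fake2 c4t1d0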
> 
> This does not in any way involve the capacity of your boot/root drives
> (which I think were expected to be a CF card, no?). So you already have at
> least one such drive ;) Even if your current drive is partially consumed by
> the root pool, I think you can sacrifice some 20GB on each drive in one
> 4-disk raidz1 vdev. You can mirror the root pool with one of these drives,
> and make a mirrored swap pool on the other couple.
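> 
> Sketching that with made-up slice names (s0 being the ~20GB slice on each
> of the four drives): mirror rpool onto one slice, and build a small mirrored
> swap pool from a couple of the others:
> 
>   zpool attach rpool c0t0d0s0 c1t0d0s0   # second half of the root mirror
>   # (GRUB also needs to be installed on the new half with installgrub)
>   zpool create swappool mirror c1t1d0s0 c1t2d0s0
>   zfs create -V 8G swappool/swap
>   swap -a /dev/zvol/dsk/swappool/swap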
OK, I am going to have to read through this slowly and fully understand the
fake-parity scenario. What I am trying to avoid is having multiple raidz's,
because every time I have another one I lose a lot of extra space to parity,
much like in RAID 5.
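(If I've got the maths right, with twelve 1.5TB drives a single 12-wide
raidz1 would leave 11 drives, ~16.5TB, usable, while three 4-drive raidz1
sets leave only 9 drives, ~13.5TB - three drives of parity instead of one.)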
> //Jim
And lastly, thanks so very much for spending so much time and effort in
transferring knowledge - I really do appreciate it.