Brandon High wrote:
> On Wed, Jul 9, 2008 at 3:37 PM, Florin Iucha <[EMAIL PROTECTED]> wrote:
>> The question is, how should I partition the drives, and what tuning
>> parameters should I use for the pools and file systems? From reading
>> the best practices guides [1], [2], it seems that I cannot have the
>> root file system on a RAID-5 pool; it has to be a separate storage
>> pool. This seems to be slightly at odds with the suggestion of using
>> whole disks for ZFS, not just slices/partitions.
>
> The reason for using a whole disk is that ZFS will turn on the drive's
> cache. When using slices, the cache is normally disabled. If all
> slices are using ZFS, you can turn the drive cache back on. I don't
> think it happens by default right now, but you can set it manually.

As I recall, using a whole disk for ZFS also changes the disk label to
EFI, meaning you can't boot from it.
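To make the whole-disk/slice distinction concrete, a minimal sketch
(the device names like c1t0d0 are invented here, not from the thread):

    # Whole disk: ZFS writes an EFI label and will enable the drive's
    # write cache itself
    zpool create tank c1t0d0

    # A slice: the existing SMI label is kept, and the write cache is
    # not enabled automatically
    zpool create tank2 c1t0d0s0

    # On disks that support it, the write cache can be displayed and
    # toggled from format's expert mode (cache -> write_cache menu)
    format -e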
> Another alternative is to use an IDE to Compact Flash adapter, and
> boot off of flash.
Just curious, what will that flash contain? E.g., will it be similar to
Linux's /boot, or will it contain the full Solaris root? How do you
manage redundancy (e.g. a mirror) for that boot device?
>> My plan right now is to create a 20 GB and a 720 GB slice on each
>> disk, then create two storage pools, one RAID-1 (20 GB) and one
>> RAID-5 (1.44 TB). Create the root, var, usr and opt file systems in
>> the first pool, and home, library and photos in the second.
> Good plan.
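For concreteness, that layout would look roughly like the sketch below;
the controller/target numbers, and the assumption that s0 is the 20 GB
slice and s1 the 720 GB slice, are invented for illustration:

    # Root pool: three-way mirror over the 20 GB slices
    zpool create rpool mirror c1t0d0s0 c1t1d0s0 c1t2d0s0

    # Data pool: single-parity RAID-Z over the 720 GB slices
    # (about 1.44 TB usable out of 2.16 TB raw)
    zpool create tank raidz c1t0d0s1 c1t1d0s1 c1t2d0s1

    # File systems in the data pool
    zfs create tank/home
    zfs create tank/library
    zfs create tank/photos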
>> I hope I won't need swap, but I could create three 1 GB slices (one
>> on each disk) for that.
>
> If you have enough memory (say 4 GB) you probably won't need swap. I
> believe swap can live in a ZFS pool now too, so you won't necessarily
> need another slice. You'll just have RAID-Z protected swap.
Really? I think Solaris still needs non-ZFS swap for the default dump
device.

Regards,
Fajar
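For reference, swap on a zvol and a check of the dump device would look
roughly like this (the pool name rpool and the 1 GB size are
assumptions, not from the thread):

    # Carve a 1 GB volume out of the pool and add it as swap
    zfs create -V 1g rpool/swap
    swap -a /dev/zvol/dsk/rpool/swap

    # Show where crash dumps currently go
    dumpadm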