Hi all,

I am trying to organize our small (and only) file storage, and I'm thinking about it 
in ZFS terms :)

So I have a Sun Fire X4100 (2 x dual-core AMD Opteron 280, 4 GB of RAM, Solaris 10 x86 
06/06 with the 64-bit kernel + updates), a Sun Fibre Channel HBA card (QLogic-based) and 
an Apple Xserve RAID with 7 TB (2 RAID controllers, each with 7 x 500 GB ATA disks). 
The two internal SAS drives are in RAID 1 using the built-in LSI controller.

The Xserve RAID is configured as follows: 6 disks in HW RAID 5 plus one spare disk 
per controller.

So I have:

# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0t2d0 <DEFAULT cyl 8872 alt 2 hd 255 sec 63>
          /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1000,[EMAIL PROTECTED]/[EMAIL PROTECTED],0
       1. c4t6000393000017312d0 <APPLE-Xserve RAID-1.50-2.27TB>
          /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1077,[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
       2. c5t600039300001742Bd0 <APPLE-Xserve RAID-1.50-2.27TB>
          /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1077,[EMAIL PROTECTED],1/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0


I need a place to keep multiple builds of our products (a huge number of small 
files). This will take about 2 TB, so it seems logical to dedicate the whole of 
"1." or "2." from the output above to it. What would be the best block size to 
supply to the "zfs create" command to get the most out of a filesystem that holds 
a huge number of small files?
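
Just to make sure I am asking about the right knob: I assume the property in 
question is recordsize, set on the filesystem itself, roughly along these lines 
(the pool/filesystem names and the 16K value are only placeholders):

# zpool create builds c4t6000393000017312d0
# zfs create builds/products
# zfs set recordsize=16K builds/products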

The other pool will host users' home directories, project files and other data.

Right now I am thinking of creating two separate ZFS pools, with "1." and "2." 
each being the only physical device in its own pool.

Or would I be better off creating one ZFS pool that includes both "1." and "2."?
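
In commands, the two layouts I am weighing look roughly like this (pool names are 
placeholders, device names are taken from the format output above). Two pools:

# zpool create builds c4t6000393000017312d0
# zpool create tank c5t600039300001742Bd0

versus one pool spanning both LUNs:

# zpool create tank c4t6000393000017312d0 c5t600039300001742Bd0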

Later on I will use NFS to share this file storage among Linux, Solaris, 
OpenSolaris and Mac OS X hosts.
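
I assume that on the ZFS side this comes down to setting the sharenfs property on 
the filesystems I want to export, e.g.:

# zfs set sharenfs=on tank/home

and then the Linux, OpenSolaris and Mac OS X hosts simply mount them as ordinary 
NFS exports.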
 
 