Re: [zfs-discuss] Best option for my home file server?

2007-09-28 Thread Christopher
I'm new to the list so this is probably a noob question: Is this forum part of 
a mailing list or something? I keep getting some answers to my posts in this 
thread by email as well as some here, but it seems that those answers/posts on 
email aren't posted on this forum? Or do I just get a copy by email of what 
people post here on the forum?

Georg Edelmann wrote me on email saying he was interested in making a 
homeserver/nas as I'm about to (try to) do and wanted to know my hardware etc.

What I was thinking of using for this server was an Asus A8N-SLI Deluxe with some 
kind of AMD64 CPU, probably the cheapest X2 I can find, and pair it with 1 or 
perhaps 2GB of RAM. The mainboard has 8 SATA ports onboard, 4 nvidia and 4 sil3114. I 
was also gonna get a 2-port SATA add-on controller card, totalling 10 SATA ports. But 
now I'm not sure, since alhopper just said the performance of the 3114 is poor.

Blake, on the other hand, mentioned the Sil3114 as a controller chip to use. I 
will of course not make use of the fake-raid on the mainboard.

Kent - I see your point and it's a good one, but for me, I only want a big 
fileserver with redundancy for my music collection, movie collection, 
pictures, etc. I would of course make a backup of the most important data as 
well from time to time.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best option for my home file server?

2007-09-28 Thread Kent Watsen


I made a mistake in calculating the mttdl-drop for adding stripes - it 
should have read:


   2 disks: space=500 GB, mttdl=760.42 years, iops=158
   4 disks: space=1000 GB, mttdl=380 years, iops=316
   6 disks: space=1500 GB, mttdl=*253* years, iops=474
   8 disks: space=2000 GB, mttdl=*190* years, iops=632
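
(The corrected figures appear to be just the single-mirror MTTDL divided by the
number of mirror stripes, since losing any one stripe loses the pool.  A quick
sketch that reproduces them - the 760.42-year figure is taken from the table
above:)

   m=760.42   # MTTDL of one 2-disk mirror, in years
   for k in 1 2 3 4; do
       awk -v m=$m -v k=$k 'BEGIN { printf("%d disks: mttdl=%.2f years\n", 2*k, m/k) }'
   done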

So, in my conclusion, it should have read:

  1. is less expensive per addition (it's always just two disks)
  2. not limited in number of stripes (a raidz should only hold up to 8
 data disks)
  3. *drops mttdl much less quickly (in fact, you'd need 12 stripes
 before hitting the 8+1 mttdl)*
  4. increases performance (adding disks to a raidz set has no impact)
  5. increases space more slowly (the only negative - can you live with
 it?)

Sorry!
Kent


Kent Watsen wrote:


I think I have managed to confuse myself so i am asking outright hoping for a straight answer. 
  

Straight answer:

ZFS does not (yet) support adding a disk to an existing raidz set
- the only way to expand an existing pool is by adding a stripe. 
Stripes can either be mirror, raid5, or raid6 (raidz w/ single or

double parity) - these striped pools are also known as raid10,
raid50, and raid60 respectively.  Each stripe in a pool may be
different in both size and type - essentially, each offers space
at a resiliency rating.  However, since apps can't control which
stripe their data is written to, all stripes in a pool generally
have the same amount of parity.  Thus, in practice, stripes differ
only in size, which can be achieved either by using larger disks
or by using more disks (in a raidz).  When stripes are of
different size, ZFS will, in time, consume all the space each
stripe offers - assuming data-access is completely balanced,
larger stripes effectively have more I/O.  Regarding matching the
amount of parity in each stripe, note that a 2-disk mirror has the
same amount of parity as RAID5 and a 3-disk mirror has the same
parity as RAID6.


So, if the primary goal is to grow a pool over time by adding as few 
disks as possible each time while having 1 bit of parity, you need to 
plan on each time adding two disks in a mirrored configuration.   Thus 
your number of disks would grow like this: 2, 4, 6, 8, 10, etc.



But since folks apparently want to be able to just add disks to a 
RAIDZ, lets compare that to adding 2-disk mirror stripes in terms of 
impact to space, resiliency, and performance.   In both cases I'm 
assuming 500GB disks having a MTBF of 4 years,7,200 rpm, and 8.5 ms 
average read seek.


Lets first consider adding disks to a RAID5:

Following the ZFS best-practice rule of (N+P), where N={2,4,8} and
P={1,2}, the disk-count should grow as follows: 3, 5, 9.  That is,
you would start with 3, add 2, and then add 4 - note: this would
be the limit of the raidz expansion since ZFS discourages N>8.  
So, the pool's MTTDL would be:


3  disks: space=1000 GB, mttdl=760.42 years, iops=79
5  disks: space=2000 GB, mttdl=228.12 years, iops=79
9  disks: space=4000 GB, mttdl=63.37 years, iops=79

Now lets consider adding 2-disk mirror stripes:

We already said that the disks would grow by twos: 2, 4, 6, 8, 10,
etc.  - so the pool's MTTDL would be:

2 disks: space=500 GB, mttdl=760.42 years, iops=158
4 disks: space=1000 GB, mttdl=380 years, iops=316
6 disks: space=1500 GB, mttdl=190 years, iops=474
8 disks: space=2000 GB, mttdl=95 years, iops=632

So, adding 2-disk mirrors:

   1. is less expensive per addition (its always just two disks)
   2. not limited in number of stripes (a raidz should only hold up to
  8 data disks)
   3. drops mttdl at about the same rate (though the raidz is dropping
  a little faster)
   4. increases performance (adding disks to a raidz set has no impact)
   5. increases space more slowly (the only negative - can you live
  with it?)


Highly Recommended Resources:


http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance
http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl




Hope that helps,

Kent




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best option for my home file server?

2007-09-28 Thread Kent Watsen
Christopher wrote:
> Kent - I see your point and it's a good one and, but for me, I only want a 
> big fileserver with redundancy for my music collection, movie collection and 
> pictures etc. I would of course make a backup of the most important data as 
> well from time to time.
>  
Chris,

We have two things in common - I'm also a n00b (only started looking at 
ZFS seriously in June) and I'm also building a home server for my 
music/movies/pictures and all the other data in my house.  For me, 
maximizing space and resiliency are more important than performance (as 
even entry-level performance exceeds my worst-case of 7 simultaneous 
1080p video streams).  I decided to get a 24-bay case and will start 
with a single 4+2 set, and will stripe-in the remaining three 4+2 sets 
over time.  The reason I chose this approach over having a bunch of 
2-disk mirrors striped is because similar calculations resulted in the 
following:

- 11 * (2-disk mirror): space=11 TB, mttdl=69 years, iops=1738 (2 hot-spares not 
included in the mttdl calc)
-  4 * (4+2 raidz2 set): space=16 TB, mttdl=8673.5 years, iops=316

So you see, I get more space and resiliency, but not as good performance 
(though it exceeds my needs).
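
For what it's worth, growing the pool that way is just a matter of adding
another raidz2 top-level vdev whenever the next batch of drives goes in; a
minimal sketch (device names are made up):

   # start with one 4+2 raidz2 set
   zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
   # later, stripe in the next 4+2 set as an additional top-level vdev
   zpool add tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0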

Thanks,
Kent




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-28 Thread Brian H. Nelson
Dale Ghent wrote:
> Yes, it's in there:
>
> [EMAIL PROTECTED]/$ cat /etc/release
>  Solaris 10 8/07 s10x_u4wos_12b X86
>   
It's also available in U3 (and probably earlier releases as well) after 
installing kernel patch 120011-14 or 120012-14. I checked this last night.

Prior releases have the zil_noflush tunable, but it seems that it only 
turned off some of the flushing. That one was present in U3 (and maybe 
U2) as released.

IMO the better option is to configure the array to ignore the syncs, if 
that's possible. I'm not sure if it is in the case of the arrays you listed.
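
For reference, both of these go in /etc/system; a minimal sketch of the usual
form (an assumption on my part - double-check the exact variable names for your
release, and only use them when the array cache is battery-backed):

   * /etc/system fragment - only safe with non-volatile (battery-backed) cache
   * newer tunable (S10 8/07, or U3 with the kernel patch mentioned above):
   set zfs:zfs_nocacheflush = 1
   * older tunable, which only disables some of the flushes:
   set zfs:zil_noflush = 1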

-Brian

-- 
---
Brian H. Nelson Youngstown State University
System Administrator   Media and Academic Computing
  bnelson[at]cis.ysu.edu
---

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] new PCI Express flash card

2007-09-28 Thread Al Hopper

FYI only - may be of interest to ZFSers (and not available yet):

http://www.tgdaily.com/content/view/34065/135/

Also would require an OpenSolaris custom driver (AFAICT).

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Won't work with a straight or mirror zfsroot

2007-09-28 Thread Kugutsumen

I just tried again with Tim Foster's script
( http://mediacast.sun.com/share/timf/zfs-actual-root-install.sh ) and I get
the same negative results...

with mirror c1t0d0s0 c2t0d0s0, I get "init(1M) exited on fatal signal 9" spam.
with a straight c1t0d0s0, I get the same problem...

I tried with 2 Sandisk U3 Cruzer USB drives (in mirror mode and single 
disk)... I also tried with a Kingston because I was worried the U3 
and emulated CD were causing problems... but got the same result.

If I remove the original UFS boot disk, there is a panic message 
after the Solaris Version header... but it just flashes and reboots 
before I have time to read it...

Otherwise, if I keep the original UFS boot disk, I get the init(1M) 
exited on fatal signal 9 crap.
I am going to try one more time with a real hard disk instead of a UFS 
boot disk.

Lori told me in a private e-mail that the mountroot approach was a hack... 
well, I prefer a hack that worked to this nightmare.
I would really prefer having a minimal UFS boot and mounting my raidz2 
rootfs to wasting days trying to make zfs boot work as expected!

That's the thing: zfs mountroot worked perfectly, yet support for it was 
taken out in build 62... great backward compatibility here!

On 28/09/2007, at 6:17 PM, Kugutsumen wrote:

>
> Using build 70, I followed the zfsboot instructions at http:// 
> www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/  to the  
> letter.
>
> I tried first with a mirror zfsroot, when I try to boot to zfsboot  
> the screen is flooded with "init(1M) exited on fatal signal 9"
>
> Than I tried with a simple zfs pool (not mirrored) and it just  
> reboots right away.
>
> If I try to setup grub manually, it has no problem in both cases  
> finding the root_archive, etc...:
>
> grub> root (hd0,0,a)
>  Filesystem type is zfs, partition type 0xbf
>
> grub> /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
> loading 'platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
> cpu: 'AuthenticAMD' family 15 model 47 step 0
> [BIOS accepted mixed-mode target setting!]
>   [Multiboot-kluge, loadaddr=0xbffe38, text-and-data=0x161958,  
> bss=0x0, entry=0xc0]
> '/platform/i86pc/kernel/amd64/unix -B zfs-bootfs=rootpool/21' is  
> loaded
>
> grub> boot
>
> Any idea what I am doing wrong?
>
> Thanks and regards
> Kugutsumen
>
> 
>
> This is everything I did:
>
> # root is about 350 megs ... so it should fit nicely on our flash  
> disk.
>
> # let's make separate partition for /, /var, /opt, /usr and /export/ 
> home
>
> # Later we'll move / to the separate disk (in this case a zfs  
> mirror pool)
> # and we will create a raidz2 pool for /var, /opt, /usr and /export/ 
> home... so we won't have to worry much about the size
> # of these volumes.
> # finally we'll create a zvol for swap this machine has really a  
> lot of ram so I don't really worry about swapping.
>
> # So we Modify c0t0d0... and create:
>
> # 0 / 1024M
> # 1 swap 518M
> # 2 /usr 10240M
> # 3 /var  10240M
> # 4 /opt  10240M
> # 7 /export/home ... whatever is left
>
> formatted my flashdisks with 2 identical unamed slices (s0) of 969M
>
> Installed everything on that temporary install disk (c0t0d0)
>
> zpool create -f datapool raidz2 /dev/dsk/c4d0  /dev/dsk/c5d0  /dev/ 
> dsk/c6d0  /dev/dsk/c7d0
> zfs create datapool/usr
> zfs create datapool/opt
> zfs create datapool/var
> zfs create datapool/home
> zfs set mountpoint=legacy datapool/usr
> zfs set mountpoint=legacy datapool/opt
> zfs set mountpoint=legacy datapool/var
> zfs set mountpoint=legacy datapool/home
>
> zfs create -V 2g datapool/swap
>
>
> # http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/
>
> zpool create -f rootpool c1t0d0s0
> zfs create rootpool/rootfs
>
> zfs set mountpoint=legacy rootpool/rootfs
> mkdir /zfsroot
> mount -F zfs rootpool/rootfs /zfsroot
>
> cd /zfsroot ; mkdir -p usr opt var home export/home
>
> mount -F zfs datapool/usr /zfsroot/usr
> mount -F zfs datapool/opt /zfsroot/opt
> mount -F zfs datapool/var /zfsroot/var
> mount -F zfs datapool/home /zfsroot/export/home
>
> Added the following to /etc/vfstab
> rootpool/rootfs - /zfsroot  zfs - yes -
> datapool/usr- /zfsroot/usr  zfs - yes -
> datapool/var- /zfsroot/var  zfs - yes -
> datapool/opt- /zfsroot/opt  zfs - yes -
> datapool/home   - /zfsroot/export/home zfs - yes -
> /dev/zvol/dsk/datapool/swap -   -   swap-
> no  -
>
> cd / ; find . -xdev -depth -print | cpio -pvdm /zfsroot
> cd / ; find usr -xdev -depth -print | cpio -pvdm /zfsroot
> cd / ; find var -xdev -depth -print | cpio -pvdm /zfsroot
> cd / ; find opt -xdev -depth -print | cpio -pvdm /zfsroot
> cd / ; find export/home -xdev -depth -print | cpio -pvdm /zfsroot
>
> # ran this script: http://www.opensolaris.org/os/community/zfs/boot/ 
> zfsboot-manual/create_dirs/
>
>  mount -F lofs -o nosub / /mnt
> (cd /mnt; tar cvf - devices dev ) | (cd /zfsroot; tar xvf -)
> umount /mnt
>
> # edit /zfsroot/etc/vfs

[zfs-discuss] DMAPI in ZFS

2007-09-28 Thread Oliver Scholz
Hello!

Does anyone know if/when ZFS will support DMAPI?

regards

Oliver
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Need Help Choosing a Rackmount Chassis

2007-09-28 Thread Blake
I'm looking for a rackmount chassis for an x86 ZFS fileserver I want to build
for my organization.

Requirements:

Hot-swap SATA disk support
Minimum of 4-disk SATA support (would prefer 6+)
Hot-swap power supply (redundant)
Some kind of availability for replacement parts

I'll be putting in a board/proc/controller of my choice.  Sadly, there is no
lower-end offering similar to the Thumper, which is way too costly for my
org at the moment.

Any input or advice is greatly appreciated.

Blake
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Won't work with a straight or mirror zfsroot

2007-09-28 Thread Al Hopper
On Fri, 28 Sep 2007, Kugutsumen wrote:

>
> Using build 70, I followed the zfsboot instructions at http://
> www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/  to the
> letter.
>
> I tried first with a mirror zfsroot, when I try to boot to zfsboot
> the screen is flooded with "init(1M) exited on fatal signal 9"
>
> Than I tried with a simple zfs pool (not mirrored) and it just
> reboots right away.
>
> If I try to setup grub manually, it has no problem in both cases
> finding the root_archive, etc...:
>
> grub> root (hd0,0,a)
>  Filesystem type is zfs, partition type 0xbf
>
> grub> /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
> loading 'platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
> cpu: 'AuthenticAMD' family 15 model 47 step 0
> [BIOS accepted mixed-mode target setting!]
>   [Multiboot-kluge, loadaddr=0xbffe38, text-and-data=0x161958,
> bss=0x0, entry=0xc0]
> '/platform/i86pc/kernel/amd64/unix -B zfs-bootfs=rootpool/21' is loaded
>
> grub> boot
>
> Any idea what I am doing wrong?

No - but I'll post my (very ugly) ZFS boot "cheat sheet" that'll get 
you up and running, from scratch, in less than one hour.  Apologies 
for the nastiness of this cheat sheet - I'll pretty it up and post it 
later.

# this "cheat sheet" makes the following assumptions:

# The install server is at 192.168.80.18
# The install server is using a ZFS based filesystem with 
# a pool called tanku.  The users home directory is also
# in this pool at /tanku/home/al
# the target machine has ethernet address: 00:e0:81:2f:e1:4f

# verify that your ethernet interface supports PXE boot
# most systems do - except for low-end ethernet cards that
# don't have an option RAM
# verify that you know which two disks you'll be loading the
# OS to.  If need be, you'll have to boot the box off a DVD
# and run format to determine the available disk drives

# determine the ethernet address of the interface you'll be
# using for PXE boot.  See more notes below.  Make a note 
# of it.

# next: download Lori Alt's/Dave Miner's ZFS boot tools:

wget 
http://www.opensolaris.org/os/community/install/files/zfsboot-kit-20060418.i386.tar.bz2

# Yes - the date should be 20070418
# unzip and untar them - in this case they'll end up in
# /tanku/home/al/zfsboot/20070418 (aka ~al/zfsboot/20070418)
# Find and read the README file.  But don't spend too much
# time studying it.  The cheat sheet will tell you what to do.

# on the install server setup a ZFS bootable netinstall image for b72
mkdir /mnt72
chown root:sys /mnt72
chmod 755 /mnt72
# FYI only: /solimages is an NFS mount
lofiadm -a /solimages/sol-nv-b72-x86-dvd.iso
# assumes that lofiadm returned "/dev/lofi/2"
mount -F hsfs -o ro /dev/lofi/2 /mnt72
zfs create tanku/b72installzfs
zfs set sharenfs='ro,anon=0' tanku/b72installzfs
cd /mnt72/Solaris_11/Tools
./setup_install_server /tanku/b72installzfs
cd /tanku/home/al/zfsboot/20070418
# next step takes around 13 minutes (why?)
ptime ./patch_image_for_zfsboot /tanku/b72installzfs
# remove the DVD image mount and cleanup
umount /mnt72
lofiadm -d /dev/lofi/2

# verify that you can mount /tanku/b72installzfs on another machine
# as a quick test.  Best to check this _now_ than try to figure it
# out later.
# mount -F nfs -o ro,vers=3,proto=tcp 192.168.80.18:/tanku/b72installzfs /mnt
# go to the prepared zfs boot area (in this case /tanku/b72installzfs)
cd /tanku/b72installzfs/Solaris_11/Tools

# add the install server files
./add_install_client -d -e 00:e0:81:2f:e1:4f -s 
192.168.80.18:/tanku/b72installzfs  i86pc

# you'll see instructions to add the client macros (something) like:

 If not already configured, enable PXE boot by creating
 a macro named 0100E0812FE14F with:
   Boot server IP (BootSrvA) : 192.168.80.18
   Boot file  (BootFile) : 0100E0812FE14F

# using: the screen-by-screen guide at: 
# http://www.sun.com/bigadmin/features/articles/jumpstart_x86_x64.jsp
# starting at step 5:
# ^^
# 5. Configure and Run the DHCP Server
#
# add the two macros and use the name 0100E0812FE14F

# NB: Ignore *all* the stuff up to step 5.  You don't need any of it!

# NB: the macro must have the correct name
# verify that the tftp based files are available

df |grep tftp

# it should look something *like* this:
# /tanku/b72installzfs/boot
#    260129046  3564877  256564169   2%   /tftpboot/I86PC.Solaris_11-2

# test that the tftp files can be retrieved via tftp:

cd /tmp
tftp

> connect 192.168.80.18
> get 0100E0812FE14F
> quit

# enable FTP on your boot server to allow snagging the zfs boot profile file:

svcadm enable ftp

# change your password before you dare use ftp.  Remember to use a disposable
# password - because it can be sniffed on the LAN.  After we're done with FTP,
# restore your original password.

# enable PXE boot in the target system's BIOS

# boot the target machine
# during the early phases of booting press F12  ASAP

# you should see the machine contact the DHCP server and start down

Re: [zfs-discuss] (no subject)

2007-09-28 Thread David Runyon
Actually, I want 200 megabytes/sec (200 MB/sec); I'm OK with using 2 or 4 GbE ports 
to the network as needed.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] new PCI Express flash card

2007-09-28 Thread Jason P. Warr
This looks really promising.  At the $30/GB target it is half the market price for 
decent RAM.

Effective lifetime is obviously lower given that it is flash, although most of 
the SSD makers have been doing some pretty impressive cell balancing to make it 
worth it.

Personally I would like to see something in the 32-64GB range to use instead of 
an ExpressCard.

- Original Message -
From: "Al Hopper" <[EMAIL PROTECTED]>
To: zfs-discuss@opensolaris.org
Sent: Friday, September 28, 2007 10:41:11 AM (GMT-0600) America/Chicago
Subject: [zfs-discuss] new PCI Express flash card


FYI only - may be of interest to ZFSers (and not available yet):

http://www.tgdaily.com/content/view/34065/135/

Also would require an OpenSolaris custom driver (AFAICT).

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Boot Won't work with a straight or mirror zfsroot

2007-09-28 Thread Kugutsumen

Using build 70, I followed the zfsboot instructions at
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ to the
letter.

I tried first with a mirror zfsroot; when I try to boot to zfsboot 
the screen is flooded with "init(1M) exited on fatal signal 9".

Then I tried with a simple zfs pool (not mirrored) and it just 
reboots right away.

If I try to set up grub manually, it has no problem in either case 
finding the root_archive, etc.:

grub> root (hd0,0,a)
  Filesystem type is zfs, partition type 0xbf

grub> /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
loading 'platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
cpu: 'AuthenticAMD' family 15 model 47 step 0
[BIOS accepted mixed-mode target setting!]
   [Multiboot-kluge, loadaddr=0xbffe38, text-and-data=0x161958,  
bss=0x0, entry=0xc0]
'/platform/i86pc/kernel/amd64/unix -B zfs-bootfs=rootpool/21' is loaded

grub> boot

Any idea what I am doing wrong?

Thanks and regards
Kugutsumen



This is everything I did:

# root is about 350 megs ... so it should fit nicely on our flash disk.

# let's make separate partitions for /, /var, /opt, /usr and /export/home

# Later we'll move / to a separate disk (in this case a zfs mirror pool)
# and we will create a raidz2 pool for /var, /opt, /usr and /export/home...
# so we won't have to worry much about the size of these volumes.
# finally we'll create a zvol for swap; this machine has a lot of RAM
# so I don't really worry about swapping.

# So we Modify c0t0d0... and create:

# 0 / 1024M
# 1 swap 518M
# 2 /usr 10240M
# 3 /var  10240M
# 4 /opt  10240M
# 7 /export/home ... whatever is left

formatted my flashdisks with 2 identical unnamed slices (s0) of 969M

Installed everything on that temporary install disk (c0t0d0)

zpool create -f datapool raidz2 /dev/dsk/c4d0 /dev/dsk/c5d0 /dev/dsk/c6d0 /dev/dsk/c7d0
zfs create datapool/usr
zfs create datapool/opt
zfs create datapool/var
zfs create datapool/home
zfs set mountpoint=legacy datapool/usr
zfs set mountpoint=legacy datapool/opt
zfs set mountpoint=legacy datapool/var
zfs set mountpoint=legacy datapool/home

zfs create -V 2g datapool/swap


# http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/

zpool create -f rootpool c1t0d0s0
zfs create rootpool/rootfs

zfs set mountpoint=legacy rootpool/rootfs
mkdir /zfsroot
mount -F zfs rootpool/rootfs /zfsroot

cd /zfsroot ; mkdir -p usr opt var home export/home

mount -F zfs datapool/usr /zfsroot/usr
mount -F zfs datapool/opt /zfsroot/opt
mount -F zfs datapool/var /zfsroot/var
mount -F zfs datapool/home /zfsroot/export/home

Added the following to /etc/vfstab:
rootpool/rootfs  -  /zfsroot              zfs   -  yes  -
datapool/usr     -  /zfsroot/usr          zfs   -  yes  -
datapool/var     -  /zfsroot/var          zfs   -  yes  -
datapool/opt     -  /zfsroot/opt          zfs   -  yes  -
datapool/home    -  /zfsroot/export/home  zfs   -  yes  -
/dev/zvol/dsk/datapool/swap  -  -  swap  -  no  -

cd / ; find . -xdev -depth -print | cpio -pvdm /zfsroot
cd / ; find usr -xdev -depth -print | cpio -pvdm /zfsroot
cd / ; find var -xdev -depth -print | cpio -pvdm /zfsroot
cd / ; find opt -xdev -depth -print | cpio -pvdm /zfsroot
cd / ; find export/home -xdev -depth -print | cpio -pvdm /zfsroot

# ran this script: http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/create_dirs/

  mount -F lofs -o nosub / /mnt
(cd /mnt; tar cvf - devices dev ) | (cd /zfsroot; tar xvf -)
umount /mnt

# edit /zfsroot/etc/vfstab
zpool set bootfs=rootpool/rootfs rootpool
echo etc/zfs/zpool.cache >>/zfsroot/boot/solaris/filelist.ramdisk

/usr/sbin/bootadm update-archive -R /zfsroot
  mkdir -p /rootpool/boot/grub

# Since we are using a mirror, we need grub in both disks' MBR
# also disable splash since it will not work well with a mirror setup

cat >/rootpool/boot/grub/menu.lst
default 0
fallback 1

title Solaris ZFS mirror 1
root (hd0,0,a)
bootfs rootpool/rootfs
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive

title Solaris ZFS mirror 2
root (hd1,0,a)
bootfs rootpool/rootfs
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive

title Solaris Failsafe
kernel /boot/platform/i86pc/kernel/unix -B console=ttya
module /boot/x86.miniroot

# install grub on both hard disks
installgrub /zfsroot/boot/grub/stage1 /zfsroot/boot/grub/stage2 /dev/rdsk/c1t0d0s0
installgrub /zfsroot/boot/grub/stage1 /zfsroot/boot/grub/stage2 /dev/rdsk/c2t0d0s0

swap -a /dev/zvol/dsk/datapool/swap

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best option for my home file server?

2007-09-28 Thread Blake
I would agree that the performance of the SiI 3114 is not great.  I have a
similar ASUS board, and have used the standalone controller as well.
Adaptec makes a nice 2-channel SATA card that is a lot better, though about
2x as much money.  The Supermicro/Marvell controller is very well rated and
supports 8 drives I think.  The best option would be to get the LSI card
that also works on SPARC hardware (which is way more industrial-grade than
anything pee-cee) - but that card is about 300 bux.

Remember that ZFS obsoletes the need for hardware RAID, so you will need
(for example in the case of the 3114) to set the controller to expose
individual disks to the OS.  In the case of the 3114 this means re-flashing
the controller BIOS.

As far as the system goes, make sure you use a 64-bit proc (you can address a
lot more memory with ZFS this way) and lots of RAM.  Anything below 4GB of
RAM in the Solaris world is considered paltry :^) - Solaris makes extremely
good use of lots of RAM, and ZFS in particular (because of its smart I/O
scheduler) enjoys nice performance gains on a box with lots of RAM.  If I
were you, I'd buy the cheapest 64-bit proc you can and spend the saved money
on maxing the RAM out.
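
A couple of quick sanity checks once the box is up (just the standard Solaris
commands, nothing ZFS-specific):

   # confirm the 64-bit kernel is actually running
   isainfo -kv
   # confirm how much memory the system sees
   prtconf | grep Memory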

Blake

On 9/28/07, Christopher <[EMAIL PROTECTED]> wrote:
>
> I'm new to the list so this is probably a noob question: Are this forum
> part of a mailinglist or something? I keep getting some answers to my posts
> in this thread on email as well as some here, but it seems that those
> answers/posts on email aren't posted on this forum..?? Or do I just get a
> copy on email from what ppl post here on the forum?
>
> Georg Edelmann wrote me on email saying he was interested in making a
> homeserver/nas as I'm about to (try to) do and wanted to know my hardware
> etc.
>
> What I was thinking of using for this server was Asus A8N-SLI Deluxe with
> some kind of AMD64 CPU, probably the cheapest X2 I can find and pair it with
> 1 or perhaps 2GB of RAM. The mainbord has 8 SATA onboard, 4 nvidia and 4
> sil3114. I was also gonna get a 2sata add-on controller card, totalling 10
> sata ports. But now I'm not sure, since alhopper just said the performance
> of the 3114 is poor.
>
> Blake, on the other hand mentioned the Sil3114 as a controller chip to
> use. I will of course not make use of the fake-raid on the mainboard.
>
> Kent - I see your point and it's a good one and, but for me, I only want a
> big fileserver with redundancy for my music collection, movie collection and
> pictures etc. I would of course make a backup of the most important data as
> well from time to time.
>
>
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Boot Won't work with a straight or mirror zfsroot

2007-09-28 Thread Jürgen Keil
> 
> Using build 70, I followed the zfsboot instructions
> at http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ 
> to the  letter.
> 
> I tried first with a mirror zfsroot, when I try to boot to zfsboot  
> the screen is flooded with "init(1M) exited on fatal signal 9"

Could be this problem:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6423745

> This is everything I did:

> zpool create -f rootpool c1t0d0s0
> zfs create rootpool/rootfs
> 
> zfs set mountpoint=legacy rootpool/rootfs
> mkdir /zfsroot
> mount -F zfs rootpool/rootfs /zfsroot

Ok.
 
> cd /zfsroot ; mkdir -p usr opt var home export/home
> 
> mount -F zfs datapool/usr /zfsroot/usr
> mount -F zfs datapool/opt /zfsroot/opt
> mount -F zfs datapool/var /zfsroot/var
> mount -F zfs datapool/home /zfsroot/export/home
> 
> Added the following to /etc/vfstab
> rootpool/rootfs - /zfsroot  zfs - yes -
> datapool/usr- /zfsroot/usr  zfs - yes -
> datapool/var- /zfsroot/var  zfs - yes -
> datapool/opt- /zfsroot/opt  zfs - yes -
> datapool/home   - /zfsroot/export/home zfs - yes
> -
> /zvol/dsk/datapool/swap -   -   swap-
>
>  -
> cd / ; find . -xdev -depth -print | cpio -pvdm /zfsroot
> cd / ; find usr -xdev -depth -print | cpio -pvdm /zfsroot
> cd / ; find var -xdev -depth -print | cpio -pvdm /zfsroot
> cd / ; find opt -xdev -depth -print | cpio -pvdm /zfsroot
> cd / ; find export/home -xdev -depth -print | cpio -pvdm /zfsroot
> 
> # ran this script:
> http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/create_dirs/
> 
> mount -F lofs -o nosub / /mnt
> (cd /mnt; tar cvf - devices dev ) | (cd /zfsroot; tar xvf -)
> umount /mnt

Your source root filesystem is on UFS?

I think much of the above steps could be simplified by populating
the zfs root filesystem like this:

mount -F zfs rootpool/rootfs /zfsroot
ufsdump 0f - / | (cd /zfsroot; ufsrestore xf -)
umount /zfsroot

That way, you don't have to use the "create_dirs" script,
or mess with the /devices and /dev device tree and the
lofs mount.

Using ufsdump/ufsrestore also gets the lib/libc.so.1 file correct
in the rootfs zfs, which typically has some lofs file mounted on
top of it.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best option for my home file server?

2007-09-28 Thread Kent Watsen


I think I have managed to confuse myself so i am asking outright hoping for a straight answer. 
  

Straight answer:

   ZFS does not (yet) support adding a disk to an existing raidz set -
   the only way to expand an existing pool is by adding a stripe. 
   Stripes can either be mirror, raid5, or raid6 (raidz w/ single or

   double parity) - these striped pools are also known as raid10,
   raid50, and raid60 respectively.  Each stripe in a pool may be
   different in both size and type - essentially, each offers space at
   a resiliency rating.  However, since apps can't control which stripe
   their data is written to, all stripes in a pool generally have the
   same amount of parity.  Thus, in practice, stripes differ only in
   size, which can be achieved either by using larger disks or by using
   more disks (in a raidz).  When stripes are of different size, ZFS
   will, in time, consume all the space each stripe offers - assuming
   data-access is completely balanced, larger stripes effectively have
   more I/O.  Regarding matching the amount of parity in each stripe,
   note that a 2-disk mirror has the same amount of parity as RAID5 and
   a 3-disk mirror has the same parity as RAID6.


So, if the primary goal is to grow a pool over time by adding as few 
disks as possible each time while keeping one disk's worth of parity, you need 
to plan on adding two disks in a mirrored configuration each time.   Thus 
your number of disks would grow like this: 2, 4, 6, 8, 10, etc.
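
In ZFS terms each of those additions is just another mirrored top-level vdev;
a minimal sketch with made-up device names:

   # initial 2-disk pool
   zpool create tank mirror c1t0d0 c1t1d0
   # each later expansion stripes in another 2-disk mirror
   zpool add tank mirror c1t2d0 c1t3d0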



But since folks apparently want to be able to just add disks to a RAIDZ, 
let's compare that to adding 2-disk mirror stripes in terms of impact to 
space, resiliency, and performance.   In both cases I'm assuming 500GB 
disks with an MTBF of 4 years, 7,200 rpm, and 8.5 ms average read seek.


Let's first consider adding disks to a RAID5:

   Following the ZFS best-practice rule of (N+P), where N={2,4,8} and
   P={1,2}, the disk-count should grow as follows: 3, 5, 9.  That is,
   you would start with 3, add 2, and then add 4 - note: this would be
   the limit of the raidz expansion since ZFS discourages N>8.   So,
   the pool's MTTDL would be:

   3  disks: space=1000 GB, mttdl=760.42 years, iops=79
   5  disks: space=2000 GB, mttdl=228.12 years, iops=79
   9  disks: space=4000 GB, mttdl=63.37 years, iops=79

Now let's consider adding 2-disk mirror stripes:

   We already said that the disks would grow by twos: 2, 4, 6, 8, 10,
   etc.  - so the pool's MTTDL would be:

   2 disks: space=500 GB, mttdl=760.42 years, iops=158
   4 disks: space=1000 GB, mttdl=380 years, iops=316
   6 disks: space=1500 GB, mttdl=190 years, iops=474
   8 disks: space=2000 GB, mttdl=95 years, iops=632

So, adding 2-disk mirrors:

  1. is less expensive per addition (it's always just two disks)
  2. not limited in number of stripes (a raidz should only hold up to 8
 data disks)
  3. drops mttdl at about the same rate (though the raidz is dropping a
 little faster)
  4. increases performance (adding disks to a raidz set has no impact)
  5. increases space more slowly (the only negative - can you live with
 it?)


Highly Recommended Resources:

   http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance
   http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl




Hope that helps,

Kent


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (no subject)

2007-09-28 Thread Richard Elling
David Runyon wrote:
> actually, want 200 megabytes/sec (200 MB/sec), OK with using 2 or 4 GbE ports 
> to network as needed.

200 MBytes/s isochronous sustained is generally difficult for a small system.
Even if you have enough "port bandwidth" you often approach the internal
bottlenecks of small systems (e.g. memory bandwidth).  If you look at the large
system architectures which implement VOD, such as the Sun Streaming System
(http://www.sun.com/servers/networking/streamingsystem/),
you'll notice large (RAM) buffers between the disks and the wire.
  -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS booting with Solaris (2007-08)

2007-09-28 Thread Richard Elling
Kris Kasner wrote:
>> 2. Back to Solaris Volume Manager (SVM), I guess. It's too bad too, because 
>> I 
>> don't like it with 2 SATA disks either. There isn't enough drives to put the 
>> State Database Replicas so that if either drive failed, the system would 
>> reboot unattended. Unless there is a trick?
> 
> There is a trick for this, not sure how long it's been around.
> Add to /etc/system:
> *Allow the system to boot if one of two rootdisks is missing
> set md:mirrored_root_flag=1

Before you do this, please read the fine manual:
http://docs.sun.com/app/docs/doc/819-2724/chapter2-161?l=en&a=view&q=mirrored_root_flag

This can cause corruption and is "not supported."
  -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best option for my home file server?

2007-09-28 Thread Richard Elling
pet peeve below...

Kent Watsen wrote:
> 
>> I think I have managed to confuse myself so i am asking outright hoping for 
>> a straight answer. 
>>   
> Straight answer:
> 
> ZFS does not (yet) support adding a disk to an existing raidz set -
> the only way to expand an existing pool is by adding a stripe. 
> Stripes can either be mirror, raid5, or raid6 (raidz w/ single or
> double parity) - these striped pools are also known as raid10,
> raid50, and raid60 respectively.  Each stripe in a pool may be
> different in both size and type - essentially, each offers space at
> a resiliency rating.  However, since apps can't control which stripe
> their data is written to, all stripes in a pool generally have the
> same amount of parity.  Thus, in practice, stripes differ only in
> size, which can be achieved either by using larger disks or by using
> more disks (in a raidz).  When stripes are of different size, ZFS
> will, in time, consume all the space each stripe offers - assuming
> data-access is completely balanced, larger stripes effectively have
> more I/O.  Regarding matching the amount of parity in each stripe,
> note that a 2-disk mirror has the same amount of parity as RAID5 and
> a 3-disk mirror has the same parity as RAID6.
> 
> 
> So, if the primary goal is to grow a pool over time by adding as few 
> disks as possible each time while having 1 bit of parity, you need to 
> plan on each time adding two disks in a mirrored configuration.   Thus 
> your number of disks would grow like this: 2, 4, 6, 8, 10, etc.
> 
> 
> But since folks apparently want to be able to just add disks to a RAIDZ, 
> lets compare that to adding 2-disk mirror stripes in terms of impact to 
> space, resiliency, and performance.   In both cases I'm assuming 500GB 
> disks having a MTBF of 4 years,7,200 rpm, and 8.5 ms average read seek.

MTBF=4 years is *way too low*!  Disk MTBF should be more like 114 years.
This is also a common misapplication of reliability analysis.  To excerpt
from http://blogs.sun.com/relling/entry/using_mtbf_and_time_dependent

For example, data collected for the years 1996-1998 in the US
showed that the annual death rate for children aged 5-14 was 20.8
per 100,000 resident population. This shows an average failure
rate of 0.0208% per year.  Thus, the MTBF for children aged 5-14
in the US is approximately 4,807 years. Clearly, no human child
could be expected to live 5,000 years.
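
The arithmetic behind that figure is just the reciprocal of the annualized
failure rate; a quick check:

   awk 'BEGIN { afr = 20.8 / 100000; printf("MTBF = %.1f years\n", 1/afr) }'
   # prints: MTBF = 4807.7 years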

That said (ok, it is a pet peeve for RAS guys :-) the relative merit of
the rest of the analysis is good :-)  And, for the record, I mirror.
  -- richard

> Lets first consider adding disks to a RAID5:
> 
> Following the ZFS best-practice rule of (N+P), where N={2,4,8} and
> P={1,2}, the disk-count should grow as follows: 3, 5, 9.  That is,
> you would start with 3, add 2, and then add 4 - note: this would be
> the limit of the raidz expansion since ZFS discourages N>8.   So,
> the pool's MTTDL would be:
> 
> 3  disks: space=1000 GB, mttdl=760.42 years, iops=79
> 5  disks: space=2000 GB, mttdl=228.12 years, iops=79
> 9  disks: space=4000 GB, mttdl=63.37 years, iops=79
> 
> Now lets consider adding 2-disk mirror stripes:
> 
> We already said that the disks would grow by twos: 2, 4, 6, 8, 10,
> etc.  - so the pool's MTTDL would be:
> 
> 2 disks: space=500 GB, mttdl=760.42 years, iops=158
> 4 disks: space=1000 GB, mttdl=380 years, iops=316
> 6 disks: space=1500 GB, mttdl=190 years, iops=474
> 8 disks: space=2000 GB, mttdl=95 years, iops=632
> 
> So, adding 2-disk mirrors:
> 
>1. is less expensive per addition (its always just two disks)
>2. not limited in number of stripes (a raidz should only hold up to 8
>   data disks)
>3. drops mttdl at about the same rate (though the raidz is dropping a
>   little faster)
>4. increases performance (adding disks to a raidz set has no impact)
>5. increases space more slowly (the only negative - can you live with
>   it?)
> 
> 
> Highly Recommended Resources:
> 
> 
> http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance
> http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl
> 
> 
> 
> 
> Hope that helps,
> 
> Kent
> 
> 
> 
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best option for my home file server?

2007-09-28 Thread Richard Elling
IMHO, a better investment is in the NVidia MCP-55 chipsets which
support more than 4 SATA ports.  The NForce 680a boasts 12 SATA
ports.  Nevada builds 72+ should see these as SATA drives using
the nv_sata driver and not as ATA/IDE disks.
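
A quick way to confirm the disks really are attached through the SATA framework
rather than the legacy ATA path (generic device-inspection commands, not
specific to this chipset):

   # list SATA attachment points
   cfgadm -al | grep sata
   # show which drivers are bound in the device tree
   prtconf -D | grep -i sata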
  -- richard

Christopher wrote:
> I'm new to the list so this is probably a noob question: Are this forum part 
> of a mailinglist or something? I keep getting some answers to my posts in 
> this thread on email as well as some here, but it seems that those 
> answers/posts on email aren't posted on this forum..?? Or do I just get a 
> copy on email from what ppl post here on the forum?
> 
> Georg Edelmann wrote me on email saying he was interested in making a 
> homeserver/nas as I'm about to (try to) do and wanted to know my hardware etc.
> 
> What I was thinking of using for this server was Asus A8N-SLI Deluxe with 
> some kind of AMD64 CPU, probably the cheapest X2 I can find and pair it with 
> 1 or perhaps 2GB of RAM. The mainbord has 8 SATA onboard, 4 nvidia and 4 
> sil3114. I was also gonna get a 2sata add-on controller card, totalling 10 
> sata ports. But now I'm not sure, since alhopper just said the performance of 
> the 3114 is poor.
> 
> Blake, on the other hand mentioned the Sil3114 as a controller chip to use. I 
> will of course not make use of the fake-raid on the mainboard.
> 
> Kent - I see your point and it's a good one and, but for me, I only want a 
> big fileserver with redundancy for my music collection, movie collection and 
> pictures etc. I would of course make a backup of the most important data as 
> well from time to time.
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best option for my home file server?

2007-09-28 Thread Gary Gendel
Just keep in mind that I tried the patched driver and occasionally had kernel 
panics because of recursive mutex calls.  I believe that it isn't 
multi-processor safe. I switched to the Marvell chipset and have been much 
happier.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best option for my home file server?

2007-09-28 Thread Thomas Wagner
Slicing, say, s0 to be used as the root filesystem would keep ZFS
from using the write cache on the disks.
This would be a slight performance degradation, but would increase
reliability of the system (since root is mirrored).

Why not live on the edge and boot from ZFS?
This would nearly eliminate UFS.

Use e.g. the two 500GB disks for the root filesystem
on a mirrored pool:

   mirror X Z     here lives the OS with its root filesystem on ZFS
                  *and* userdata in the same pool

   raidz A B C D  or any other layout

or use two of the 250GB ones:

pool boot-and-userdata-one
   mirror A B   here lives the OS and userdata-one

pool userdata-two
   mirror C D   userdata-two spanning C D - X Y
   mirror X Y
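
Expressed as pool-creation commands, the layouts above would look roughly like
this (device names are placeholders, and zpool will warn about mixing mirror
and raidz vdevs in one pool unless you force it):

   # layout 1: one pool - mirrored root plus a raidz set for extra space
   zpool create -f rpool mirror cXt0d0 cZt0d0 raidz cAt0d0 cBt0d0 cCt0d0 cDt0d0

   # layout 2: two pools built from mirror pairs
   zpool create boot-and-userdata-one mirror cAt0d0 cBt0d0
   zpool create userdata-two mirror cCt0d0 cDt0d0 mirror cXt0d0 cYt0d0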

Thomas


On Thu, Sep 27, 2007 at 08:39:40PM +0100, Dick Davies wrote:
> On 26/09/2007, Christopher <[EMAIL PROTECTED]> wrote:
> > I'm about to build a fileserver and I think I'm gonna use OpenSolaris and 
> > ZFS.
> >
> > I've got a 40GB PATA disk which will be the OS disk,
> 
> Would be nice to remove that as a SPOF.
> 
> I know ZFS likes whole disks, but I wonder how much would performance suffer
> if you SVMed up the first few Gb of a ZFS mirror pair for your root fs?
> I did it this week on Solaris 10 and it seemed to work pretty well
> 
> (
> http://number9.hellooperator.net/articles/2007/09/27/solaris-10-on-mirrored-disks
> )
> 
> Roll on ZFS root :)
> 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] O.T. "patches" for OpenSolaris

2007-09-28 Thread Henk Langeveld
> 1. It appears that OpenSolaris has no way to get updates from Sun.
 > So ... how do people "patch" OpenSolaris?

Easy, by upgrading to the next OpenSolaris build.

I guess this is a kind of FAQ

There are no patches for OpenSolaris, by definition. All fixes and
new features are always first integrated into the current development
version of Solaris.  When that is done, the fix is backported into
older releases and tested there.  When that is satisfactory, the
fix gets rolled into an official patch for that specific release.

Even so, sometimes updates for specific modules can be released
for testing, before they are integrated into the next build.
But you would need to install the files by hand, or even build
them from source.


Cheers,
Henk Langeveld



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Need Help Choosing a Rackmount Chassis

2007-09-28 Thread Cyril Plisko
[Hit wrong reply button...]

On 9/28/07, Blake <[EMAIL PROTECTED]> wrote:
> I'm looking for a rackmount chassis for an x86 ZFS fileserver I wan to build
> for my organization.
>
> Requirements:
>
> Hot-swap SATA disk support
> Minimum of 4-disk SATA support (would prefer 6+)
> Hot-swap power supply (redundant)
> Some kind of availability for replacement parts
>

I've recently built a server with the SuperMicro SC826 [0]
chassis. It takes 12 disks in a 2U form factor. So far
I am very pleased with the box.

Also check this [1] thread for the similar (even better, IMHO)
Intel box.

[0] http://supermicro.com/products/chassis/2U/?chs=826
[1] http://mail.opensolaris.org/pipermail/zfs-discuss/2007-September/043151.html

-- 
Regards,
Cyril
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Survivability of zfs root

2007-09-28 Thread Peter Schuller
> Now, what if that system had been using ZFS root? I have a
> hardware failure, I replace the raid card, the devid of the boot
> device changes.

I am not sure on Solaris, but on FreeBSD I always use glabel:ed
devices in my ZFS pools, making them entirely location independent.

-- 
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Sun 6120 array again

2007-09-28 Thread Marion Hakanson
Greetings,

Last April, in this discussion...
http://www.opensolaris.org/jive/thread.jspa?messageID=143517

...we never found out how (or if) the Sun 6120 (T4) array can be configured
to ignore cache flush (sync-cache) requests from hosts.  We're about to
reconfigure a 6120 here for use with ZFS (S10U4), and the evil tuneable
zfs_nocacheflush is not going to serve us well (there is a ZFS pool on
slices of internal SAS drives, along with UFS boot/OS slices).

Any pointers would be appreciated.

Thanks and regards,

Marion


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Need Help Choosing a Rackmount Chassis

2007-09-28 Thread Rob Windsor
Blake wrote:
> I'm looking for a rackmount chassis for an x86 ZFS fileserver I wan to 
> build for my organization.
> 
> Requirements:
> 
> Hot-swap SATA disk support
> Minimum of 4-disk SATA support (would prefer 6+)
> Hot-swap power supply (redundant)
> Some kind of availability for replacement parts
> 
> I'll be putting in a board/proc/controller of my choice.  Sadly, there 
> is no lower-end offering similar to the Thumper, which is way too costly 
> for my org at the moment.
> 
> Any input or advice is greatly appreciated.

If you find a cheap 3-4RU system, you could always fit it with one or 
two of these:

http://www.addonics.com/products/raid_system/ae4rcs35nsa.asp

Rob++
-- 
Internet: [EMAIL PROTECTED] __o
Life: [EMAIL PROTECTED]_`\<,_
(_)/ (_)
"They couldn't hit an elephant at this distance."
   -- Major General John Sedgwick
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss