[zfs-discuss] Trouble testing hot spares

2009-10-21 Thread Ian Allison

Hi,

I've been looking at a raidz using OpenSolaris snv_111b and I've come 
across something I don't quite understand. I have 5 disks (fixed-size 
disk images defined in VirtualBox) in a raidz configuration, with 1 disk 
marked as a spare. The disks are 100MB in size and I wanted to simulate 
data corruption on one of them and watch the hot spare kick in, but when I do


dd if=/dev/zero of=/dev/c10t0d0 ibs=1024 count=102400

the pool remains perfectly healthy:

  pool: datapool
 state: ONLINE
 scrub: scrub completed after 0h0m with 0 errors on Wed Oct 21 17:12:11 2009

config:

        NAME         STATE     READ WRITE CKSUM
        datapool     ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c10t0d0  ONLINE       0     0     0
            c10t1d0  ONLINE       0     0     0
            c10t2d0  ONLINE       0     0     0
            c10t3d0  ONLINE       0     0     0
        spares
          c10t4d0    AVAIL

errors: No known data errors


I don't understand the output; I thought I should see cksum errors 
against c10t0d0. I tried exporting/importing the pool and scrubbing it 
in case this was a cache thing, but nothing changed.
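
In case it matters, this is roughly the sequence I would have expected 
to surface the errors (I'm guessing at the raw-device path under 
/dev/rdsk here; the dd above wrote to /dev/c10t0d0 directly):

  # overwrite the first ~100MB of one member's raw device (path is a guess)
  dd if=/dev/zero of=/dev/rdsk/c10t0d0p0 bs=1024 count=102400

  # make ZFS re-read and verify every block so cksum errors show up
  zpool scrub datapool
  zpool status -v datapool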


I've tried this on all the disks in the pool with the same result, and 
the datasets in the pool are uncorrupted. I guess I'm misunderstanding 
something fundamental about ZFS; can anyone help me out and explain?
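
Failing that, I suppose I could just force the spare in by hand to at 
least watch the resilver, something like the commands below, but I'd 
still like to understand why the corruption isn't being detected:

  # manually swap the hot spare in for the 'failed' disk
  zpool replace datapool c10t0d0 c10t4d0
  zpool status datapool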


-Ian.




[zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Ian Allison

Hi,

I know (from the zfs-discuss archives and other places [1,2,3,4]) that a 
lot of people are looking to use ZFS as a storage server in the 10-100TB 
range.


I'm in the same boat, but I've found that hardware choice is the biggest 
issue. I'm struggling to find something which will work nicely under 
Solaris and which meets my expectations in terms of hardware. Because of 
the compatibility issues, I thought I should ask here to see what 
solutions people have already found.



I'm learning as I go here, but as far as I've been able to determine, 
the basic choices for attaching drives seem to be:


1) SATA Port multipliers
2) SAS Multilane Enclosures
3) SAS Expanders

In option 1 the controller can only talk to one device at a time; in 
option 2 each mini-SAS connector can talk to 4 drives at a time; in 
option 3 the expander allows communication with up to 128 drives. I'm 
thinking about having ~8-16 drives on each controller (PCIe card), and 
since I might get greedier in the future and decide to add more drives 
per controller, option 3 looks like the best way to go. I can have a 
motherboard with a lot of PCIe slots and one controller card for each 
expander.



Cases like the Supermicro 846E1-R900B have 24 hot-swap bays accessible 
via a single (4U) LSI SASX36 SAS expander chip, but I'm worried about 
controller death and having the backplane as a single point of failure.


I guess, ideally, I'd like a 4U enclosure with 2x 2U SAS expanders. If I 
wanted hardware redundancy, I could then use mirrored vdevs with one 
side of each mirror on one controller/expander pair and the other side 
on a separate pair. This would allow me to survive controller or 
expander death as well as hard drive failure.
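
As a sketch of the layout I have in mind (the device names are purely 
illustrative; assume the c1* disks hang off one controller/expander pair 
and the c2* disks off the other):

  # each mirror pairs a disk from each controller/expander chain, so the
  # pool survives losing a whole chain as well as individual disks
  zpool create tank \
    mirror c1t0d0 c2t0d0 \
    mirror c1t1d0 c2t1d0 \
    mirror c1t2d0 c2t2d0 \
    mirror c1t3d0 c2t3d0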



For reference, rough replacement costs for the parts I'm considering:

Replace motherboard: ~500
Replace backplane: ~500
Replace controller: ~300
Replace disk (SATA): ~100


Does anyone have any example systems they have built or any thoughts on 
what I could do differently?


Best regards,
Ian.


[1] http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg27234.html
[2] http://www.avsforum.com/avs-vb/showthread.php?p=17543496
[3] http://www.stringliterals.com/?p=53
[4] http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg22761.html




Re: [zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Ian Allison

Hi Richard,

Richard Elling wrote:
>> Cases like the Supermicro 846E1-R900B have 24 hot swap bays accessible
>> via a single (4u) LSI SASX36 SAS expander chip, but I'm worried about
>> controller death and having the backplane as a single point of failure.
>
> There will be dozens of single point failures in your system.  Don't
> worry about controllers or expanders because they will be at least 10x
> more reliable than your disks.  If you want to invest for better
> reliability, invest in enterprise class disks, preferably SSDs.
>  -- richard


I agree about the points of failure, but I guess I'm not looking as much 
for reliability as I am for replaceability. The motherboard, backplane 
and controllers are all reasonably priced (to the extent that if I had a 
few of these machines I would keep spares of everything on hand). They 
are also pretty generic, so I could recycle them if I decided to go in a 
different direction.


Thanks,
Ian.



Re: [zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Ian Allison

Hi Bruno,

Bruno Sousa wrote:
> Hi,
>
> I currently have a 1U server (Sun X2200) with 2 LSI HBA attached to a
> Supermicro JBOD chassis each one with 24 disks, SATA 1TB, and so far so
> good..
> So i have a 48 TB raw capacity, with a mirror configuration for NFS
> usage (Xen VMs) and i feel that for the price i paid i have a very nice
> system.


Sounds good. I understand from

http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg27248.html

that you need something like Supermicro's CSE-PTJBOD-CB1 to cable the 
drive trays up; do you do anything about monitoring the power supply?


Cheers,
Ian.



Re: [zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Ian Allison

Hi Bruno,


Bruno Sousa wrote:
> Hi Ian,
>
> I use the Supermicro SuperChassis 846E1-R710B, and i added the JBOD kit
> that has :
>
> * Power Control Card

Sorry to keep bugging you, but which card is this? I like the sound of 
your setup.


Cheers,
Ian.




> * SAS 846EL2/EL1 BP External Cascading Cable
>
> * SAS 846EL1 BP 1-Port Internal Cascading Cable
>
> I don't do any monitoring in the JBOD chassis..
> Bruno
>
> Ian Allison wrote:
>> Hi Bruno,
>>
>> Bruno Sousa wrote:
>>> Hi,
>>>
>>> I currently have a 1U server (Sun X2200) with 2 LSI HBA attached to a
>>> Supermicro JBOD chassis each one with 24 disks, SATA 1TB, and so far
>>> so good..
>>> So i have a 48 TB raw capacity, with a mirror configuration for NFS
>>> usage (Xen VMs) and i feel that for the price i paid i have a very
>>> nice system.
>>
>> Sounds good. I understand from
>>
>> http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg27248.html
>>
>> That you need something like supermicro's CSE-PTJBOD-CB1 to cable the
>> drive trays up, do you do anything about monitoring the power supply?
>>
>> Cheers,
>> Ian.


--

Ian Allison
PIMS-UBC/SFU System and Network Administrator
the Pacific Institute for the Mathematical Sciences

Phone: (778) 991 1522
email: i...@pims.math.ca


Re: [zfs-discuss] ZFS storage server hardware

2009-11-18 Thread Ian Allison

Chris Du wrote:
> You can get the E2 version of the chassis that supports multipathing
> but you have to use dual-port SAS disks. Or you can use separate SAS
> HBAs to connect to separate JBOD chassis and do mirror over 2 chassis.
> The backplane is just a pass-through fabric which is very unlikely to
> die.
>
> Then like others said, your storage head unit is single point of
> failure. Unless you implement some cluster design, there is always a
> single point of failure.


Thanks, I think I'll go with the single SAS expander; I'm less worried 
about that setup now. As you say, I should probably just cluster similar 
machines when I'm looking for redundancy.


At the moment I just want to get something working with reasonably 
priced parts which I can expand on in the future.


Thanks,
Ian.
