With pass-through disks on Areca controllers you have to set the LUN ID (I believe) using the volume command. When you issue a volume info, your disk IDs should look like this (if you want Solaris to see the disks):
0/1/0, 0/2/0, 0/3/0, 0/4/0, and so on. The middle field there (again, I think that's supposed to be the LUN ID) is what you need to set manually for each disk. It's actually my #1 peeve with using Areca with Solaris. Once the disks do show up in format, there's a rough sketch of the zpool layout at the bottom of this message.

On Thu, May 7, 2009 at 4:29 PM, Gregory Skelton <gskel...@gravity.phys.uwm.edu> wrote:

> Hi Everyone,
>
> I want to start out by saying ZFS has been a life saver to me and the
> scientific collaboration I work for. I can't imagine working with the TBs
> of data that we do without the snapshots, or the ease of moving the data
> from one pool to another.
>
> Right now I'm trying to set up a whitebox with OpenSolaris. It has an Areca
> 1160 RAID controller (latest firmware), a SuperMicro H8SSL-I mobo, and a
> SuperMicro IPMI card. I haven't been working with Solaris for all that long,
> and wanted to create a zpool similar to our x4500s. The documentation
> says to use the format command to locate the disks.
>
> OpenSolaris lives on a 2-disk mirrored RAID set, and I was hoping I could have
> the remaining disks passed through, so that ZFS could manage the zpool. What am
> I doing wrong here, that I can't see all the disks? Or do I have to use RAID 5
> underneath the zpool?
>
> Any and all help is appreciated.
> Thanks,
> Gregory
>
>
> r...@nfs0009:~# format
> Searching for disks...done
>
>
> AVAILABLE DISK SELECTIONS:
>        0. c3t0d0 <DEFAULT cyl 48627 alt 2 hd 255 sec 63>
>           /p...@0,0/pci1166,3...@1/pci1166,1...@d/pci8086,3...@1/pci17d3,1...@e/s...@0,0
>        1. c3t1d0 <DEFAULT cyl 48639 alt 2 hd 255 sec 63>
>           /p...@0,0/pci1166,3...@1/pci1166,1...@d/pci8086,3...@1/pci17d3,1...@e/s...@1,0
> Specify disk (enter its number):
>
>
> r...@nfs0009:~# ./cli64 disk info
>   # Ch# ModelName                       Capacity  Usage
> ===============================================================================
>   1  1  WDC WD4000YS-01MPB1             400.1GB   Raid Set # 00
>   2  2  WDC WD4000YS-01MPB1             400.1GB   Raid Set # 00
>   3  3  WDC WD4000YS-01MPB1             400.1GB   Pass Through
>   4  4  WDC WD4000YS-01MPB1             400.1GB   Pass Through
>   5  5  WDC WD4000YS-01MPB1             400.1GB   Pass Through
>   6  6  WDC WD4000YS-01MPB1             400.1GB   Pass Through
>   7  7  WDC WD4000YS-01MPB1             400.1GB   Pass Through
>   8  8  WDC WD4000YS-01MPB1             400.1GB   Pass Through
>   9  9  WDC WD4000YS-01MPB1             400.1GB   Pass Through
>  10 10  WDC WD4000YS-01MPB1             400.1GB   Pass Through
>  11 11  WDC WD4000YS-01MPB1             400.1GB   Pass Through
>  12 12  WDC WD4000YS-01MPB1             400.1GB   Pass Through
>  13 13  WDC WD4000YS-01MPB1             400.1GB   Pass Through
>  14 14  WDC WD4000YS-01MPB1             400.1GB   Pass Through
>  15 15  WDC WD4000YS-01MPB1             400.1GB   Pass Through
>  16 16  WDC WD4000YS-01MPB1             400.1GB   Pass Through
> ===============================================================================
> GuiErrMsg<0x00>: Success.
> r...@nfs0009:~#
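For what it's worth, once each pass-through drive has its own Ch/Id/Lun and all of them show up in format, you don't need a hardware RAID 5 underneath; the whole point of pass-through is to hand ZFS the raw disks and let raidz handle redundancy. A minimal sketch, assuming the 14 pass-through drives come up as c3t2d0 through c3t15d0 (the pool name "tank" and those device names are placeholders, not taken from your output):

    # two 7-disk raidz2 vdevs built from the 14 pass-through drives (example layout only)
    zpool create tank \
        raidz2 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c3t8d0 \
        raidz2 c3t9d0 c3t10d0 c3t11d0 c3t12d0 c3t13d0 c3t14d0 c3t15d0

    # confirm both vdevs are present and every disk is ONLINE
    zpool status tank

Splitting into two vdevs rather than one wide raidz keeps resilver times down and is closer to how the x4500 pools are usually laid out.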