Hey folks,

I've been wanting to use Solaris for a while now, both for a ZFS home storage 
server and simply to get familiar with the OS (I like to experiment).  However, 
b70 really hasn't worked out for me at all.

The hardware I'm using is pretty simple, but it didn't seem to be supported 
under the latest Nexenta or Belenix builds.  It seems to work fine in b70 
SXCE... aside from a few catastrophic problems (contradictory, I know, but hear 
me out).

I haven't yet managed to get the Webstart hardware analyzer to run against this 
system (there's no install to run it from, and Ubuntu live CDs don't seem to 
want to install a JDK for some reason), so I can't say for sure that the 
hardware is "supported".  That said, everything seemed to work fine through the 
installer and while initializing the ZFS pool.  (I'd just like to say how 
shockingly simple it was to create my zpool -- I was amazed!)
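
For anyone curious, the whole pool setup boiled down to something like the 
following (the pool name and device IDs below are just placeholders -- I don't 
have the exact ones in front of me; the format command lists the real IDs):

    # device names are examples -- run 'format' to see yours
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
    zpool status tank    # confirm all three disks show ONLINE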

The hardware is:
3.0GHz P4, socket 775
Intel 965G desktop board ("Widowmaker")
3x 400GB SATA drives (ZFS RAID-Z)
1x 100GB IDE drive (UFS boot)

I also added a Silicon Image 2-port PCI SATA controller, but it didn't seem to 
be recognized, so I'm not using it.
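
In case someone can suggest a driver for it, here's roughly how I plan to check 
whether the card is even visible on the PCI bus (commands are from the docs; I 
haven't tried them on this box yet):

    prtconf -pv | more    # dump the PCI device tree; Silicon Image's vendor-id is 1095
    prtconf -D            # list devices with the driver (if any) bound to each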

The problems I'm experiencing are as follows:
ZFS creates the storage pool just fine, sees no errors on the drives, and seems 
to work great... right up until I attempt to put data on the drives.  After 
only a few moments of transfer, things start to go wrong.  The system doesn't 
power off; it just beeps 4-5 times.  The X session dies and the monitor turns 
off (it doesn't drop back to a console), and all network access dies.  It seems 
that the system panics (is it called something else in Solaris-land?).  The HD 
access light stays on (though I can't hear any drives doing anything 
strenuous), and the CD light blinks.  This has happened every time -- two or 
three times now -- whenever I start copying data to the ZFS pool, transferring 
over the network via SCP or NFS.

Data transfers to the UFS partition, on the other hand, seemed to work fine, 
and after a reboot everything seemed to be working again.
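
After the next reboot I plan to check the pool and the fault manager for 
anything recorded, along these lines (the pool name is just my example; 
commands are from the docs):

    zpool status -v tank    # per-device read/write/checksum error counters
    fmdump -e               # FMA error log, one line per telemetry event
    fmdump -eV | more       # full detail on each event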

When I ran a scrub (zpool scrub) on the storage pool, the system crashed as 
usual, but didn't come back up properly.  It dropped to the filesystem-repair 
prompt asking for the root password, which I couldn't enter: USB legacy mode 
wasn't enabled in the BIOS, USB apparently isn't supported until the OS is 
fully booted, and I didn't have a spare PS/2 keyboard to use on that system.
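
Once I scare up a keyboard, I assume the fix at that prompt is just to fsck the 
UFS boot slice manually, something like this (the device name here is a guess 
-- mine may differ):

    fsck -y /dev/rdsk/c0d0s0    # repair the UFS root filesystem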

This is really bothersome, since I was looking forward to the ease of use and 
administration of ZFS versus Linux software RAID + LVM.

Can anybody shed some light on my situation?  Is there any way I can get a 
little more information about what's causing this crash?  I have no problem 
hooking up a serial console to pull off info, if that's possible (provided the 
box has a serial port... I don't really remember).  Or maybe there are logs 
stored when the system takes a dive?  Anything I can do to help sort this out, 
I'm willing to do.
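
From what I've read so far, Solaris should already be saving a kernel crash 
dump I can poke at.  My plan (commands taken from the docs, not yet tried on 
this box) is roughly:

    dumpadm                       # confirm the dump device and savecore directory
    ls /var/crash/`hostname`      # look for unix.N / vmcore.N pairs
    mdb unix.0 vmcore.0           # load the dump in the modular debugger
      ::status                    # panic string
      ::msgbuf                    # console messages leading up to the panic
      ::stack                     # stack of the panicking thread

I'll also check /var/adm/messages for warnings from before the crashes.  And if 
a serial console turns out to be useful, I gather I can redirect the console by 
adding -B console=ttya to the kernel line in GRUB (again, from the docs -- 
corrections welcome).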

As a side note, this is so far entirely experimental for me... I haven't even 
managed to get any large amount of data onto the ZFS pool (~650MB so far), so I 
have no problem reinstalling, shuffling hardware around, or swapping the board 
& processor for something different.  (I have several systems with the 
potential to be good storage servers, and I borrowed 2 or 3 drives from work so 
I can move data between stable systems while I move other hardware around.)

Thanks!
 
 