Hi Dennis,

I had the same thing happen on my Ultra-2 last week and asked one of my coworkers. Apparently this can happen when the SCSI disks take too long to respond to certain commands issued by the operating system during boot. I am still trying to find out the whats and whys regarding this...
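If slow disk response really is the cause, one tunable that comes up in this context is the sd driver's per-command timeout, set via /etc/system. This is only an illustrative sketch, not something I have verified on a Netra t1; check the tunable parameters guide for your release before trying it, and note the value shown here is an arbitrary example:

```
* /etc/system fragment -- illustrative only, value not tuned for this box.
* Raises the sd driver's SCSI command timeout above the usual default
* (commonly 60 seconds). Requires a reboot to take effect.
set sd:sd_io_time=0x78
```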
--
Patrick

On Wed, Nov 26, 2008 at 5:41 PM, Dennis Clarke <[EMAIL PROTECTED]> wrote:
>
> This happens every time, and I need to check this out on SXCE and then
> Solaris 10 10/08 again to see if the issue is really hardware in some way.
>
> Here is what happens at every boot of this SPARC machine:
>
> 1) given four SCSI disks on three SCSI ports ( two controllers )
>
> lom>poweron
> lom>
> LOM event: power on
>
> Netra t1 (UltraSPARC-IIi 440MHz), No Keyboard
> OpenBoot 3.10.27 ME, 1024 MB memory installed, Serial #12731976.
> Ethernet address 8:0:20:c2:46:48, Host ID: 80c24648.
>
> ok probe-scsi-all
> /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],1
> Target 2
>   Unit 0   Disk   SEAGATE ST373307LSUN72G 0507
>
> /[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]
> Target 3
>   Unit 0   Disk   FUJITSU MAT3073N SUN72G 0602
>
> /[EMAIL PROTECTED],0/[EMAIL PROTECTED],1/[EMAIL PROTECTED]
> Target 0
>   Unit 0   Disk   SEAGATE ST373307LSUN72G 0507
> Target 1
>   Unit 0   Disk   SEAGATE ST373307LSUN72G 0507
>
> So four more or less identical Sun OEM disks.
>
> ok boot
> Boot device: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],1/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a  File and args:
> SunOS Release 5.10 Version Generic_137137-09 64-bit
> Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
> Use is subject to license terms.
> WARNING: add_spec: No major number for fp
> Hostname: core
> Reading ZFS config: done.
> Mounting ZFS filesystems: (5/5)
>
> core console login: root
> Password:
> Nov 26 16:35:22 core login: ROOT LOGIN /dev/console
> Last login: Wed Nov 26 03:52:36 on console
> Sun Microsystems Inc.   SunOS 5.10   Generic January 2005
> # zpool status -x
>   pool: s10s
>  state: DEGRADED
> status: One or more devices could not be opened. Sufficient replicas exist
>         for the pool to continue functioning in a degraded state.
> action: Attach the missing device and online it using 'zpool online'.
>    see: http://www.sun.com/msg/ZFS-8000-2Q
>  scrub: none requested
> config:
>
>         NAME          STATE     READ WRITE CKSUM
>         s10s          DEGRADED     0     0     0
>           mirror      DEGRADED     0     0     0
>             c0t0d0s0  ONLINE       0     0     0
>             c0t1d0s0  ONLINE       0     0     0
>             c1t3d0s0  UNAVAIL      0     0     0  cannot open
>             c2t2d0s0  UNAVAIL      0     0     0  cannot open
>
> errors: No known data errors
>
> Weird .. is there a driver or kernel module not loaded ?
> Let's see what happens when I bring that disk online :
>
> # modinfo > /tmp/modinfo.1
> # zpool online s10s c1t3d0s0
> # zpool status -x
> all pools are healthy
> # zpool status s10s
>   pool: s10s
>  state: ONLINE
>  scrub: resilver completed after 0h0m with 0 errors on Wed Nov 26 16:36:48 2008
> config:
>
>         NAME          STATE     READ WRITE CKSUM
>         s10s          ONLINE       0     0     0
>           mirror      ONLINE       0     0     0
>             c0t0d0s0  ONLINE       0     0     0
>             c0t1d0s0  ONLINE       0     0     0
>             c1t3d0s0  ONLINE       0     0     0
>             c2t2d0s0  ONLINE       0     0     0
>
> errors: No known data errors
> # modinfo > /tmp/modinfo.2
>
> Did some device driver get loaded ?
>
> # diff /tmp/modinfo.1 /tmp/modinfo.2
>
> No change there.
>
> Everything runs fine other than this. I should point out that this is a
> minimal, network-reduced core install with only 121 packages reported by
> pkginfo.
>
> I have done this sort of thing with SXCE snv_81 with great success, with
> ZFS, zones, and resource caps in place, and it all works on a very, very
> small footprint. I am trying to do the same thing with Sol10u6 and SXCE
> snv_101 but got stopped in my tracks by this on S10u6 so far. Is this a
> well-known bug, or is something funky at play here ?
>
> --
> Dennis Clarke
>
> ps: before anyone tears a strip off me for posting about s10u6, I can
> assure you that the next step after getting this config locked down was
> to reproduce it with SXCE.
>
> _______________________________________________
> opensolaris-discuss mailing list
> opensolaris-discuss@opensolaris.org

_______________________________________________
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org
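Until the root cause is understood, the manual fix Dennis showed (`zpool online s10s c1t3d0s0`) could be scripted as a boot-time workaround. This is only a sketch based on the output quoted above: the pool name `s10s` comes from this thread, and parsing `zpool status` text is not a stable interface, so treat it as a stopgap rather than a supported method.

```shell
#!/bin/sh
# Workaround sketch: find devices reported UNAVAIL in 'zpool status'
# output and bring each one back online. Assumes the column layout shown
# in the thread (device name, then state); not a supported interface.

# Print the device name from every UNAVAIL line fed on stdin.
unavail_devs() {
    awk '$2 == "UNAVAIL" { print $1 }'
}

# On the real system this would be (left commented so the sketch is inert):
#   zpool status s10s | unavail_devs | while read dev; do
#       zpool online s10s "$dev"
#   done
```

Running something like this from an rc script after the ZFS mounts would hide the symptom, but it would not explain why the devices on the second controller are not openable at pool-import time in the first place.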