OK, problem solved.
I had incorrectly assumed that the server wasn't booting; the longest I had
left it was overnight and there was still no logon prompt in the morning! The
reality is that it was just taking a very long time due to an excessive number
of automatically created snapshots by Tim
I can confirm that on an X4240 with the LSI (mpt) controller:
- X25-M G1 with firmware 8820 still returns invalid self-test data
- X25-E G1 with firmware 8850 now returns correct self-test data
(I haven't got any X25-M G2 drives.)
Going to replace an X25-E with the old firmware in one of our X4500s
soon and we'll see if things
I have now tested a firmware 8850 X25-E in one of our X4500s and things look better:
> # /ifm/bin/smartctl -d scsi -l selftest /dev/rdsk/c5t7d0s0
> smartctl version 5.38 [i386-pc-solaris2.10] Copyright (C) 2002-8 Bruce Allen
> Home page is http://smartmontools.sourceforge.net/
>
> No self-tests have be
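(As a rough sketch of how to reproduce this kind of check -- the device path
is only an example, adjust it for your controller:

# /ifm/bin/smartctl -d scsi -t short /dev/rdsk/c5t7d0s0
# /ifm/bin/smartctl -d scsi -l selftest /dev/rdsk/c5t7d0s0

The first command starts a short self-test, the second reads the self-test
log back once the test has finished.)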
On Sat, 12 Sep 2009, Jeremy Kister wrote:
scrub: resilver in progress, 0.12% done, 108h42m to go
[...]
          raidz1    DEGRADED     0     0     0
            c3t8d0  ONLINE       0     0     0
            c5t8d0  ONLINE       0     0     0
            c3t9d0  ONLINE       0     0     0
[Originally posted to indiana-discuss]
On certain X86 machines there's a hardware/software glitch
that causes odd transient checksum failures that always seem
to affect the same files even if you replace them. This has
been submitted as a bug:
Bug 11201 - Checksum failures on mirrored drives -
On Sun, 2009-09-13 at 11:01 -0700, Stefan Parvu wrote:
> 5. Disconnecting the other disk. Problems occur:
> # zpool status zones
> pool: zones
> state: ONLINE
> status: One or more devices has experienced an unrecoverable error. An
>         attempt was made to correct the erro
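(After an event like this -- a device reporting unrecoverable errors once it
is reattached -- the usual follow-up is roughly this, as a sketch using the
pool name from the quote above:

# zpool status -v zones    (show any files with permanent errors)
# zpool clear zones        (reset the error counters once the disk is back)
# zpool scrub zones        (re-verify all data in the pool)
)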
Hi there, I wonder if your issue is related to mine, see thread here:
http://opensolaris.org/jive/thread.jspa?threadID=112777&tstart=0
It only manifested after I upgraded to snv121, although booting back to 118 did
not fix it.
Thanks for the reply, but this seems to be a bit different.
A couple of things I failed to mention:
1) this is a secondary pool and not the root pool.
2) the snapshots are trimmed to only keep 80 or so.
The system boots and runs fine. It's just an issue for this secondary pool
and filesystem.
Hello all,
I have a situation where zpool status shows no known data errors but all
processes on a specific filesystem are hung. This has happened twice before
since we installed OpenSolaris 2009.06 snv_111b. For instance, there are two
file systems in this pool; 'zfs get all' on one fil
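(A rough sketch of how to see where such a hung process is stuck -- the PID
is just an example:

# pstack 4321      (print the stack of the hung process)
# truss -p 4321    (check whether it is blocked in a system call)
)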
I have ZFS on my base T5210 box installed with LDoms (v1.0.3). Every time I
try to Jumpstart my guest machine, I get the following error.
ERROR: One or more disks are found, but one of the following problems exists:
- Hardware failure
- The disk(s) available on this system can
Is it possible to create a flar image of a ZFS root filesystem to install it
to other machines?
RB wrote:
I have zfs on my base T5210 box installed with LDOMS (v.1.0.3). Every time I try to jumpstart my Guest machine, I get the following error.
ERROR: One or more disks are found, but one of the following problems exists:
- Hardware failure
- The disk(s) available on this
RB wrote:
Is it possible to create flar image of ZFS root filesystem to install it to
other macines?
Yes, but it needs Solaris update 7 or later to install a ZFS flar;
see
http://www.opensolaris.org/os/community/zfs/boot/flash/;jsessionid=AB24EEFB6955AD505F19A152CDEC84A8
isn't supported on ope
Hi RB,
We have a draft of the ZFS/flar image support here:
http://opensolaris.org/os/community/zfs/boot/flash/
Make sure you review the Solaris OS requirements.
Thanks,
Cindy
On 09/14/09 11:45, RB wrote:
Is it possible to create flar image of ZFS root filesystem to install it to
other maci
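(For reference, creating the flash archive itself is roughly this -- a sketch
only; the archive name and destination are examples, and the Solaris OS
requirements in the document above still apply:

# flarcreate -n zfsBE -c /net/installserver/export/flar/zfsroot.flar
)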
As an alternative, I've been taking a snapshot of rpool on the golden
system, sending it to a file, and creating a boot environment from the
archived snapshot on target systems. After fiddling with the snapshots a
little, I then either appropriately anonymize the system or provide it
with its i
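(A sketch of that approach -- the snapshot, dataset and file names are
examples only:

On the golden system:
# zfs snapshot -r rpool@golden
# zfs send -R rpool/ROOT/opensolaris@golden > /net/archive/golden.zfs

On the target system:
# zfs receive rpool/ROOT/golden-be < /net/archive/golden.zfs

You may still have to fix up the canmount/mountpoint properties and the
pool's bootfs before the received dataset boots cleanly.)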
What you want is possible with Linux NFS, but the Solaris NFS developers don't
like this feature and will not implement it. See
http://www.opensolaris.org/jive/thread.jspa?threadID=109178&start=0&tstart=0
Hi Greg,
We did a hack along those lines when we installed 100 Ultra 27s that were
used during J1, but we automated the process by using AI to install a
bootstrap image that had an SMF service that pulled over the zfs
sendfile, created a new BE and received the sendfile into the new BE. Worked
fairly O
After moving from SXCE to 2009.06, my ZFS pools/file systems were at too new a
version. I upgraded to the latest dev build and recently upgraded to 122, but
am not too thrilled with the instability, especially zfs send / recv lockups
(I don't recall the bug number).
I keep a copy of all of my criti
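(For anyone in the same situation, checking which on-disk versions you are
actually running is quick -- the pool name is an example:

# zpool upgrade            (lists pools not running the latest pool version)
# zpool get version tank   (version of one specific pool)
# zfs upgrade              (lists filesystems at older filesystem versions)
)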
Absent any replies to the list, submitted as a bug:
http://defect.opensolaris.org/bz/show_bug.cgi?id=11358
Cheers -- Frank
All,
IHAC who is asking the following: reviewing the document
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6nc
it appears it may not, for the parent setting will transcend downwards to
the child.
Can anyone elaborate on whether this is correct or not?
Thanks
Peter
the question is
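(If the question is about whether a property set on a parent dataset is
inherited by its children -- my assumption here -- that is easy to verify
directly; the dataset names are examples:

# zfs set compression=on tank/parent
# zfs get -r compression tank/parent        (children show SOURCE as 'inherited from tank/parent')
# zfs inherit compression tank/parent/child (drop any local override on a child)
)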