On Oct 3, 2010, at 7:22 PM, Jorgen Lundman wrote:
One option would be to get 147 NIC drivers for 134.
IIRC, the bnx drivers are closed source and obtained from Broadcom. No
need to upgrade the OS just for a NIC driver.
-- richard
Hello list,
I got a c7000 with BL465c G1 blades to play with and have been trying to get
some form of Solaris to work on it.
However, this is the state:
OpenSolaris 134: Installs with ZFS, but no BNX nic drivers.
OpenIndiana 147: Panics on "zpool create" every time, even from console. Has no
U
On 04/ 2/10 10:25 AM, Ian Collins wrote:
Is this callstack familiar to anyone? It just happened on a Solaris
10 update 8 box:
genunix: [ID 655072 kern.notice] fe8000d1b830 unix:real_mode_end+7f81 ()
genunix: [ID 655072 kern.notice] fe8000d1b910 unix:trap+5e6 ()
genunix: [ID 655072 ke
Is this callstack familiar to anyone? It just happened on a Solaris 10
update 8 box:
genunix: [ID 655072 kern.notice] fe8000d1b830 unix:real_mode_end+7f81 ()
genunix: [ID 655072 kern.notice] fe8000d1b910 unix:trap+5e6 ()
genunix: [ID 655072 kern.notice] fe8000d1b920 unix:_cmntrap+14
Andre van Eyssen wrote:
> On Fri, 10 Apr 2009, Rince wrote:
>
> > FWIW, I strongly expect live ripping of a SATA device to not panic the disk
> > layer. It explicitly shouldn't panic the ZFS layer, as ZFS is supposed to be
> > "fault-tolerant" and "drive dropping away at any time" is a rather exp
Grant Lowe wrote:
Hi All,
Don't know if this is worth reporting, as it's human error. Anyway, I had a
panic on my zfs box. Here's the error:
marksburg /usr2/glowe> grep panic /var/log/syslog
Apr 8 06:57:17 marksburg savecore: [ID 570001 auth.error] reboot after panic:
assertion failed: 0 =
> "r" == Rince writes:
r> *ZFS* shouldn't panic under those conditions. The disk layer,
r> perhaps, but not ZFS.
well, yes, but panicking brings down the whole box anyway so there is
no practical difference, just a difference in blame.
I would rather say, the fact that redundant
On Fri, Apr 10, 2009 at 12:43 AM, Andre van Eyssen wrote:
> On Fri, 10 Apr 2009, Rince wrote:
>
>> FWIW, I strongly expect live ripping of a SATA device to not panic the disk
>> layer. It explicitly shouldn't panic the ZFS layer, as ZFS is supposed to be
>> "fault-tolerant" and "drive droppi
On Fri, 10 Apr 2009, Rince wrote:
FWIW, I strongly expect live ripping of a SATA device to not panic the disk
layer. It explicitly shouldn't panic the ZFS layer, as ZFS is supposed to be
"fault-tolerant" and "drive dropping away at any time" is a rather expected
scenario.
Ripping a SATA device
sed at the panic, since the system was
> quiesced at the time. But there is coming a time when we will be doing
> this. Thanks for the feedback. I appreciate it.
>
>
>
>
> - Original Message
> From: Remco Lengers
> To: Grant Lowe
> Cc: zfs-discuss@open
will be doing this. Thanks for
the feedback. I appreciate it.
- Original Message
From: Remco Lengers
To: Grant Lowe
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, April 9, 2009 5:31:42 AM
Subject: Re: [zfs-discuss] ZFS Panic
Grant,
Didn't see a response so I'll give it
Grant,
Didn't see a response so I'll give it a go.
Ripping a disk away and silently inserting a new one is asking for
trouble imho. I am not sure what you were trying to accomplish, but
generally replacing a drive/LUN would entail commands like
zpool offline tank c1t3d0
cfgadm | grep c1t3d0
sa
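For what it's worth, the full dance usually looks something like this (the pool
name, device, and the sata1/3 attachment point below are just examples; check
your own cfgadm output before pulling anything):

zpool offline tank c1t3d0        # take the device out of service in the pool
cfgadm | grep c1t3d0             # find its attachment point, e.g. sata1/3
cfgadm -c unconfigure sata1/3    # detach it from the OS before pulling the disk
(physically swap the drive)
cfgadm -c configure sata1/3      # bring the new disk back under the OS
zpool replace tank c1t3d0        # start the resilver onto the new disk
zpool status tank                # watch the resilver complete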
Hi All,
Don't know if this is worth reporting, as it's human error. Anyway, I had a
panic on my zfs box. Here's the error:
marksburg /usr2/glowe> grep panic /var/log/syslog
Apr 8 06:57:17 marksburg savecore: [ID 570001 auth.error] reboot after panic:
assertion failed: 0 == dmu_buf_hold_arra
I upgraded my 280R system to yesterday's nightly build, and when I
rebooted, this happened:
Boot device:
/p...@8,60/SUNW,q...@4/f...@0,0/d...@w212037e9abe4,0:a File and args:
SunOS Release 5.11 Version snv_108 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
Use
Looks like a corrupted pool -- you appear to have a mirror block pointer with
no valid children. From the dump, you could probably determine which file is
bad, but I doubt you could delete it; you might need to recreate your pool.
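If you do want to chase down the bad file before recreating the pool, one
approach (the dataset name and object number below are placeholders; you'd pull
the real ones from the dump or from "zpool status -v") is to ask zdb for the
dnode, which includes the object's path:

zdb -ddddd tank/home 12345
(look for the "path" line in the dnode output, e.g.  path  /export/home/foo/badfile)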
On Tue, January 13, 2009 09:51, Neil Perrin wrote:
> I'm sorry about the problems. We try to be responsive to fixing bugs and
> implementing new features that people are requesting for ZFS.
> It's not always possible to get it right. In this instance I don't think
> the
> bug was reproducible, and
I'm sorry about the problems. We try to be responsive to fixing bugs and
implementing new features that people are requesting for ZFS.
It's not always possible to get it right. In this instance I don't think the
bug was reproducible, and perhaps that's why it hasn't received the attention
it deserv
To be honest I am quite surprised, as the bug you are referring to was submitted
early in 2008 and last updated over the summer. Quite surprised that Sun has not
come up with a fix for it so far. ZFS is certainly gaining some popularity at my
workplace, and we were thinking of using it instead of v
This is a known bug:
6678070 Panic from vdev_mirror_map_alloc()
http://bugs.opensolaris.org/view_bug.do?bug_id=6678070
Neil.
On 01/12/09 21:12, Krzys wrote:
> any idea what could cause my system to panic? I get my system rebooted daily at
> various times. Very strange, but it's pointing to zf
any idea what could cause my system to panic? I get my system rebooted daily at
various times. Very strange, but it's pointing to zfs. I have U6 with all the
latest patches.
Jan 12 05:47:12 chrysek unix: [ID 836849 kern.notice]
Jan 12 05:47:12 chrysek ^Mpanic[cpu1]/thread=30002c8d4e0:
Jan 12 05:47:1
system on a continuous loop of panic due to a zfs issue.
Removed the /etc/zfs/zpool.cache per this
http://docs.sun.com/app/docs/doc/819-5461/gbbwc?a=view
(Repairing an Unbootable System) to keep the system stable.
If I try to import the pool back, the system panics consistently
panic[cpu0]/thread=300035
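For anyone else stuck in the same panic loop, the procedure from that doc is
roughly the following (treat it as a sketch and check the current docs; the
.bad suffix is just my choice of name):

boot -m milestone=none           # from the ok prompt (add it to the kernel line on x86)
mount -o remount /               # root usually comes up read-only at this milestone
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
svcadm milestone all             # continue the boot (or just reboot)

With no cache file the pool is not touched at boot, which keeps the box up;
whether a later "zpool import" then survives is a separate question, and that is
exactly what is failing here.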
> space_map_add+0xdb(ff014c1a21b8, 472785000, 1000)
> space_map_load+0x1fc(ff014c1a21b8, fbd52568, 1,
ff014c1a1e88, ff0149c88c30)
> running snv79.
hmm.. did you spend any time in snv_74 or snv_75 that might
have gotten http://bugs.opensolaris.org/view_bug.do?bug_id=660
I'm seeing this too. Nothing unusual happened before the panic.
Just a shutdown (init 5) and later startup. I have the crashdump
and copy of the problem zpool (on swan). Here's the stack trace:
> $C
ff0004463680 vpanic()
ff00044636b0 vcmn_err+0x28(3, f792ecf0, ff0004463778)
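For reference, pulling that out of a saved dump (assuming savecore wrote into
the default /var/crash/<hostname> directory) goes something like:

# cd /var/crash/<hostname>
# mdb unix.0 vmcore.0
> ::status       (dump summary, including the panic string)
> ::msgbuf       (console messages leading up to the panic)
> $C             (stack trace of the panicking thread)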
Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
>> Living on the edge... The T3 has a 2 year battery life (time is counted).
>> When it decides the batteries are too old, it will shut down the nonvolatile
>> write cache. You'll want to make sure you have fresh batteries soon.
>
> Hmm, doesn't th
[EMAIL PROTECTED] said:
> Living on the edge... The T3 has a 2 year battery life (time is counted).
> When it decides the batteries are too old, it will shut down the nonvolatile
> write cache. You'll want to make sure you have fresh batteries soon.
Hmm, doesn't the array put the cache into "writ
On 10/01/07 17:01, Richard Elling wrote:
T3 comment below...
[cut]
A scrub is only 20% complete, but has found no errors thus far. I checked
the T3 pair and no complaints there either - I did reboot them just for
luck (last reboot was 2 years ago, apparently!).
Living on the edge...
The T3 has
T3 comment below...
Gavin Maltby wrote:
> Hi,
>
> On 09/29/07 22:00, Gavin Maltby wrote:
>> Hi,
>>
>> Our zfs nfs build server running snv_73 (pool created back before
>> zfs integrated to ON) panicked, I guess from zfs, the first time
>> and now panics on attempted boot every time as below. Is thi
Hi,
On 09/29/07 22:00, Gavin Maltby wrote:
Hi,
Our zfs nfs build server running snv_73 (pool created back before
zfs integrated to ON) panicked, I guess from zfs, the first time
and now panics on attempted boot every time as below. Is this
a known issue and, more importantly (2TB of data in the p
Hi,
Our zfs nfs build server running snv_73 (pool created back before
zfs integrated to ON) panicked, I guess from zfs, the first time
and now panics on attempted boot every time as below. Is this
a known issue and, more importantly (2TB of data in the pool),
any suggestions on how to recover (othe
OK, I found the problem with 0x06: one disk was missing. But now I have all my
disks and I get 0x05:
Sep 21 10:25:53 unknown ^Mpanic[cpu0]/thread=ff0001e12c80:
Sep 21 10:25:53 unknown genunix: [ID 603766 kern.notice] assertion failed:
dmu_read(os, smo->smo_object, offset, size, entry_map) == 0
Actually, here are the first panic messages:
Sep 13 23:33:22 netra2 unix: [ID 603766 kern.notice] assertion failed:
dmu_read(os, smo->smo_object, offset, size, entry_map) == 0 (0x5 == 0x0), file:
../../common/fs/zfs/space_map.c, line: 307
Sep 13 23:33:22 netra2 unix: [ID 10 kern.notice]
Sep 13
Basically, it is complaining that there aren't enough disks to read
the pool metadata. This would suggest that in your 3-disk RAID-Z
config, either two disks are missing, or one disk is missing *and*
another disk is damaged -- due to prior failed writes, perhaps.
(I know there's at least one disk
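A quick way to confirm which of the three devices the pool can actually see
(the device path below is just an example):

zpool import                      # with no arguments, lists importable pools and
                                  # shows each vdev as ONLINE / UNAVAIL / FAULTED
zdb -l /dev/rdsk/c1t2d0s0         # dumps the four vdev labels on that disk; missing
                                  # or stale labels mean the disk can't contribute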
I have a raid-z zfs filesystem with 3 disks. One disk was starting to have read
and write errors. The disks got so bad that I started to have trans_err. The
server locked up and was reset. Now, when trying to import the pool, the system
panics.
I installed the last Recommend on my Solar
> Tomas Ögren wrote:
> > On 18 September, 2007 - Gino sent me these 0,3K bytes:
> >
> >> Hello,
> >> upgrade to snv_60 or later if you care about your data :)
> >
> > If there are known serious data loss bug fixes that have gone into
> > snv60+, but not into s10u4, then I would like to te
Tomas Ögren wrote:
> On 18 September, 2007 - Gino sent me these 0,3K bytes:
>
>> Hello,
>> upgrade to snv_60 or later if you care about your data :)
>
> If there are known serious data loss bug fixes that have gone into
> snv60+, but not into s10u4, then I would like to tell Sun to "backport"
> t
On 18 September, 2007 - Gino sent me these 0,3K bytes:
> Hello,
> upgrade to snv_60 or later if you care about your data :)
If there are known serious data loss bug fixes that have gone into
snv60+, but not into s10u4, then I would like to tell Sun to "backport"
those into s10u4 if they care abou
Hello,
upgrade to snv_60 or later if you care about your data :)
Gino
Hi Matty,
From the stack I saw, that is 6454482.
But this defect has been marked as 'Not reproducible', so I have no idea
how to recover from it, but it looks like the new update will not hit this issue.
Matty wrote:
> One of our Solaris 10 update 3 servers panicked today with the following error:
One of our Solaris 10 update 3 servers panicked today with the following error:
Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after
panic: assertion failed: ss != NULL, file:
../../common/fs/zfs/space_map.c, line: 125
The server saved a core file, and the resulting backtrace is l
I encountered the following ZFS panic today and am looking for suggestions
on how to resolve it.
First panic:
panic[cpu0]/thread=ff000fa9cc80: assertion failed: 0 ==
dmu_buf_hold_array(os, object, offset, size, FALSE, FTAG, &numbufs, &dbp),
file: ../../common/fs/zfs/dmu.c, line: 435
hi all,
I was extracting an 8GB tar and encountered this panic. The system was
just installed last week with Solaris 10 update 3 and the latest
recommended patches as of June 26. I can provide more output from mdb,
or the crashdump itself if it would be of any use.
any ideas what's going on her
Gino wrote:
Apr 23 02:02:22 SERVER144 ^Mpanic[cpu1]/thread=ff0017fa1c80:
Apr 23 02:02:22 SERVER144 genunix: [ID 809409 kern.notice] ZFS: I/O failure (write on
off 0: zio 9a5d4cc0 [L0 bplist] 4000L/4000P DVA[0]=<0:770b24
000:4000> DVA[1]=<0:dfa984000:4000> fletcher4 uncompressed LE
Apr 23 02:02:21 SERVER144 offline or reservation conflict
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING:
/scsi_vhci/[EMAIL PROTECTED] (sd82):
Apr 23 02:02:21 SERVER144 i/o to invalid geometry
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING:
/scsi_vhci/[
Can anyone explain what this means?
Feb 1 11:32:24 rtfm savecore: [ID 570001 auth.error] reboot after panic: ZFS:
I/O failure (write on off 0: zio 82bce800 [L0 unallocated]
2L/e800P DVA[0]=<0:d02b6d800:e800> fletcher2 lzjb LE contiguous
birth=562517 fill=0 cksum=f29b4a3542d44b7a:d2
Folks, before I start delving too deeply into this crashdump, has anyone
seen anything like it?
The background is that I'm running a non-debug open build of b49 and was
in the process of running the "zoneadm -z redlx install ...".
After a bit, the machine panics, initially looking at the cras
Jürgen Keil wrote on 09/04/06 05:24:
I made some powernow experiments on a dual core amd64 box, running the
64-bit debug on-20060828 kernel. At some point the kernel seemed to
make no more progress (probably a bug in the multiprocessor powernow
code), the gui was stuck, so I typed (blind) F1-
I made some powernow experiments on a dual core amd64 box, running the
64-bit debug on-20060828 kernel. At some point the kernel seemed to
make no more progress (probably a bug in the multiprocessor powernow
code), the gui was stuck, so I typed (blind) F1-A + $ ::status
debugging crash dump vmcore
Nathanael,
This looks like a bug. We are trying to clean up after an error in
zfs_getpage() when we trigger this panic. Can you make a core file
available? I'd like to take a closer look.
I've filed a bug to track this:
6438702 error handling in zfs_getpage() can trigger "page not lo
I believe ZFS is causing a panic whenever I attempt to mount an iso image (SXCR
build 39) that happens to reside on a ZFS file system. The problem is 100%
reproducible. I'm quite new to OpenSolaris, so I may be incorrect in saying
it's ZFS' fault. Also, let me know if you need any additional
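For reference, mounting an iso that lives on ZFS normally goes through lofi; a
typical sequence (the image path is just a placeholder) is:

lofiadm -a /tank/images/sxcr_b39.iso     # attach the image file; prints e.g. /dev/lofi/1
mount -F hsfs -o ro /dev/lofi/1 /mnt     # mount the ISO9660 image read-only
# ...and to tear it down again:
umount /mnt
lofiadm -d /dev/lofi/1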
When unpacking the Solaris source onto a local disk on a system running build
39, I got the following panic:
panic[cpu0]/thread=d2c8ade0:
really out of space
d2c8a7b4 zfs:zio_write_allocate_gang_members+3e6 (e4385ac0)
d2c8a7d0 zfs:zio_dva_allocate+81 (e4385ac0)
d2c8a7e8 zfs:zio_next_stage+66 (e