Hi Carsten,
On 17.04.12 17:40, Carsten John wrote:
Hello everybody,
just to let you know what happened in the meantime:
I was able to open a Service Request at Oracle.
The issue is a known bug (Bug 6742788 : assertion panic at: zfs:zap_deref_leaf)
The bug has been fixed (according to Oracl
On 17/04/2012 16:40, Carsten John wrote:
Hello everybody,
just to let you know what happened in the meantime:
I was able to open a Service Request at Oracle.
The issue is a known bug (Bug 6742788 : assertion panic at: zfs:zap_deref_leaf)
The bug has been fixed (according to Oracle support) sin
Hello everybody,
just to let you know what happened in the meantime:
I was able to open a Service Request at Oracle.
The issue is a known bug (Bug 6742788 : assertion panic at: zfs:zap_deref_leaf)
The bug has been fixed (according to Oracle support) since build 164, but there
is no fix for Sola
-Original message-
To: zfs-discuss@opensolaris.org;
From: John D Groenveld
Sent: Fri 30-03-2012 21:47
Subject: Re: [zfs-discuss] kernel panic during zfs import [ORACLE should notice this]
> In message <4f735451.2020...@oracle.com>, Deepak Honnalli writes:
>
On 30.03.12 21:45, John D Groenveld wrote:
In message <4f735451.2020...@oracle.com>, Deepak Honnalli writes:
Thanks for your reply. I would love to take a look at the core
file. If there is a way this can somehow be transferred to
the internal cores server, I can work on the bug.
In message <4f735451.2020...@oracle.com>, Deepak Honnalli writes:
> Thanks for your reply. I would love to take a look at the core
> file. If there is a way this can somehow be transferred to
> the internal cores server, I can work on the bug.
>
> I am not sure about the modalities
and see if I can help you here.
Thanks,
Deepak.
On Wednesday 28 March 2012 06:15 PM, Carsten John wrote:
-Original message-
To: zfs-discuss@opensolaris.org;
From: Deepak Honnalli
Sent: Wed 28-03-2012 09:12
Subject: Re: [zfs-discuss] kernel panic during zfs import
Hi
In message, Carsten John writes:
>I just spent about an hour (or two) trying to file a bug report regarding the
>issue without success.
>
>Seems to me, that I'm too stupid to use this "MyOracleSupport" portal.
>
>So, as I'm getting paid for keeping systems running and not clicking th
-Original message-
To: zfs-discuss@opensolaris.org;
From: Deepak Honnalli
Sent: Wed 28-03-2012 09:12
Subject: Re: [zfs-discuss] kernel panic during zfs import
> Hi Carsten,
>
> This was supposed to be fixed in build 164 of Nevada (6742788). If
> you are
-Original message-
To: ZFS Discussions ;
From: Paul Kraus
Sent: Tue 27-03-2012 15:05
Subject: Re: [zfs-discuss] kernel panic during zfs import
> On Tue, Mar 27, 2012 at 3:14 AM, Carsten John wrote:
> > Hello everybody,
> >
> > I have a Solaris 11 box
Hi Carsten,
This was supposed to be fixed in build 164 of Nevada (6742788). If
you are still seeing this
issue in S11, I think you should raise a bug with relevant details.
As Paul has suggested,
this could also be due to an incomplete snapshot.
I have seen interrupted zfs recv's
On Tue, Mar 27, 2012 at 3:14 AM, Carsten John wrote:
> Hello everybody,
>
> I have a Solaris 11 box here (Sun X4270) that crashes with a kernel panic
> during the import of a zpool (some 30TB) containing ~500 zfs filesystems
> after reboot. This causes a reboot loop, until booted single user and
2012-03-27 11:14, Carsten John wrote:
I saw a similar effect some time ago on an OpenSolaris box (build 111b). That
time my final solution was to copy the read-only mounted data over to a newly
created pool. As this is the second time this failure occurs (on different
machines) I'm really conce
Hello everybody,
I have a Solaris 11 box here (Sun X4270) that crashes with a kernel panic
during the import of a zpool (some 30TB) containing ~500 zfs filesystems after
reboot. This causes a reboot loop until I boot single user and remove
/etc/zfs/zpool.cache.
From /var/adm/messages:
sav
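For reference, the workaround described above amounts to roughly the following shell session. It is only a sketch: the boot-archive refresh step and the placeholder pool name are assumptions, so adapt them to the actual system.
# boot with -s (single user), then stop the automatic import at boot:
rm /etc/zfs/zpool.cache
bootadm update-archive   # assumption: refresh the boot archive so the change survives the reboot
reboot
# with the cache gone the pool is no longer imported automatically at boot;
# a manual import can then be attempted (the step that panics in this report):
zpool import -f <poolname>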
I cannot help but agree with Tim's comment below.
If you want a free version of ZFS, where you are still responsible for
things yourself - like having backups - then maybe try:
www.freenas.org
www.linuxonzfs.org
www.openindiana.org
Meanwhile, it is grossly inappropriate to be complaining ab
In message <1313687977.77375.yahoomail...@web121903.mail.ne1.yahoo.com>, Stu Whitefish writes:
>Nope, not a clue how to do that and I have installed Windows on this box
>instead of Solaris since I can't get my data back from ZFS.
>I have my two drives the pool is on disconnected so if this ever g
On Fri, Aug 19, 2011 at 4:43 AM, Stu Whitefish wrote:
>
> > It seems that obtaining an Oracle support contract or a contract renewal
> is equally frustrating.
>
> I don't have any axe to grind with Oracle. I'm new to the Solaris thing and
> wanted to see if it was for me.
>
> If I was using this
> It seems that obtaining an Oracle support contract or a contract renewal is
> equally frustrating.
>
>I don't have any axe to grind with Oracle. I'm new to the Solaris thing and
>wanted to see if it was for me.
>
>If I was using this box to make money then sure I wouldn't have any problem
>p
> lots of replies and no suggestion to try on FreeBSD. How about trying
>> on one? I believe if it crashed on FreeBSD, the developers would be
>> interested in helping to solve it. Try using the 9.0-beta1 since
>> 8.2-release has some problems importing certain zpools.
>
>I didn't think FreeBSD
On Fri, 19 Aug 2011, Edho Arief wrote:
Asking Oracle for help without a support contract would be like shouting
into a vacuum...
It seems that obtaining an Oracle support contract or a contract
renewal is equally frustrating.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.s
On Fri, Aug 19, 2011 at 12:19 AM, Stu Whitefish wrote:
>> From: Thomas Gouverneur
>
>> To: zfs-discuss@opensolaris.org
>> Cc:
>> Sent: Thursday, August 18, 2011 5:11:16 PM
>> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data
>> inacces
> From: Thomas Gouverneur
> To: zfs-discuss@opensolaris.org
> Cc:
> Sent: Thursday, August 18, 2011 5:11:16 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data
> inaccessible!
>
> Have you already extracted the core file of the kernel crash ?
No
ng so much data
gone.
Thanks for your help. Oracle, are you listening?
Jim
- Original Message -
From: Thomas Gouverneur
To: zfs-discuss@opensolaris.org
Cc: Stu Whitefish
Sent: Thursday, August 18, 2011 1:57:29 PM
Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G
ling having so much data
gone.
Thanks for your help. Oracle, are you listening?
Jim
- Original Message -
> From: Thomas Gouverneur
> To: zfs-discuss@opensolaris.org
> Cc: Stu Whitefish
> Sent: Thursday, August 18, 2011 1:57:29 PM
> Subject: Re: [zfs-discuss] Kernel panic
>
> > From: Alexander Lesle
> > To: zfs-discuss@opensolaris.org
> > Cc:
> > Sent: Monday, August 15, 2011 8:37:42 PM
> > Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data
> > inaccessible!
> >
> > Hello Stu
- Original Message -
> From: John D Groenveld
> To: "zfs-discuss@opensolaris.org"
> Cc:
> Sent: Monday, August 15, 2011 6:12:37 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data
> inaccessible!
>
> In message <1313431448
- Original Message -
> From: Alexander Lesle
> To: zfs-discuss@opensolaris.org
> Cc:
> Sent: Monday, August 15, 2011 8:37:42 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data
> inaccessible!
>
> Hello Stu Whitefish and List,
>
Hello Stu Whitefish and List,
On August, 15 2011, 21:17 wrote in [1]:
>> 7. cannot import old rpool (c0t2d0s0 c0t3d0s0), any attempt causes a
>> kernel panic, even when booted from different OS versions
> Right. I have tried OpenIndiana 151 and Solaris 11 Express (latest
> from Oracle) several
Jim
- Original Message -
> From: John D Groenveld
> To: "zfs-discuss@opensolaris.org"
> Cc:
> Sent: Monday, August 15, 2011 6:12:37 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data
> inaccessible!
>
> In message <13134
Hi Paul,
> 1. Install system to pair of mirrored disks (c0t2d0s0 c0t3d0s0),
> system works fine
I don't remember at this point which disks were which, but I believe it was 0
and 1 because during the first install there were only 2 drives in the box
because I had only 2 drives.
> 2. add two mo
In message <1313431448.5331.yahoomail...@web121911.mail.ne1.yahoo.com>, Stu Whitefish writes:
>I'm sorry, I don't understand this suggestion.
>
>The pool that won't import is a mirror on two drives.
Disconnect all but the two mirrored drives that you must import
and try to import from a S11X Live
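A rough sketch of that suggestion, assuming the pool is the mirrored "tank" from this thread and the machine is booted from the S11 Express Live media with only those two disks attached:
zpool import                  # list the pools visible on the remaining two disks
zpool import -f -R /a tank    # import under an alternate root so nothing mounts over the live environment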
I'm sorry, I don't understand this suggestion.
The pool that won't import is a mirror on two drives.
- Original Message -
> From: LaoTsao
> To: Stu Whitefish
> Cc: "zfs-discuss@opensolaris.org"
> Sent: Monday, August 15, 2011 5:50:08 PM
> Su
I am catching up here and wanted to see if I correctly understand the
chain of events...
1. Install system to pair of mirrored disks (c0t2d0s0 c0t3d0s0),
system works fine
2. add two more disks (c0t0d0s0 c0t1d0s0), create zpool tank, test and
determine these disks are fine
3. copy data to save to
though.
>
>
>
> - Original Message -
>> From: ""Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.""
>> To: zfs-discuss@opensolaris.org
>> Cc:
>> Sent: Monday, August 15, 2011 3:06:20 PM
>> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G
Unfortunately this panics the same exact way. Thanks for the suggestion though.
- Original Message -
> From: ""Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.""
> To: zfs-discuss@opensolaris.org
> Cc:
> Sent: Monday, August 15, 2011 3:06:20 PM
> Subject: Re:
-
From: ""Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.""
To: zfs-discuss@opensolaris.org
Cc:
Sent: Monday, August 15, 2011 3:06:20 PM
Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data
inaccessible!
Maybe try the following:
1) boot the s10u8 CD into single user m
Tsao 老曹) Ph.D.""
> To: zfs-discuss@opensolaris.org
> Cc:
> Sent: Monday, August 15, 2011 3:06:20 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data
> inaccessible!
>
> Maybe try the following:
> 1) boot the s10u8 CD into single user mode (when b
Maybe try the following (the same commands are collected into one session below):
1) boot the s10u8 CD into single-user mode (when booting from the cdrom, choose Solaris,
then choose single user mode (6))
2) when asked to mount rpool, just say no
3) mkdir /tmp/mnt1 /tmp/mnt2
4) zpool import -f -R /tmp/mnt1 tank
5) zpool import -f -R /tmp/mnt2 rpool
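Collected into one session (pool names tank and rpool as given above; run after booting the CD into single-user mode and declining to mount rpool):
mkdir /tmp/mnt1 /tmp/mnt2
zpool import -f -R /tmp/mnt1 tank
zpool import -f -R /tmp/mnt2 rpool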
On 8/15/2011 9:12 AM, Stu
> On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish
> wrote:
>> # zpool import -f tank
>>
>> http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/
>
> I encourage you to open a support case and ask for an escalation on CR
> 7056738.
>
> --
> Mike Gerdts
Hi Mike,
Unfortunately I
On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish
wrote:
> # zpool import -f tank
>
> http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/
I encourage you to open a support case and ask for an escalation on CR 7056738.
--
Mike Gerdts
http://mgerdts.blogspot.com/
I am opening a new thread since I found somebody else reported a similar
failure in May and I didn't see a resolution; hopefully this post will be easier
to find for people with similar problems. The original thread was
http://opensolaris.org/jive/thread.jspa?threadID=140861
System: snv_151a 64 bit on Intel.
Error: panic[cpu0] assertion failed: zvol_get_stats(os, nv) == 0,
file: ../../common/fs/zfs/zfs_ioctl.c, line: 1815
Failure first seen on Solaris 10, update 8
History:
I recently received two 320G drives and realized from reading this list it
would have been bet
--- On Wed, 1/19/11, Richard Elling wrote:
> From: Richard Elling
> Subject: Re: [zfs-discuss] kernel panic on USB disk power loss
> To: "Reginald Beardsley"
> Cc: zfs-discuss@opensolaris.org
> Date: Wednesday, January 19, 2011, 8:59 AM
> On Jan 15, 2011, at 10
On Jan 15, 2011, at 10:33 AM, Reginald Beardsley wrote:
> I was copying a filesystem using "zfs send | zfs receive" and inadvertently
> unplugged the power to the USB disk that was the destination. Much to my
> horror this caused the system to panic. I recovered fine on rebooting, but
> it *
I was copying a filesystem using "zfs send | zfs receive" and inadvertently
unplugged the power to the USB disk that was the destination. Much to my
horror this caused the system to panic. I recovered fine on rebooting, but it
*really* unnerved me.
I don't find anything about this online. I
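For context, the copy being described is a plain send/receive pipeline; the snapshot and pool names below are made up for illustration:
zfs snapshot tank/home@backup1                             # snapshot the source filesystem
zfs send tank/home@backup1 | zfs receive -F usbpool/home   # stream it to the pool on the USB disk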
Hi,
my machine is an HP ProLiant ML350 G5 with 2 quad-core Xeons, 32GB RAM and an HP
SmartArray E200i RAID controller with 3x160GB and 3x500GB SATA disks connected to
it. Two of the 160GB disks form the mirrored root pool (rpool), the third
serves as a temporary data pool called "tank", and the th
Brilliant. I set those parameters via /etc/system, rebooted, and the pool
imported with just the -f switch. I had seen this as an option earlier,
although not in that thread, but was not sure it applied to my case.
Scrub is running now. Thank you very much!
-Scott
On 9/23/10 7:07 PM, "David Blasin
I just realized that the email I sent to David and the list did not make the
list (at least as jive can see it), so here is what I sent on the 23rd:
Brilliant. I set those parameters via /etc/system, rebooted, and the pool
imported with just the -f switch. I had seen this as an option earlier,
Have you tried setting zfs_recover & aok in /etc/system, or setting them
with mdb?
Read how to set them via /etc/system:
http://opensolaris.org/jive/thread.jspa?threadID=114906
mdb debugger
http://www.listware.net/201009/opensolaris-zfs/46706-re-zfs-discuss-how-to-set-zfszfsrecover1-and-aok1-in-grub
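For reference, those settings are usually applied like this. Treat it as a sketch based on the threads linked above, and note that aok=1 makes failed assertions non-fatal system-wide, so it should only be used temporarily for recovery:
# in /etc/system (takes effect after a reboot):
set zfs:zfs_recover = 1
set aok = 1
# or on the running kernel with mdb (immediate, not persistent):
echo "zfs_recover/W 1" | mdb -kw
echo "aok/W 1" | mdb -kw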
I have a box running snv_134 that had a little boo-boo.
The problem first started a couple of weeks ago with some corruption on two
filesystems in an 11-disk 10TB raidz2 set. I ran a couple of scrubs that
revealed a handful of corrupt files on my 2 de-duplicated zfs filesystems. No
biggie.
I
On Jul 9, 2010, at 4:27 AM, George wrote:
>> I think it is quite likely to be possible to get
>> readonly access to your data, but this requires
>> modified ZFS binaries. What is your pool version?
>> What build do you have installed on your system disk
>> or available as LiveCD?
For the record
> I think it is quite likely to be possible to get
> readonly access to your data, but this requires
> modified ZFS binaries. What is your pool version?
> What build do you have installed on your system disk
> or available as LiveCD?
[Prompted by an off-list e-mail from Victor asking if I was stil
On Jun 28, 2010, at 11:27 PM, George wrote:
> Again this core dumps when I try to do "zpool clear storage2"
>
> Does anyone have any suggestions what would be the best course of action now?
Do you have any crashdumps saved? The first one is the most interesting...
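If savecore kept a dump, a minimal way to pull the panic stack out of it looks roughly like this (the directory is whatever dumpadm points at, /var/crash/<hostname> by default):
cd /var/crash/`hostname`
mdb unix.0 vmcore.0
> ::status   # dump summary, including the panic message
> ::msgbuf   # kernel messages leading up to the panic
> ::stack    # stack trace of the panicking thread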
> I think it is quite likely to be possible to get readonly access to
> your data, but this requires modified ZFS binaries. What is your pool
> version? What build do you have installed on your system disk or
> available as LiveCD?
Sorry, but does this mean if ZFS can't write to the drives, access
On Jul 3, 2010, at 1:20 PM, George wrote:
>> Because of that I'm thinking that I should try
>> to change the hostid when booted from the CD to be
>> the same as the previously installed system to see if
>> that helps - unless that's likely to confuse it at
>> all...?
>
> I've now tried changing
> Because of that I'm thinking that I should try
> to change the hostid when booted from the CD to be
> the same as the previously installed system to see if
> that helps - unless that's likely to confuse it at
> all...?
I've now tried changing the hostid using the code from
http://forums.sun.com
> I think I'll try booting from a b134 Live CD and see if
> that will let me fix things.
Sadly it appears not - at least not straight away.
Running "zpool import" now gives
pool: storage2
id: 14701046672203578408
state: FAULTED
status: The pool was last accessed by another system.
action: Th
Aha:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6794136
I think I'll try booting from a b134 Live CD and see if that will let me fix
things.
> Please try
>
> zdb -U /dev/null -ebcsv storage2
r...@crypt:~# zdb -U /dev/null -ebcsv storage2
zdb: can't open storage2: No such device or address
If I try
r...@crypt:~# zdb -C storage2
Then it prints what appears to be a valid configuration but then the same error
message about being unab
On Jun 30, 2010, at 10:48 AM, George wrote:
>> I suggest you try running 'zdb -bcsv storage2' and
>> show the result.
>
> r...@crypt:/tmp# zdb -bcsv storage2
> zdb: can't open storage2: No such device or address
>
> then I tried
>
> r...@crypt:/tmp# zdb -ebcsv storage2
> zdb: can't open sto
> I suggest you try running 'zdb -bcsv storage2' and
> show the result.
r...@crypt:/tmp# zdb -bcsv storage2
zdb: can't open storage2: No such device or address
then I tried
r...@crypt:/tmp# zdb -ebcsv storage2
zdb: can't open storage2: File exists
George
On Jun 29, 2010, at 1:30 AM, George wrote:
> I've attached the output of those commands. The machine is a v20z if that
> makes any difference.
The stack trace is similar to one bug that I do not recall right now, and it
indicates that there is likely corruption in ZFS metadata.
I suggest you to
Another related question -
I have a second enclosure with blank disks which I would like to use to take a
copy of the existing zpool as a precaution before attempting any fixes. The
disks in this enclosure are larger than those in the enclosure with the problem.
What would be the best way to do this
I've attached the output of those commands. The machine is a v20z if that makes
any difference.
Thanks,
George
mdb: logging to "debug.txt"
> ::status
debugging crash dump vmcore.0 (64-bit) from crypt
operating system: 5.11 snv_111b (i86pc)
panic messag
On Jun 28, 2010, at 11:27 PM, George wrote:
> I've tried removing the spare and putting back the faulty drive to give:
>
> pool: storage2
> state: FAULTED
> status: An intent log record could not be read.
>Waiting for administrator intervention to fix the faulted pool.
> action: Either r
Hi,
I have a machine running 2009.06 with 8 SATA drives in a SCSI-connected enclosure.
I had a drive fail and accidentally replaced the wrong one, which
unsurprisingly caused the rebuild to fail. The status of the zpool then ended
up as:
pool: storage2
state: FAULTED
status: An intent log reco
I ran 'zpool scrub' and will report what happens once it's finished. (It will
take quite a while.)
The scrub finished successfully (with no errors) and 'zpool status -v' doesn't
crash the kernel any more.
Andrej
Hello,
I got a zfs panic on build 143 (installed with onu) in the following unusual
situation:
1) 'zpool scrub' found a corrupted snapshot on which two BEs were based.
2) I removed the first dependency with 'zfs promote'.
3) I removed the second dependency with 'zfs -pv
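The dependency-removal pattern being described looks roughly like this; the dataset names are hypothetical:
# promoting a clone (here a BE) moves the shared snapshot over to it,
# so the former parent no longer depends on that snapshot:
zfs promote rpool/ROOT/be2
# once no remaining clone depends on it, the (now relocated) corrupt snapshot can be destroyed:
zfs destroy rpool/ROOT/be2@snap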
Hi,
I have been having problems with reboots. It usually happens when I am either
sending or receiving data on the server; it can be over CIFS, HTTP, or NNTP, so it
could be a networking problem, but they directed me here or to CIFS. But as it
happens when I'm not using CIFS (but the service is st
Hey,
On Sat, Oct 31, 2009 at 5:03 PM, Victor Latushkin
wrote:
> Donald Murray, P.Eng. wrote:
>>
>> Hi,
>>
>> I've got an OpenSolaris 2009.06 box that will reliably panic whenever
>> I try to import one of my pools. What's the best practice for
>> recovering (before I resort to nuking the pool an
Donald Murray, P.Eng. wrote:
Hi,
I've got an OpenSolaris 2009.06 box that will reliably panic whenever
I try to import one of my pools. What's the best practice for
recovering (before I resort to nuking the pool and restoring from
backup)?
Could you please post the panic stack backtrace?
There a
Hi,
I've got an OpenSolaris 2009.06 box that will reliably panic whenever
I try to import one of my pools. What's the best practice for
recovering (before I resort to nuking the pool and restoring from
backup)?
There are two pools on the system: rpool and tank. The rpool seems to
be fine, since I
Dear all, Victor,
I am most happy to report that the problems were somewhat hardware-related,
caused by a damaged/dangling SATA cable which apparently caused long delays
(sometimes working, disk on, disk off, ...) during normal zfs operations. Why
the -f produced a kernel panic I'm unsure. In
Marc Althoff wrote:
We have the same problem since today. The pool was to be "renamed" with
zpool export; after an import it didn't come back online. An import -f results in a kernel
panic.
zpool status -v also reports a degraded drive.
I'll also try to supply some traces and logs.
Pl
We have the same problem since today. The pool was to be "renamed" with
zpool export; after an import it didn't come back online. An import -f results
in a kernel panic.
zpool status -v also reports a degraded drive.
I'll also try to supply some traces and logs.
I have re-run zdb -l /dev/dsk/c9t4d0s0 as I should have the first time (thanks
Nicolas).
Attached output.
# zdb -l /dev/dsk/c9t4d0s0
LABEL 0
version=14
nam
Hi Victor, I have tried to re-attach the detail from /var/adm/messages
Oct 11 17:16:55 opensolaris unix: [ID 836849 kern.notice]
Oct 11 17:16:55 opensolaris ^Mpanic[cpu0]/thread=ff000b6f7c60:
Oct 11 17:16:55 opensolaris genunix: [ID 361072 kern.noti
On 11.10.09 12:59, Darren Taylor wrote:
I have searched the forums and Google far and wide, but cannot find a fix for
the issue I'm currently experiencing. Long story short: I'm now at a point where
I cannot even import my zpool (zpool import -f tank) without causing a kernel
panic.
I'm running OpenS
Hi Ian, I'm currently downloading build 124 to see if that helps... the
download is running a bit slowly so I won't know until later tomorrow.
Just an update that I have also tried (forgot to mention above):
* Pulling out each disk - tried mounting in a degraded state - same kernel
panic
*
Darren Taylor wrote:
I have searched the forums and Google far and wide, but cannot find a fix for
the issue I'm currently experiencing. Long story short: I'm now at a point where
I cannot even import my zpool (zpool import -f tank) without causing a kernel
panic.
I'm running OpenSolaris snv_111b and
I have searched the forums and Google far and wide, but cannot find a fix for
the issue I'm currently experiencing. Long story short: I'm now at a point where
I cannot even import my zpool (zpool import -f tank) without causing a kernel
panic.
I'm running OpenSolaris snv_111b and the zpool is version 1
Richard Elling wrote:
> Chris Gerhard wrote:
>> My home server running snv_94 is tipping with the same assertion when
>> someone lists a particular file:
>>
>
> Failed assertions indicate software bugs. Please file one.
We learn something new every day!
Gavin
Richard Elling wrote:
Chris Gerhard wrote:
My home server running snv_94 is tipping with the same assertion when
someone lists a particular file:
Failed assertions indicate software bugs. Please file one.
http://en.wikipedia.org/wiki/Assertion_(computing)
A colleague pointed out that it i
Chris Gerhard wrote:
> My home server running snv_94 is tipping with the same assertion when someone
> lists a particular file:
>
Failed assertions indicate software bugs. Please file one.
http://en.wikipedia.org/wiki/Assertion_(computing)
-- richard
> ::status
> Loading modules: [ unix genu
My home server running snv_94 is tipping with the same assertion when someone
lists a particular file:
::status
Loading modules: [ unix genunix specfs dtrace cpu.generic
cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci ufs md ip hook neti sctp arp
usba qlc fctl nca lofs zfs audiosup sd cpc random
Do you guys have any more information about this? I've tried the offset
methods, zfs_recover, aok=1, mounting read-only, yada yada, still with no luck.
I have about 3TB of data on my array, and I would REALLY hate to lose it.
Thanks!
On Sun, Nov 2, 2008 at 4:30 PM, Mark Shellenbaum
<[EMAIL PROTECTED]> wrote:
> I believe this panic shouldn't happen on OpenSolaris. It has some extra
> protection to prevent the panic that doesn't exist in the S10 code base.
>
> Are there any ACLs on the parent directory that would be inherited t
Matthew R. Wilson wrote:
> I can reliably reproduce this panic with a similar stack trace on a
> newly installed Solaris 10 10/08 system (I know, not OpenSolaris but
> it appears to be the same problem). I just opened a support case w/
> Sun but then discovered what appear to be the specific steps
I can reliably reproduce this panic with a similar stack trace on a
newly installed Solaris 10 10/08 system (I know, not OpenSolaris but
it appears to be the same problem). I just opened a support case w/
Sun but then discovered what appear to be the specific steps for me to
reproduce it.
My setup
Hi,
I try to destroy a snapshot1 on opensolaris
SunOS storage11 5.11 snv_98 i86pc i386 i86pc
and my box reboots, leaving a crash file in /var/crash/storage11.
This is reproducible... for this one snapshot1 - other
snapshots were destroyable (without a crash).
How can I help somebody track down th
David Bartley wrote:
> On Tue, Sep 9, 2008 at 11:43 AM, Mark Shellenbaum
> <[EMAIL PROTECTED]> wrote:
>> David Bartley wrote:
>>> Hello,
>>>
>>> We're repeatedly seeing a kernel panic on our disk server. We've been
>>> unable to determine exactly how to reproduce it, but it seems to occur
>>> fairl
David Bartley wrote:
> Hello,
>
> We're repeatedly seeing a kernel panic on our disk server. We've been unable
> to determine exactly how to reproduce it, but it seems to occur fairly
> frequently (a few times a day). This is happening on both snv91 and snv96.
> We've run 'zpool scrub' and this
Hello,
We're repeatedly seeing a kernel panic on our disk server. We've been unable to
determine exactly how to reproduce it, but it seems to occur fairly frequently
(a few times a day). This is happening on both snv91 and snv96. We've run
'zpool scrub' and this has reported no errors. I can tr
A little update on the subject.
With great help from Victor Latushkin, the content of the pools has been recovered.
The cause of the problem is still under investigation, but what is clear is that
both config objects were corrupted.
What has been done to recover the data:
Victor has a zfs module which allows
Borys Saulyak wrote:
> May I remind you that the issue occurred on Solaris 10, not on OpenSolaris.
>
>
I believe you. If you review the life cycle of a bug,
http://www.sun.com/bigadmin/hubs/documentation/patch/patch-docs/abugslife.pdf
then you will recall that bugs are fixed in NV and then
back
This panic message seems consistent with bugid 6322646, which was
fixed in NV b77 (post S10u5 freeze).
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6322646
-- richard
Borys Saulyak wrote:
>> From what I can predict, and *nobody* has provided
> any panic messages to confirm, ZF
> From what I can predict, and *nobody* has provided
> any panic messages to confirm, ZFS likely had difficulty
> writing. For Solaris 10u5
The panic stack looks pretty much the same as the panic on import, and cannot be
correlated to a write failure:
Aug 5 12:01:27 omases11 unix: [ID 836849 kern.no
Borys Saulyak wrote:
>> Suppose that ZFS detects an error in the first case. It can't tell
>> the storage array "something's wrong, please fix it" (since the
>> storage array doesn't provide for this with checksums and intelligent
>> recovery), so all it can do is tell the user "this f
> Ask your hardware vendor. The hardware corrupted your
> data, not ZFS.
Right, that's all because of these storage vendors. All problems come from
them! Never from ZFS :-) I got a similar answer from them: ask Sun, ZFS is
buggy. Our storage is always fine. That is really ridiculous! People pay hu
> Suppose that ZFS detects an error in the first case. It can't tell the
> storage array "something's wrong, please fix it" (since the storage array
> doesn't provide for this with checksums and intelligent recovery), so all
> it can do is tell the user "this file is corrupt, recover it f