Do you guys have any more information about this? I've tried the offset
methods, zfs_recover, aok=1, mounting read-only, yada yada, still with no luck.
I have about 3 TB of data on my array, and I would REALLY hate to lose it.
Thanks!
--
This message posted from opensolaris.org
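For anyone searching the archives later: the tunables mentioned above are normally
set in /etc/system and the box rebooted. This is only a sketch of that step (and,
as noted, it did not help in this case):

  # /etc/system (reboot afterwards)
  set zfs:zfs_recover=1   # make some otherwise-fatal ZFS errors non-fatal during import
  set aok=1               # downgrade certain kernel assertion failures from panic to warning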
A little update on the subject.
With the great help of Victor Latushkin, the content of the pools has been recovered.
The cause of the problem is still under investigation, but what is clear is that
both config objects were corrupted.
What has been done to recover the data:
Victor has a zfs module which allows
Borys Saulyak wrote:
> May I remind you that the issue occurred on Solaris 10, not on OpenSolaris.
>
>
I believe you. If you review the life cycle of a bug,
http://www.sun.com/bigadmin/hubs/documentation/patch/patch-docs/abugslife.pdf
then you will recall that bugs are fixed in NV and then backported.
This panic message seems consistent with bugid 6322646, which was
fixed in NV b77 (post S10u5 freeze).
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6322646
-- richard
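For reference, a quick way to see which build a box is actually running (standard
Solaris commands; what they print depends on the install):

  uname -v          # kernel version string (the Nevada build, e.g. snv_77, or the S10 kernel patch level)
  cat /etc/release  # the Solaris/Nevada release the system was installed from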
Borys Saulyak wrote:
>> From what I can predict, and *nobody* has provided
>> any panic
>> messages to confirm, ZFS
> From what I can predict, and *nobody* has provided
> any panic
> messages to confirm, ZFS likely had difficulty
> writing. For Solaris 10u5
The panic stack looks pretty much the same as the panic on import, and cannot be
correlated to a write failure:
Aug 5 12:01:27 omases11 unix: [ID 836849 kern.no
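For anyone who wants to pull the same stack themselves, this is roughly how it is
done with mdb on a saved crash dump (the directory and dump number below are just
placeholders for this host):

  cd /var/crash/omases11   # wherever savecore put the dump files
  mdb -k unix.0 vmcore.0   # load the kernel crash dump (dump 0 assumed)
  ::status                 # panic string and summary
  $C                       # stack trace of the panicking thread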
Borys Saulyak wrote:
>> Suppose that ZFS detects an error in the first
>> case. It can't tell
>> the storage array "something's wrong, please
>> fix it" (since the
>> storage array doesn't provide for this with
>> checksums and intelligent
>> recovery), so all it can do is tell the user
>> "this f
> Ask your hardware vendor. The hardware corrupted your
> data, not ZFS.
Right, that's all because of these storage vendors. All problems come from
them! Never from ZFS :-) I have a similar answer from them: ask Sun, ZFS is
buggy. Our storage is always fine. That is really ridiculous! People pay hu
> Suppose that ZFS detects an error in the first
> case. It can't tell
> the storage array "something's wrong, please
> fix it" (since the
> storage array doesn't provide for this with
> checksums and intelligent
> recovery), so all it can do is tell the user
> "this file is corrupt,
> recover it f
On Thu, 14 Aug 2008, Miles Nordin wrote:
>> "mb" == Marc Bevand <[EMAIL PROTECTED]> writes:
>
>mb> Ask your hardware vendor. The hardware corrupted your data,
>mb> not ZFS.
>
> You absolutely do NOT have adequate basis to make this statement.
Unfortunately I was unable to read your en
Miles Nordin wrote:
>> "mb" == Marc Bevand <[EMAIL PROTECTED]> writes:
>
> mb> Ask your hardware vendor. The hardware corrupted your data,
> mb> not ZFS.
>
> You absolutely do NOT have adequate basis to make this statement.
>
> I would further argue that you are probably wrong, and t
> "mb" == Marc Bevand <[EMAIL PROTECTED]> writes:
mb> Ask your hardware vendor. The hardware corrupted your data,
mb> not ZFS.
You absolutely do NOT have adequate basis to make this statement.
I would further argue that you are probably wrong, and that I think
based on what we know t
To further clarify Will's point...
Your current setup provides excellent hardware protection, but absolutely no
data protection.
ZFS provides excellent data protection when it has multiple copies of the
data blocks (more than one hardware device).
Combine the two, give more than one hardware device to ZFS, and you get both.
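As a concrete (made-up) illustration of that combination: two LUNs carved from the
same RAID-protected array and mirrored by ZFS, so ZFS has a second copy to repair
from. The device names below are invented:

  zpool create tank mirror c0t0d0 c0t1d0   # two array LUNs, mirrored at the ZFS level
  zpool status tank                        # top-level vdev now shows "mirror", so ZFS can self-heal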
On Thu, Aug 14, 2008 at 07:42, Borys Saulyak <[EMAIL PROTECTED]> wrote:
> I've got, let's say, 10 disks in the storage. They are currently in RAID5
> configuration and given to my box as one LUN. You suggest to create 10 LUNs
> instead, and give them to ZFS, where they will be part of one raidz, r
> I would recommend you to make multiple LUNs visible
> to ZFS, and create
So, you are saying that ZFS will cope better with failures than any other
storage system, right? I'm just trying to imagine...
I've got, let's say, 10 disks in the storage. They are currently in RAID5
configuration and given to my box as one LUN.
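For that 10-disk scenario, the suggestion being made would look roughly like this
(a sketch only; the device names are invented, and raidz2 is picked here simply to
keep two disks' worth of parity, plain raidz would also work):

  # expose the 10 disks as 10 LUNs and let ZFS build the parity itself
  zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
                           c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0
  zpool status tank   # ZFS can now detect AND repair corrupted blocks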
Borys Saulyak eumetsat.int> writes:
>
> > Your pools have no redundancy...
>
> Box is connected to two fabric switches via different HBAs, storage is
> RAID5, MPxIO is ON, and after all that my pools have no redundancy?!?!
As Darren said: no, there is no redundancy that ZFS can use. It is impor
Borys Saulyak wrote:
>> Your pools have no redundancy...
> Box is connected to two fabric switches via different HBAs, storage is RAID5,
> MPxIO is ON, and after all that my pools have no redundancy?!?!
Not redundancy that ZFS can see and use; all of that is just a single disk as
far as ZFS is concerned.
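If staying on a single LUN is unavoidable, one partial mitigation (assuming the
installed ZFS version already supports the copies property) is to keep two copies
of each data block on that LUN. It helps against isolated bad blocks, not against
losing the whole LUN or the kind of pool-wide corruption discussed here; the
dataset name is made up:

  zfs set copies=2 private/data   # store each data block twice on the single LUN
  zfs get copies private/data     # verify the setting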
>>
> Your pools have no redundancy...
Box is connected to two fabric switches via different HBAs, storage is RAID5,
MPxIO is ON, and after all that my pools have no redundancy?!?!
> ...and got corrupted, therefore there is nothing ZFS
This is exactly what I would like to know: HOW could this happen?
There is a chance that a bug fix or change has been made which
will help you to recover from this. I suggest getting the latest SXCE
DVD, booting single user, and attempting an import.
Note: you may see a message indicating that you can upgrade the
pool. Do not upgrade the pool if you intend to keep using it on Solaris 10.
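A rough outline of that attempt, assuming the pool name 'private' shown earlier in
the thread (nothing exotic, just the standard import flags):

  # after booting the SXCE DVD into single-user mode
  zpool import                      # list pools visible to this kernel
  zpool import -f -R /mnt private   # force-import under an alternate root
  zpool status -v private           # check for errors before touching the data
  # and deliberately do NOT run 'zpool upgrade'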
Borys Saulyak eumetsat.int> writes:
> root omases11:~[8]#zpool import
> [...]
> pool: private
> id: 3180576189687249855
> state: ONLINE
> action: The pool can be imported using its name or numeric identifier.
> config:
>
> private ONLINE
> c7t60060160CBA21000A6D22553CA91DC11d0 ONLINE
Hi,
I have a problem with Solaris 10. I know that this forum is for OpenSolaris, but
maybe someone will have an idea.
My box is crashing on any attempt to import a zfs pool. The first crash happened
on an export operation, and since then I cannot import the pool anymore due to
kernel panics. Is there any way o
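Since replies further up the thread ask for the actual panic messages, it is worth
confirming that crash dumps are being captured before the next import attempt
(standard Solaris tools; the crash directory shown is the default):

  dumpadm                        # confirm a dump device and savecore directory are configured
  ls /var/crash/`hostname`       # unix.N / vmcore.N pairs saved after each panic
  grep panic /var/adm/messages   # the panic string also lands in the system log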