galenz: "I am on different hardware, thus I cannot restore the drive
configuration exactly."
Actually, you can learn most of it, if not all of what you need.
Do "zpool import -f" with no pool name and it should dump the issue with the
pool (what is making it fail). If that doesn't contain private information,
can you post the output of "zpool import -f" for us to see?
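A minimal sketch of that diagnostic sequence (nothing here actually imports
the pool yet, it only scans and reports):

  # With no pool name, zpool import just scans attached devices and
  # prints each importable pool's state and any errors it sees.
  zpool import

  # -f forces the check even if the pool still looks active on the old host.
  zpool import -f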
One thing I ran into recently is that if the drive arrangement was changed
(like drives swapped) it can't recover. I moved an 8 drive array recently, and
didn't worry about the order of the drives. It could not be mounted without
reordering the drives back to their original positions.
ttabbal:
If I understand correctly, raidz1 is 1 drive of protection and the available
space is (drives - 1). Raidz2 is 2 drives of protection and space is
(drives - 2), etc. Same for raidz3 being 3 drives of protection.
Everything I've seen says you should stay around 6-9 drives for raidz, so don't
build a single vdev much wider than that; see the sketch below.
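To make the arithmetic concrete, a hypothetical 6-disk raidz2 (device names
invented for the example):

  # 6 disks with 2 of parity: usable space is roughly (6 - 2) x disk size,
  # so six 2 TB disks yield about 8 TB before metadata overhead.
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5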
If you are asking whether anyone has experienced two drive failures
simultaneously, the answer is yes.
It has happened to me (at home) and to at least one client that I can
remember. In both cases, I was able to dd off one of the failed disks (the one
with just bad sectors, or fewer bad sectors) and reconstruct the pool.
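The rescue copy would have looked something like this (devices hypothetical;
conv=noerror,sync tells dd to keep going past read errors and pad the
unreadable blocks with zeroes):

  # Clone the less-damaged failed disk onto a spare, skipping bad sectors.
  dd if=/dev/da3 of=/dev/da9 bs=64k conv=noerror,sync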
Written by jktorn:
>Have you tried build 128 which includes pool recovery support?
>
>This is because the FreeBSD hostname (and hostid?) is recorded in the
>labels along with the active pool state.
>
>It does not work that way at the moment, though readonly import is
>quite a useful option that can be tried.
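The recovery and read-only imports mentioned there look roughly like this
(pool name assumed; -F rewinds the pool to its last consistent transaction
group, and the readonly property requires a build that supports it):

  # Attempt recovery by discarding the last few transactions.
  zpool import -F tank

  # Or import read-only, to copy data off without writing anything.
  zpool import -o readonly=on tank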
devzero: it's for when you have an exported pool whose log disk is gone and
you want to import the pool.
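As an aside, much later ZFS releases grew a native flag for exactly this
case; it did not exist at the time of this thread:

  # -m imports a pool even when its separate log device is missing.
  zpool import -m tank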
Here are the changes to make logfix compile on dev-129 (note that the forum
has stripped the angle-bracketed header names from the #include lines):

--- logfix.c.2009-04-26	2009-12-18 11:39:40.917435361 -0800
+++ logfix.c	2009-12-18 12:19:27.507337246 -0800
@@ -20,6 +20,7 @@
 #include
 #include
+#include
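Applying it is the usual patch(1) routine (diff filename assumed):

  # Apply the diff to the logfix source, then rebuild as before.
  patch logfix.c < logfix-dev129.diff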
It was created on AMD64 FreeBSD with 8.0RC2 (which used version 13 of ZFS,
IIRC). At some point I knocked it out (exported it) somehow; I don't remember
doing so intentionally. So I can't run commands like zpool replace, since
there are no imported pools.
It says it was last used by the FreeBSD box, but the FreeBSD box can't import
it either.
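For context, the replace that can't be run here normally assumes an imported
pool (names hypothetical):

  # Both of these need the pool imported; with nothing imported,
  # zpool has no pool to operate on.
  zpool status tank
  zpool replace tank da3 da9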
I have a 9 drive system (four mirrors of two disks and one hot spare) with a
10th SSD drive for ZIL.
The ZIL is corrupt.
I've been unable to recover using FreeBSD 8, OpenSolaris x86, or logfix
(http://github.com/pjjw/logfix).
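For readers picturing the layout, a rough reconstruction of how such a pool
would be created (device names invented):

  # Four two-way mirrors, one hot spare, and an SSD as a separate ZIL.
  zpool create tank \
      mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7 \
      spare da8 \
      log ada0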
In FreeBSD 8.0RC3 and below (which use v13 ZFS):
1) Boot single user