> Were you able to fix this problem in the end?
Unfortunately, no. I believe Matthew Ahrens took a look at it and couldn't
find the cause or how to fix it. We had to destroy the pool and re-create it
from scratch.
Fortunately, this was during the ZFS testing period, and no critically
important data was lost.
On 12-Jun-07, at 9:02 AM, eric kustarz wrote:
Comparing a ZFS pool made out of a single disk to a single UFS
filesystem would be a fair comparison.
What does your storage look like?
The storage looks like:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0
This is an old topic, discussed many times at length. However, I
still wonder if there are any workarounds to this issue other than
disabling the ZIL, since it makes ZFS over NFS almost unusable (a
whole order of magnitude slower). My understanding is that the ball is in
NFS's court, due to ZFS's strict honoring of synchronous write semantics.
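For what it's worth, the mitigations usually suggested, short of disabling
the ZIL entirely, are to move the intent log onto a fast separate device once
the pool version supports log devices. A rough sketch, with placeholder pool
and device names:

    # add a dedicated intent-log device (fast disk or NVRAM card) to the pool
    zpool add tank log c2t0d0

    # last resort, for testing only: disable the ZIL via /etc/system and reboot;
    # NFS clients can silently lose data if the server crashes with this set
    set zfs:zil_disable = 1

The pool name, device name, and choice of log device above are illustrative,
not a recommendation for any particular hardware.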
        BYTES    0x19c000  0x11da000
        EREAD    0
        EWRITE   0
        ECKSUM   0
This will show you any read/write/cksum errors.
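The same per-device error counters also appear in the normal status output,
for example (the pool layout here is purely illustrative):

    # zpool status -v tank
      pool: tank
     state: ONLINE
     scrub: none requested
    config:

            NAME        STATE     READ WRITE CKSUM
            tank        ONLINE       0     0     0
              c0t0d0    ONLINE       0     0     0
              c0t1d0    ONLINE       0     0     0

    errors: No known data errors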
Thanks,
George
Siegfried Nikolaivich wrote:
Hello All,
I am wondering if there is a way to save the scrub results right before the
scrub is complete.
After upgrading to Solaris 10U3, I still have ZFS panicking right as the scrub
completes. The scrub results seem to be "cleared" when the system boots back up,
so I never get a chance to see them.
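One crude way to keep the last scrub report across the panic is to poll the
status into a file while the scrub runs; a minimal sketch (pool name and log
path are placeholders):

    # record scrub progress every minute so the last report survives the reboot
    while true; do
        date >> /var/tmp/tank-scrub.log
        zpool status -v tank >> /var/tmp/tank-scrub.log
        sleep 60
    done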
On 24-Oct-06, at 9:47 PM, James McPherson wrote:
Could you look through your msgbuf and/or /var/adm/messages and
find the full text of when these Illegal Request errors were
logged. That
will give an idea of where to look next.
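Something along these lines should pull the relevant entries out (the search
pattern is just a guess at the wording of the logged messages):

    # kernel message buffer since boot
    dmesg | grep -i 'illegal request'
    # persistent log, across rotations
    grep -i 'illegal request' /var/adm/messages*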
OK, it doesn't look like it's the controller; I ran some tests.
On 24-Oct-06, at 9:47 PM, James McPherson wrote:
On 10/25/06, Siegfried Nikolaivich <[EMAIL PROTECTED]> wrote:
And this is shown on the rest of the ports:
c0t?d0 Soft Errors: 6 Hard Errors: 0 Transport Errors: 0
Vendor: ATA Product: ST3320620AS Revision: CSer
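For reference, that per-device error summary comes from iostat's extended
error report, e.g.:

    # extended device error statistics for all disks, or for a single device
    iostat -En
    iostat -En c0t1d0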
On 24-Oct-06, at 9:11 PM, James McPherson wrote:
This error from the marvell88sx driver is of concern. The 10b8b decode
and disparity error messages make me think that you have a bad piece
of hardware. I hope it's not your controller, but I can't tell without more
data. You should have a look at your msgbuf and /var/adm/messages.
Hello,
I am not sure if I am posting in the correct forum, but it seems somewhat
ZFS-related, so I thought I'd share it.
While the machine was idle, I started a scrub. Around the time the scrubbing
was supposed to be finished, the machine panicked.
This might be related to the 'metadata corruption' issue.
> On Mon, Oct 09, 2006 at 11:08:14PM -0700, Matthew
> Ahrens wrote:
> You may also want to try 'fmdump -eV' to get an idea
> of what those
> faults were.
I am not sure how to interpret the results; maybe you can help me. It looks
like the following, with many more similar pages following:
% fmdump -eV
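When the verbose output runs to many pages like that, the one-line summary
form is usually easier to skim first, before going back to -eV for the
interesting entries:

    # one line per error report
    fmdump -e
    # full detail, paged
    fmdump -eV | more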
> Yeah, good catch. So this means that it seems to be
> able to read the
> label off of each device OK, and the labels look
> good. I'm not sure
> what else would cause us to be unable to open the
> pool... Can you try
> running 'zpool status -v'?
The command seems to return the same thing:
> zdb -l /dev/dsk/c0t1d0
Sorry for posting again, but I think you might have meant /dev/dsk/c0t1d0s0
there. The only difference between the following outputs is the guid for each
device.
# zdb -l /dev/dsk/c0t0d0s0
LABEL 0
--------------------------------------------
> > zdb -v tank
Forgot to add "zdb: can't open tank: error 5" to the end of the output of that
command.
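To compare what is actually written on each disk, a quick loop over the slices
can help spot a stale or mismatched label (device names are just the ones from
this thread):

    # dump the key fields from the labels of each device for a side-by-side look
    for d in /dev/dsk/c0t0d0s0 /dev/dsk/c0t1d0s0; do
        echo "=== $d ==="
        zdb -l $d | egrep 'name|state|txg|guid'
    done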
Thanks for the response Matthew.
> > I don't think it's a hardware issue because it
> seems to be still
> > working fine, and has been for months.
>
> "Working fine", except that you can't access your
> pool, right? :-)
Well, the computer and disk controller worked fine when I tried them in Linux.
> status: The pool metadata is corrupted and the pool
> cannot be opened.
Is there at least a way to determine what caused this error? Is it a hardware
issue? Is it a possible defect in ZFS?
I don't think it's a hardware issue, because it still seems to be working fine,
and has been for months.
> So, if I build it, people will want it? ;)
I think implementing this feature would help Apple adopt ZFS for Time Machine,
which is essentially a versioning FS in practice. Actually, I don't know if
Apple does this, but you can increment versions with kernel notifications of
file changes (as Spotlight does).
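Even without file-change notifications, periodic snapshots already give you
most of a Time Machine-style history on ZFS; a minimal sketch, with a
placeholder dataset name:

    # take a timestamped snapshot of the user data
    zfs snapshot tank/home@`date +%Y%m%d-%H%M`
    # list the versions available to roll back to or copy files from
    zfs list -t snapshot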
I was in the middle of doing a large transfer to my ZFS pool over CIFS. Near
the end of the transfer, the Solaris machine froze. Both ethernet links were
down.
I walked over to the machine and pushed the reset button, as it wouldn't
respond to any key-presses. After the machine booted up, I
> But for ZFS, it has been said often that it currently performs
> much better with a 64bit address space, such as that with
> Opterons and other AMD64 CPUs. I think this would play a
> bigger part in a ZFS server performing well than just MHZ
> and cache size.
I will no doubt be selecting a 64-bit CPU.
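Once the box is up, it is easy to confirm that Solaris is actually running the
64-bit kernel:

    # reports the native instruction set of the running kernel (e.g. amd64)
    isainfo -kv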
Hello,
What kind of x86 CPU does ZFS prefer? In particular, what kind of CPU is
optimal when using RAID-Z with a large number of disks (8)?
Does L2 cache size play a big role (256 KB vs. 512 KB vs. 1 MB)? Are there any
performance improvements when using a dual-core or quad-processor machine?
I am