Stephen Green wrote:
Hi, folks. I just built a new box and I'm running the latest OpenSolaris
bits. uname says:
SunOS blue 5.11 snv_111b i86pc i386 i86pc Solaris
I just did an image-update last night, but I was seeing this problem in
111a too.
I built myself a pool out of four 1TB disks (W
I was replicating a filesystem with an application that unmounts
filesystems before sending the snapshots (the unmount is used to check
if the fs is busy before attempting the send).
One filesystem failed to remount and a couple of child filesystems were
sent with their parent unmounted.
O
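For reference, the unmount-check-then-send flow described above can be sketched roughly as follows; the pool, filesystem, snapshot and receiving host names are hypothetical, since the actual script wasn't posted:
#!/bin/sh
# Sketch: unmount to confirm the filesystem is idle, send the snapshot, remount.
FS=tank/data
SNAP=nightly
if zfs unmount "$FS"; then
        # Unmount succeeded, so nothing had the filesystem busy.
        zfs send "$FS@$SNAP" | ssh backuphost zfs receive -F backup/data
        zfs mount "$FS"
else
        echo "$FS is busy, skipping send" >&2
fi
If the remount at the end fails, the filesystem simply stays unmounted, which matches the symptom described.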
Did anyone ever have success with this?
I'm trying to add a USB flash device as rpool cache, and am hitting the same
problem, even after working through the SMI/EFI label and other issues above.
r...@asura:~# zpool add rpool cache /dev/dsk/c6t0d0s0
invalid vdev specification
use '-f' to override
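Before reaching for -f, the usual checks are the device's label and whether anything else holds it open; a rough sketch using the device name from the post (whether the forced add is actually safe depends on why the vdev was rejected):
# Inspect the current label/partition table on the flash device
prtvtoc /dev/rdsk/c6t0d0s0
# The error message itself offers the override once you're sure nothing else uses it
zpool add -f rpool cache c6t0d0s0
# Confirm the cache device shows up under the pool
zpool status rpool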
Ian Collins wrote:
I was replicating a filesystem with an application that unmounts
filesystems before sending the snapshots (the unmount is used to check
if the fs is busy before attempting the send).
One filesystem failed to remount and a couple of child filesystems
were sent with their pa
Hi Brad,
Brad Reese wrote:
Hello,
I've run into a problem with zpool import that seems very similar to
the following thread as far as I can tell:
http://opensolaris.org/jive/thread.jspa?threadID=70205&tstart=15
The suggested solution was to use a later version of OpenSolaris
(b99 or later) b
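For anyone landing here with the same symptom, the approach referenced in that thread boils down to booting newer bits (b99 or later, e.g. from a live CD) and retrying the import from there; roughly, with the pool name used later in this thread:
# From the newer environment, check that the pool is visible at all
zpool import
# Import it, forcing if it still appears active on the old host
zpool import -f tank
# If it imports, scrub to get a picture of any remaining damage
zpool scrub tank
zpool status -v tank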
On Jun 1, 2009, at 4:57 AM, Darren J Moffat wrote:
Stephen Green wrote:
Hi, folks. I just built a new box and I'm running the latest
OpenSolaris bits. uname says:
SunOS blue 5.11 snv_111b i86pc i386 i86pc Solaris
I just did an image-update last night, but I was seeing this
problem in 111a
I'm building my new storage server, all the parts should come in this week.
Intel XEON W3520 quad-core
12G DDR3-1333 ECC RAM
2*74G 15K rpm SAS for OS
8*1T SATA disks in raidz2, or 2 striped sets of 4-disk raidz
32G Intel X25-E SSD (may mirror it later)
2*Intel 82574L NIC
Qlogic 4Gb QLE2460 FC HBA
I
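To make the two candidate layouts concrete, here is roughly what each looks like; the c1t0d0..c1t7d0 disk names and the use of the X25-E as a separate intent log are assumptions:
# Option 1: one 8-disk raidz2 vdev (any two disks can fail)
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
# Option 2: two striped 4-disk raidz vdevs (one failure tolerated per vdev)
zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
    raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0
# Either way, the SSD can be added as a separate log device
zpool add tank log c2t0d0
Usable capacity works out about the same either way (six disks' worth); the trade-off is two-disk fault tolerance versus the better random I/O of having two vdevs.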
So we have a 24x1TB system (from Silicon Mechanics). It's using an LSI
SAS card so we don't have any hardware RAID virtual drive type options.
Solaris 10 05/09
I was hoping we could set up one large zpool (RAIDZ) from the installer
and set up a small zfs filesystem off of that for the OS leaving
Hi Ray,
* Ray Van Dolson (rvandol...@esri.com) wrote:
> So we have a 24x1TB system (from Silicon Mechanics). It's using an LSI
> SAS card so we don't have any hardware RAID virtual drive type options.
>
> Solaris 10 05/09
>
> I was hoping we could set up one large zpool (RAIDZ) from the install
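One constraint worth keeping in mind: a ZFS root pool has to be a single disk or a mirror, so the installer won't lay the OS down on raidz. The usual layout on a box like this is a small mirrored rpool on two of the disks and a separate raidz2 data pool on the rest; a sketch with hypothetical device names:
# rpool: created by the installer as a two-way mirror on two of the 24 disks
# Data pool on the remaining 22 disks, built after installation
zpool create tank \
    raidz2 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0 c0t12d0 \
    raidz2 c0t13d0 c0t14d0 c0t15d0 c0t16d0 c0t17d0 c0t18d0 c0t19d0 c0t20d0 c0t21d0 c0t22d0 c0t23d0
zfs create tank/data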
Hi list,
First off:
# cat /etc/release
Solaris 10 6/06 s10x_u2wos_09a X86
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 09 June 2006
Here's an (almost) d
"zpool clear" just clears the list of errors (and # of checksum errors)
from its stats. It does not modify the filesystem in any manner. You run
"zpool clear" to make the zpool forget that it ever had any issues.
-Paul
Jonathan Loran wrote:
Hi list,
First off:
# cat /etc/release
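For reference, the sequence being discussed looks like this with a pool named tank; as Paul says, it only resets the counters and the error list, nothing on disk is rewritten:
# Show the accumulated error counts and the list of affected files
zpool status -v tank
# Reset the counters; no data or metadata is modified
zpool clear tank
# Counters read zero again, but any underlying damage is still there
zpool status -v tank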
IDE flash DOM?
On Tue, Jun 2, 2009 at 8:46 AM, Ray Van Dolson wrote:
>
> Obviously we could throw in a couple smaller drives internally, or
> elsewhere... but are there any other options here?
>
Kinda scary then. Better make sure we delete all the bad files before
I back it up.
What's odd is we've checked a few hundred files, and most of them
don't seem to have any corruption. I'm thinking what's wrong is the
metadata for these files is corrupted somehow, yet we can read them
So we have a 24x1TB system (from Silicon Mechanics). It's using an LSI
SAS card so we don't have any hardware RAID virtual drive type options.
Solaris 10 05/09
I was hoping we could set up one large zpool (RAIDZ) from the installer
and set up a small zfs filesystem off of that for the OS leavin
On Mon, Jun 01, 2009 at 03:30:11PM -0700, Maurice Volaski wrote:
> >So we have a 24x1TB system (from Silicon Mechanics). It's using an LSI
> >SAS card so we don't have any hardware RAID virtual drive type options.
> >
> >Solaris 10 05/09
> >
> >I was hoping we could set up one large zpool (RAIDZ)
If you run "zpool scrub" on the zpool, it'll do its best to identify the
file(s) or filesystems/snapshots that have issues. Since you're on a
single zpool, it won't self-heal any checksum errors... It'll take a
long time, though, to scrub 30TB...
-Paul
Jonathan Loran wrote:
Kinda scary then
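The scrub-and-inspect sequence above, with the pool name assumed:
# Verify every allocated block in the pool against its checksum
zpool scrub tank
# Check progress, and afterwards the per-device error counts and the
# list of files with permanent errors
zpool status -v tank
# A running scrub can be stopped if it gets in the way
zpool scrub -s tank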
jlo...@ssl.berkeley.edu said:
> What's odd is we've checked a few hundred files, and most of them don't
> seem to have any corruption. I'm thinking what's wrong is the metadata for
> these files is corrupted somehow, yet we can read them just fine. I wish I
> could tell which ones are reall
On Mon, Jun 01, 2009 at 03:19:59PM -0700, Jonathan Loran wrote:
>
> Kinda scary then. Better make sure we delete all the bad files before
> I back it up.
That shouldn't be necessary. Clearing the error count doesn't disable
checksums. Every read is going to verify checksums on the file data
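One way to exercise that, assuming the suspect files aren't already cached in the ARC, is simply to read each one end to end and watch for I/O errors; /tank/data/somefile is a made-up path:
# A file whose blocks fail their checksums returns an I/O error on read
if dd if=/tank/data/somefile of=/dev/null bs=1024k 2>/dev/null; then
        echo "read back cleanly"
else
        echo "read error (likely a checksum failure)"
fi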
Hi Victor,
zdb -e -bcsvL tank
(let this go for a few hours...no output. I will let it go overnight)
zdb -e -u tank
Uberblock
magic = 00bab10c
version = 4
txg = 2435914
guid_sum = 16655261404755214374
timestamp = 1240517036 UTC = Thu Apr 23 15:03:56
Here's the output of 'zdb -e -bsvL tank' (without -c) in case it helps. I'll
post with -c if it finishes.
Thanks,
Brad
Traversing all blocks ...
block traversal size 431585053184 != alloc 431585209344 (unreachable 156160)
bp count: 4078410
bp logical: 433202894336
Well, I tried to clear the errors, but zpool clear didn't clear them.
I think the errors are in the metadata in such a way that they can't
be cleared. I'm actually a bit scared to scrub it before I grab a
backup, so I'm going to do that first. After the backup, I need to
break the mirr