Well, I see no takers or even a hint...
I've been playing with zdb to try to examine the pool, but I get:
# zdb -b pool4_green
zdb: can't open pool4_green: Bad exchange descriptor
# zdb -d pool4_green
zdb: can't open pool4_green: Bad exchange descriptor
So I'm not sure how to debug using zdb.
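In case it is useful to anyone hitting the same wall, zdb can sometimes still read a pool that won't open through the normal path by going straight to the on-disk labels. A rough sketch, with a placeholder device name, and assuming the build's zdb supports -e; whether it works depends on how badly the pool is damaged:
# zdb -l /dev/dsk/c0t0d0s0     # dump the four ZFS labels from one of the pool's disks
# zdb -e pool4_green           # treat the pool as exported and read its config from the labels
# zdb -e -bb pool4_green       # block traversal/statistics against that exported-pool view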
On Jul 6, 2010, at 6:30 PM, Brian Kolaci wrote:
> Well, I see no takers or even a hint...
>
> I've been playing with zdb to try to examine the pool, but I get:
>
> # zdb -b pool4_green
> zdb: can't open pool4_green: Bad exchange descriptor
>
> # zdb -d pool4_green
> zdb: can't open pool4_green
On Jul 4, 2010, at 4:58 AM, Andrew Jones wrote:
> Victor,
>
> The zpool import succeeded on the next attempt following the crash that I
> reported to you by private e-mail!
From the thread list it looked like the system was pretty low on memory with stacks
of userland stuff swapped out, hence s
On Jul 3, 2010, at 1:20 PM, George wrote:
>> Because of that I'm thinking that I should try
>> to change the hostid when booted from the CD to be
>> the same as the previously installed system to see if
>> that helps - unless that's likely to confuse it at
>> all...?
>
> I've now tried changing
On Jul 4, 2010, at 1:33 AM, R. Eulenberg wrote:
> R. Eulenberg web.de> writes:
>
>>> I was setting up a new system (osol 2009.06 and updating to
>>> the latest version of osol/dev - snv_134 - with
>>> deduplication) and then I tried to import my backup zpoo
> I think it is quite likely to be possible to get readonly access to
> your data, but this requires modified ZFS binaries. What is your pool
> version? What build do you have installed on your system disk or
> available as LiveCD?
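For anyone following along: even when the pool won't import, its version can be read straight off the vdev labels, and the versions a given build supports are listed by zpool upgrade. The device name below is just an example:
# zpool upgrade -v                          # pool versions supported by the running build
# zdb -l /dev/dsk/c0t0d0s0 | grep version   # pool version recorded in the on-disk label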
Sorry, but does this mean if ZFS can't write to the drives, access
On Jun 28, 2010, at 11:27 PM, George wrote:
> Again this core dumps when I try to do "zpool clear storage2"
>
> Does anyone have any suggestions what would be the best course of action now?
Do you have any crashdumps saved? The first one is the most interesting one...
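If it helps, this is roughly where Solaris keeps them, assuming dumpadm/savecore are at their defaults (hostname and dump numbers are placeholders):
# dumpadm                                        # shows the dump device and savecore directory
# ls -l /var/crash/`hostname`                    # saved dumps, e.g. vmdump.0 or unix.0 + vmcore.0
# savecore -f /var/crash/`hostname`/vmdump.0     # expand a compressed vmdump into unix.N/vmcore.N
# cd /var/crash/`hostname` && echo ::status | mdb unix.0 vmcore.0   # quick look at the panic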
Daniel Carosone wrote:
> Something similar would be useful, and much more readily achievable,
> from ZFS from such an application, and many others. Rather than a way
> to compare reliably between two files for identity, I'd like a way to
> compare identity of a single file between two points in t
Hello list,
This has probably been discussed; however, I would like to bring it up again
so that the powers that be know someone else is looking for this feature.
I would like to be able to shrink a pool and remove a non-redundant disk.
Is this something that is in the works?
It would be fanta
- Original Message -
> Hello list,
>
> This has probably been discussed; however, I would like to bring it up
> again so that the powers that be know someone else is looking for
> this feature.
>
> I would like to be able to shrink a pool and remove a non-redundant
> disk.
>
> Is this s
The pool is not redundant, so I would suppose, yes, it is RAID-1 on the
software level.
I have a few drives, which are on a specific array, which I would like to
remove from this pool.
I have discovered the "replace" command, and I am going to try and replace,
1 for 1, the drives I would like to
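For what it's worth, a straight one-for-one swap is just zpool replace per disk; the pool and device names below are placeholders, and note the new disk has to be at least as large as the one it replaces:
# zpool replace tank c1t3d0 c2t0d0   # resilver the old disk's data onto the new one
# zpool status tank                  # watch the resilver; the old disk detaches when it completes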
I'm just about to start using ZFS in a RAIDZ configuration for a home file
server (mostly holding backups), and I wasn't clear on what happens if data
corruption is detected while resilvering. For example: let's say I'm using
RAIDZ1 and a drive fails. I pull it and put in a new one. While res
I have two different servers with ZFS root, but both of them have a different
mountpoint for rpool/ROOT: one is /rpool/ROOT and the other is legacy. What's
the difference between the two, and which is the one we should keep?
And why are there 3 different zfs datasets: rpool, rpool/ROOT and
rpool/ROOT/zfs
Hello list,
I posted this a few days ago on the opensolaris-discuss@ list.
I am posting here because there may be too much noise on other lists.
I have been without this zfs set for a week now.
My main concern at this point is: is it even possible to recover this zpool?
How does the metadata work? What to
- Original Message -
> The pool is not redundant, so I would suppose, yes, it is RAID-1 on
> the software level.
>
> I have a few drives, which are on a specific array, which I would like
> to remove from this pool.
>
> I have discovered the "replace" command, and I am going to try and
>
On 07/ 6/10 10:56 AM, Ketan wrote:
> I have two different servers with ZFS root, but both of them have a different
> mountpoint for rpool/ROOT: one is /rpool/ROOT and the other is legacy.
It should be legacy.
> What's the difference between the two and which is the one we should keep?
> And why there is 3
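A quick sketch of checking and fixing that, assuming the usual rpool layout (the boot-environment name under rpool/ROOT will differ per system):
# zfs list -r -o name,mountpoint rpool   # shows rpool, rpool/ROOT and the BE datasets under it
# zfs get mountpoint rpool/ROOT
# zfs set mountpoint=legacy rpool/ROOT   # rpool/ROOT is only a container; the BE under it mounts at /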
- Original Message -
> I'm just about to start using ZFS in a RAIDZ configuration for a home
> file server (mostly holding backups), and I wasn't clear on what
> happens if data corruption is detected while resilvering. For example:
> let's say I'm using RAIDZ1 and a drive fails. I pull it
I tried zpool replace; however, the new drive is slightly smaller, and even
with a -f, it refuses to replace the drive.
I guess I will have to export the pool and destroy this one to get my drives
back.
Still would like the ability to shrink a pool.
-
Cassandra
(609) 243-2413
Unix Administrator
"F
>
> Good. Run 'zpool scrub' to make sure there are no
> other errors.
>
> regards
> victor
>
Yes, scrubbed successfully with no errors. Thanks again for all of your
generous assistance.
/AJ
Hi all
With several messages in here about troublesome zpools, would there be a good
reason to be able to fsck a pool? As in, check the whole thing instead of
having to boot into live CDs and whatnot?
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.n
- Original Message -
> From: "Roy Sigurd Karlsbakk"
> To: "OpenSolaris ZFS discuss"
> Sent: Tuesday, 6 July, 2010 6:35:51 PM
> Subject: [zfs-discuss] ZFS fsck?
> Hi all
>
> With several messages in here about troublesome zpools, would there be
> a good reason to be able to fsck a pool? A
On Tue, 6 Jul 2010, Roy Sigurd Karlsbakk wrote:
> Hi all
> With several messages in here about troublesome zpools, would there be a
> good reason to be able to fsck a pool? As in, check the whole thing
> instead of having to boot into live CDs and whatnot?
You can do this with "zpool scrub". It vi
> You can do this with "zpool scrub". It visits every allocated block
> and verifies that everything is correct. It's not the same as fsck in
> that scrub can detect and repair problems with the pool still online
> and all datasets mounted, whereas fsck cannot handle mounted
> filesystems.
>
> I
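A minimal example of the workflow described above (pool name is a placeholder):
# zpool scrub tank       # walk every allocated block, verifying and repairing checksums
# zpool status -v tank   # scrub progress plus any files with unrecoverable errors
# zpool status -x        # one-line health summary across pools
# zpool clear tank       # reset the error counters once the cause is dealt with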
On Tue, 6 Jul 2010, Roy Sigurd Karlsbakk wrote:
> what I'm saying is that there are several posts in here where the only
> solution is to boot onto a live CD and then do an import, due to
> metadata corruption. This should be doable from the installed system
Ah, I understand now.
A couple of thing
Release Notes information:
If there are new features, each release is added to
http://www.nexenta.com/corp/documentation/release-notes-support.
If just bug fixes, then the Changelog listing is updated:
http://www.nexenta.com/corp/documentation/nexentastor-changelog
Regards,
Spandana
Hi Sam,
In general, FreeBSD uses different device naming conventions and power
failures seem to clobber disk labeling. The "I/O error" message also
points to problems accessing these disks.
I'm not sure if this helps, but I see that the 6 disks from the zdb -e
output are indicated as c7t0d0p0 --
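If it is useful, one way to cross-check the device-naming point above is to read the labels off the devices directly and to point the import scan at an explicit device directory (device names are just the ones from the zdb output):
# zdb -l /dev/dsk/c7t0d0p0   # labels via the p0 (whole-disk, x86) device node
# zdb -l /dev/dsk/c7t0d0s0   # compare against the Solaris slice
# zpool import -d /dev/dsk   # scan a specific directory for importable pools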
On Tue, Jul 6, 2010 at 4:06 PM, Spandana Goli wrote:
> Release Notes information:
> If there are new features, each release is added to
> http://www.nexenta.com/corp/documentation/release-notes-support.
>
> If just bug fixes, then the Changelog listing is updated:
> http://www.nexenta.com/corp/doc
Hi all,
I've noticed something strange in the throughput in my zpool between
different snv builds, and I'm not sure if it's an inherent difference
in the build or a kernel parameter that is different in the builds.
I've set up two similar machines and this happens with both of them.
Each system ha
On Jul 6, 2010, at 7:30 AM, Brian Kolaci wrote:
> Well, I see no takers or even a hint...
>
> I've been playing with zdb to try to examine the pool, but I get:
>
> # zdb -b pool4_green
> zdb: can't open pool4_green: Bad exchange descriptor
For the archives, EBADE "Bad exchange descriptor" was r
On Jul 6, 2010, at 10:02 AM, Sam Fourman Jr. wrote:
> Hello list,
>
> I posted this a few days ago on the opensolaris-discuss@ list
> I am posting here because there may be too much noise on other lists
>
> I have been without this zfs set for a week now.
> My main concern at this point is: is it even
Hello All,
I've recently run into an issue I can't seem to resolve. I have been
running a zpool populated with two RAID-Z1 VDEVs and a file on the
(separate) OS drive for the ZIL:
  raidz1-0   ONLINE
    c12t0d0  ONLINE
    c12t1d0  ONLINE
    c12t2d0  ONLINE
    c12t3d0  ONLINE
  raidz1-
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Cassandra Pugh
>
> I would like to be able to shrink a pool and remove a non-redundant
> disk.
>
> Is this something that is in the works?
I think the request is to remove vdevs from a pool.
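For context, on current bits zpool remove only handles hot spares, cache devices and log devices; there is no way yet to evacuate a normal top-level vdev. A hedged sketch with placeholder names:
# zpool remove tank c3t0d0   # works if c3t0d0 is a spare, cache or log device
# zpool remove tank c1t0d0   # refused on these builds if c1t0d0 is a data vdev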
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Andrew Kener
>
> the OS hard drive crashed [and log device]
Here's what I know: with zpool version >= 19, if you import this, it will prompt you
to confirm the loss of the log device, and then it will
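So the recovery would look roughly like this (pool name is a placeholder, and the exact prompt wording depends on the build):
# zpool import        # list the pools visible on the attached disks
# zpool import tank   # expect the warning about the missing log device and confirm
# zpool status tank   # verify the pool is up without the log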
Under FreeBSD I've seen zpool scrub sustain nearly 500 MB/s in pools with large
files (a pool with eight MIRROR vdevs on two Silicon Image 3124 controllers).
You need to carefully look for bottlenecks in the hardware. You don't indicate
how the disks are attached. I would measure the total ban
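For the measurement itself, something along these lines while the scrub is running usually shows where it tops out (pool name is a placeholder):
# zpool iostat -v tank 5   # per-vdev and per-disk throughput every 5 seconds
# iostat -xnz 5            # per-device utilization, service times and bandwidth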