anything changed.
Well, the good thing is you've got the volume mounted. Assuming zpool
status homespool returns a good result, you should be in luck.
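A quick way to confirm that, assuming 'homespool' really is the pool in
question (a sketch, not taken from the original thread):

# zpool status -x homespool    # prints "pool 'homespool' is healthy" when all is well
# zpool scrub homespool        # optional: re-verify every block's checksum in the background
# zpool status -v homespool    # watch scrub progress and see any errors it turns up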
Hope that helps,
--
Stuart Low
Systems Infrastructure Manager
iseek Communications Pty Ltd
Excellence in business data solutions
ph 1300 661
Heya,
> I believe Robert and Darren have offered sufficient explanations: You
> cannot be assured of committed data unless you've sync'd it. You are
> only risking data loss if your users and/or applications assume data
> is committed without seeing a completed sync, which would be a design
> error.
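As an illustration of that point, the coarse shell-level version of the rule
(a sketch; the file and path are made up, and sync(1M) asks for a flush of
everything dirty, whereas an application that needs the guarantee for one
particular file should call fsync() on it before reporting success):

$ cp nightly-dump.tar /homespool/backup/nightly-dump.tar
$ sync     # request a flush of dirty data; only after this (or a per-file
           # fsync) is it reasonable to treat the copy as committed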
Heya,
> SL> 1) Doing a zpool destroy on the volume
> SL> 2) Doing a zpool import -D on the volume
> SL> It would appear to me that primarily what has occurred is one or all of
> SL> the metadata stores ZFS has created have become corrupt? Will a zpool
> SL> import -D ignore metadata and rebuild us
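For reference, that sequence would look roughly like this (a sketch; 'ax150s'
is the pool name used elsewhere in this thread, and note that 'zpool destroy'
only works on a pool that is currently imported, while 'import -D' only finds
pools whose labels are marked destroyed):

# zpool destroy ax150s        # mark the pool destroyed in its on-disk labels
# zpool import -D             # list destroyed pools that are still recoverable
# zpool import -D -f ax150s   # re-import it, rebuilding the in-core config from the labels

It does not rewrite or repair metadata itself; it only re-reads the labels, so
any corruption in the pool's internal structures would still be there afterwards.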
Hi Jeff,
> One possibility: I've seen this happen when a system doesn't shut down
> cleanly after the last change to the pool configuration. In this case,
> what can happen is that the boot archive (an annoying implementation
> detail of the new boot architecture) can be out of date relative to
>
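If a stale boot archive really is the culprit, the usual remedy (a sketch;
assumes the standard newboot tooling and the pool name used elsewhere in the
thread) is simply to regenerate the archive so its contents match the running
system, then retry the import:

# bootadm update-archive      # rebuild the boot archive against the live root filesystem
# reboot
# zpool import -f ax150s      # retry the import once the system is back up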
50:28AM +1000, Stuart Low wrote:
> > Heya,
> >
> > > Sorry. Try 'echo vdev_load::dis | mdb -k'. This will give the
> > > disassembly for vdev_load() in your current kernel (which will help us
> > > pinpoint what vdev_load+0x69 is really doing).
Heya,
> Hmmm. This would indicate that vdev_dtl_load() is failing, which
> suggests that something vital got corrupted to the point where
> dmu_bonus_hold() or space_map_load() is failing. I don't know exactly
> how this is supposed to work, or how exactly to debug from
> here, so I'll let one o
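One way to see which of those two calls is returning the error (a sketch using
the fbt provider; start the trace in one shell, run the import in another,
then Ctrl-C to print the aggregation):

# dtrace -n 'fbt::dmu_bonus_hold:return, fbt::space_map_load:return
             /arg1 != 0/ { @[probefunc, arg1] = count(); }'

  ...and in a second terminal:

# zpool import -f ax150s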
Heya,
> Sorry. Try 'echo vdev_load::dis | mdb -k'. This will give the
> disassembly for vdev_load() in your current kernel (which will help us
> pinpoint what vdev_load+0x69 is really doing).
Ahh, thanks for that.
Attached.
Stuart
---
[EMAIL PROTECTED] ~]$ echo vdev_load::dis | mdb -k
vdev_
Heya,
> The label looks sane. Can you try running:
Not sure if I should be reassured by that, but I'll hold my hopes
high. :)
> # dtrace -n vdev_set_state:entry'[EMAIL PROTECTED], args[3], stack()] =
> count()}'
> while executing 'zpool import' and send the output? Can you also send
> '::dis'
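The list software appears to have scrubbed part of that one-liner (the '@' of
the aggregation was treated like an email address); a reconstruction of the
kind of probe being asked for (a sketch, since the exact arguments in the
original are lost; here the aggregation keys on the new state and aux
arguments of vdev_set_state):

# dtrace -n 'vdev_set_state:entry { @[args[2], args[3], stack()] = count(); }'

Run 'zpool import' in another shell while that is tracing, then Ctrl-C and
send the aggregated output.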
Heya,
Firstly, thanks for your help.
> That's quite strange.
You're telling me! :) I like ZFS, I really do, but this has dented my love
of it. :-/
> What version of ZFS are you running?
[EMAIL PROTECTED] ~]$ pkginfo -l SUNWzfsu
PKGINST: SUNWzfsu
NAME: ZFS (Usr)
CATEGORY: system
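For completeness, the other version numbers worth quoting in a report like
this (a sketch; these only describe what the installed software supports, not
the on-disk pool, which can't be queried until the pool imports again):

$ zpool upgrade -v      # list the ZFS on-disk versions this build understands
$ cat /etc/release      # identify the exact Solaris/OpenSolaris build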
Hi there,
We have been using ZFS for our backup storage since August last year.
Overall it's been very good, handling transient data issues and even
dropouts of connectivity to the iSCSI arrays we are using for storage.
However, I logged in this morning to discover that the ZFS volume could
not b
[EMAIL PROTECTED] ~]$ zpool status -v
no pools available
[EMAIL PROTECTED] ~]$
It's like it's "not there", but when I do a zpool import it reports the pool
as there and available, just that I need to use -f. Using -f gives me
in
Heya,
SC3.1 until we can get our hands on the SC3.2 beta. Realistically, the Cluster
itself is operating independently of the ZFS pools (we do manual failover).
Stu
I thought that might work too, but having tried moving zpool.cache, alas,
same problem. :(
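For anyone trying the same thing, the experiment was roughly (a sketch;
/etc/zfs/zpool.cache is the standard location and the pool name is the one
from elsewhere in the thread): set the cached configuration aside so ZFS
forgets its stale view, then let import rediscover the pool from the device
labels:

# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad   # preserve, but hide, the stale cache
# zpool import                                       # scan attached devices for importable pools
# zpool import -f ax150s                             # attempt the import by name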
Stuart
Nada.
[EMAIL PROTECTED] ~]$ zpool export -f ax150s
cannot open 'ax150s': no such pool
[EMAIL PROTECTED] ~]$
I wonder if it's possible to force the pool to be marked as inactive? Ideally,
all I want to do is get it back online and then scrub it for errors. :-|
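Spelling out that goal (a sketch; -f overrides the "pool may be active on
another system" check, so it should only be used once you're sure the other
node has released the pool):

# zpool import -f ax150s     # force the import past the stale active/hostid check
# zpool scrub ax150s         # walk every block and repair what can be repaired
# zpool status -v ax150s     # follow scrub progress and see any unrecoverable errors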
Stuart
Well I would, if it let me. :)
[EMAIL PROTECTED] ~]$ zpool export ax150s
cannot open 'ax150s': no such pool
[EMAIL PROTECTED] ~]$
By its own admission it's ONLINE, but it can't find it within its pool list?
:-|
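That split personality makes more sense when you remember there are two
different views involved (a sketch of both; 'zpool export' and 'zpool status'
only see pools this host currently has imported, while 'zpool import' with no
arguments scans the device labels directly):

# zpool list       # pools currently imported on this host - ax150s is missing here
# zpool import     # pools found on attached devices but not imported - ax150s shows up here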
Stuart
Hi there,
We've been working with ZFS at an initial setup stage hoping to eventually
integrate with Sun Cluster 3.2 and create a failover fs. Somehow between my two
machines I managed to get the file system mounted on both. On reboot of both
machines I can now no longer import my ZFS file system.
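For what it's worth, the usual way to avoid exactly that dual-mount situation
with a manually failed-over pool (a sketch; 'ax150s' is the pool name used
elsewhere in the thread) is to make the handover explicit:

  (on the node giving the pool up)
# zpool export ax150s    # unmounts everything and marks the pool exported on disk

  (on the node taking it over)
# zpool import ax150s    # clean import; no -f needed if the export completed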