> Yeah, good catch. So this means that it seems to be able to read the
> label off of each device OK, and the labels look good. I'm not sure
> what else would cause us to be unable to open the pool... Can you try
> running 'zpool status -v'?
The command seems to return the same thing:
Siegfried Nikolaivich wrote:
zdb -l /dev/dsk/c0t1d0
Sorry for posting again, but I think you might have meant
/dev/dsk/c0t1d0s0 there. The only difference between the following
outputs is the guid for each device.
Yeah, good catch. So this means that it seems to be able to read the
label o
> zdb -l /dev/dsk/c0t1d0
Sorry for posting again, but I think you might have meant /dev/dsk/c0t1d0s0
there. The only difference between the following outputs is the guid for each
device.
# zdb -l /dev/dsk/c0t0d0s0
LABEL 0
--
> > zdb -v tank
Forgot to add "zdb: can't open tank: error 5" to the end of the output of that
command.
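For reference, a quick way to confirm that nothing besides the guid differs between the two labels (using the device names from earlier in the thread) is to dump them to files and diff them:

# zdb -l /dev/dsk/c0t0d0s0 > /tmp/label.c0t0d0s0
# zdb -l /dev/dsk/c0t1d0s0 > /tmp/label.c0t1d0s0
# diff /tmp/label.c0t0d0s0 /tmp/label.c0t1d0s0

If only the guid lines show up in the diff, the labels themselves look consistent and the error 5 is more likely coming from the pool metadata than from the labels.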
Thanks for the response Matthew.
> > I don't think it's a hardware issue because it seems to be still
> > working fine, and has been for months.
>
> "Working fine", except that you can't access your pool, right? :-)
Well the computer and disk controller work fine when I tried it in Linux wit
Ewen Chan wrote:
(with help from Robert)
Yes, there are files.
# pwd
/var/crash/FILESERVER
# ls -F
bounds  unix.0  unix.1  vmcore.0  vmcore.1
# mdb 0
Loading modules: [ unix krtld genunix specfs dtrace cpu.AuthenticAMD.15 ufs ip
sctp usba fctl nca lofs random zfs nfs sppp ptm cpc fci
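If it helps, the usual first pass over a dump at the mdb prompt is to pull the panic summary, the console message buffer, and the stack of the panicking thread (these are standard mdb dcmds and should work once the modules above finish loading):

::status   - panic string and dump summary
::msgbuf   - console messages leading up to the panic
::stack    - stack trace of the panicking thread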
Stefan Urbat wrote:
What bug was filed?
6421427 is NFS related, but another forum member thought that it is in fact a
general IDE performance bottleneck behind it, which was only made visible in
this case. There is a report that on a Blade 150, also equipped with simple
IDE, the same issue with
Siegfried Nikolaivich wrote:
status: The pool metadata is corrupted and the pool cannot be
opened.
Is there at least a way to determine what caused this error? Is it a
hardware issue? Is it a possible defect in ZFS?
My best guess would be that it's a hardware issue.
I don't think it's a h
I think the original point of NFS being better WRT data making it to the
disk was that NFS follows the SYNC-ON-CLOSE semantics. You will not see
an explicit fsync() being called by the tar...
-- Sanjeev.
Frank Batschulat (Home) wrote:
On Tue, 10 Oct 2006 01:25:36 +0200, Roch <[EMAIL PROTECT
On Tue, 10 Oct 2006 01:25:36 +0200, Roch <[EMAIL PROTECTED]> wrote:
You tell me? We have 2 issues:
can we make 'tar x' over direct attach safe (fsync)
and POSIX compliant while staying close to current
performance characteristics? In other words do we
have the
This is correct based on our experience with ZFS. When using NFS v3, you have
COMMITs that come down the wire to the ZFS/NFS server, which force a sync or
flush to disk. To get around this issue without compromising data integrity you
can effectively get some NVRAM by adding a battery-backed ha
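If you want to watch this happen, a DTrace one-liner run on the server during the tar extract counts how often ZFS is asked to commit (this assumes the log-commit path still goes through a kernel function named zil_commit on your build):

# dtrace -n 'fbt:zfs:zil_commit:entry { @commits = count(); }'

Run it once while extracting over NFS and once locally; the NFS run should show far more commits, which is exactly the COMMIT traffic described above.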
errors: No known data errors
This line confused me. It looks like an error message.
Thanks
On 10/10/2006, at 10:05 AM, ttoulliu2002 wrote:
Hi:
I have zpool created
# zpool list
NAME      SIZE    USED   AVAIL   CAP   HEALTH   ALTROOT
ktspool   34,5G   33,5K  34,5G   0%    ONLINE   -
However, zpool status shows no known data errors. May I know what
I might be wrong here, but I think it's telling you that there are no
errors.
Something like:
errors: none
or
errors: None that we know of, but we'll let you know if there are any.
At least that is how I'd read it.
:)
Do you have an actual problem other than the text?
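If you just want a yes/no answer on pool health, 'zpool status -x' reports only pools that actually have problems:

# zpool status -x
all pools are healthy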
Nathan.
On Tue, 2006
ttoulliu2002 wrote:
Hi:
I have zpool created
# zpool list
NAME      SIZE    USED   AVAIL   CAP   HEALTH   ALTROOT
ktspool   34,5G   33,5K  34,5G   0%    ONLINE   -
However, zpool status shows no known data errors. May I know what the problem is?
# zpool status
Hi:
I have zpool created
# zpool list
NAME      SIZE    USED   AVAIL   CAP   HEALTH   ALTROOT
ktspool   34,5G   33,5K  34,5G   0%    ONLINE   -
However, zpool status shows no known data errors. May I know what the problem is?
# zpool status
pool: ktspool
state:
On Tue, Roch wrote:
>
> Joerg Schilling writes:
> > Roch <[EMAIL PROTECTED]> wrote:
> >
> > > I would add that this is not a bug or deficiency in
> > > implementation. Any NFS implementation tweak to make 'tar x'
> > > go as fast as direct attached will lead to silent data
>
Joerg Schilling writes:
> Roch <[EMAIL PROTECTED]> wrote:
>
> > I would add that this is not a bug or deficiency in
> > implementation. Any NFS implementation tweak to make 'tar x'
> > go as fast as direct attached will lead to silent data
> > corruption (tar x succeeds but
> status: The pool metadata is corrupted and the pool
> cannot be opened.
Is there at least a way to determine what caused this error? Is it a hardware
issue? Is it a possible defect in ZFS?
I don't think it's a hardware issue because it seems to be still working fine,
and has been for months
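One low-effort way to gather evidence either way is to look at the per-device error counters the kernel keeps (plain iostat, nothing ZFS-specific); non-zero hard or transport errors would point at the hardware or cabling:

# iostat -En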
Hi Hugo,
ZFS uses the EFI label.
If you need to relabel the disks with the older SMI (VTOC) label,
do this:
# format -e
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
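Once each disk is relabeled, a quick check is to print the VTOC again; with an SMI label you should see the familiar backup slice 2 covering the whole disk (the device name below is just an example for one of the four disks):

# prtvtoc /dev/rdsk/c1t0d0s2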
Cindy
Hugo Alejandro Mendez Giraldo - Sun Microsystems wrote:
Hi gurus,
I was playing with zfs in a V
Hi gurus,
I was playing with zfs in a V890 before it was installed for
production. We reinstalled it for production, but the 4 disks we used
to play with zfs have a non-standard format (slices go from s0 to s8,
s2 is not the backup slice, and s7 does not exist).
We need to recover those 4 disks to be u
Why not see if you can find (or write, or have written) an editor that does the
version name changes for you?
i.e. - each time you save, or each auto-save, it writes a different version of
the file, and when you exit, it asks if you'd like to retain the other versions
or not?
Sounds like it wo
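A very rough sketch of that idea (a hypothetical wrapper script, not an existing tool, and it only keeps one copy per editing session rather than per save):

#!/bin/sh
# Copy the current contents aside, then open the editor on the file.
f="$1"
[ -f "$f" ] && cp -p "$f" "$f.`date +%Y%m%d%H%M%S`"
${EDITOR:-vi} "$f"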
On 10/7/06, Erik Trimble <[EMAIL PROTECTED]> wrote:
Chad Leigh -- Shire.Net LLC wrote:
>>> Plus, the number of files being created under typical
>>> modern systems is at least two (and probably three or four) orders
>>> of magnitude greater. I've got 100,000 files under /usr in Solar
On 10/6/06, Erik Trimble <[EMAIL PROTECTED]> wrote:
David Dyer-Bennet wrote:
> On 10/6/06, Nicolas Williams <[EMAIL PROTECTED]> wrote:
>
>> > >Maybe Erik would find it confusing. I know I would find it
>> > >_annoying_.
>> >
>> > Then leave it set to 1 version
>>
>> Per-directory? Per-filesyste
On Mon, Oct 09, 2006 at 11:16:41AM -0400, Jonathan Edwards wrote:
> On Oct 8, 2006, at 23:54, Nicolas Williams wrote:
> >Let's keep interface and implementation details separate. Most of this
> >thread has been about interfaces precisely because that's what users
> >will interact with; users
On Oct 8, 2006, at 23:54, Nicolas Williams wrote:
On Sun, Oct 08, 2006 at 11:16:21PM -0400, Jonathan Edwards wrote:
On Oct 8, 2006, at 22:46, Nicolas Williams wrote:
You're arguing for treating FV as extended/named attributes :)
kind of - but one of the problems with EAs is the increase/blo
On Mon, Oct 09, 2006 at 12:44:34PM +0200, Joerg Schilling wrote:
> Nicolas Williams <[EMAIL PROTECTED]> wrote:
>
> > You're arguing for treating FV as extended/named attributes :)
> >
> > I think that'd be the right thing to do, since we have tools that are
> > aware of those already. Of course,
Joseph Mocker wrote:
However, wouldn't it be great if I could somehow easily FV a file I am
working on with some arbitrary (closed) application I am forced to use,
without the application really knowing about it, and with little or no
action I have to take to do so?
To paraphrase an old wives'
Nicolas Williams <[EMAIL PROTECTED]> wrote:
> You're arguing for treating FV as extended/named attributes :)
>
> I think that'd be the right thing to do, since we have tools that are
> aware of those already. Of course, we're talking about somewhat magical
> attributes, but I think that's fine (t
Nicolas Williams <[EMAIL PROTECTED]> wrote:
> On Sat, Oct 07, 2006 at 01:43:29PM +0200, Joerg Schilling wrote:
> > The only idea I get that matches this criterion is to have the versions
> > in the extended attribute name space.
>
> Indeed. All that's needed then, CLI UI-wise, beyond what we have
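Just to make the "tools that are aware of those already" point concrete: named attributes are reachable from the shell today via runat(1), so versions kept in that name space would be visible without inventing any new commands (the file and version names below are made up):

$ runat report.txt ls -l
$ runat report.txt cat report.txt.3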
Erik Trimble <[EMAIL PROTECTED]> wrote:
> > The only idea I get that matches this criterion is to have the versions
> > in the extended attribute name space.
> >
> > Jörg
> >
> >
> Realistically speaking, that's my conclusion, if we want a nice clean,
> well-designed solution. You need to hide
Roch <[EMAIL PROTECTED]> wrote:
> I would add that this is not a bug or deficiency in
> implementation. Any NFS implementation tweak to make 'tar x'
> go as fast as direct attached will lead to silent data
> corruption (tar x succeeds but the files don't checksum
> ok).
>
> Int
On Fri, Oct 06, 2006 at 02:08:34PM -0700, Erik Trimble wrote:
> Also, "save-early-save-often" results in a version explosion, as does
> auto-save in the app. While this may indeed mean that you have all of
> your changes around, figuring out which version has them can be
> massively time-consu
[EMAIL PROTECTED] wrote:
I completely disagree. In this scenario (and almost all others), use of
regular snapshots will solve the problem. 'zfs snapshot -r' is
extremely fast, and I'm working on some new features that will make
using snapshots for this even easier and better-performing.
If
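For anyone who wants to try the regular-snapshot route right now, a one-liner from cron is enough (the pool name is just an example):

# zfs snapshot -r tank@`date +%Y%m%d-%H%M`
# zfs list -t snapshot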
On Fri, Oct 06, 2006 at 11:57:36AM -0700, Matthew Ahrens wrote:
> [EMAIL PROTECTED] wrote:
> >On Fri, Oct 06, 2006 at 01:14:23AM -0600, Chad Leigh -- Shire.Net LLC
> >wrote:
> >>But I would dearly like to have a versioning capability.
> >
> >Me too.
> >Example (real life scenario): there is a samb