Hello everyone, I wanted to play with ZFS a bit before I start using it at
my workplace on servers, so I set it up on my Solaris 10 U2 box.
All my disks used to be mounted as UFS and everything was fine. My
/etc/vfstab looked like this:
#
fd - /dev
Yes sir:
[EMAIL PROTECTED]:/ # zpool status -v fserv
  pool: fserv
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 5.90% done, 27h13m to go
Anantha,
I was hoping to see far fewer trace records than that. Was DTrace
running the whole time, or did you start it just before you saw the
problem?

Can you sift through the trace and see whether any subsequent firings
have big timestamp differences (e.g. > 1s)?
You can try this a
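One way to do that sifting, as a rough sketch: the small Python filter
below flags consecutive records more than a second apart. It assumes each
trace line begins with a nanosecond timestamp column (e.g. printed via
printf("%d ...", timestamp) in the D script); that format is an assumption,
so adjust the parsing to whatever your actual output looks like.

#!/usr/bin/env python
# Sketch: flag big gaps between consecutive DTrace records.
# Assumes the first whitespace-separated field of each record is a
# nanosecond timestamp (hypothetical format -- adjust as needed).
import sys

THRESHOLD_NS = 10 ** 9          # one second, in nanoseconds

prev = None
for line in sys.stdin:
    fields = line.split()
    if not fields or not fields[0].isdigit():
        continue                # skip headers and non-record lines
    ts = int(fields[0])
    if prev is not None and ts - prev > THRESHOLD_NS:
        print("%.3f s gap before: %s" % ((ts - prev) / 1e9, line.rstrip()))
    prev = ts

You could pipe the live trace through it, or run it over a saved dump
with something like "python gapcheck.py < trace.out" (the script name is
arbitrary).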
Would this need an extension of the filesystem itself, or could it be
done some other way?
> On 9/15/06, can you guess? <[EMAIL PROTECTED]> wrote:
...
> file-level, however, is really pushing it. You might end up with an
> administrative nightmare deciphering which files have how many copies.
I'm not sure what you mean: the level of redundancy would be a per-file
attribute.
On 9/15/06, can you guess? <[EMAIL PROTECTED]> wrote:
Implementing it at the directory and file levels would be even more
flexible: the redundancy strategy would no longer be tightly tied to path
location, but directories and files could themselves still inherit
defaults from the filesystem and p