I just installed the Leopard beta that was distributed at WWDC. Sadly, the
installer provided no ZFS option (the only options were HFS Extended Journaled
and a case-sensitive version of the same).
However, typing this in the terminal:
$ sudo zpool status
Returned this:
ZFS Readonly implementation
Mohammed Beik wrote:
Hi
Has anyone any notes on how best configure ZFS pool for NFS mount
to a 4-node RAC cluster.
I am particularly interested in config options for zfs/zpool and NFS
options at kernel level.
The zpool is being presented from x4500 (thumper), and NFS presented to
four nodes (x8400).
I had a bit of a problem with zfs today. Most of it stems from being told that
zfs can do a bunch of things that it can't really do. I'm not really concerned
about any of that right now, I just want to see if there's a way to get my data
back.
Here's an abbreviated version of what happened:
I
Wondering if anyone at WWDC has poked around the kexts, etc. for ZFS.
It seemed oddly missing today at the keynote in light of last week's
announcement. Is it too early to announce it because some functions
are still being added, with Apple thus baking two versions of Time
Machine (one with and
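One quick way to poke around on a Leopard seed, if you have one handy (the paths below are guesses at the obvious places, not confirmed locations):
$ kextstat | grep -i zfs                        # is a ZFS kext currently loaded?
$ ls /System/Library/Extensions | grep -i zfs   # is a ZFS kext installed at all?
$ ls -l /usr/sbin/zpool /usr/sbin/zfs           # are the userland tools present?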
Hi!
I wanted to let the ZFS Community know that the translated ZFS Admin Guide has
been open sourced on the G11n Documentation Download Center:
http://dlc.sun.com/osol/g11n/downloads/docs/current/
(English: http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf)
It's actually been a while si
Bill Sommerfeld <[EMAIL PROTECTED]> wrote:
> On Mon, 2007-06-11 at 23:03 +0200, [EMAIL PROTECTED] wrote:
> > >Maybe some additional pragmatism is called for here. If we want NFS
> > >over ZFS to work well for a variety of clients, maybe we should set
> > >st_size to larger values..
> >
> > +1; l
On Mon, Jun 11, 2007 at 02:03:58PM -0700, David Bustos wrote:
> Quoth Ed Ravin on Thu, Jun 07, 2007 at 09:57:52PM -0700:
> > My Solaris 10 box is exporting a ZFS filesystem over NFS. I'm
> > accessing the data with a NetBSD 3.1 client, which only supports NFS
> > 3. Everything works except when I
On Mon, 2007-06-11 at 23:03 +0200, [EMAIL PROTECTED] wrote:
> >Maybe some additional pragmatism is called for here. If we want NFS
> >over ZFS to work well for a variety of clients, maybe we should set
> >st_size to larger values..
>
+1; let's teach the admins to do "st_size /= 24" mentally :-)
Hi
Has anyone any notes on how best configure ZFS pool for NFS mount
to a 4-node RAC cluster.
I am particularly interested in config options for zfs/zpool and NFS
options at kernel level.
The zpool is being presented from x4500 (thumper), and NFS presented to
four nodes (x8400). There will be hig
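One rough way to set this up, as a sketch only: the pool layout, the dataset name tank/rac, and the node names rac1-rac4 below are placeholders, recordsize should match your Oracle db_block_size, and the mount options should be checked against the Oracle documentation for your release.
# zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
# zfs create tank/rac
# zfs set recordsize=8k tank/rac
# zfs set sharenfs='rw=rac1:rac2:rac3:rac4,root=rac1:rac2:rac3:rac4' tank/rac
Then on each RAC node (Solaris client syntax; hard/nointr/forcedirectio are the options commonly cited for Oracle datafiles over NFS):
# mount -F nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,forcedirectio thumper:/tank/rac /u02/oradata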
Quoth Ed Ravin on Thu, Jun 07, 2007 at 09:57:52PM -0700:
> My Solaris 10 box is exporting a ZFS filesystem over NFS. I'm
> accessing the data with a NetBSD 3.1 client, which only supports NFS
> 3. Everything works except when I look at the .zfs/snapshot
> directory. The first time I list out the
>Maybe some additional pragmatism is called for here. If we want NFS
>over ZFS to work well for a variety of clients, maybe we should set
>st_size to larger values..
+1; let's teach the admins to do "st_size /= 24" mentally :-)
Casper
On Mon, 2007-06-11 at 00:57 -0700, Frank Batschulat wrote:
> a directory is, strictly speaking, not a regular file, and this is in a way
> enforced by ZFS; the standard's wording further defines this later on...
So, yes, the standards allow this behavior -- but it's important to
distinguish between deliv
Hi Doug,
I need more information:
You need /devices and /dev on a zfs root to boot. I'm not sure what you
mean by 'it doesn't work'.
What OS version is running on your boot slice (s0)?
Is this where your zfs root pool (s5) was built?
'installgrub new-stage1 new-stage2 /dev/rdsk/c0d0s0' p
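For comparison, the stock invocation with the stage files that ship under /boot/grub looks like this; a generic example only, so adjust the slice to wherever your boot environment actually lives:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0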
On Jun 11, 2007, at 12:52 AM, Borislav Aleksandrov wrote:
Panic on snv_65 & snv_64 when:
#mkdir /disk
#mkfile 128m /disk/disk1
#mkfile 128m /disk/disk2
#zpool create data mirror /disk/disk1 /disk/disk2
#mkfile 128m /disk/disk1
#mkfile 128m /disk/disk2
At this point you have completely overwritten t
I just started to use zfs after wanting to try it out for a long while. The
problem is that I've "lost" 240Gb out of 700Gb.
I have a single 700G pool on a 3510 HW RAID mounted on /nm4/data. Running
# du -sk /nm4/data
411025338 /nm4/data
while a
# df -hk
Filesystem size use
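A few commands that usually account for this kind of du/df gap (snapshots, reservations, compression). The dataset name tank/data below is a placeholder; use whatever 'zfs list' shows mounted on /nm4/data:
# zfs list                        # per-dataset used vs. available
# zfs list -t snapshot            # space held by snapshots
# zfs get used,referenced,reservation,quota,compressratio tank/data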
I think this falls under the bug (I don't have the number handy at the
moment) where ZFS needs to fail more gracefully in a situation like this.
Yes, he probably broke his zpool, but it really shouldn't have panicked the
machine.
-brian
On Mon, Jun 11, 2007 at 03:05:19PM -0100, Mario Goe
We could use this too, does anyone know if it's on the horizon?
- Bob
>>> ganesh <[EMAIL PROTECTED]> 6/8/2007 5:13 PM >>>
Hi Eric,
Is ZFS dynamic LUN expansion possible now?
thanks!
Ganesh
I think in your test you have to force some I/O on the pool for ZFS to
recognize that your simulated disk has gone faulty, and that already after
the first mkfile. Immediately overwriting both files after pool creation
leaves ZFS with the impression that the disks went missing. And even if
ZFS noti
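As a sketch of that point (not a guarantee against the panic above): force some real writes first and clobber only one side of the mirror, then let a scrub find the damage:
# mkdir /disk
# mkfile 128m /disk/disk1
# mkfile 128m /disk/disk2
# zpool create data mirror /disk/disk1 /disk/disk2
# mkfile 32m /data/testfile       # push some real I/O through the pool
# sync
# mkfile 128m /disk/disk1         # damage only one half of the mirror
# zpool scrub data
# zpool status -v data            # the damaged side should show up faulted or with CKSUM errors while the pool stays up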
>
> Hi Doug, from the information I read so far, I assume you have
>
> c0d0s0 - ufs root
> c0d0s5 - zfs root pool 'snv' and root filesystem 'b65'
Hi Lin,
My complete layout follows:
c0d0s0: boot slice (holds a manually maintained /boot) -- UFS
c0d0s1: the usual swap slice
c0d0s3: S10U3 roo
Frank Batschulat <[EMAIL PROTECTED]> wrote:
> > Only one byte per directory entry? This confuses
> > programs that assume that the st_size reported for a
> > directory is a multiple of sizeof(struct dirent) bytes.
>
> Sorry, but a program making this assumption is just flawed and should be
> fixed.
> Only one byte per directory entry? This confuses
> programs that assume that the st_size reported for a
> directory is a multiple of sizeof(struct dirent) bytes.
Sorry, but a program making this assumption is just flawed and should be fixed.
The POSIX standard is crystal-clear here and explic
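A quick shell illustration of the point (the paths are placeholders; the reliable way to count entries is readdir()/ls, not st_size arithmetic):
$ mkdir /tank/dirtest
$ touch /tank/dirtest/a /tank/dirtest/b /tank/dirtest/c
$ ls -ld /tank/dirtest            # on ZFS the directory's st_size tracks the entry count, not a multiple of sizeof(struct dirent)
$ ls -A /tank/dirtest | wc -l     # portable entry count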
Panic on snv_65 & snv_64 when:
#mkdir /disk
#mkfile 128m /disk/disk1
#mkfile 128m /disk/disk2
#zpool create data mirror /disk/disk1 /disk/disk2
#mkfile 128m /disk/disk1
#mkfile 128m /disk/disk2
#zpool scrub data
panic[cpu0]/thread=2a100e33ca0: ZFS: I/O failure (write on off 0: zio
30002925770 [L0 bpli