Bennett, Steve wrote:
A slightly different tack now...
what filesystems is it a good (or bad) idea to put on ZFS?
root - NO (not yet anyway)
home - YES (although the huge number of mounts still scares me a bit)
Why? I've seen Solaris systems with upwards of 10,000 mounts in /home
via the automounter.
> It is likely that "best practice" will be to separate
> the root pool (that is, the pool where datasets are
> allocated)
On a system with plenty of disks it is a good idea. I started
doing this on my laptop, and later decided to combine root and
data into one pool. The filesystem boundary gave me
What danger is there in stripping off the leading / from zfs
command args and using what is left as a filesystem name?
Quite often I do a quick copy-paste to get from df output
to the zfs command line and every time I need to re-edit
the command line because the copy-paste takes the leading
/ with it.
On Mon, 2006-07-03 at 18:02 +0800, Darren Reed wrote:
> What danger is there in stripping off the leading / from zfs
> command args and using what is left as a filesystem name?
If I'm understanding you correctly, then you can't do that, as the
mountpoint isn't always the same as the name of the filesystem.
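For illustration (pool and dataset names invented here), a dataset's
mountpoint need not resemble its name at all:

  # zfs create tank/homes
  # zfs set mountpoint=/export/home tank/homes

df would then show /export/home, while the zfs commands expect the
dataset name tank/homes.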
Tim Foster wrote:
On Mon, 2006-07-03 at 18:02 +0800, Darren Reed wrote:
What danger is there in stripping off the leading / from zfs
command args and using what is left as a filesystem name?
If I'm understanding you correctly, then you can't do that, as the
mountpoint isn't always the same as the name of the filesystem.
On a system still running nv_30, I've a small RaidZ filled to the brim:
2 3 [EMAIL PROTECTED] pts/9 ~ 78# uname -a
SunOS mir 5.11 snv_30 sun4u sparc SUNW,UltraAX-MP
0 3 [EMAIL PROTECTED] pts/9 ~ 50# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mirpool1 33.6G 0
Hi,
of course, the reason for this is the copy-on-write approach: ZFS has
to write new blocks first before the modification of the FS structure
can reflect the state with the deleted blocks removed.
The only way out of this is of course to grow the pool. Once ZFS learns
how to free up vdevs this
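For illustration, growing the pool means adding another vdev (device names
invented here; note that once added, a vdev currently cannot be removed
again):

  # zpool add mirpool1 raidz c2t0d0 c2t1d0 c2t2d0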
Notice there's a product announcement on Tuesday:
http://www.prnewswire.com/cgi-bin/stories.pl?ACCT=104&STORY=/www/story/06-30-2006/0004390495&EDATE=
and Jonathan mentioned Thumper was due for release at the end of June:
http://blogs.sun.com/roller/page/jonathan?entry=phase_2
With ZFS offici
Hello Gurus;
I've been playing with ZFS and reading the materials, BLOGS and FAQs.
It's an awesome FS and I just wish that Sun would evangelize a little
bit more. But that's another story.
I'm writing here to ask a few very simple questions.
I am able to understand the RAID-5 write hole and
>I understand the copy-on-write thing. That was very well illustrated in
>"ZFS The Last Word in File Systems" by Jeff Bonwick.
>
>But if every block is its own RAID-Z stripe, if the block is lost, how
>does ZFS recover the block???
You should perhaps not take "block" literally; the block is w
On Mon, Jul 03, 2006 at 06:43:59PM +0800, Darren Reed wrote:
>
> Well, I use "df -kl", but commands such as "df" will work just
> the same if I use "." or "/" or "/dev/dsk/c0t3d0s0" (and all three
> are the same.)
>
> Yes, arguably I am cut-and-pasting the wrong part of the output...
>
> I suppose w
You don't need to grow the pool. You should always be able to truncate the
file without consuming more space, provided you don't have snapshots.
Mark has a set of fixes in testing which do a much better job of
estimating space, allowing us to always unlink files in full pools
(provided there are no snapshots).
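For illustration (file name invented), truncating in place rather than
removing the file should work even when zfs list shows 0 bytes available:

  # cat /dev/null > /mirpool1/some-huge-file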
>
> Currently, I'm using executable maps to create zfs
> home directories.
>
> Casper
Casper, anything you can share with us on that? Sounds interesting.
thanks,
-- MikeE
>>
>> Currently, I'm using executable maps to create zfs
>> home directories.
>>
>> Casper
>
>
>Casper, anything you can share with us on that? Sounds interesting.
It's really very lame:
Add to /etc/auto_home as last entry:
+/etc/auto_home_import
And install /etc/auto_home_import as executable.
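For illustration only (Casper's actual script is not shown above), an
executable map along these lines would do the job, assuming a pool named
"home" mounted under /export/home and that a loopback (lofs) entry is
acceptable:

  #!/bin/ksh
  # /etc/auto_home_import (sketch): the automounter runs an executable
  # map with the lookup key (the user name) as $1 and uses whatever is
  # printed on stdout as the map entry.
  user=$1
  # Create the per-user filesystem on first reference.
  if ! zfs list home/$user > /dev/null 2>&1 ; then
          zfs create home/$user || exit 0
  fi
  # Hand back a map entry pointing at the new filesystem.
  echo "-fstype=lofs :/export/home/$user"

The map file itself must be executable (chmod +x /etc/auto_home_import)
for the automounter to run it rather than read it.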
On 7/3/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>>
>> Currently, I'm using executable maps to create zfs
>> home directories.
>>
>> Casper
>
>
>Casper, anything you can share with us on that? Sounds interesting.
It's really very lame:
Add to /etc/auto_home as last entry:
+/etc/auto_home_import
On 7/3/06, James Dickens <[EMAIL PROTECTED]> wrote:
On 7/3/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> >>
> >> Currently, I'm using executable maps to create zfs
> >> home directories.
> >>
> >> Casper
> >
> >
> >Casper, anything you can share with us on that? Sounds interesting.
>
>
> I
I am new to zfs and do not understand the reason that you would want to create a separate file system for each home directory. Can someone explain to me why you would want to do this?
On 7/3/06, James Dickens <[EMAIL PROTECTED]> wrote:
On 7/3/06, James Dickens <[EMAIL PROTECTED]> wrote:
> On 7/3/06
On 7/3/06, Nicholas Senedzuk <[EMAIL PROTECTED]> wrote:
I am new to zfs and do not understand the reason that you would want to
create a separate file system for each home directory. Can someone explain
to me why you would want to do this?
because in ZFS filesystems are cheap, you can assign per-user quotas and
reservations, take per-user snapshots, and set properties such as
compression for each user individually.
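For example (pool and user names invented), each user gets an independent
dataset with its own quota, properties, and snapshots:

  # zfs create home/alice
  # zfs set quota=10G home/alice
  # zfs set compression=on home/alice
  # zfs snapshot home/alice@before-cleanup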
That's excellent news, as with the frequency that customers' applications
go feral and write a whole heap of crap (or they don't watch closely
enough with gradual filling) we will forever be getting calls if this
functionality is *anything* but transparent...
Most explorers I see have filesystem 10
Doug Scott wrote:
It is likely that "best practice" will be to separate
the root pool (that is, the pool where datasets are
allocated)
On a system with plenty of disks it is a good idea. I started
doing this on my laptop, and later decided to combine root and
data into one pool. The filesystem
Hi
I found a bug; it's a bit hard to reproduce.
# zfs create pool2/t1
# touch /pool2/t1/file
# zfs snapshot pool2/[EMAIL PROTECTED]
# zfs clone pool2/[EMAIL PROTECTED] pool2/t2
# zfs share pool2/t2
On a second box, NFS-mount the filesystem; the error is the same whether
the client is a Solaris Express box or Linux.
# mount e
> Doug Scott wrote:
> >>It is likely that "best practice" will be to
> separate
> >>the root pool (that is, the pool where datasets are
> >>allocated)
> >
> > On a system with plenty of disks it is a good idea.
> I started
> > doing this on my laptop, and later decided to
> combine root and
> > da
On Mon, Jul 03, 2006 at 11:13:33PM +0800, Steven Sim wrote:
> Could someone elaborate more on the statement "metadata drives
> reconstruction"...
ZFS starts from the uberblock and works its way down the metadata
(think recursive tree traversal) to find all live blocks and rebuilds the
replaced vdev.
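For illustration (device names invented), this is what you see when a disk
is replaced and the pool resilvers:

  # zpool replace mirpool1 c1t2d0 c1t3d0
  # zpool status mirpool1

zpool status reports a resilver in progress while the live blocks found by
that traversal are rewritten to the new device.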