On Sat, 2007-04-28 at 17:48 +0100, Peter Tribble wrote:
> On 4/26/07, Lori Alt <[EMAIL PROTECTED]> wrote:
> > Peter Tribble wrote:
> <snip>

> > Why do administrators do 'df' commands?  It's to find out how much space
> > is used or available in a single file system.   That made sense when file
> > systems each had their own dedicated slice, but now it doesn't make that
> > much sense anymore.  Unless you've assigned a quota to a zfs file system,
> > "space available" is meaningful more at the pool level.
> 
> True, but it's actually quite hard to get at the moment. It's easy if
> you have a single pool - it doesn't matter which line you look at.
> But once you have 2 or more pools (and that's the way it would
> work, I expect - a boot pool and 1 or more data pools) there's
> an awful lot of output you may have to read. This isn't helped
> by zpool and zfs giving different answers, with the one from zfs
> being the one I want. The point is that every filesystem adds
> additional output the administrator has to mentally filter. (For
> one thing, you have to map a directory name to a containing
> pool.)

It's actually quite easy, and easier than the other alternatives (ufs,
veritas, etc.):

# zfs list -rH -o name,used,available,refer rootdg

And now it's set up to be parsed by a script (-H), since the output is
tab-delimited.  The -r says to recursively display the children of the
parent, and -o with a field list says to display only the specified
fields.

(output from one of my systems)

blast(9):> zfs list -rH -o name,used,available,refer rootdg
rootdg  4.39G   44.1G   32K
rootdg/nvx_wos_62       4.38G   44.1G   503M
rootdg/nvx_wos_62/opt   793M    44.1G   793M
rootdg/nvx_wos_62/usr   3.01G   44.1G   3.01G
rootdg/nvx_wos_62/var   113M    44.1G   113M
rootdg/swapvol  16K     44.1G   16K

Even though the mount points are set up as legacy mount points, I know
where each of them is mounted from the volume name.
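
If you ever want to double-check, something like this should show where
each dataset actually lands (legacy mounts just show up as "legacy" and
you go look at /etc/vfstab) - this is from memory, so check the man page:

# zfs get -r -o name,value mountpoint rootdg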


And yes, this system has more than one pool:

blast(10):> zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
lpool                  17.8G   11.4G   6.32G    64%  ONLINE     -
rootdg                 49.2G   4.39G   44.9G     8%  ONLINE     -
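
And if all you want is the pool-level number without reading the
per-filesystem lines, scripted mode works on zpool too - the exact field
names may differ between builds, so double-check against zpool(1M):

# zpool list -H -o name,size,used,available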


> 
> > With zfs, file systems are in many ways more like directories than what
> > we used to call file systems.   They draw from pooled storage.  They
> > have low overhead and are easy to create and destroy.  File systems
> > are sort of like super-functional directories, with quality-of-service
> > control and cloning and snapshots.  Many of the things that sysadmins
> > used to have to do with file systems just aren't necessary or even
> > meaningful anymore.  And so maybe the additional work of managing
> > more file systems is actually a lot smaller than you might initially think.
> 
> Oh, I agree. The trouble is that sysadmins still have to work using
> their traditional tools, including their brains, which are tooled up
> for cases with a much lower filesystem count. What I don't see as
> part of this are new tools (or enhancements to existing tools) that
> make this easier to handle.

Not sure I agree with this.  Many times you end up dealing with
multiple vxvols and file systems.  Anything over 12 file systems and
you're in overload (at least for me ;), so I used my monitoring and
scripting tools to filter that for me.

Many of the systems I admin'd were set up quite differently based on
use, functionality, and disk size.

Most of my tools were set up to take those differences into account,
along with the fact that we ran almost every flavor of UNIX possible,
using the features of each OS as appropriate.

Most of those tools will still work with zfs (if they use df, etc.), but
zfs actually makes things easier once you have a monitoring issue -
running out of space, for example.

Most tools have high and low water marks, so when a file system gets too
full, you get a warning.  ZFS makes this much easier to admin: you can
see which file system is being the hog and go hunt directly in that file
system, instead of first having to figure out which file system it is -
hence the old debate between the all-in-one / slice and breaking out the
major OS file systems into separate slices.
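
As a rough sketch of the kind of check I mean (the 90% threshold is just
an example, and this is from memory, so test it before trusting it):

#!/bin/sh
# warn when any pool goes over the high water mark
HIGH=90
zpool list -H -o name,capacity | while read pool cap
do
        pct=`echo $cap | tr -d '%'`
        if [ $pct -ge $HIGH ]; then
                echo "WARNING: pool $pool is at $cap"
        fi
done

Once that fires, the zfs list from earlier points you straight at the
hog.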

The benefit of the all-in-one / was that you didn't have to guess how
much space you needed for each slice, so you could upgrade or add
optional software without needing to grow/shrink the OS slices.

The drawback: if you filled up the file system, you had to hunt for
where it was filling up - /dev, /usr, /var/tmp, /var, / ???

The benefit of multiple slices was that filling up one fs didn't affect
the others, and you could find the problem fs very easily; but if you
estimated incorrectly, you had wasted disk space in one slice and not
enough in another.

ZFS gives you the benefit of both the all-in-one and the partitioned
layouts: it draws from a single pool of storage, but also lets you find
which fs is being the problem and lock it down with quotas and
reservations, as shown below.
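
For example, using the dataset names from the earlier listing (the sizes
here are just picked for illustration):

# zfs set quota=1G rootdg/nvx_wos_62/var
# zfs set reservation=512M rootdg/nvx_wos_62/var
# zfs get -r quota,reservation rootdg

The quota caps how big /var can get, the reservation guarantees it space
out of the pool, and the get shows what's in effect across the tree.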

> 
> For example, backup tools are currently filesystem based.

And this changes the scenario how?  I've actually been pondering this
for quite some time now.  Why do we back up the root disk?  With many of
the tools out now, it makes far more sense to take full and incremental
flars of the systems and/or create custom jumpstart profiles to rebuild
the system.

The typical scenario for losing the root file systems (catastrophic) is
to restore the OS, install the backup software onto the fresh install,
and then restore the OS via the backup software to the mirror disk.

Why not just restore the OS from a base flar and apply the incremental?
Application data and any specific config changes to the OS itself are
what you really care about; the rest is a fairly generic OS install with
patches.
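
Roughly like this - the archive name and path are made up for
illustration, so check flarcreate(1M) for the exact options on your
release:

# flarcreate -n base-nv62 -c /net/backupserver/flars/base-nv62.flar

Then point a jumpstart profile at the archive to rebuild, and lay the
incremental on top (differential flars are created against the master
image, with -A iirc).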

The other scenario is ufsdump/ufsrestore.  In that case it doesn't
really change anything, since the scripts iterate across the file
systems you want to dump anyway (at least mine do - see the sketch
below).
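
Mine boil down to something like this (a simplified sketch; the file
system list and the tape device are site-specific):

#!/bin/sh
# level 0 dump of each file system to the no-rewind tape device
for fs in / /usr /var /opt /export/home
do
        ufsdump 0uf /dev/rmt/0n $fs
done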

> 
> Eventually, the tools will catch up. But my experience so far
> is that while zfs is fantastic from the point of view of pooling,
> once I've got large numbers of filesystems and snapshots
> and clones thereof, and the odd zvol, it can be a devil of
> a job to work out what's going on.

No more difficult than doing ufs/vxfs snapshots and Quick I/O, etc.  The
only thing that really changes is the specific command for each, and if
you're doing that, then you've already got the infrastructure for it set
up.
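
When the dataset count does get out of hand, the -t option narrows the
listing down to one kind of thing at a time (again from memory, so check
your build):

# zfs list -t snapshot -rH -o name,used,refer rootdg
# zfs list -t volume -H -o name,used,volsize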

But that's just my viewpoint...

-- 
Mike Dotson

