[zfs-discuss] I seem to have backed myself into a corner - how do I migrate filesystems from one pool to another?

2007-05-25 Thread John Plocher

Through a sequence of good intentions, I find myself with a raidz'd
pool that has a failed drive that I can't replace.

We had a generous department donate a fully configured V440 for
use as our departmental server.  Of course, I installed SX/b56
on it, created a pool with 3x 148GB drives, and made a dozen
filesystems on it.  Life was good.  ZFS is great!

One of the raidz pool drives failed.  When I went to replace it,
I found that the V440's original 72GB drives had been "upgraded"
to Dell 148GB Fujitsu drives, and that the Sun versions of those drives
(same model number...) had different firmware and, more importantly,
FEWER sectors!  They were only 147.8GB!  You know what they say
about a free lunch and too good to be true...

This meant that zpool replace failed because the
replacement drive was too small.

The question of the moment is "what to do?".

All I can think of is to:

  1) attach/create a new pool that has enough space to
     hold the existing content,
  2) copy the content from the old pool to the new one,
  3) destroy the old pool,
  4) recreate the old pool at the (slightly) smaller size, and
  5) copy the data back onto it.

Given that there are a bunch of filesystems in the pool, each
with some set of properties ..., what is the easiest way to
move the data and metadata back and forth without losing
anything, and without having to manually recreate the
metainfo/properties?
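
In rough terms I assume it will boil down to something like this
(a sketch with a made-up snapshot name; every locally-set property
would still have to be replayed by hand on the new pool):

    # take a consistent snapshot of everything first
    zfs snapshot -r tank@migrate
    # one full send per filesystem (receive creates the target)
    zfs send tank/projects@migrate     | zfs recv tank2/projects
    zfs send tank/projects/sac@migrate | zfs recv tank2/projects/sac
    # ...and so on for each filesystem...
    # capture locally-set properties so they can be re-set later
    zfs get -r -s local -H -o name,property,value all tank > /tmp/tank.props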

(Adding to the 'shrink' RFE: if I replace a pool drive with
a smaller one, and the existing content is small enough
to fit on a shrunk/resized pool, the zpool replace command
should (after prompting) simply do the work.  In this situation,
losing less than 10MB of pool space to get a healthy raidz
configuration seems to be an easy tradeoff :-)

TIA,

  -John





Re: [zfs-discuss] zfs root: legacy mount or not?

2007-05-25 Thread John Plocher

Why not simply have an SMF sequence that does

    early in boot, after / and /usr are mounted:
        create /etc/nologin (contents="coming up, not ready yet")
        enable login
    later in boot, when user filesystems are all mounted:
        delete /etc/nologin
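
A rough sketch of the two start methods, assuming a pair of
made-up transient SMF services (the dependency wiring is left out):

    # svc:/site/nologin-early -- start method, run once / and /usr are mounted
    echo "coming up, not ready yet" > /etc/nologin

    # svc:/site/nologin-clear -- start method, run after the user
    # filesystems (e.g. svc:/system/filesystem/local) are mounted
    rm -f /etc/nologin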

Wouldn't this give the desired behavior?
  -John


Eric Schrock wrote:

This has been discussed many times in smf-discuss, for all types of
login.  Basically, there is no way to say "console login for root
only". 



Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread John Plocher

I managed to correct the problem by writing a script inspired
by Chris Gerhard's blog that did a zfs send | zfs recv.  Now
that things are back up, I have a couple of lingering questions:


1) I noticed that the filesystem size information is not the
   same between the src and dst filesystem sets.  Is this
   expected behavior?


[EMAIL PROTECTED]> zfs list -r tank/projects/sac
NAMEUSED  AVAIL  REFER  MOUNTPOINT
tank/projects/sac  49.0G   218G  48.7G  /export/sac
tank/projects/[EMAIL PROTECTED]   104M  -  48.7G  -
tank/projects/[EMAIL PROTECTED]  96.7M  -  48.7G  -
tank/projects/[EMAIL PROTECTED]  74.3M  -  48.7G  -
tank/projects/[EMAIL PROTECTED]  18.7M  -  48.7G  -

[EMAIL PROTECTED]> zfs list -r tank2/projects/sac
NAME USED  AVAIL  REFER  MOUNTPOINT
tank2/projects/sac  49.3G   110G  48.6G  /export2/sac
tank2/projects/[EMAIL PROTECTED]  99.7M  -  48.6G  -
tank2/projects/[EMAIL PROTECTED]  92.3M  -  48.6G  -
tank2/projects/[EMAIL PROTECTED]  70.1M  -  48.6G  -
tank2/projects/[EMAIL PROTECTED]  70.7M  -  48.6G  -
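
(If the build supports it, the exact byte counts from zfs get -p are
easier to compare than the rounded zfs list output:)

    # -p prints exact (parseable) byte values instead of rounded ones
    zfs get -rHp used,referenced tank/projects/sac
    zfs get -rHp used,referenced tank2/projects/sac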

2) Following Chris's advice to do more with snapshots, I
   played with his cron-triggered snapshot routine:
   http://blogs.sun.com/chrisg/entry/snapping_every_minute

   Now, after a couple of days, zpool history shows almost
   100,000 lines of output (from all the snapshots and
   deletions...)

   How can I purge or truncate this log?  (It has got to be taking
   up several MB of space, not to mention the ever-increasing
   sluggishness of the command...)
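
   (For what it's worth, this is roughly how I sized it up; the
   egrep pattern is only a guess at what counts as snapshot "noise":)

       zpool history tank | wc -l                             # total entries
       zpool history tank | egrep -v 'zfs (snapshot|destroy) ' | less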


  -John

Oh, here's the script I used - it contains hardcoded zpool
and zfs info, so it must be edited to match your specifics
before it is used!  It can be rerun safely: it only sends
snapshots that haven't already been sent, so I could do
the initial time-intensive copies while the system was still
in use and then only needed a faster "resync" while down in
single-user mode.

It isn't pretty (it /is/ a perl script) but it worked :-)
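
The core of it is just full and incremental sends; a minimal
sketch (made-up snapshot names, not the script itself):

    # first pass (system still in use): full send of the baseline
    zfs send tank/projects/sac@before | zfs recv tank2/projects/sac
    # second pass (single user, nothing changing): incremental catch-up
    zfs send -i tank/projects/sac@before tank/projects/sac@after \
        | zfs recv tank2/projects/sac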

------

#!/usr/bin/perl
# John Plocher - May, 2007
# ZFS helper script to replicate the filesystems+snapshots in
# SRCPOOL onto a new DSTPOOL that was a different size.
#
#   Historical situation:
# + zpool create tank raidz c1t1d0 c1t2d0 c1t3d0
# + zfs create tank/projects
# + zfs set mountpoint=/export tank/projects
# + zfs set sharenfs=on tank/projects
# + zfs create tank/projects/...
# ... fill up the above with data...
# Drive c1t3d0 FAILED
# + zpool offline tank c1t3d0
# ... find out that replacement drive is 10,000 sectors SMALLER
# ... than the original, and zpool replace won't work with it.
#
# Usage Model:
#   Create a new (temp) pool large enough to hold all the data
#   currently on tank
# + zpool create tank2 c2t2d0 c2t3d0 c2t4d0
# + zfs set mountpoint=/export2 tank2/projects
#   Set a baseline snapshot on tank
# + zfs snapshot -r [EMAIL PROTECTED]
#   Edit and run this script to copy the data + filesystems from tank to
#   the new pool tank2
# + ./copyfs
#   Drop to single user mode, unshare the tank filesystems,
# + init s
# + zfs unshare tank
#   Shut down apache, cron and sendmail
# + svcadm disable svc:/network/http:cswapache2
# + svcadm disable svc:/system/cron:default
# + svcadm disable svc:/network/smtp:sendmail
#   Take another snapshot,
# + zfs snapshot -r [EMAIL PROTECTED]
#   Rerun script to catch recent changes
# + ./copyfs
#   Verify that the copies were successful,
# + dircmp -s /export/projects /export2/projects
# + zpool destroy tank
# + zpool create tank raidz c1t1d0 c1t2d0 c1t3d0
#   Modify script to reverse transfer and set properties, then
#   run script to recreate tank's filesystems,
# + ./copyfs
#   Reverify that content is still correct
# + dircmp -s /export/projects /export2/projects
#   Re-enable  cron, http and mail
# + svcadm enable svc:/network/http:cswapache2
# + svcadm enable svc:/system/cron:default
# + svcadm enable svc:/network/smtp:sendmail
#   Go back to multiuser
# + init 3
#   Reshare filesystems.
# + zfs share tank
#   Go home and get some sleep
#

$SRCPOOL="tank";
$DSTPOOL="tank2";

# Set various properties once the initial filesystem is recv'd...
# (Uncomment these when copying the filesystems back to their original pool)
# $props{"projects"} = ();
# push( @{ $props{"projects"} }, ("zfs set mountpoint=/export tank/projects"));
# push( @{ $props{"projects"} }, ("zfs set sharenfs=on tank/projects"));
# $props{"projects/viper"} = ();
# push( @{ $props{"projects/viper"} }, ("zfs set sharenfs=rw=bk-test:eressea:scuba:sac:viper:caboose,root=sac:viper:caboose,ro tank/projects/viper"));

sub getsnapshots(@) {
my (@filesystems) = @_;
my @snaps;
my @

Re: Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread John Plocher

eric kustarz wrote:
We specifically didn't allow the admin the ability to truncate/prune the 
log as then it becomes unreliable - ooops i made a mistake, i better 
clear the log and file the bug against zfs 



I understand - auditing means never getting to blame someone else :-)

There are things in the log that are (IMHO, and In My Particular Case)
more important than others.  Snapshot creations & deletions are "noise"
compared with filesystem creations, property settings, etc.

This seems especially true when a set of actions closes on itself - the pair

    zfs snapshot foo/[EMAIL PROTECTED]
    zfs destroy  foo/[EMAIL PROTECTED]

is (except for debugging zfs itself) a no-op.

Looking at history.c, it doesn't look like there is an easy
way to mark a set of messages as "unwanted" and compress the log
without having to take the pool out of service first.

Oh well...

  -John




Re: Success: Re: [zfs-discuss] Re: I seem to have backed myself into a corner - how do I migrate filesyst

2007-06-01 Thread John Plocher

Mark J Musante wrote:

Note that if you use the recursive snapshot and destroy, only one line is



My "problem" (and it really is /not/ an important one) was that
I had a cron job that every minute did

min=`date "+%d"`
snap="$pool/[EMAIL PROTECTED]"
zfs destroy "$snap"
zfs snapshot "$snap"

and, after a couple of days of two history entries every minute,
the pool's history log seemed quite full (but not at capacity...)

There were no clones to complicate things...
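
(The recursive form Mark mentions would look about like this; it
mostly pays off when the pool has many filesystems, since each -r
command shows up as a single history entry:)

    snap="$pool@everyminute"     # hypothetical snapshot name
    zfs destroy -r "$snap"
    zfs snapshot -r "$snap"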

  -John



Re: [zfs-discuss] [arc-discuss] Take Three: PSARC 2007/171 ZFS Separate Intent Log

2007-07-09 Thread John Plocher
>> It seems to me that the URL above refers to the publishing
>> materials of *historical* cases. Do you think the case in hand
>> should be considered historical ?


In this context, "historical" means any case that was not originally
"open", and so cannot be presumed to be clear of any proprietary info.

For this particular case, I don't expect there to be any such info,
so the process of opening it *should* be trivial - probably just 
changing any proprietary notices to Copyrights...
> 
> Yes, this was what I was asked to do. Looking more closely it doesn't look
> too bad. I'll start this process.

Once the case has been cleaned up and marked "open", it will be mirrored
onto OS.o within 24 hours.


   -John


Re: [zfs-discuss] ZFS Solaris 10u5 Proposed Changes

2007-09-18 Thread John Plocher
Many/most of these are available at

http://www.opensolaris.org/os/community/arc/caselog/YYYY/CCC

replacing YYYY/CCC with the case year and number below, as in

http://www.opensolaris.org/os/community/arc/caselog/2007/171

for the second one below.  I'm not sure why the first one (2007/142) isn't
there - I'll check tomorrow...


-John


Kent Watsen wrote:
> How does one access the PSARC database to lookup the description of 
> these features?
> 
> Sorry if this has been asked before! - I tried google before posting 
> this  :-[
> 
> Kent
> 
> 
> George Wilson wrote:
>> ZFS Fans,
>>
>> Here's a list of features that we are proposing for Solaris 10u5. Keep 
>> in mind that this is subject to change.
>>
>> Features:
>> PSARC 2007/142 zfs rename -r
>> PSARC 2007/171 ZFS Separate Intent Log
>> PSARC 2007/197 ZFS hotplug
>> PSARC 2007/199 zfs {create,clone,rename} -p
>> PSARC 2007/283 FMA for ZFS Phase 2
>> PSARC/2006/465 ZFS Delegated Administration
>> PSARC/2006/577 zpool property to disable delegation
>> PSARC/2006/625 Enhancements to zpool history
>> PSARC/2007/121 zfs set copies
>> PSARC/2007/228 ZFS delegation amendments
>> PSARC/2007/295 ZFS Delegated Administration Addendum
>> PSARC/2007/328 zfs upgrade
>>
>> Stay tuned for a finalized list of RFEs and fixes.
>>
>> Thanks,
>> George



Re: [zfs-discuss] ZFS Mountroot and Bootroot Comparison

2007-10-05 Thread John Plocher
Lori Alt wrote:
> I'm not surprised that having /usr in a separate pool failed.
> The design of zfs boot largely assumes that root, /usr, and
> /var are all on the same pool, and it is unlikely that we would
> do the work to support any other configuration any time soon.


This seems, uhm, undesirable.  I could understand if the initial
*implementation* chose to make these simplifying assumptions, but
if the *architecture* or *design* of the feature requires this,
then, IMO, this project needs a TCR to not be done that way.

Certainly, many of us will be satisfied with an all-in-one pool,
just as we are today with an all-in-one filesystem, so this
makes sense as a first step.  But there needs to be the
presumption that the next steps towards multiple pool support
are possible without having to re-architect or re-design the
whole zfs boot system.

   -John




Re: [zfs-discuss] ZFS Mountroot and Bootroot Comparison

2007-10-05 Thread John Plocher
Nicolas Williams wrote:
> I'm curious as to why you think this

The characteristics of /, /usr and /var are quite different,
from a usage and backup requirements perspective:

/ is read-mostly, but contains critical config data.
/usr is read-only, and
/var (/var/mail, /var/mysql, ...) can be high volume read/write.

A scenario with / on a pair of mirrored USB sticks, /usr on
DVD media with a RAM-based cache, and /var & /export/home
on a large, wide, striped/mirrored/raided pool of its own isn't
too far-fetched an idea.
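
(Purely as illustration, with invented device names and ignoring
the boot-support details, the pools for that split might be laid
out something like this:)

    zpool create rootpool mirror c5t0d0s0 c6t0d0s0     # / on two USB sticks
    zpool create datapool mirror c1t1d0 c1t2d0 mirror c1t3d0 c1t4d0
    zfs create datapool/var                            # high-volume read/write
    zfs create datapool/export
    zfs create datapool/export/home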

   -John


Re: [zfs-discuss] [osol-code] /usr/bin and /usr/xpg4/bin differences

2007-12-18 Thread John Plocher
Sasidhar Kasturi wrote:
> Thank you,
>  Is it that /usr/bin binaries are more advanced than that of 
> /xpg4 things or .. the extensions of the /xpg4 things?

They *should* be the same level of "advancement", but each has a
different set of promises and expectations it needs to live up to...

> 
> If i want to make some modifications in the code.. Can i do it for 
> /xpg4/bin commands or .. i should do it for /usr/bin commands??


If you are doing this "just for yourself", it doesn't matter - fix
the one you use.  If you intend to push these changes back into the
OS.o source base, you will need to make the changes to both (and,
possibly interact with the OpenSolaris ARC Community if your changes
affect the architecture/interfaces of the commands).

In the case of df, I'm not at all sure why the two commands are
different. (I'm sure someone else will chime in and educate me :-)
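
(It's easy to compare them side by side, or to prefer the
standards-conforming versions for a whole script by putting
/usr/xpg4/bin first in PATH:)

    /usr/bin/df -k /tmp          # traditional SunOS behavior
    /usr/xpg4/bin/df -k /tmp     # XPG4 / standards-conforming behavior
    # or, for everything a script runs:
    PATH=/usr/xpg4/bin:/usr/bin:$PATH; export PATH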

-John
