Hey Nils & everyone

Finally getting around to answering Nils' mail properly - only a month
late!  I thought I'd also let everyone else know what's been going on
with the service, since 0.10 was released in January this year.


On Tue, 2008-06-24 at 14:40 -0700, Nils Goroll wrote:
> first of all: Tim, thank you very much for your very useful
> auto-snapshot script. I believe this really is what every ZFS user
> needs.

Glad you like it!

> As a laptop user, I wanted to make sure that I get snapshots taken
> even if my machine is down every night, so I added at(1) scheduling
> support to zfs-auto-snapshot, where each job re-schedules its next
> execution, thus ending up with a cron-like behaviour.

Okay, after careful consideration, I don't think I'm going to add this
to the code.  I absolutely understand the reasoning behind it, but if
you're powering down a laptop overnight, you don't want a load of
snapshots taken after power-on, one for every missed cron job - you
just want one (assuming your laptop hasn't magically changed
data-on-disk while it was asleep!)

The other thing that worries me is that it exposes the user to too
much implementation detail: users would need to know about at(1)
timespecs. For the same reason that I didn't add support for
user-supplied crontab time strings, I'm not sure this is a good idea
either.



All that said, the underlying idea - doing something sensible about
cron jobs that fire while the machine is down - is a good one. So
instead, I think we should add a new "zfs/interval" property value:
"none". In the manifest, instead of setting the interval to "hours",
"months", "days" or "minutes", setting "none" would mean that no cron
job gets scheduled for that instance of the service.
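
To make that concrete, switching an instance over would look something
like this (a sketch only - I'm assuming the interval property stays an
astring, like the existing values):

  $ svccfg -s svc:/system/filesystem/zfs-auto-snapshot:login \
      setprop zfs/interval = astring: none
  $ svcadm refresh svc:/system/filesystem/zfs-auto-snapshot:login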

Why's that good? Well, other scripts, events, etc. could still fire
the method script manually, e.g.:

$ /lib/svc/method/zfs-auto-snapshot \
    svc:/system/filesystem/zfs-auto-snapshot:login

which would cause snapshots to be taken, under the policies set down by
that instance.

Perhaps you'd like them to fire on login to a desktop, on booting your
laptop, or on connecting to a network - really, whatever event you
think is interesting.
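
For example, a login hook could be as simple as the sketch below
(illustrative only - how you wire it up, and whether it runs with
enough privilege to take snapshots, is up to you):

  # illustrative: fire the "login" instance's snapshot policies from
  # a login script such as ~/.profile
  if [ -x /lib/svc/method/zfs-auto-snapshot ]; then
          /lib/svc/method/zfs-auto-snapshot \
              svc:/system/filesystem/zfs-auto-snapshot:login
  fi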

This would mean you'd still get all the functionality the service
provides (rolling snapshots, off-site send/recv, avoiding scrubs -
though that bug's fixed now), and you can control which filesystems
get included in that instance as usual, either via the "//" value of
the "zfs/fs-name" property or by hard-coding them in the instance
itself.
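
With the "//" setting, per-filesystem inclusion is driven by the
com.sun:auto-snapshot user property, so opting a filesystem in or out
is just (dataset names here are examples, obviously):

  $ zfs set com.sun:auto-snapshot=true tank/home
  $ zfs set com.sun:auto-snapshot=false tank/tmp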


As for the other changes you suggested, I've already put some slightly
better svcprop caching code in, just not your implementation
(something about the block comment:

 ## NOTE/WARNING: THIS IS NOT GUARANTEED TO BE PORTABLE
 ## TO OTHER SHELLS ! Depends on whether your shell runs
 ## the while loop in a subshell or not !

made me a bit nervous!  :-)  I think the implementation I've got is
okay, but I'm still testing it.
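
(For anyone wondering, that warning is about the classic gotcha where
piping into a "while read" loop runs the loop in a subshell in some
shells, so variables set inside the loop don't survive it.)  The shape
of what I've got is roughly the sketch below - names are illustrative
rather than the actual 0.11 code. The idea is to call svcprop(1) once
per instance and pull individual values out of the cached copy:

  # cache the instance's whole property list in one svcprop(1) call
  PROPS=$(svcprop ${SMF_FMRI})

  # svcprop prints "name type value"; pull one value from the cache
  get_prop() {
          echo "${PROPS}" | awk -v prop="$1" '$1 == prop { print $3 }'
  }

  INTERVAL=$(get_prop zfs/interval)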



The other changes that will appear in 0.11 (which is nearly done) are:


 * changing the way the default "//" fs-name works - I've decided to
   make the default instances that use this feature always snapshot
   recursively, for performance reasons.

   In my day-to-day use of this feature, I nearly always end up
   snapshotting all child filesystems, so the canned instances now set
   the "zfs/snapshot-children" property. When searching for
   filesystems that should be snapshotted, we only consider ones with
   a locally-set com.sun:auto-snapshot property, rather than an
   inherited one (see the sketch after this list). The downside is
   that if a filesystem further down the hierarchy sets
   com.sun:auto-snapshot=false, it'll still get included in the
   snapshots, because of the recursive flag. Turning off
   "zfs/snapshot-children" restores the old behaviour, where we
   manually walk the dynamically generated list of filesystems to
   snapshot - which is slow for large numbers of children.

 * Slightly saner preinstall/postremove scripts (though they still
   suck)

 * Better shell style - I figured if I'm aiming to get this into ON
   someday, I might as well clean up the code now

 * Bugfix reported by "Dan on March 13, 2008 at 02:29":
   http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10#comment-1205418551000

 * RBAC - I've been annoyed for a while that the service runs as root
   all the time, and since my new day-job actually requires me to read
   up on RBAC anyway, I figured now's a good time to have it do the
   right thing.

   At first look, I'm thinking of creating a "ZFS Automatic Snapshot"
   profile and a default user (or role? can roles run cron jobs?),
   which would have similar abilities to the "ZFS File System
   Management" profile, but would also allow the user to stop/start
   the service and run a given backup script. I'm still reading up on
   RBAC, so the above may change!
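
To illustrate the locally-set property check from the first item
above, the idea is something along these lines (a sketch - the pool
name is illustrative, and the exact invocation in 0.11 may differ):

  # list datasets where com.sun:auto-snapshot is set locally,
  # ignoring values merely inherited from a parent
  $ zfs get -r -H -s local -o name,value com.sun:auto-snapshot tank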


Finally, in terms of wider exposure for this service - I'm talking to
some OpenSolaris desktop guys about potentially using it in the next
Indiana release:

http://mail.opensolaris.org/pipermail/indiana-discuss/2008-July/007916.html
-> http://www.opensolaris.org/os/project/indiana/resources/problem_statement/

The problem statement talks about:

DSK-5: Provide a graphical interface to allow the user to regularly
       back up their data using ZFS snapshots. The user should be able
       to browse their snapshots over time, and store them remotely if
       desired.

so I think it'd be really cool if they could use this service on the
backend, but it's still under discussion.

Anyway, long mail - hope this is of interest!

        cheers,
                        tim



