"zpool history" has shed a little light. Lots actually.

The sub-dataset in question was indeed created, and around the time ludelete
was run there are entries along the lines of "zfs destroy -r pond/zones/zonename".
There are no precise details (names, mountpoints) about the destroyed datasets -
I think such details should be included in the future.
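
For reference, this is roughly how the short and the detailed forms of the
history can be pulled - the -i and -l options add the internal events and
the [user ... on host] annotations quoted below:

  # plain history: just the commands that were run
  zpool history pond

  # detailed history: internal events, txg/dataset numbers, user and host
  zpool history -il pond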

However, the detailed log does have pointers to "txg" and "dataset" numbers.
Can they help in recovering data? Perhaps the named transaction groups can be
rolled back?

According to the same zpool history, my target is recovery of "dataset = 370",
which has the required mountpoint. The others are snapshots, which are
secondary targets.
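
The relevant records for that dataset number are easy to pull out of the
detailed log with a plain grep, e.g.:

  zpool history -il pond | grep 'dataset = 370'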

Namely, the detailed log displays this:

2009-05-27.22:34:24 [internal snapshot txg:710732] dataset = 1627 [user root on thumper]
2009-05-27.22:34:24 zfs snapshot pond/zones/dummy-server-j...@snv_114 [user root on thumper:global]
2009-05-27.22:34:24 [internal create txg:710734] dataset = 1632 [user root on thumper]
2009-05-27.22:34:24 zfs clone pond/zones/dummy-server-j...@snv_114 pond/zones/DUMMY-server-java-snv_114 [user root on thumper:global]

At this point the lucreate operation froze; it was found still hanging in the
morning. As mentioned in my previous post, ludelete was then issued: it first
destroyed the clone and snapshot created by lucreate, and then went on to
massacre civilian datasets ;)

2009-05-28.11:43:49 [internal destroy_begin_sync txg:712314] dataset = 1632 [user root on thumper]
2009-05-28.11:43:52 [internal destroy txg:712316] dataset = 1632 [user root on thumper]
2009-05-28.11:43:52 [internal reservation set txg:712316] 0 dataset = 0 [user root on thumper]
2009-05-28.11:43:52 zfs destroy -r pond/zones/DUMMY-server-java-snv_114 [user root on thumper:global]
2009-05-28.11:43:52 [internal destroy txg:712318] dataset = 1627 [user root on thumper]
2009-05-28.11:43:53 zfs destroy -r pond/zones/dummy-server-j...@snv_114 [user root on thumper:global]

Main destruction was here: pond/zones/las 

2009-05-28.11:43:58 [internal destroy txg:712320] dataset = 425 [user root on thumper]
2009-05-28.11:43:58 zfs destroy -r pond/zones/las [user root on thumper:global]
2009-05-28.11:43:58 [internal destroy txg:712322] dataset = 459 [user root on thumper]
2009-05-28.11:43:59 [internal destroy_begin_sync txg:712323] dataset = 370 [user root on thumper]
2009-05-28.11:44:03 [internal destroy txg:712325] dataset = 370 [user root on thumper]
2009-05-28.11:44:03 [internal reservation set txg:712325] 0 dataset = 0 [user root on thumper]
2009-05-28.11:44:04 [internal destroy txg:712326] dataset = 421 [user root on thumper]
2009-05-28.11:44:04 [internal destroy txg:712327] dataset = 455 [user root on thumper]
2009-05-28.11:44:05 [internal destroy txg:712328] dataset = 411 [user root on thumper]

It also took a bite at pond/zones/ldap03, but that zone is intact (though
possibly missing a snapshot - its set of snapshots differs from those
available to the other zones; a quick check is sketched after the log below);
then the massacre was aborted:

2009-05-28.11:44:05 zfs destroy -r pond/zones/ldap03 [user root on thumper:global]
2009-05-28.11:44:06 [internal destroy txg:712330] dataset = 445 [user root on thumper]
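
To check whether ldap03 really lost a snapshot, its snapshot list can be
compared against that of a sibling zone; a rough sketch (the second zone
name is just a placeholder):

  zfs list -r -t snapshot -o name pond/zones/ldap03
  zfs list -r -t snapshot -o name pond/zones/someotherzone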

//Jim

PS: I guess I'm up for filing an RFE: "zfs destroy" should have an interactive
option, perhaps (un-)set by default via an environment variable or by the
presence of a terminal console (vs. automated scripted usage in installers,
patches, crontabs, etc.). Then ludelete would not be so stupid as to destroy
a user's data. How "enterprise" is that? :(
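
Until such an option exists, something similar could be faked with a wrapper
placed ahead of the real binary in PATH - a rough sketch only, assuming the
real zfs lives in /usr/sbin and using tty(1) to tell an interactive session
from scripted use:

  #!/bin/sh
  # Rough sketch of an "interactive destroy" guard - NOT an existing zfs
  # feature; install it earlier in PATH than the real /usr/sbin/zfs.
  if [ "$1" = "destroy" ] && tty -s ; then
          echo "About to run: zfs $*"
          echo "Proceed? (y/n):"
          read ans
          [ "$ans" = "y" ] || exit 1
  fi
  exec /usr/sbin/zfs "$@"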