Kees Nuyt wrote:
On Fri, 19 Jun 2009 11:50:07 PDT, stephen bond
<no-re...@opensolaris.org> wrote:
Kees,
is it possible to get at least the contents of /export/home? That is
supposedly a separate file system.
That doesn't mean the data is in one particular spot on the
disk. The blocks of the ZFS filesystems can be interspersed.
You can try a recovery tool that supports file carving. This technique
looks for files based on their signatures while ignoring damaged,
nonexistent, or unsupported partition and/or filesystem info. Works
best on small files, but gets worse as file sizes increase (or more
accurately, gets worse as file fragmentation increases). Should work
well for files smaller than the stripe size, but possibly not at all for
compressed files unless you are using a data recovery app that
understands ZFS compression formats (I don't know of any myself).
Disable or otherwise do not run scrub or any other command that may
write to the array until you have exhausted your recovery options or no
longer care to keep trying.
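To make the technique concrete, here is a rough, hypothetical sketch of
signature-based carving in Python; the image path, the JPEG-only scope,
and the size cap are my own assumptions, and real carvers handle many
formats far more carefully:

#!/usr/bin/env python3
# Toy illustration of signature carving: scan a raw disk image for JPEG
# start/end markers and dump whatever lies between them. It will miss
# fragmented files entirely, which is exactly the limitation noted above.

JPEG_START = b"\xff\xd8\xff"   # JPEG SOI marker (plus first APPn byte)
JPEG_END   = b"\xff\xd9"       # JPEG EOI marker

def carve_jpegs(image_path, out_prefix="carved", max_size=20 * 1024 * 1024):
    data = open(image_path, "rb").read()   # fine for a small test image
    count = 0
    pos = data.find(JPEG_START)
    while pos != -1:
        end = data.find(JPEG_END, pos)
        if end == -1:
            break
        end += len(JPEG_END)
        if end - pos <= max_size:           # ignore absurdly large hits
            with open("%s_%04d.jpg" % (out_prefix, count), "wb") as out:
                out.write(data[pos:end])
            count += 1
        pos = data.find(JPEG_START, end)
    return count

if __name__ == "__main__":
    # e.g. an image made with something like:
    #   dd if=/dev/rdsk/c0t0d0s0 of=disk.img bs=1024k
    print(carve_jpegs("disk.img"), "candidate JPEGs written")

Again, just a sketch of the idea; real tools know many more signatures
and validate what they find.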
EasyRecovery supports file carving, as do RecoverMyFiles and TestDisk;
I'm sure there are others too. Not all programs actually call it file
carving. The effectiveness of the programs may vary, so it is worthwhile
to try any demo versions. The programs will need direct block-level
access to the drive; network shares won't work. You can run the recovery
software on whatever OS it requires, and based on what you are asking
for, you don't need to seek recovery software that is explicitly
Solaris compatible.
is there a way to look for files using some low-level disk reading
tool? If you are old enough to remember the 80s, there was stuff like
PCTools that could read anywhere on the disk.
I am old enough. I was the proud owner of a 20 MByte
hard disk back then (~1983).
Disks were so much smaller that you could practically scroll
through most of the contents in a few hours.
The on-disk data structures are much more complicated now.
I recall using a 12.5 MHz 286 Amdek (Wyse) PC with a 20 MB 3600 rpm
Miniscribe MFM drive. A quick Google search for this item says its
transfer rate specs were 0.625 MB/sec, which sounds about right IIRC (if
you chose the optimal interleave when formatting). If you chose the wrong
interleave, performance suffered, but I also recall the drive made less
noise. I think I even ran that drive at a suboptimal interleave for a
while simply because it was quieter...you could say it was an early,
indirect form of AAM (acoustic management).
To put that drive's capacity and transfer rate in perspective against a
modern drive: you could theoretically fill the 20 MB drive in
20/0.625 = 32 seconds. A 500 GB (base 10) SATA2 drive (WD5000AAKS) has an
average write rate of 68 MB/sec, so roughly 466 GiB * 1024 / 68 is about
7,000 seconds, or nearly two hours, to fill. Capacity growth is
significantly outpacing read/write performance, which I've seen summed
up as: modern drives are becoming the tapes of yesteryear.
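For what it's worth, the same back-of-the-envelope arithmetic in a few
lines of Python (the figures are the ones quoted above, so treat the
results as approximations):

old_capacity_mb = 20              # 20 MB MFM drive
old_rate_mb_s   = 0.625           # quoted transfer rate, MB/sec
new_capacity_b  = 500 * 10**9     # 500 GB (base 10) WD5000AAKS
new_rate_mib_s  = 68              # average write rate

print(old_capacity_mb / old_rate_mb_s)            # -> 32.0 seconds
print(new_capacity_b / 2**20 / new_rate_mib_s)    # -> ~7012 seconds, ~2 hours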
Those data recovery tools took advantage of the filesystem's design:
deleting a file only marked the directory entry as free (overwriting the
first character of the filename) and released its cluster chain in the
FAT, leaving the data itself in place. When NTFS came out, it took a few
years for unerase and general-purpose NTFS recovery tools to appear.
This was actually a concern of mine and one reason I delayed using NTFS
by default on several Windows 2000/XP systems. I waited until good
recovery tools were available before I committed to the new filesystem:
despite it being journaled, there initially just weren't any recovery
tools available in case things went horribly wrong, live CDs were not
yet available, and there weren't any read/write NTFS tools for DOS or
Linux. In short, graceful degradation and the availability of recovery
tools are important in selecting a filesystem, particularly when it is
used on a desktop that may not have regular backups.
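Purely as an illustration of why FAT undelete was so easy (a toy sketch,
not any actual tool, assuming the classic FAT16 8.3 directory entry
layout):

import struct

DELETED_MARKER = 0xE5   # first byte of a directory entry for a deleted file
ENTRY_SIZE = 32         # classic FAT short directory entries are 32 bytes

def list_deleted_entries(directory_bytes):
    """Yield (name, ext, first_cluster, size) for deleted 8.3 entries."""
    for off in range(0, len(directory_bytes) - ENTRY_SIZE + 1, ENTRY_SIZE):
        entry = directory_bytes[off:off + ENTRY_SIZE]
        if entry[0] != DELETED_MARKER:
            continue
        name = b"?" + entry[1:8].rstrip()   # the first character is lost
        ext  = entry[8:11].rstrip()
        # bytes 26-27: first cluster (low word), bytes 28-31: file size
        first_cluster, size = struct.unpack("<HI", entry[26:32])
        yield name, ext, first_cluster, size

Everything but the first byte of the entry (and the cluster chain in the
FAT itself) is still on disk, so an unerase tool can usually put the file
back together as long as its clusters haven't been reused.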
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss