On 7/4/22 05:36, B.M. wrote:
Hello
I have been creating encrypted backups on Blu-ray discs with a bash script
for some years, but recently I encountered a problem mounting some of these
discs (though not all of them: my last backups consist of two discs each,
and for each backup I cannot mount the first disc but I can mount the
second one, which seems strange). The problem is not date-related, and the
discs are not too old.
In detail, I use the following commands:
IMGFILE=/home/TMP_BKP/backup.img
IMGSIZE=24064000K
IMGLOOP=`losetup -f`
touch $IMGFILE
truncate -s $IMGSIZE $IMGFILE
losetup $IMGLOOP $IMGFILE
cryptsetup luksFormat --cipher aes-xts-plain64 $IMGLOOP
cryptsetup luksOpen $IMGLOOP BDbackup
mkudffs -b 2048 --label $1 /dev/mapper/BDbackup
mount -t udf /dev/mapper/BDbackup /mnt/BDbackup
... then I create my compressed backup files ...
umount /mnt/BDbackup
cryptsetup luksClose /dev/mapper/BDbackup
losetup -d $IMGLOOP
growisofs -dvd-compat -Z /dev/dvd=$IMGFILE; eject
In order to mount the disc, I use:
cryptsetup luksOpen -r /dev/dvd BDbackup
mount -t udf /dev/mapper/BDbackup /mnt/BDbackup
Unfortunately, this fails now for some of my discs and also for the last image
file I created (not deleted yet...):
mount: /mnt/BDbackup: wrong fs type, bad option, bad superblock on
/dev/mapper/BDbackup, missing codepage or helper program, or other error.
And dmesg shows:
UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found
UDF-fs: Scanning with blocksize 2048 failed
UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found
UDF-fs: Scanning with blocksize 4096 failed
Any ideas what may happen here?
Thank you.
Best,
Bernd
That approach is complex and Linux-specific. I am unclear whether contents
put into a UDF filesystem mounted on a dm-crypt volume inside a loopback
device, mastered and burned to BD-R, opened directly with cryptsetup(1),
and mounted result in an exact copy of the starting contents (data and
metadata). I would want to build a script to do round-trip validation.
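For the burn itself, a first check might be a byte-for-byte comparison of
the disc against the image file (a sketch, assuming the image is still on
disk and the drive is /dev/dvd, as in your script):

# compare the burned disc against the source image, up to the
# image size (the drive may return padding beyond it)
IMGFILE=/home/TMP_BKP/backup.img
IMGBYTES=`stat -c %s $IMGFILE`
if cmp -n $IMGBYTES /dev/dvd $IMGFILE ; then
    echo "disc matches image"
else
    echo "ERROR: disc differs from image" 1>&2
    exit 1
fi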
I prefer to do filesystem archive operations at the filesystem level, and
to use standard tools such as tar(1), gzip(1), rsync(1), and ccencrypt(1).
My scripts typically produce an encrypted file (plus an MD5 file and a
SHA256 file), which I then burn to disc using whatever standard tool the
platform provides. Later, I mount the disc with that platform's standard
tool and use standard tools to access the content.
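For example, a minimal sketch of that style (the source path and output
names are illustrative):

# archive, compress, encrypt, and checksum a directory tree
SRC=/home/data
OUT=/home/TMP_BKP/backup.tar.gz
tar -czf $OUT $SRC
ccencrypt $OUT                  # replaces $OUT with $OUT.cpt; prompts for key
md5sum $OUT.cpt > $OUT.cpt.md5
sha256sum $OUT.cpt > $OUT.cpt.sha256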
Suggestions:
1. Put your commands into one or more scripts. I try to keep my
scripts simple, each doing some incremental step of an overall process.
I can run them individually by hand, or feed them into higher-level
scripts. Traditional Bourne syntax works well in many cases. I upgrade
to Perl if and when I need more power.
2. Assuming Bourne, enable the 'errexit' and 'nounset' options in the
script:
set -o errexit
set -o nounset
3. Assuming Bourne, enable the 'xtrace' option during development.
Once the script is working reliably, you can comment it out:
set -o xtrace
4. Put good comments in the script. Once a backup script like this is
working, I am loath to touch it. But when I must, climbing the learning
curve again is far easier with good comments.
5. For every command issued that makes some change to the system,
include additional commands to check that the command succeeded. The
script should stop and dump useful debugging information if a check fails.
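For example (a sketch, using the device names from the script above):

mount -t udf /dev/mapper/BDbackup /mnt/BDbackup
# verify the mount actually took effect before proceeding
if ! mountpoint -q /mnt/BDbackup ; then
    echo "ERROR: mount of /mnt/BDbackup failed" 1>&2
    dmesg | tail 1>&2
    exit 1
fi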
6. Some shell commands may return before their changes have fully
propagated throughout the system, but the script will blindly charge ahead
at full speed regardless, resulting in race-condition bugs. Ironically, the
faster the computer, the more likely the problem. Ideally, devise commands
and checks that are smart enough to accommodate such delays. A quick and
dirty workaround is to add a short delay between the command and the check:
sleep 3
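Better, poll for the change with a bounded wait, for example (a sketch; the
device name follows the script above):

# wait up to 10 seconds for the mapper device to appear
tries=0
while [ ! -b /dev/mapper/BDbackup ] ; do
    tries=`expr $tries + 1`
    if [ $tries -gt 10 ] ; then
        echo "ERROR: /dev/mapper/BDbackup never appeared" 1>&2
        exit 1
    fi
    sleep 1
done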
7. Add option processing to the script. Provide a "-n" option ("dry run").
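A minimal sketch of such an option, using getopts and a wrapper function:

# -n ("dry run"): print each state-changing command instead of running it
DRYRUN=no
while getopts n opt ; do
    case $opt in
    n)  DRYRUN=yes ;;
    *)  echo "usage: $0 [-n] label" 1>&2 ; exit 1 ;;
    esac
done
shift `expr $OPTIND - 1`

run() {
    echo "+ $*"
    if [ $DRYRUN = no ] ; then
        "$@"
    fi
}

# then wrap each state-changing command, e.g.:
run cryptsetup luksOpen $IMGLOOP BDbackup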
8. Decompose the overall script into smaller pieces, and work each
piece in turn.
9. Refactor common functionality into reusable components (e.g.
functions, libraries). Beware that KISS coding techniques such as
cut-and-paste can be easier to write, debug, and maintain than advanced
programming techniques.
10. Idempotent scripts are nice, especially when there are failures part
way through a long process:
https://en.wikipedia.org/wiki/Idempotent
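For example, each step can first check whether its work is already done (a
sketch; names follow the script above):

# safe to re-run after a partial failure
[ -d /mnt/BDbackup ] || mkdir /mnt/BDbackup
if ! cryptsetup status BDbackup > /dev/null 2>&1 ; then
    cryptsetup luksOpen $IMGLOOP BDbackup
fi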
11. Do round-trip testing -- e.g. back up, restore to a side location, and
confirm the two are identical (data and metadata).
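For example (a sketch; paths follow the tar/ccencrypt sketch above):

# restore to a side location
mkdir -p /tmp/restore
( cd /tmp/restore && ccdecrypt -c /home/TMP_BKP/backup.tar.gz.cpt | tar -xzf - )
# compare: -a checks metadata, -c forces content comparison, -n changes
# nothing, -i itemizes any differences; no output means the trees match
rsync -a -c -n -i /home/data/ /tmp/restore/home/data/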
12. Write an automated test suite for the script and/or its components.
Perl has good support for test-driven development.
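Even a trivial shell-level smoke test helps (a sketch, assuming the script
is named backup.sh and has the -n option above):

# smoke test: dry-run mode must exit successfully
if ./backup.sh -n testlabel ; then
    echo "PASS"
else
    echo "FAIL: dry run exited non-zero" 1>&2
    exit 1
fi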
13. Write documentation for the script and/or its components. Perl has
built-in support for embedded documentation, and automation for converting
it into manual pages.
David