On Jan 05 10:58:02, o...@drijf.net wrote:
> On Mon, Jan 05, 2015 at 10:19:54AM +0100, Jan Stary wrote:
> 
> > This is a daily mail from my Alix router.
> > I do a dump in daily.local (see below)
> > and most of the time it works just fine.
> > Occasionally though, the DUMP fails, saying
> > 
> > >   DUMP: End of tape detected
> > >   DUMP: Volume 1 completed at: Mon Jan  5 01:30:44 2015
> > >   DUMP: Volume 1 took 0:00:07
> > >   DUMP: Volume 1 transfer rate: 2101 KB/s
> > >   DUMP: Change Volumes: Mount volume #2
> > >   DUMP: fopen on /dev/tty fails: Device not configured
> > >   DUMP: The ENTIRE dump is aborted.
> > 
> > That puzzles me, as I dump to stdout,
> > redirecting to a file (see below).
> > 
> > (I vaguely remember that the reason I switched from
> > "dump -f file.dump ..." to "dump -f - ... > file.dump"
> > was that I was advised here by a developer about
> > the tape legacy of dump, but I forgot what exactly
> > the problem was then and can't find it in the archives.)
> > 
> > Why would "dump -f -  ... > file.dump" think
> > that it reached an end of tape?
> 
> Because dump is a bit dumb. You need to use -a, see man page.

But I do, see the code below.
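
For reference, the relevant line in the script below is

        dump -$l -a -u -f - $fs > $f 2> $BKPLOG

so -a, which bypasses the tape length calculations, is passed on every run.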

        Jan

> > 
> > On Jan 05 01:31:08, r...@gw.stare.cz wrote:
> > > OpenBSD 5.3-current (GENERIC) #2: Thu Jun 13 00:04:14 MDT 2013
> > >     dera...@i386.openbsd.org:/usr/src/sys/arch/i386/compile/GENERIC
> > > 
> > >  1:31AM  up 1 day, 16:35, 0 users, load averages: 1.61, 0.66, 0.34
> > > 
> > > Running daily.local:
> > > 
> > > Backing up into /backup/gw.stare.cz
> > > 
> > > errors dumping /var/log:
> > >   DUMP: Date of this level 0 dump: Mon Jan  5 01:30:31 2015
> > >   DUMP: Date of last level 0 dump: the epoch
> > >   DUMP: Dumping /dev/rwd0g (/var/log) to standard output
> > >   DUMP: mapping (Pass I) [regular files]
> > >   DUMP: mapping (Pass II) [directories]
> > >   DUMP: estimated 31599 tape blocks.
> > >   DUMP: Volume 1 started at: Mon Jan  5 01:30:37 2015
> > >   DUMP: dumping (Pass III) [directories]
> > >   DUMP: dumping (Pass IV) [regular files]
> > >   DUMP: End of tape detected
> > >   DUMP: Volume 1 completed at: Mon Jan  5 01:30:44 2015
> > >   DUMP: Volume 1 took 0:00:07
> > >   DUMP: Volume 1 transfer rate: 2101 KB/s
> > >   DUMP: Change Volumes: Mount volume #2
> > >   DUMP: fopen on /dev/tty fails: Device not configured
> > >   DUMP: The ENTIRE dump is aborted.
> > > -rw-------  1 hans  wheel   4.5M Jan  5 01:30 dump.home.0
> > > -rw-------  1 hans  wheel  53.1M Jan  5 01:30 dump.root.0
> > > -rw-------  1 hans  wheel   7.7M Jan  5 01:30 dump.var.0
> > > -rw-------  1 hans  wheel  14.3M Jan  5 01:30 dump.var.log.0
> > > -rw-------  1 hans  wheel   820K Jan  5 01:30 dump.var.spool.0
> > > -rw-------  1 root  wheel   5.2K Jan  5 01:30 root-201501050130.tar.gz
> > > -rw-------  1 root  wheel   1.3M Jan  5 01:31 etc-201501050130.tar.gz
> > > -rw-------  1 root  wheel   481K Jan  5 01:31 var-backups-201501050131.tar.gz
> > > -rw-------  1 root  wheel   4.2K Jan  5 01:31 var-named-201501050131.tar.gz
> > > 
> > > Checking subsystem status:
> > > 
> > > disks:
> > > Filesystem  1K-blocks      Used     Avail Capacity  Mounted on
> > > /dev/wd0a      204542     53740    140576    28%    /
> > > /dev/wd0d      608686    190580    387672    33%    /usr
> > > /dev/wd0e      251822     28258    210974    12%    /usr/local
> > > /dev/wd0f      251822      7010    232222     3%    /var
> > > /dev/wd0g      251822     32156    207076    13%    /var/log
> > > /dev/wd0h     1024526         2    973298     0%    /tmp
> > > /dev/wd0i      299054      4212    279890     1%    /home
> > > /dev/wd0j     1024526    795130    178170    82%    /backup
> > > 
> > > Last dump(s) done (Dump '>' file systems):
> > >   /dev/rwd0a      (      ) Last dump: Level 0, Date Mon Jan  5 01:30
> > >   /dev/rwd0f      (      ) Last dump: Level 0, Date Mon Jan  5 01:30
> > >   /dev/rwd0g      (      ) Last dump: Level 6, Date Sun Jan  4 01:30
> > >   /dev/rwd0i      (      ) Last dump: Level 0, Date Mon Jan  5 01:30
> > 
> > 
> >     Jan
> > 
> > 
> > 
> > #!/bin/sh
> > 
> > umask 077
> > 
> > err() {
> >     echo "$@" >&2
> > }
> > 
> > # We distinguish two kinds of backups:
> > # BKPDUMP are dump(8)s of entire filesystems - level 0
> > # on Monday mornings, incrementals during the week.
> > # These are typically very big, and we store them to
> > # a dedicated backup disk.
> > # BKPTAR are tarballs of certain directories (/etc),
> > # that are typically much smaller and we rotate them.
> > # TODO: rotate the old ones out of existence.
> > # If BKPSCP is defined, we also scp them there. This
> > # requires an unattended login via a ssh key.
> > 
> > BKPUSR=hans
> > BKPGRP=wheel
> > BKPLOG=/tmp/dump.$$.log
> > BKPDIR=/backup/`hostname`
> > 
> > BKPTAR="/etc /var/backups"
> > BKPSCP="h...@biblio.stare.cz:$BKPDIR"
> > BKPDMP="/ /var /var/mysql /var/postgresql /var/www /home"
> > 
> > bkpdmp() {
> > # $1 is the dump level
> >     l=$1
> >     for fs in $BKPDMP; do
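> >             # map the mount point to a file name suffix:
> >             # "/" becomes ".root", "/var/log" becomes ".var.log"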
> >             [ "$fs" = "/" ] && fsname=".root" || fsname=`echo $fs | tr / .`
> >             [ "x$l" = "x0" ] && rm -f $BKPDIR/dump$fsname.?
> >             f=$BKPDIR/dump$fsname.$l
> >             > $f && chown $BKPUSR:$BKPGRP $f && chmod 600 $f
> >             dump -$l -a -u -f - $fs > $f 2> $BKPLOG \
> >             || { err errors dumping $fs: ; cat $BKPLOG >&2 ; }
> >             rm -f $BKPLOG
> >     done
> >     { cd $BKPDIR ; ls -lh dump.* ; }
> > }
> > 
> > bkptar() {
> >     for dir in $BKPTAR; do
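> >             # name the tarball after the directory plus a timestamp,
> >             # e.g. /var/backups -> var-backups-201501050131.tar.gz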
> >             f=$BKPDIR/`echo ${dir#/} | tr / -`-`date +%Y%m%d%H%M`.tar.gz
> >             > $f && chown $BKPUSR:$BKPGRP $f && chmod 600 $f
> >             tar czf $f $dir 2> $BKPLOG \
> >             || { err errors tarring $dir: ; cat $BKPLOG >&2 ; }
> >             # TODO: md5
> >             { cd $BKPDIR ; ls -lh ${f##*/} ; }
> >             test -n "$BKPSCP" && scp -q $f $BKPSCP
> >     done
> > }
> > 
> > if test -d $BKPDIR ; then
> >     echo; echo Backing up into $BKPDIR; echo
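> >     # date +%u is 1 on Monday, so Monday gets a level 0 dump
> >     # and the following days get incremental levels 1 to 6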
> >     bkpdmp $((`date +%u` - 1)) 
> >     bkptar
> > else
> >     err Backup directory $BKPDIR does not exist
> > fi
