A vtape does not reserve space; it only uses the space of the dumps you
put on it.
The vtape size should be at least the maximum size of any single day's
run; whether it is 120GB or 2TB, the result will be the same.
Some vtapes will use 5GB, some will use 120GB.
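To illustrate with a hedged amanda.conf sketch (the tapetype name is made up): the length of a tapetype is only an upper bound, not a pre-allocation:

```
define tapetype VTAPE {
    length 2048 gbytes   # upper bound only; disk space is consumed as dumps arrive
}
```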
Jean-Louis
On 03/01/18 04:08 PM, Chris Miller wrote:
Hi Winston,
For the longest time I did traditional backups (fulls and
incrementals, via tar). If that model still fits your needs, you
can stay with that.
Once I began backing up a small cluster of machines, Amanda's
paradigm began to show its value. However, it relies upon volume
management, and it requires an acceptance that "her" algorithms
allow a more optimal distribution of backups based on both
availability of backup media (virtual or otherwise) and the amount
of individual and collective changes on the client machines - as
well as certain parameters I set such as the maximum interval I am
willing to accept between full backups. The benefit is that with
the amount of space I've allocated to vtapes, I get the maximum
amount of change data on backup. It isn't overprovisioning; it's
about optimization. It's also proven itself in restores, where
instead of having to restore a full directory and then every L1
and L2 delta, I can simply tell Amanda to restore
file-version-as-of-specific-date.
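For illustration, a restore-by-date session might look like the following amrecover sketch (the config name, hostname, and paths are hypothetical):

```
# amrecover session sketch; "MyConfig" and all paths are made up
$ amrecover MyConfig
amrecover> sethost client.example.com
amrecover> setdate 2018-02-15
amrecover> cd /home/user
amrecover> add report.doc
amrecover> extract
```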
I highly suggest a read of this FAQ, particularly the section about
Amanda's planning strategy:
http://wiki.zmanda.com/index.php/FAQ:How_are_backup_levels_defined_and_how_does_Amanda_use_them%3F
If you
"insist" on constraining Amanda to one-volume-per-backup, you are
basically going against the strategy; without that capability, I
don't think that Amanda's overhead gives you anything you can't do
with tar and a cron job.
I understand how Amanda wants to try to "smooth" the mix of backup
levels and filesystem sizes so that backup costs about the same amount
of time and storage each cycle, and that is a very worthwhile goal, so
I don't want to impede that. I also understand that tape discipline is
already built into Amanda at a fundamental level, so I don't want to
mess with that, either. So, I seek advice.
Suppose "amanda.example.com" is backing up "client.example.com" to "NAS0.example.com".
My level 0 backups are typically 120GB and my level 1 backups are
typically 5GB, and I have 2TB on NAS0. That's 120+6*5 = 150GB/week,
meaning I have sufficient room for thirteen weeks of backup. This
seems to me like it might be a pretty common scenario and that there
might be example configs floating around that would size the vtapes
for optimal use. Is there one?
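For what it's worth, a minimal amanda.conf sketch for this kind of scenario might look like the following (the tapetype and changer names, path, and slot count are assumptions, not a tested config):

```
# hypothetical amanda.conf fragment: nightly runs, ~13 weeks retained on a 2TB NAS
dumpcycle 7 days          # at least one level 0 per DLE every week
runspercycle 7            # amdump runs nightly
runtapes 1                # one vtape per run
tapecycle 91 tapes        # 13 weeks x 7 nightly vtapes before re-use

define tapetype VTAPE {
    length 150 gbytes     # upper bound per vtape; space is not pre-allocated
}
tapetype VTAPE

define changer vtapes {
    tpchanger "chg-disk:/mnt/NAS0/vtapes"
    property "num-slot" "91"
    property "auto-create-slot" "yes"
}
tpchanger "vtapes"
```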
I have some questions:
1. Can I make my vtapes all 150GB, and then instruct Amanda to put
one cycle (one level 0 and six level 1s) on one vtape, meaning
re-using a vtape multiple times in a backup cycle? I like this
approach quite a bit, if it is possible. It "packages" one level 0
with all the attendant level 1 differentials and eliminates my
strongest reservations about vtapes -- namely that I don't know
where anything is.
2. Failing that, should I make my vtapes 120GB, so I can fit a level
0 backup on one vtape, but then will Amanda truncate level 1
backups, so that the vtape's storage requirement is NOT 120GB, but
more like 5GB?
3. Alternatively, I could make my vtapes all 5GB, and then Amanda will
have to span twenty-four vtapes for the level 0? This might be
optimal use of storage, but it scares me with added complexity. I
won't know where anything is, meaning I will need Amanda's tools to
unpack a backup, and in the case of a disaster, that may be really
inconvenient.
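If option 3 were chosen, Amanda can split a dump into parts so it spans vtapes; a hedged sketch of the relevant amanda.conf settings (directive names from Amanda 3.x; sizes illustrative only):

```
define tapetype SMALL_VTAPE {
    length 5 gbytes
    part_size 1 gbyte     # write dumps in parts so a dump can span vtapes
}
runtapes 25               # let one run use enough vtapes to hold a 120GB level 0
```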
I sure would like to have option 1, if I could...
--
Chris.
V:916.974.0424
F:916.974.0428