Hi Sebastian!

Thanks a lot for your answer!
I'll try to answer as best I can.

Please see my answers below.

Thanks again, and take care,

Samuel

On Tue, Sep 28, 2021 at 16:01, <neumei...@mail.de> wrote:

> On 27-Sep-2021 at 11:46:31 +0200, s...@w4tch.tv wrote:
> >
> > Hello everyone,
> >
> > Could anyone help me out here?
> >
> > Thanks a lot !
> >
> > Samuel
> >
> > On Thu, Sep 23, 2021 at 12:46, Samuel Zaslavsky <s...@w4tch.tv> wrote:
> > >
> > > Hello everyone,
> > > We have set up a system with a tape library and Bacula to back up /
> archive some NASs. The idea is as simple as possible: no files expire; I
> just have to make sure that in the event of a disaster on a NAS, we will be
> able to recover all the files.
> > > (So I have a pretty basic knowledge of Bacula, enough to set up this
> very simple project, but I didn't need to become an expert... my apologies
> in advance if I make mistakes or talk nonsense :))
> > >
> > > So I have "Incremental" jobs that run every night.
> > > During the summer (and the holidays...) the tape library found itself
> short of LTO tapes, and for a period of about 15 days, jobs were created and
> put on hold ("Created but not yet running").
> > >
> > > Naively, I told myself that making LTOs available would solve the
> problem.
> > > In a sense, it was, because the jobs did indeed start up again.
> > >
> > > But there is a big but!! It seems the jobs rewrote almost the same data
> several times (as many times as there were jobs not launched during that
> period)...
> > > So to my surprise, I still find myself running out of LTO a few days
> after fixing the problem.
> > >
> > > 1 / So I realize that I should have canceled all the "pending" jobs
> apart from the last one created - is that right? Or if not, how should I
> handle this case? (This is important because I am in this situation again!)
> > >
> > > 2 / How do I recover the wasted space (almost 10 LTO8 tapes!!)?
> > > Specifically, how do you go about identifying exactly which volumes /
> jobids are to be "deactivated", and how do you do that? How do I end up with
> a clean incremental job running smoothly?
> > >
> > > Thank you very much in advance for your help!
> > >
> > > Samuel
> > >
>
> Hello Samuel,
> I'm a little short on time right now, but I will do my best to help you
> refine the questions a little bit.
>
> What do you mean by "no files expire"?
> The files on the NASs, or the volume/file/job retention in Bacula?
>

I mean that my goal is to save every new file, once and permanently.
If on Monday I have file1 on my NAS, I want it to be saved on tape.
On Tuesday I add file2: I want it to be saved on tape.
On Wednesday, file1 is deleted from the NAS: it's a mistake, and I still want
to keep file1 forever on tape (and be able to restore it).
Every file that has ever existed on my NAS must be saved permanently on a
tape.

Let me show you my (simplified) configuration:

I mounted (via NFS) my first NAS on, say, /mnt/NAS1/
My FileSet is:
FileSet {
  Name = "NAS1"
  Include {
    File = /mnt/NAS1
  }
}

My Job is:
Job {
  Name = "BackupNAS1"
  JobDefs = "DefaultJob"
  Level = Incremental
  FileSet="NAS1"
  # Accurate = yes  # not clear what I should do here; setting it to yes
  #                 seemed to add many unwanted files - probably moved/renamed files?
  Pool = BACKUP1
  Storage = ScalarI3-BACKUP1 # this is my tape library
  Schedule = NAS1Daily #run every day
}

with
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  Client = lto8-fd
  FileSet = "Test File Set"
  Messages = Standard
  SpoolAttributes = yes
  Priority = 10
  Write Bootstrap = "/var/lib/bacula/%c.bsr"
}

My Pool is:
Pool {
  Name = BACKUP1
  Pool Type = Backup
  Recycle = no
  AutoPrune = no
  Volume Retention = 100 years
  Job Retention = 100 years
  Maximum Volume Bytes = 0
  Maximum Volumes = 1000
  Storage = ScalarI3-BACKUP1
  Next Pool = BACKUP1
}

It seemed to be correct. I suspected that some files were saved more than
once, for various (and unclear) reasons, but those were rare cases compared
to the amount of data saved, and my goal of archiving everything was still
being reached...
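
(For reference, here is roughly how I have been checking things from
bconsole; these are just the read-only "list" commands, and jobid 1234 is a
placeholder, not a real job from my catalog:)

  list volumes pool=BACKUP1    (tapes in the pool and their status)
  list jobs                    (recent jobs, with their level and status)
  llist jobid=1234             (full detail for one job, including bytes written)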


>
> Normally, when I start an incremental job and nothing has changed on the
> NAS, the written bytes of the incremental job are zero ("status dir" in
> bconsole). Is this the case for you as well? Are you really doing
> incremental backups?
>

After checking, I realize my minimum job size is 27 GB: it seems to be a few
files whose mtime is in the future... so they get backed up again every
day...
But we usually have dozens of GB added every day, and that does seem to match
what is copied onto the tapes each day.
So yes, I really do run incrementals. But thanks for pointing me to "status
dir"...



> With that in mind, the second idea I have is that Bacula creates new
> volumes on the tape(s) because the Volume Use Duration is set too short.
> Documentation states:
> "You might use this directive, for example, if you have a Volume used for
> Incremental
> backups, and Volumes used for Weekly Full backups. Once the Full backup is
> done, you
> will want to use a different Incremental Volume. This can be accomplished
> by setting the
> Volume Use Duration for the Incremental Volume to six days. I.e. it will
> be used for the
> 6 days following a Full save, then a different Incremental volume will be
> used. Be careful
> about setting the duration to short periods such as 23 hours, or you might
> experience
> problems of Bacula waiting for a tape over the weekend only to complete
> the backups
> Monday morning when an operator mounts a new tape."
> Chapter: "Configuring the Director" under "Volume Use duration" or Page:
> 274f
>
>
I'm not sure I fully understand here: you say "since the Volume Use Duration
is set too short". But I believe it's exactly the contrary here: my retention
is set to 100 years, isn't it? (I don't set Volume Use Duration at all.)
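
(Looking at my Pool above again: what I actually set is Volume Retention =
100 years; there is no Volume Use Duration directive in it. If I ever wanted
to limit how long a volume stays writable, my understanding is that it would
be a separate directive in the Pool resource - a sketch, untested on my
side:)

Pool {
  Name = BACKUP1
  Pool Type = Backup
  # ... same directives as above ...
  Volume Use Duration = 100 years  # how long a volume may be appended to after its first write
}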



> I'm not an expert, PLEASE take it with a grain of salt
> 1. To be honest, I don't know. But if the jobs didn't run and would do
> *exactly* the same thing, I would go ahead and delete all of them except one.
> 2. 10 LTO8 tapes divided by 15 days is quite a lot of data for a daily
> incremental backup.
>
--> This is because each of those jobs saved roughly a month's worth of data,
as I tried to explain: when I put new tapes back in, all the waiting jobs
started, each with a cumulative list of files... thus saving most of the
files 10 or 20 times!
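
(To avoid the same pile-up the next time the library runs out of tapes, I am
considering the duplicate-job directives documented for the Director's Job
resource. A sketch of what I might add to my Job, untested on my side:)

Job {
  Name = "BackupNAS1"
  JobDefs = "DefaultJob"
  # ... same directives as above ...
  Allow Duplicate Jobs = no        # don't allow a second copy of the same job
  Cancel Queued Duplicates = yes   # a duplicate that is queued but not yet running gets canceled
}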


> In bacula-dir.conf, in the Messages resource you specified for the
> Director, there is an option called "append".
> A part of my bacula-dir.conf:
> # WARNING! the following will create a file that you must cycle from
> # time to time as it will grow indefinitely. However, it will
> # also keep all your messages if they scroll off the console.
> append = "/var/log/bacula/bacula.log" = all, !skipped
> console = all, !skipped
>
> At the end "all, !skipped" are the types or classes of messages which go
> into it. They are described in more detail in the "Messages
> Resource"-Chapter:
> https://www.bacula.org/11.0.x-manuals/en/main/Messages_Resource.html
>
> If I type the "messages"-command in the bconsole the output is in my case
> in both cases the same.
>
This is regarding logs, right? It doesn't seem to apply to me here: I'm
dealing with big video files being unnecessarily saved 10, 15 or 20 times on
tape...
Or maybe I missed something here?



> Furthermore:
> For every job there is a report. Under "Volume name(s):" are listed the
> volumes that Bacula used in that job.
> Notice that in my case, with a 0-byte incremental job, this list is empty ->
> I'm not sure, but Bacula shouldn't create empty volumes.
>
> Bacula states if you run "delete" in the bconsole:
> "In general it is not a good idea to delete either a
> Pool or a Volume since they may contain data."
>
> My uneducated guesses as to where things could have gone wrong, and what I
> would check, condensed:
> - are you really running incremental backups?
>
I would say yes

> - is the Volume Use Duration set too short?

I would say no

> - maybe a problem with different pools?
>
The jobs for these NASs use only one Pool...

> - are the jobs set up properly?
>
Well, can you tell me? :))

> - multiple problems?
>
Probably....

>
> Back to your second question:
> I wouldn't delete them; I would set the volume, file, and job retention
> properly and let Bacula sort it out.
>
The problem is that I don't want Bacula to delete anything; I'm just trying
to get it to save each file only once!
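
(As for my question 2, at least the identification part seems doable from
bconsole: for each of the catch-up jobs that ran when the tapes came back, I
can list which volumes it wrote to, and then mark the redundant tapes so that
nothing more is appended to them. A sketch, where 1234 and VOL-0001 are
placeholders; actually reclaiming the space would mean purging/recycling,
which I'm not doing for now given my Recycle = no setting:)

  list jobs                               (find the jobids of the redundant catch-up jobs)
  list jobmedia jobid=1234                (which volumes that job wrote to)
  update volume=VOL-0001 volstatus=Used   (stop Bacula from appending to that tape)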

>
> I hope that helps a little bit. If not, please wait a little bit longer
> and you may get an email from a person with more knowledge than me.
>
>
> I would appreciate it if someone with more knowledge scanned through my
> email and pointed out anything that may be wrong. Thanks!


That someone won't be me, I'm afraid. I can only thank you for your time and
help!

> Kind regards
>
> Sebastian
>
