I'm curious how hard it would be to add a line of data to the backup report.
During backup I can do a status on the client and see how many files it's
examined and how many it's backed up, so far. But on the final backup
report it only has the number of files backed up, and I'd like to also see
t
What is it that triggers Bacula to upgrade my backups from a Diff or Incr
to a Full because of a FileSet change?
Is it *ANY* change within the Fileset {}, even a comment, or is it based on
something else? I can't imagine it's a timestamp on the config file,
because you could have a monolithic conf
From various places online I see that these are the options and signature
comparison values for the various hash methods.
1 : SHA1
2 : SHA256
3 : SHA512
5 : MD5
6 : XXHASH64
7 : XXH3_64
8 : XXH3_128
But, then diving into the code (15.0.2) I see that there are issues with
SHA512 and that's disabl
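If you're mapping these values in a wrapper or report script, a minimal sketch of the lookup (the numbering follows the list above; the helper name is mine, not Bacula's):

```python
# Signature-option values as listed above; note the gap at 4.
SIGNATURE_CODES = {
    1: "SHA1",
    2: "SHA256",
    3: "SHA512",
    5: "MD5",
    6: "XXHASH64",
    7: "XXH3_64",
    8: "XXH3_128",
}

def signature_name(code):
    """Return the digest name for a signature code, or 'UNKNOWN'."""
    return SIGNATURE_CODES.get(code, "UNKNOWN")

print(signature_name(2))   # SHA256
print(signature_name(4))   # UNKNOWN
```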
Hi All,
For the compression methods I can choose for my filesets, (gzip, lzo, zstd)
are any of these multi-threaded, or just single thread? And if
multi-threaded, how does it choose how many threads to use?
Thanks,
-John
--
- Adaptability -- Analytical --- Ideation Input - Belief -
--
I have a system which I'm backing up and have it set up for "Always
Incremental" backups. I ran a Full job by hand and have schedules for
Differentials and Incrementals, and once a week Bacula will roll up and
Incr and Diff jobs greater than 30 days along with the existing Full into a
rolling Full
Within options I can specify a Signature of XXHASH.
Is there a way I can reference this with the Verify setting?
From the manual it appears my only Verify options are:
1 - SHA1
2 - SHA256
3 - SHA512
5 - MD5
Thanks,
-John
They didn't change the name, it's a fork.
On Mon, Aug 29, 2022 at 8:16 AM Elias Pereira wrote:
> Hello Marcin,
>
> Sorry for the question, but why did you change the name from "baculum" to
> "bacularis"? :D
>
> On Fri, Aug 26, 2022 at 6:14 PM Marcin Haba wrote:
>
>> Hello Everybody,
>>
>> We ar
Fileset {
  Name = "cadat"
  EnableVss = no
  EnableSnapshot = no
  Include {
    Options {
      OneFS = no
      RegexDir = "/mnt/cdat-.*"
    }
    Options {
      OneFS = no
      Exclude = yes
      RegexDir = ".*"
    }
    File = "/mnt"
  }
}
Justin, looking at this: within /mnt, doesn't your exclude (".*
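To reason about it, here's a rough model of first-match-wins Options scanning (my understanding of how Bacula walks the Options blocks in a Fileset; the helper is hypothetical):

```python
import re

# Options blocks in the order they appear in the Fileset above:
# (pattern, is_exclude)
RULES = [
    (r"/mnt/cdat-.*", False),  # first Options block: include
    (r".*", True),             # second Options block: exclude the rest
]

def classify(path):
    """First matching pattern wins -- a rough model, not Bacula itself."""
    for pattern, is_exclude in RULES:
        if re.match(pattern, path):
            return "excluded" if is_exclude else "included"
    return "included"  # no pattern matched: backed up by default

print(classify("/mnt/cdat-vol1"))  # included
print(classify("/mnt/other"))      # excluded
print(classify("/mnt"))            # excluded -- ".*" matches /mnt itself
```

If this model holds, the catch-all ".*" also swallows /mnt itself, which is the concern raised above.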
I actually think the removal of the animated dots would make the site more
readable. The motion on the screen is sort of nauseating while trying to
read and makes me want to not read any more.
On Mon, May 7, 2018 at 5:37 AM, Sven Hartge wrote:
> On 07.05.2018 07:07, Kern Sibbald wrote:
>
> > FY
"I'm not sure I see the utility of copying and pasting any of the other
formatted numbers here."
I do. Running calculations within a script, a tally of number of files,
job bytes, averages, etc.
On Fri, Sep 22, 2017 at 9:56 AM, Phil Stracchino
wrote:
> On 09/22/17 09:04, Christoph Litauer wrot
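For that kind of scripted tally, the comma grouping just has to be stripped before doing arithmetic; a minimal sketch (the sample figures are made up):

```python
def report_number(s):
    """Turn a comma-grouped report figure like '1,234,567' into an int."""
    return int(s.replace(",", ""))

# Hypothetical job-bytes figures pulled from two job reports:
jobs = ["1,234,567", "89,012"]
total = sum(report_number(n) for n in jobs)
print(total)  # 1323579
```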
r backup levels.
On Tue, Oct 27, 2015 at 4:51 PM, Thing wrote:
> Hi,
>
> It is a standard bacula configuration, data will also not change much. So
> from your estimate with compression a 3TB drive would seem the minimum.
>
> On 28 October 2015 at 09:14, John Lockard wrote:
>
How often are you backing up? Fulls, Differentials, Incrementals? How
long do you want to keep each? How compressible is your data? How much
does the data change? How often does the data change?
Too many variables to answer your questions as given.
Only full backups, once a month, you'd need
I would use a "Copy" job so that impact to the client is minimal.
On Wed, Sep 16, 2015 at 2:04 PM, Kepler Mihály wrote:
> Hi!
>
> What is the good saving strategy if I have tape and disk storage too?
>
> How can I create backup from "client1" to both storage (tape and disk
> storage)?
>
> Is it
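A rough sketch of the sort of Copy job I mean, from my memory of the Bacula manual; all names here are placeholders, and the directives should be checked against the docs for your version:

```conf
# Hypothetical names throughout. The idea: back up to disk first,
# then copy the finished jobs to tape without touching the client.
Job {
  Name = "CopyDiskToTape"
  Type = Copy
  Selection Type = PoolUncopiedJobs
  Pool = DiskPool            # source pool; its Next Pool points at tape
  Client = client1-fd
  FileSet = "Full Set"
  Messages = Standard
}
Pool {
  Name = DiskPool
  Pool Type = Backup
  Storage = DiskStorage
  Next Pool = TapePool       # where the Copy jobs land
}
```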
Sorry for jumping in late, but was on vacation.
I use the attached script, which I modified from a script posted by Jonas
Björklund. You'll need to modify it to add your database values and email
specifics (if you want to lock them into the perl script). If you see
something wrong, please tell m
You could "almost" do things this way. Unfortunately, you'll have to
occasionally wipe the "mirror" systems.
If I restored a full to the "mirror" machine, followed by a differential,
followed by any number of incremental backups, there's almost a 100% chance
that files which were deleted since a
Compression?
Backup level (files not backed up because they haven't been changed)?
Exclusions?
Have you gone through a full list of files to be backed up and a full list
of the files which were actually backed up?
On Thu, May 7, 2015 at 1:42 PM, Romer Ventura wrote:
> Hello,
>
>
>
> I have 2 HP
Are you starting all your jobs at the same time? Wondering if you're having
issues with all of your jobs competing for bandwidth and slowing down.
Thinking a staggered start might help you.
On Thu, Dec 4, 2014 at 9:27 AM, Rai Blue wrote:
> Hi everyone,
> I'm using Bacula 7.0 to backup a pack of s
1024=1K (easily
modified in the script). I'm sure what I've done could probably be done
more cleanly/efficiently/correctly... you're free to make those changes
as you wish.
-John
On Mon, Nov 10, 2014 at 10:35 AM, John Lockard wrote:
> There are a number of scripts available which
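The 1024-vs-1000 choice mentioned above amounts to one parameter in the formatting; a sketch of the conversion (function name is mine, not from the script):

```python
def human(nbytes, base=1024):
    """Format a byte count; switch base to 1000 for SI-style units."""
    units = ["B", "K", "M", "G", "T"]
    value = float(nbytes)
    for unit in units:
        if value < base or unit == units[-1]:
            return f"{value:.1f}{unit}"
        value /= base

print(human(10**9))        # 953.7M  (binary: divides by 1024)
print(human(10**9, 1000))  # 1.0G    (decimal: divides by 1000)
```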
There are a number of scripts available which can be run via cron to give
you stats on your backup jobs. Check your Bacula source
directory/examples/reports
In here I found report.pl from Jonas Björklund which I modified to give me
specific information I was looking for, but it's an excellent sta
Yes, but which IO?
Disk IO on the client?
Network IO from the client to the network?
Network IO from the network to the Bacula Director?
Network IO from the Bacula Director to the Bacula SD?
Disk IO on the Bacula SD?
Database IO on the Bacula Director?
Seems like you have more work to do than ju
I run into this issue with several of my servers and dealt with it by
creating "migrate" jobs. First job goes to disk. Second job runs some
reasonable time later and migrates the D2D job to tape. I had a number of
key servers I did this for with the advantage that I could offsite the
tapes and k
I know this is probably a stupid question, but I've seen stupid questions
solve things in the past...
Are both your tape drive and tape at least LTO-3? If your drive is LTO-3
and your tape is LTO-2, then your results make perfect sense.
-John
On Thu, Apr 19, 2012 at 1:07 AM, Andre Rossouw wrote
Best solution for this one... Run your backup server on UTC time rather
than local.
-John
On Mon, Mar 26, 2012 at 3:59 AM, Frank Seidinger
wrote:
> Dear Bacula Users,
>
> I think that I've found a minor bug in bacula concerning the adjustment
> of clocks on the start of daylight saving (or summ
Yes, quite possible.
Check the examples/reports directory in the source tarball.
I've taken the reports.pl script and tweaked it to do
some things specific to me. It's very straightforward
and you just kick it off with cron.
-John
On Fri, Oct 08, 2010 at 10:38:37AM +0200, hOZONE wrote:
> hell
On Tue, Aug 11, 2009 at 02:39:39PM -0400, John Lockard wrote:
> I have modified my query.sql to include some queries that
> I use frequently and I thought maybe someone else would
> find them useful additions. Also, I was wondering if anyone
> had queries which they find useful and w
While the job is running, keep an eye on the system which houses
your MySQL database and make sure that it isn't filling up a
partition with temp data. I was running into a similar problem
and needed to move my mysql_tmpdir (definable in /etc/my.cnf)
to another location.
-John
On Wed, Aug 12, 20
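For reference, the relevant my.cnf setting looks roughly like this (the path is a placeholder; check the MySQL docs for your version):

```ini
# /etc/my.cnf -- point MySQL's temp-table scratch space at a big partition.
[mysqld]
tmpdir = /var/lib/mysql-tmp   # placeholder path; must be writable by mysqld
```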
I have modified my query.sql to include some queries that
I use frequently and I thought maybe someone else would
find them useful additions. Also, I was wondering if anyone
had queries which they find useful and would like to share.
In my setup, I need to rotate tapes on a weekly basis to
keep o
I'm fairly certain that when you do a "status dir", no matter
how many days you specify, you'll only see the soonest occurring
job of a certain level.
So, if you have a Full, Differential and an Incremental, of a
certain job defined then only the next of each of those will
be shown. If you change
On Fri, Jul 24, 2009 at 06:48:24AM +0200, Marc Cousin wrote:
> > In theory, the latency from random IO should be much closer to zero on a
> > flash drive than on a thrashing hard drive, so I was hoping I might need
> > only 1 or two 64GB or 128GB flash drives to provide decent spool size,
> > perha
Check your /tmp directory or MySQL 'tmpdir' location to see if
it's filling up with temporary DB data. This same problem
happened to me; in my case it was dying at around the 180GB mark.
Moving the MySQL tmpdir to a much larger location took care of
my problem.
-John
On Wed, Jul 08, 2009 at 05:32:
I had two backups scheduled for the same time and it
appears that they both felt they wanted the same tape.
I have concurrency on, yet one job is running, writing
to tape #100058 in drive #1, yet the second job is
asking for tape #100058 in drive #0.
Any ideas?
-John
--
"Notebook. No photograph
When I'm running a backup job I can check the status of the
backup job (with statistics) through bconsole, using the command
'status client=clientname', which gives results like:
Connecting to Client clientname at clientname.si.umich.edu:9102
clientname-fd Version: 3.0.1 (30 April 2009)
If you do 'status client=[clientname]', on the bottom of the
output you'll see the status of the last several jobs which
ran for that client.
I would keep doing as you are, moving old-fd to new-fd
and deleting the old config, etc.
Your old backup files I would think should be sa
For backing up a laptop locally, bacula seems to be HUGE
overkill. How long will you be keeping these backups? How many
backups will you be keeping? My guess is that you'd be better
served by a little scripting, rsync and cron.
Each day, establish a new directory by date, then hourly run rsync
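The dated-directory scheme above can be sketched like this; it assumes rsync's --link-dest so unchanged files are hard-linked against yesterday's snapshot, and the paths are placeholders:

```python
import datetime

def snapshot_cmd(src, dest_root, today=None):
    """Build an rsync command for a dated snapshot directory.

    --link-dest hard-links files unchanged since yesterday's snapshot,
    so each daily directory looks full but costs little extra space.
    Paths are placeholders.
    """
    today = today or datetime.date.today()
    yesterday = today - datetime.timedelta(days=1)
    dest = f"{dest_root}/{today:%Y-%m-%d}"
    link = f"{dest_root}/{yesterday:%Y-%m-%d}"
    return ["rsync", "-a", "--delete", f"--link-dest={link}", src, dest]

cmd = snapshot_cmd("/home/", "/backups", datetime.date(2009, 5, 23))
print(" ".join(cmd))
# rsync -a --delete --link-dest=/backups/2009-05-22 /home/ /backups/2009-05-23
```

Kick the resulting command off hourly or daily from cron.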
On Sat, May 23, 2009 at 12:11:28PM -0500, Zhengquan Zhang wrote:
> On Fri, May 22, 2009 at 02:12:13PM -0400, John Lockard wrote:
> > When you run a job by hand the schedule isn't involved.
> > Either way, for your "Schedule" entry you need "Level="
> &
On Sat, May 23, 2009 at 12:23:32PM -0500, Zhengquan Zhang wrote:
> On Fri, May 22, 2009 at 02:21:26PM -0400, John Lockard wrote:
> > Also, when you post your configs, it would be a really good idea
> > to remove password and account information.
>
> Thanks John, Can anyone
On Fri, May 22, 2009 at 03:11:03PM -0400, John Drescher wrote:
> > Release doesn't, however, eject the tape from the drive, correct?
> >
> It does the same as umount only it does not remove the drive from
> bacula's control.
> >
> > Also, I mentioned that I will always follow my unmount command
> >
Drescher wrote:
> On Fri, May 22, 2009 at 11:58 AM, John Lockard wrote:
> > And if I mounted a tape immediately after and also made sure
> > that automout was set to yes? Whenever I unmount a tape I
> > always make sure to mount another in its stead.
> >
> >
rom Element Address 480 to 60 Failed
So, it came across a "bad" tape, puked, tried to unload to slot and
failed. It tried the unload twice (within the same minute) failed
both times, then sat there waiting for me to issue the mount command,
which
Also, when you post your configs, it would be a really good idea
to remove password and account information.
On Fri, May 22, 2009 at 02:12:13PM -0400, John Lockard wrote:
> When you run a job by hand the schedule isn't involved.
> Either way, for your "Schedule" entry you n
When you run a job by hand the schedule isn't involved.
Either way, for your "Schedule" entry you need "Level="
before the word "Full".
Schedule {
Name = "test"
Run = Level=Full at 11:50
}
But, your problem is that your Job doesn't have a Default
Level defined. You'll
y 22, 2009 at 11:03:51AM -0400, John Drescher wrote:
> On Fri, May 22, 2009 at 10:45 AM, John Lockard wrote:
> > Hi All,
> >
> > I know I don't have full information here, but don't want to send along
> > my full config as I'll guess that's overkil
Hi All,
I know I don't have full information here, but don't want to send along
my full config as I'll guess that's overkill.
22-May 06:17 tibor-sd JobId 4223: Please mount Volume "100027L2" or label a new
one for:
Job: Belobog-Data2-Users.2009-05-22_05.15.00_35
Storage: "N
On Thu, May 21, 2009 at 01:34:31PM +0300, Alnis Morics wrote:
> Yes, I can list all the files but that doesn't mean I can back them up. When
> I
> try to run the job, it terminates with an error, and there's also nothing I
> can restore.
>
> Here's the output of the last job:
>
>3743 21-Ma
On Fri, May 15, 2009 at 04:28:23PM -0400, John Lockard wrote:
> On Fri, May 15, 2009 at 01:30:57PM +0200, Bruno Friedmann wrote:
> > John Lockard wrote:
> > > Hi all,
> > >
> > > Saw this last night. What would cause these Fatal errors?
> > >
&
On Fri, May 15, 2009 at 01:30:57PM +0200, Bruno Friedmann wrote:
> John Lockard wrote:
> > Hi all,
> >
> > Saw this last night. What would cause these Fatal errors?
> >
> > Server: Linux 2.6.18 x86_64
> > Bacula Version:
> > Server: 3.0.0
> >
Appears that in acl.c, line 1145 there's a stray ";" at the
end of the line (version 3.0.1). Removal allows compilation
on Solaris.
-John
--
"What good is a ring Mr. Baggins if you don't have
any fingers." - Agent Elrond - Matrix of the Rings
---
On Thu, May 14, 2009 at 03:54:55PM -0400, John Drescher wrote:
> On Thu, May 14, 2009 at 2:26 PM, John Lockard wrote:
> > Hi all,
> >
> > Saw this last night. What would cause these Fatal errors?
> >
>
> Possible database corruption.
Other backups after th
Hi all,
Saw this last night. What would cause these Fatal errors?
Server: Linux 2.6.18 x86_64
Bacula Version:
Server: 3.0.0
Client: 2.4.4 (SPARC Solaris 8)
Filesystem: just under 1TB
Thanks for any help,
John
13-May 22:25 tibor-sd JobId 3833: Labeled new Volume "Monthly-SIN-0363" on
dev
Has anyone seen anything like this before?
21-Apr 14:10 tibor-dir JobId 3240: Start Backup JobId 3240,
Job=Belobog-Data3-Users.2009-04-21_12.10.22_07
21-Apr 14:10 tibor-dir JobId 3240: Using Device "NEO-LTO-1"
21-Apr 14:11 tibor-sd JobId 3240: Spooling data ...
21-Apr 16:10 tibor-dir JobId 3240:
Nope, disabling tso and tx changed nothing.
-John
On Fri, Apr 17, 2009 at 12:24:25PM -0400, John Lockard wrote:
> On Fri, Apr 17, 2009 at 09:39:24AM +1000, James Harper wrote:
> > Does belobog have the same network adapter and kernel as your other
> > servers?
>
> It
I've just switched over from 2.4.4 to 3.0.0 so my familiarity
with new features is close to null.
Is there a way I can (maybe just for a specific job) output
to a file *everything* which is happening with a backup job?
I'd like to run a job and get a file containing which files
were backed up, or
nd DIR are the same machine. The FD is on a different LAN
segment.
> James
-John
>
>
> > -Original Message-
> > From: John Lockard [mailto:jlock...@umich.edu]
> > Sent: Friday, 17 April 2009 01:38
> > To: bacula-users@lists.sourceforge.net
> >
Client and server at 2.4.4. Both client and server are Linux 2.6
Logs from client:
15-Apr 16:53 tibor-dir JobId 2954: Start Backup JobId 2954,
Job=Belobog-Data-Users.2009-04-15_16.32.07.04
15-Apr 16:53 tibor-dir JobId 2954: Using Volume "100108L2" from 'Scratch' pool.
15-Apr
I can't see a way in 2.4.x, but maybe it's present in the
3.0.x code... I would like to compress my Incremental backups,
but not my Differential backups or Full backups.
I keep my incremental backups on disk. They never transition
to Tape. My Differentials run weekly and I keep a week and a
hal
The time jumps at 2am, either forward or backward depending on
whether you're switching to or from DST. Most admins I know
just completely avoid the time period from 1:00am to 3:00am
entirely because of the Daylight Saving Time switches.
If you're going to go UTC, then you should go UTC all the
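A quick demonstration of the jump, using Python's zoneinfo and an example zone (America/Detroit is my assumption here; any DST zone shows the same thing):

```python
# Spring-forward 2024-03-10 in America/Detroit: local clocks jump from
# 02:00 to 03:00, so "2:30am" never happens. Two local times two hours
# apart on the wall clock are only one hour apart in UTC.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/Detroit")
before = datetime(2024, 3, 10, 1, 30, tzinfo=tz).astimezone(timezone.utc)
after = datetime(2024, 3, 10, 3, 30, tzinfo=tz).astimezone(timezone.utc)
print(after - before)  # 1:00:00 -- not the 2 hours the wall clock suggests
```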
Nevermind... I'm a moron.
On Fri, Apr 03, 2009 at 02:54:01PM -0400, John Lockard wrote:
> Hi All,
>
> Looking through the manual in the Message Resource section
> I don't see 'FileSet' as one of the options. (Version 2.4.4).
> Is this available but undocum
Hi All,
Looking through the manual in the Message Resource section
I don't see 'FileSet' as one of the options. (Version 2.4.4).
Is this available but undocumented or should I be putting in
a software change request?
Reason I ask, is that an email telling me that a job for
'Server1' finished isn
Sorry about the double posting. The listserver sent a message
making it sound like the message was rejected because of the
attachment.
-John
--
(In this one, Pinky is smart.)
Brain: Pinky, Are you pondering what I'm pondering?
Pinky: Yes I am.
nd send it to people who care.
#
# For it to work, you need to have all Bacula job report
# logging to a file, edit LOGFILE to match your setup.
# This should be run after all backup jobs have finished.
# Tested with bacula-2.4.4
# Some improvements by: John Lockard
# (University of Michigan - S
Attached, please find updates to bacula_mail_summary.sh which
was in the examples/reports directory in the source distribution.
I run this script once a week, after the log has been rotated
by my systems logrotate script.
I've tweaked the display formatting quite a bit. Rather than
displaying the
On Sat, Mar 21, 2009 at 05:10:09AM -0700, Kevin Keane wrote:
> John Lockard wrote:
> > The minimum setting I have on Max Concurrent Jobs is on the
> > Tape Library and that's set to 3. It appears that priority
> > trumps all, unless the priority is the same or better.
On Fri, Mar 20, 2009 at 04:11:55PM -0400, Jason Dixon wrote:
> On Fri, Mar 20, 2009 at 03:46:49PM -0400, Jason Dixon wrote:
> >
> > Just to be certain, I kicked off a few OS jobs just prior to the
> > transaction log backup. I also changed the Storage directive to use
> > "Maximum Concurrent Jobs
The minimum setting I have on Max Concurrent Jobs is on the
Tape Library and that's set to 3. It appears that priority
trumps all, unless the priority is the same or better.
So, if I have one job that has priority of, say, 10, then
any job running on any other tape drive or virtual library
will s
:48AM -0400, John Lockard wrote:
> Hi All,
>
> I have a mix of disk and tape backups. To disk I allow up to
> 20 jobs run concurrently. On my tape library I have 3 tape
> drives, so only allow a max of 3 jobs to run concurrently.
>
> I run Full backups once a month, Diff
Hi All,
I have a mix of disk and tape backups. To disk I allow up to
20 jobs run concurrently. On my tape library I have 3 tape
drives, so only allow a max of 3 jobs to run concurrently.
I run Full backups once a month, Differentials once a week
and incrementals most days of the week. I would
As a side note, I'm pretty sure you could shorten this definition to:
Schedule {
Name = Schedule-apache
Run = Level=Full Storage=Disk3-apache on 1,16 at 19:05
Run = Level=Differential Storage=Disk3-apache on 8,23 at 19:05
Run = Level=Incremental Storage=Disk3-apache on 2-7,9-15,17-22,2
If your problem is authentication or encryption then I would
suggest you check out msmtp (http://msmtp.sourceforge.net/).
This smtp client will use SSL/TLS for encrypted transport and
GSSAPI, Digest-MD5 and many more for authentication.
-John
On Tue, Oct 21, 2008 at 07:21:07AM +0200, Peter Herri