Hello Robert,
On 9/11/21 01:31, Robert Earl wrote:
> Why is bacula-fd looking for the SQL database on the client? There is no
> database running there, only the fd. The database is on the director
> machine.
Very likely you have started the BackupCatalog job on a client "matthew-fd"
instead of the Director's own client, which is where the catalog database
actually runs.
Why is bacula-fd looking for the SQL database on the client? There is no
database running there, only the fd. The database is on the director
machine.
aten-sd JobId 3813: Sending spooled attrs to the Director. Despooling 0
bytes ...
aten-sd JobId 3813: Elapsed time=00:00:01, Transfer rate=0 Bytes
Thanks for the suggestion.
Checked the database; bacula has all permissions granted. Ran that script
before starting Bacula.
I do have another server that is running the same configuration. It shows
the same table in the database and you cannot access it there either. The
difference being the catalog
Hi,
Have you tried to run the grant permission script on your database?
If the table does exist, it sounds to me like a permissions issue.
Good luck with recovering your catalog.
Regards
Davide
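P.S. For reference, the grants that script applies boil down to roughly the
following; a minimal sketch, assuming the default "bacula" database and user:

  mysql -u root -p <<'EOF'
  -- assumption: catalog database and user are both named "bacula"
  GRANT ALL PRIVILEGES ON bacula.* TO 'bacula'@'localhost';
  FLUSH PRIVILEGES;
  EOF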
On May 16, 2016 05:45, "Jerry Lowry" wrote:
>
> Hi all,
>
> I am just finished recovering a system
Hi all,
I have just finished recovering from a system disk boot failure on one of my
backup servers. It is running CentOS (6.6/6.7 now), MySQL 5.6.28/5.6.30
(now) and Bacula 5.2.13.
Fortunately the system disk did not die; it was just some boot problem, and
I could tell that it was not spinning as it should.
Hi Ana,
Thanks for the information on this. We don't have a requirement for 2 or 3.
For this reason I have just set our catalog to back up to a pool with a
weekly retention.
Thanks!
--
Wesley Render, Consultant
OtherData
Hello Wesley,
This will depend on your needs. There are a few scenarios in which you
would need to recover your database dump:
1) total disaster recovery: in this case, you will need the most recent
backup of your catalog.
2) reverting your catalog to a specific point in time: in this case, you
w
I was just wondering what people would recommend for the retention
period for the catalog backup job. For example should I set the
catalog backup job to go to a volume pool with a retention period of 1
week?
By default it looks like it is set to go to the default pool, which is
set to 365 days.
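For concreteness, what I have in mind is a dedicated pool along these lines
(the pool name is my own choice; the one-week Volume Retention is the point):

  Pool {
    Name = Catalog
    Pool Type = Backup
    Recycle = yes
    AutoPrune = yes
    Volume Retention = 7 days   # catalog volumes become recyclable after a week
  }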
On 04/05/2012 02:41 PM, Stephen Thompson wrote:
> On 04/02/2012 03:33 PM, Phil Stracchino wrote:
>> (Locking the table for batch attribute insertion actually isn't
>> necessary; MySQL can be configured to interleave auto_increment inserts.
>> However, that's the way Bacula does it.)
>
> Are you
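The interleaving mentioned above is controlled by innodb_autoinc_lock_mode.
A my.cnf sketch; note that mode 2 has replication caveats worth checking
before you enable it:

  [mysqld]
  # 2 = "interleaved": auto_increment inserts no longer serialize
  # on a table-level AUTO-INC lock
  innodb_autoinc_lock_mode = 2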
On 04/02/2012 03:33 PM, Phil Stracchino wrote:
> On 04/02/2012 06:06 PM, Stephen Thompson wrote:
>>
>>
>> First off, thanks for the response Phil.
>>
>>
>> On 04/02/2012 01:11 PM, Phil Stracchino wrote:
>>> On 04/02/2012 01:49 PM, Stephen Thompson wrote:
Well, we've made the leap from MyISAM t
On 04/03/2012 08:43 AM, Phil Stracchino wrote:
>
> Stephen, by the way, if you're not already aware of it: You probably
> want to set innodb_flush_log_at_trx_commit = 0.
>
> The default value of this setting is 1, which causes the log buffer to
> be written out to the log file and the logfile flus
Stephen, by the way, if you're not already aware of it: You probably
want to set innodb_flush_log_at_trx_commit = 0.
The default value of this setting is 1, which causes the log buffer to
be written out to the log file and the logfile flushed to disk at every
transaction commit. (Which obviously
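The change itself is one line in my.cnf; value 0 trades at most about a
second of committed transactions on a crash for far fewer disk flushes:

  [mysqld]
  # 0 = write and flush the InnoDB log roughly once per second,
  # rather than at every transaction commit
  innodb_flush_log_at_trx_commit = 0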
On 4/2/12 3:33 PM, Phil Stracchino wrote:
> On 04/02/2012 06:06 PM, Stephen Thompson wrote:
>>
>>
>> First off, thanks for the response Phil.
>>
>>
>> On 04/02/2012 01:11 PM, Phil Stracchino wrote:
>>> On 04/02/2012 01:49 PM, Stephen Thompson wrote:
Well, we've made the leap from MyISAM to I
On 4/3/12 3:28 AM, Martin Simmons wrote:
>> On Mon, 02 Apr 2012 15:06:31 -0700, Stephen Thompson said:
>>
That aside, I'm seeing something unexpected. I am now able to
successfully run jobs while I use mysqldump to dump the bacula Catalog,
except at the very end of the dump th
> On Mon, 02 Apr 2012 15:06:31 -0700, Stephen Thompson said:
>
> >> That aside, I'm seeing something unexpected. I am now able to
> >> successfully run jobs while I use mysqldump to dump the bacula Catalog,
> >> except at the very end of the dump there is some sort of contention. A
> >> few
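For what it's worth, the usual way to dump an InnoDB catalog without locking
out running jobs is mysqldump's --single-transaction; a sketch, assuming the
default "bacula" database and user names:

  # consistent snapshot of an InnoDB catalog without holding read locks
  mysqldump --single-transaction -u bacula -p bacula > /var/spool/bacula/bacula.sql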
On 04/02/2012 06:06 PM, Stephen Thompson wrote:
>
>
> First off, thanks for the response Phil.
>
>
> On 04/02/2012 01:11 PM, Phil Stracchino wrote:
>> On 04/02/2012 01:49 PM, Stephen Thompson wrote:
>>> Well, we've made the leap from MyISAM to InnoDB, seems like we win on
>>> transactions, but
First off, thanks for the response Phil.
On 04/02/2012 01:11 PM, Phil Stracchino wrote:
> On 04/02/2012 01:49 PM, Stephen Thompson wrote:
>> Well, we've made the leap from MyISAM to InnoDB, seems like we win on
>> transactions, but lose on read speed.
>
> If you're finding InnoDB slower than My
On 04/02/2012 01:49 PM, Stephen Thompson wrote:
> Well, we've made the leap from MyISAM to InnoDB, seems like we win on
> transactions, but lose on read speed.
If you're finding InnoDB slower than MyISAM on reads, your InnoDB buffer
pool is probably too small.
> That aside, I'm seeing something
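The setting in question, for reference; the value below is only a
placeholder and should be sized to the host:

  [mysqld]
  # a large fraction of RAM on a dedicated database server;
  # 2G here is purely an example
  innodb_buffer_pool_size = 2G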
On 02/06/2012 02:45 PM, Phil Stracchino wrote:
> On 02/06/2012 05:02 PM, Stephen Thompson wrote:
>> So, my question is whether anyone had any ideas about the feasibility of
>> getting a backup of the Catalog while a single "long-running" job is
>> active? This could be in-band (database dump) or o
On 02/06/2012 05:45 PM, Phil Stracchino wrote:
> Stephen,
> Three suggestions here.
[...]
> Route 4:
...I'm sorry. We'll come in again.
--
Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
ala...@caerllewys.net ala...@metrocast.net p...@co.ordinate.org
Renaissance
On 02/06/2012 05:02 PM, Stephen Thompson wrote:
> So, my question is whether anyone had any ideas about the feasibility of
> getting a backup of the Catalog while a single "long-running" job is
> active? This could be in-band (database dump) or out-of-band (copy of
> database directory on files
Hello,
We were wondering if anyone using Bacula had come up with a creative way
to back up their Catalog. We understand the basic dilemma -- that one
should not back up a database that is in use, because it's not a coherent
view.
Currently we've managed to keep our filesets and jobs small enough
On 9/11/2011 3:50, ganiuszka wrote:
> 2011/11/8 Kenney, William P. (Information Technology Services)
> :
>> Hello All,
>>
>>
>>
>> Have been running Bacula 5.0.3 without any major problems, but the
>> BackupCatalog job is failing.
>>
>> Bacula-director is running.
>>
>> MySQL is up and I can log
2011/11/8 Kenney, William P. (Information Technology Services)
:
> Hello All,
>
>
>
> Have been running Bacula 5.0.3 without any major problems, but the
> BackupCatalog job is failing.
>
> Bacula-director is running.
>
> MySQL is up and I can log in from the console with no problem.
>
Hi,
Did you
Hello All,
Have been running Bacula 5.0.3 without any major problems, but the
BackupCatalog job is failing.
Bacula-director is running.
MySQL is up and I can log in from the console with no problem.
The error message follows:
**
> To clarify, the 'etc' and Catalog backups still want to go into the Full
> pool even though I set the pool to Cat_Backup and the Catalog backup still
> forces me to manually Label a volume even though none of the other Jobs
> require it.
>
I would rework your config so that the catalog backup do
To clarify, the 'etc' and Catalog backups still want to go into the Full
pool even though I set the pool to Cat_Backup and the Catalog backup still
forces me to manually Label a volume even though none of the other Jobs
require it.
Mark
On 22 April 2010 09:44, John Drescher wrote:
> On Thu, Apr
On Thu, Apr 22, 2010 at 9:42 AM, John Drescher wrote:
> 2010/4/22 Mark Coolen :
>> Full, Diff and Inc backups are working fine. The problem I have is that the
>> catalog backup won't automatically label a volume in the Full pool. Now I've
>> decided to have the catalog backup and the 'etc' backup
2010/4/22 Mark Coolen :
> Full, Diff and Inc backups are working fine. The problem I have is that the
> catalog backup won't automatically label a volume in the Full pool. Now I've
> decided to have the catalog backup and the 'etc' backup placed in the
> Cat_Backup pool, but it doesn't seem to want
Full, Diff and Inc backups are working fine. The problem I have is that the
catalog backup won't automatically label a volume in the Full pool. Now I've
decided to have the catalog backup and the 'etc' backup placed in the
Cat_Backup pool, but it doesn't seem to want to work. What do I have wrongly configured?
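What usually fixes the manual-labeling part is letting the Director name
volumes itself. A sketch of the two directives involved, reusing the
Cat_Backup pool name from above (everything else is assumed):

  Pool {
    Name = Cat_Backup
    Pool Type = Backup
    Label Format = "Cat-"   # Director generates and labels new volumes itself
  }
  # and in the Storage daemon's Device resource:
  Device {
    ...
    LabelMedia = yes        # allow the SD to write labels on blank media
  }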
> You might also want to keep the bootstrap file for the job which last
> backed up the catalog.
>
In the past I have found this to be very important if you put your
catalog on a volume with more than 1 job on that volume.
When I had a database corruption problem (bad hardware) it was more
difficult
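A Write Bootstrap directive on the catalog job keeps that file current after
every run; a sketch, with an assumed path (the stock sample config does much
the same):

  Job {
    Name = "BackupCatalog"
    ...
    Write Bootstrap = "/var/lib/bacula/%n.bsr"   # %n expands to the job name
  }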
Russell Sutherland wrote:
> I have perused through the Catalog Maintenance section in the documentation:
>
> http://www.bacula.org/en/rel-manual/Catalog_Maintenance.html
>
> looking for some guidance on where to store the Catalog data from the
> Catalog Job. (This is the data which gets generated
> I have perused through the Catalog Maintenance section in
> the documentation:
>
> http://www.bacula.org/en/rel-manual/Catalog_Maintenance.html
>
> looking for some guidance on where to store the Catalog
> data from the Catalog Job. (This is the data which gets
> generated by the make_catalog_
I have perused through the Catalog Maintenance section in the documentation:
http://www.bacula.org/en/rel-manual/Catalog_Maintenance.html
looking for some guidance on where to store the Catalog data from the
Catalog Job. (This is the data which gets generated by the
make_catalog_backup script.)
John Drescher wrote:
>> I have no problem with backing up catalog on other server with less
>> expensive storage space. But I do not see any use of old catalog backups at
>> all. I would do "bscan into new catalog" in all cases except the one, where
>> I have catalog backup newer than last data
> I have no problem with backing up catalog on other server with less
> expensive storage space. But I do not see any use of old catalog backups at
> all. I would do "bscan into new catalog" in all cases except the one, where
> I have catalog backup newer than last data backed up and I need to recover
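The bscan run for that rebuild looks roughly like this; flags from memory,
so check bscan's usage output, and the volume and device names here are
placeholders:

  # re-create catalog records from a volume's contents:
  # -s writes the records into the database, -m updates media info
  bscan -v -s -m -c /etc/bacula/bacula-sd.conf -V Full-0001 FileStorage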
John Drescher wrote:
> On Mon, Oct 13, 2008 at 7:33 AM, Radek Hladik <[EMAIL PROTECTED]> wrote:
>> Hi,
>>I would like to ask how should I recycle catalog backups.
>> I am using disk based storage and catalog pool is by default set to
>> half-year or so long retention period. I would thi
On Mon, Oct 13, 2008 at 7:33 AM, Radek Hladik <[EMAIL PROTECTED]> wrote:
> Hi,
>I would like to ask how should I recycle catalog backups.
> I am using disk based storage and catalog pool is by default set to
> half-year or so long retention period. I would think, that week would be
> more
Hi,
I would like to ask how I should recycle catalog backups.
I am using disk-based storage, and the catalog pool is by default set to a
half-year or so retention period. I would think that a week would be
more than sufficient.
What is an old catalog backup good for? I mean, if I have a catalog
The catalog is the last backup of the night (lower priority and later
start time).
These are the files in my catalog FileSet:
File = /var/spool/bacula/working/bacula.sql.gz
File = /var/spool/bacula/working/mysql.sql.gz
File = /etc/bacula/bacula-dir.conf
File = /etc/b
Frank Sweetser wrote:
> A little more work to get out, but no, lack of the catalog data does not make
> the volumes useless at all. Even if you had some disaster which physically
> destroyed everything but the backup volumes, you can still get your data back.
>
> The brute force method is to use b
Simon Gray wrote:
> Hi guys,
>
> I have a TS3100 (LTO-4) up and running without any problems. I've
> configured Bacula to store its catalog in MySQL. Our backups are up to
> around 3TB and spanning multiple tapes, obviously the data is fairly
> useless without the catalog to retrieve it? I'm co
"Simon Gray" <[EMAIL PROTECTED]> kirjoitti viestissä
news:[EMAIL PROTECTED]
> I'm interested in how others store their catalogs? do you include this
> as part of your main backup?
>
> How do you restore the data if the catalog is on the same media?
>
I run the catalog backup after any other backup
Hi guys,
I have a TS3100 (LTO-4) up and running without any problems. I've
configured Bacula to store its catalog in MySQL. Our backups are up to
around 3TB and span multiple tapes; obviously the data is fairly
useless without the catalog to retrieve it? I'm considering writing the
catalog
Hello all -
I'm in the process of evaluating/testing Bacula for our backup solution.
Currently I have 2.0 DIR, SD, and an FD running on an Ubuntu system and a
2.0 FD on a Win 2003 Server. For the most part everything works and
I've even managed to do a mostly bare-metal recovery of the Win system.
Never mind. After setting the MySQL password in the DIR config file, I
had forgotten to restart DIR. It's been a long day.
Suggestion for future enhancement: Whenever a daemon is about to start a
task, have it check the modification timestamp on its config file. If
the file has been changed si
Hi there,
I am not using max volume us, however I am using Max Volume Jobs, which
shouldn't be an issue. Here's a snippet from my config.
Pool {
  Name = Incremental
  Pool Type = Backup
  Recycle = yes               # Bacula can automatically recycle Volumes
  AutoPrune = yes
On Thu, Jun 15, 2006 at 03:16:00PM -0400, William Reid wrote:
> Hi there,
> Quick question for everyone.
> Is there a way to force the backup catalog job to use the same tape as the
> last job that ran?
Bacula will use the tape currently in the drive unless it is not
a candidate. Do you have a max v
Hi there,
Quick question for everyone.
Is there a way to force the backup catalog job to use the same tape as the
last job that ran?
I am noticing that it runs after (priority=11) and it grabs its own
tape; I'm hoping it will use the last tape that was written to...
Thanks,
Wm
Hi.
from the "make_catalog_backup" script:
# This script dumps your Bacula catalog in ASCII format
# It works for MySQL, SQLite, and PostgreSQL
#
# $1 is the name of the database to be backed up and the name
# of the output file (default = bacula).
# $2 is the user name with which to access
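A typical Director config line matching those positional arguments would be
the following (stock sample values; append the password as a third argument
if the database user has one):

  RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup bacula bacula"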
Beren Gamble wrote:
I've got this error when bacula tries to backup the catalog:
RunBefore: /usr/bin/mysqldump: option '-u' requires an argument
Here's my job:
Job {
  Name = "BackupCatalog-Daily"
  Type = backup
  Level = Full
  Client = BACKUP1-fd
  FileSet = "Catalog"
  Schedule = "DailyCycl
I've got this error when bacula tries to backup the catalog:
RunBefore: /usr/bin/mysqldump: option '-u' requires an argument
Here's my job:
Job {
  Name = "BackupCatalog-Daily"
  Type = backup
  Level = Full
  Client = BACKUP1-fd
  FileSet = "Catalog"
  Schedule = "DailyCycleAfterBackup"
  Storag
>But where do I find what is passed to that script?
RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup -u
-p"
Well, I found this, but I can't see where that variable is set anywhere in
bacula-dir.conf
I have
Catalog {
  Name = MyCatalog
  dbname = bacula; user = bacula; password = "alittlepasswo
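Given make_catalog_backup's positional arguments (database name, user, then
password), the fix is presumably along these lines; "yourpassword" is a
placeholder for whatever the Catalog resource actually contains:

  # "yourpassword" is a stand-in; it must match the Catalog resource's password
  RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup bacula bacula yourpassword"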
16-Aug 22:02 cookiemonster-dir: RunBefore: /usr/bin/mysqldump: Got error:
1045: Access denied for user: '@localhost' (Using password: YES) when
trying to connect 16-Aug 22:02 cookiemonster-dir: Start Backup JobId 28,
Job=BackupCatalog.2005-08-16_22.02.39
I guess somewhere I misconfigured something
Hi all. I've finally started to get backups of my clients to work right
(except for some issues with NT Backup doing the registry stuff)...but
I'm having a problem backing up the Catalog. I'm using the default job
tweaked a bit:
Job {
Name = "BackupCatalog"
JobDefs = "DefaultJob"
Level = F