Hi Lionel,
Please use Bacula dbcheck to perform that action.
https://linux.die.net/man/8/dbcheck
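For example (a sketch; the config path and options are assumptions, check the dbcheck man page for your version):

# Interactively find and fix orphaned/obsolete catalog records:
dbcheck -f -c /etc/bacula/bacula-dir.conf
# Or run all checks non-interactively in batch mode:
dbcheck -b -f -c /etc/bacula/bacula-dir.conf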
Best Regards
Pedro
At 16:04 on 08/12/22, Lionel PLASSE wrote:
Hello,
I want to clean my Catalog because I see old FileSet resources from more than
2 years ago with no Jobs attached.
So when I run the restore wizard I have to choose between multiple FileSet
resources (in the mode ) which are not
operational.
I want to delete these entries manually in the F
Hello Robert,
On 9/11/21 01:31, Robert Earl wrote:
> Why is bacula-fd looking for the SQL database on the client? There is no
> database running there, only the fd. The database is on the director
> machine.
Very likely you have started the BackupCatalog job on a client "matthew-fd"
instead of th
Why is bacula-fd looking for the SQL database on the client? There is no
database running there, only the fd. The database is on the director
machine.
aten-sd JobId 3813: Sending spooled attrs to the Director. Despooling 0
bytes ...
aten-sd JobId 3813: Elapsed time=00:00:01, Transfer rate=0 Bytes
27-Dec 09:03 srvcar006-dir JobId 58: BeforeJob: awk:
/etc/bacula/scripts/make_catalog_backup_awk: line 53: function gensub never
defined
Thanks
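For what it's worth, gensub() is a GNU awk extension, so this error typically means the system awk is mawk. A sketch of the usual Debian/Ubuntu fix (the platform is an assumption, not a confirmed diagnosis of this report):

# Install GNU awk and make it the default awk:
sudo apt-get install gawk
sudo update-alternatives --set awk /usr/bin/gawk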
From: mau...@gmx.ch
Sent: Sunday, 27 December 2020 15:11
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users]
Authentication failed?
From: mau...@gmx.ch
Sent: Sunday, 27 December 2020 09:06
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] catalog authentication error
Bacula 9.4.2 is now running fine, but when creating the catalog the following
message now appears: "Peer authentication failed for user "bacula"
>pg_dump: [archiver (db)] connection to database "bacula" failed: FATAL:
Peer authentication failed for user "bacula
Editing
cat /etc/postgresql/11/main/pg_hba.
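A sketch of the usual fix, assuming the script connects as database user "bacula" over a local socket (the file path varies with the PostgreSQL version):

# /etc/postgresql/11/main/pg_hba.conf
# TYPE   DATABASE   USER     METHOD
local    bacula     bacula   md5     # password auth instead of OS "peer" identity

# Reload PostgreSQL so the change takes effect:
sudo systemctl reload postgresql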
#!/usr/bin/perl
# Convert Bacula catalog signatures: a 32-char hex MD5 becomes Bacula's
# unpadded base64 form, and a 22-char base64 signature becomes hex.
use MIME::Base64;

while (<>) {
    chomp;
    if ( /^[0-9a-f]{32}$/ ) {
        $_ = encode_base64(pack("H*", $_));   # hex -> raw bytes -> base64
        s/=*$//;                              # Bacula stores no '=' padding
        print;
    }
    elsif ( /^[0-9a-zA-Z\/+]{22}$/ ) {
        print unpack("H*", decode_base64($_)), "\n";
    }
    else {
        die "Doesn't look like an MD5 to convert\n";
    }
}
This message appears every time, also after booting the machine. This is a new
installation of Bacula, so I am not interrupting the normal backup process on
the other machine.
Sent from my iPhone
On 24.12.2018 at 14:51, Adam Nielsen wrote:
> to open device "Default" (/dev/nst0): ERR=Device or resource busy
Something else is running that is accessing the tape. Have you
unmounted the tape or temporarily stopped the bacula-sd daemon?
Cheers,
Adam.
Hello
I need to restore the Catalog from an old backup; I read on the Bacula site
that I need to do this with bscan
>bscan -V Juni_2011 -v -s -m -c bacula-sd.conf /dev/nst0
after 2 minutes the following error appears
>bscan: acquire.c:235-0 Read open tape device "Default" (/dev/nst0) Volume
"Juni_20
Kern - thanks. Perhaps foolishly I didn't try any "one file test" jobs;
my comfort level with installing bacula on this distro with these tape
drives has become second-nature to the point that I didn't feel the need
(so I got bit), and of course with the weekend, trying to get it running
to hav
Hello Ted,
This is probably perfectly normal if you have not yet finished any
jobs. By default the Filename entries are put in the catalog at the end
of each Job. The one entry you did find is a "blank" filename which is
used when putting Directory entries into the catalog (i.e. for a
Direc
Greets - I have a new installation of Bacula on Debian Jessie
(5.2.6+dfsg-9.3) with a postgres backend. This is not my first bacula
installation (I have 5 other separate instances running on different
tape units, some single TL2000, some library changers - all happy) --
except for this new one
Thanks for the suggestion.
Checked the database, bacula has all permissions granted. Ran that script
before starting bacula.
I do have another server that is running the same configuration. It shows
the same table in the database and you cannot access it there either. The
difference being the catalo
Hi,
Have you tried to run the grant-permissions script on your database?
If the table does exist, it sounds to me like a permissions issue.
Good luck with recovering your catalog.
Regards
Davide
On May 16, 2016 05:45, "Jerry Lowry" wrote:
>
> Hi all,
>
> I have just finished recovering from a system
Hi all,
I have just finished recovering from a system disk boot failure on one of my
backup servers. It is running CentOS (6.6/6.7 now), MySQL 5.6.28/5.6.30
(now), and Bacula 5.2.13.
Fortunately the system disk did not die, just some boot problem and I could
tell that it was not spinning like it should.
Hi Ana,
Thanks for the information on this. We don't have a requirement for 2, or 3.
For this reason I have just set our catalog to backup to a weekly
retention pool.
Thanks!
--
Wesley Render, Consultant
OtherData
-
Hello Wesley,
This will depend on your needs. There are a few possibilities that could
lead you to recover your database dump:
1) total disaster recovery: in this case, you will need the most recent
backup of your catalog.
2) reverting your catalog to a specific point in time: in this case, you
w
I was just wondering what people would recommend for the retention
period for the catalog backup job. For example should I set the
catalog backup job to go to a volume pool with a retention period of 1
week?
By default it looks like it is set to go to the default pool which is
set to 365
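A minimal bacula-dir.conf sketch of such a pool (the name and the one-week value are illustrative):

Pool {
  Name = CatalogPool           # hypothetical pool for catalog backups
  Pool Type = Backup
  Volume Retention = 7 days    # volumes become recyclable after a week
  AutoPrune = yes
  Recycle = yes
}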
Subject: Re: [Bacula-users] Catalog writing to wrong storage device
Ana,
Thanks again for your reply. I had figured that would be the case; I was just
hoping for something a little more eloquent.
Thank you!
Adam Clark
Eryjus Consulting, LLC
Emília M. Arruda [mailto:emiliaarr...@gmail.com]
Sent: Friday, August 21, 2015 11:04 AM
To: Adam Clark
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Catalog writing to wrong storage device
Hello Adam,
There is a misunderstanding here. The volumes in bacula are "tied" to
devices. This way you cannot have a volume in the directory mount point
specified for the Zentyal-File device being used by another device with a
different directory mount point (archive device).
It is possible to h
Hello Adam,
Do you have any storage resource defined in the pool, or schedule resources
defined for this job? The pool is not defined in your BackupCatalog job, so
I suppose it is specified in the schedule resource for this job?
Best regards,
Ana
On Thu, Aug 20, 2015 at 12:18 AM, Adam Clar
Hello all,
I have been using Bacula to back up several servers for about 2 months now. I
now would like to take the next step to FTP my backups off-site. However, I
back up about 10GB each night and that is a bit too much data to move every
night. My backups are to the file system.
To get a
The default postgres conf files that are installed are very poorly tuned
for Bacula. You might compare your old postgres conf file with the new one.
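For illustration, these are the postgresql.conf parameters most often raised for Bacula catalogs; the values below are assumptions to adapt to your RAM, not recommendations from this thread:

shared_buffers = 1GB              # the shipped default is far smaller
work_mem = 64MB                   # helps the big sorts during restores
effective_cache_size = 4GB        # hint about available OS cache
checkpoint_completion_target = 0.9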
Kern
On 14-10-06 09:49 AM, Heitor Faria wrote:
Since I upgraded Bacula (5 to 7) + PostgreSQL (8.3 to 9.3),
restore and all access to the c
>
> Since I upgraded Bacula (5 to 7) + PostgreSQL (8.3 to 9.3),
>
> restore and all access to the catalog are slow.
>
PostgreSQL usually gets faster with each version, according to benchmarks
published on the Internet.
> Is there something (a script?) to run when upgrading Bacula? And
> PostgreSQL?
>
Since I upgraded Bacula (5 to 7) + PostgreSQL (8.3 to 9.3), restore and all
access to the catalog are slow.
Is there something (a script?) to run when upgrading Bacula? And PostgreSQL?
Thx
Antoine
On 01/15/2014 02:54 PM, Wolfgang Denk wrote:
> Dear Dimitri,
>>> Basically what I did is dumping the DB
>>> under MySQL
>>
>>> and then importing the dump into PostgreSQL.
>>
>> That's why the sequences didn't get reinitialized properly.
>
> Would there have been a better way to do that?
Nope.
Dear Thomas,
In message <52d6a29a.6010...@mtl.mit.edu> you wrote:
>
> > I ran this under "bconsole", i. e. as user bacula - is this not the
> > right thing to do?
...
> As someone I think already pointed out, it sounds like the owner of your
> bacula database sequences is another user - more than
Dear Dimitri,
In message <52d5c764.4050...@bmrb.wisc.edu> you wrote:
>
> > I didn't use any precanned procedure (is there one? I mean a
> > recommended/working one?). Basically what I did is dumping the DB
> > under MySQL
>
> > and then importing the dump into PostgreSQL.
>
> That's why the
> I tried that, but it fails:
>
> Enter SQL query: alter sequence fileset_filesetid_seq restart with 76;
> Query failed: ERROR: must be owner of relation fileset_filesetid_seq
>
> I ran this under "bconsole", i. e. as user bacula - is this not the
> right thing to do?
Wolfgang,
As some
On 01/14/2014 04:57 PM, Wolfgang Denk wrote:
> I didn't use any precanned procedure (is there one? I mean a
> recommended/working one?). Basically what I did is dumping the DB
> under MySQL
> and then importing the dump into PostgreSQL.
That's why the sequences didn't get reinitialized properl
Dear Thomas,
In message <52d59d74.6000...@mtl.mit.edu> you wrote:
>
> > Do you have any idea why this would happen? Is this something I can
> > influence?
> > Are there any other variables that might hit by similar issues?
>
> I can't say exactly why it happened to you but my guess would be tha
On 01/14/2014 02:26 PM, Thomas Lohman wrote:
> I can't say exactly why it happened to you but my guess would be that
> this problem could hit anyone porting from mysql to postgres.
At a guess migration scripts don't translate mysql's "autoincrement" (or
"identity" or whatever they call it) to po
Wolfgang,
> Dear Thomas,
>
> In message <52d555c5.9070...@mtl.mit.edu> you wrote:
>> My guess is that during the migration from MySQL to Postgres, the
>> sequences in Bacula did not get seeded right and probably are starting
>> with a seed value of 1.
>
> Do you have any idea why this would happen
Dear Thomas,
In message <52d555c5.9070...@mtl.mit.edu> you wrote:
> My guess is that during the migration from MySQL to Postgres, the
> sequences in Bacula did not get seeded right and probably are starting
> with a seed value of 1.
Do you have any idea why this would happen? Is this something
My guess is that during the migration from MySQL to Postgres, the
sequences in Bacula did not get seeded right and probably are starting
with a seed value of 1.
the filesetid field in the fileset table is automatically populated by
the fileset_filesetid_seq sequence.
Run the following two quer
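The message is cut off here, but the usual pattern for reseeding a PostgreSQL sequence from a table's current maximum looks like this (an illustration, run as a user that owns the sequence, e.g. postgres; not necessarily the exact queries meant above):

psql -U postgres bacula -c \
  "SELECT setval('fileset_filesetid_seq', (SELECT MAX(filesetid) FROM fileset));"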
Hello,
I've tried to switch a Bacula configuration that has been running for
years from MySQL to PostgreSQL. Everything worked apparently
fine (I did the same before with two other installations, where the
very same steps worked, too), but when trying to run jobs in the new
PostgreSQL enviro
Looks like it has already been raised as a bug
http://bugs.bacula.org/view.php?id=1979
I will retry with Enable VSS = no
Thanks
From: Steve Lee
Sent: 29 April 2013 08:05
To: bacula-users@lists.sourceforge.net
Subject: Catalog mismatch between jobfiles and count(f
> On Mon, 29 Apr 2013 07:05:19 +, Steve Lee said:
>
> Hi
>
> Running Bacula 5.2.6 on Linux version 3.5.0-22
> Client running on bacula-win64-5.2.10 on Windows Server 2008 (SP1)
>
> We recently added verify jobs to our schedule and have noticed that one of
> the jobs always fails. it sho
Hi
Running Bacula 5.2.6 on Linux version 3.5.0-22
Client running on bacula-win64-5.2.10 on Windows Server 2008 (SP1)
We recently added verify jobs to our schedule and have noticed that one of the
jobs always fails. It shows a mismatch between expected and examined files.
Files Expected:
On Tue, Jan 22, 2013 at 8:31 PM, Cleuson Alves wrote:
>
> Hello, I have noticed that since I recycle my volumes but not my catalog,
> after a period the catalog refers to content that is no longer in the system,
> as in the message "For one or more of the JobIds selected, the files were found
> , so
Hello, I have noticed that since I recycle my volumes but not my catalog,
after a period the catalog refers to content that is no longer in the system,
as in the message "For one or more of the JobIds selected, the files were found
, so file selection is not possible. ", so how can I keep only the cat
Hi all,
First, let me explain what my setup is and what problem forced me to
rebuild my catalog.
I have a backup server running Bacula under Ubuntu 12.04, so the Bacula
version is 5.2.5. Backups are made to files and I back up around 15 Linux
servers each day.
On the backup server, MySQL has its
On 04/05/2012 02:41 PM, Stephen Thompson wrote:
> On 04/02/2012 03:33 PM, Phil Stracchino wrote:
>> (Locking the table for batch attribute insertion actually isn't
>> necessary; MySQL can be configured to interleave auto_increment inserts.
>> However, that's the way Bacula does it.)
>
> Are you
On 04/02/2012 03:33 PM, Phil Stracchino wrote:
> On 04/02/2012 06:06 PM, Stephen Thompson wrote:
>>
>>
>> First off, thanks for the response Phil.
>>
>>
>> On 04/02/2012 01:11 PM, Phil Stracchino wrote:
>>> On 04/02/2012 01:49 PM, Stephen Thompson wrote:
Well, we've made the leap from MyISAM t
On 04/03/2012 08:43 AM, Phil Stracchino wrote:
>
> Stephen, by the way, if you're not already aware of it: You probably
> want to set innodb_flush_log_at_trx_commit = 0.
>
> The default value of this setting is 1, which causes the log buffer to
> be written out to the log file and the logfile flus
Stephen, by the way, if you're not already aware of it: You probably
want to set innodb_flush_log_at_trx_commit = 0.
The default value of this setting is 1, which causes the log buffer to
be written out to the log file and the logfile flushed to disk at every
transaction commit. (Which obviousl
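As a sketch, the setting goes in the [mysqld] section of my.cnf; note that 0 can lose up to about a second of commits on a crash, and 2 is a common middle ground:

[mysqld]
innodb_flush_log_at_trx_commit = 0   # flush the log to disk roughly once per second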
On 4/2/12 3:33 PM, Phil Stracchino wrote:
> On 04/02/2012 06:06 PM, Stephen Thompson wrote:
>>
>>
>> First off, thanks for the response Phil.
>>
>>
>> On 04/02/2012 01:11 PM, Phil Stracchino wrote:
>>> On 04/02/2012 01:49 PM, Stephen Thompson wrote:
Well, we've made the leap from MyISAM to I
On 4/3/12 3:28 AM, Martin Simmons wrote:
>> On Mon, 02 Apr 2012 15:06:31 -0700, Stephen Thompson said:
>>
That aside, I'm seeing something unexpected. I am now able to
successfully run jobs while I use mysqldump to dump the bacula Catalog,
except at the very end of the dump th
> On Mon, 02 Apr 2012 15:06:31 -0700, Stephen Thompson said:
>
> >> That aside, I'm seeing something unexpected. I am now able to
> >> successfully run jobs while I use mysqldump to dump the bacula Catalog,
> >> except at the very end of the dump there is some sort of contention. A
> >> few
On 04/02/2012 06:06 PM, Stephen Thompson wrote:
>
>
> First off, thanks for the response Phil.
>
>
> On 04/02/2012 01:11 PM, Phil Stracchino wrote:
>> On 04/02/2012 01:49 PM, Stephen Thompson wrote:
>>> Well, we've made the leap from MyISAM to InnoDB, seems like we win on
>>> transactions, but
First off, thanks for the response Phil.
On 04/02/2012 01:11 PM, Phil Stracchino wrote:
> On 04/02/2012 01:49 PM, Stephen Thompson wrote:
>> Well, we've made the leap from MyISAM to InnoDB, seems like we win on
>> transactions, but lose on read speed.
>
> If you're finding InnoDB slower than My
On 04/02/2012 01:49 PM, Stephen Thompson wrote:
> Well, we've made the leap from MyISAM to InnoDB, seems like we win on
> transactions, but lose on read speed.
If you're finding InnoDB slower than MyISAM on reads, your InnoDB buffer
pool is probably too small.
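For illustration (the size is an assumption to adapt to your RAM; a common rule of thumb is a large share of memory on a dedicated database host):

[mysqld]
innodb_buffer_pool_size = 4G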
> That aside, I'm seeing something
On 02/06/2012 02:45 PM, Phil Stracchino wrote:
> On 02/06/2012 05:02 PM, Stephen Thompson wrote:
>> So, my question is whether anyone had any ideas about the feasibility of
>> getting a backup of the Catalog while a single "long-running" job is
>> active? This could be in-band (database dump) or o
On 02/06/2012 05:45 PM, Phil Stracchino wrote:
> Stephen,
> Three suggestions here.
[...]
> Route 4:
...I'm sorry. We'll come in again.
--
Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
ala...@caerllewys.net ala...@metrocast.net p...@co.ordinate.org
Renaissance
On 02/06/2012 05:02 PM, Stephen Thompson wrote:
> So, my question is whether anyone had any ideas about the feasibility of
> getting a backup of the Catalog while a single "long-running" job is
> active? This could be in-band (database dump) or out-of-band (copy of
> database directory on files
Hello,
We were wondering if anyone using Bacula had come up with a creative way
to back up their Catalog. We understand the basic dilemma -- that one
should not back up a database that is in use, because it's not a coherent
view.
Currently we've managed to keep our filesets and jobs small eno
On 12/23/11 5:27 PM, Thomas Lohman wrote:
> The update postgres script for 5.2.x is missing these two lines which
> you can run manually from within psql (connect to the bacula db as your
> Postgres admin db user):
>
> grant all on RestoreObject to ${bacula_db_user};
> grant select, update on re
The update postgres script for 5.2.x is missing these two lines which
you can run manually from within psql (connect to the bacula db as your
Postgres admin db user):
grant all on RestoreObject to ${bacula_db_user};
grant select, update on restoreobject_restoreobjectid_seq to ${bacula_db_user};
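For example, applied with psql as the Postgres admin user, substituting your actual Bacula database user for "bacula":

psql -U postgres bacula -c "grant all on RestoreObject to bacula;"
psql -U postgres bacula -c "grant select, update on restoreobject_restoreobjectid_seq to bacula;"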
On Dec 23, 2011, at 6:26 PM, David Newman wrote:
> On 12/23/11 2:38 PM, Dan Langille wrote:
>>
>> On Dec 23, 2011, at 5:35 PM, David Newman wrote:
>>
>>> On 12/23/11 2:21 PM, Dan Langille wrote:
On Dec 20, 2011, at 1:19 PM, David Newman wrote:
> bacula 5.2.2, FreeBSD 8.2-RELEASE
>
On 12/23/11 2:38 PM, Dan Langille wrote:
>
> On Dec 23, 2011, at 5:35 PM, David Newman wrote:
>
>> On 12/23/11 2:21 PM, Dan Langille wrote:
>>> On Dec 20, 2011, at 1:19 PM, David Newman wrote:
>>>
bacula 5.2.2, FreeBSD 8.2-RELEASE
After upgrading bacula-server from 5.0.3 to 5.2.2 u
On Dec 23, 2011, at 5:35 PM, David Newman wrote:
> On 12/23/11 2:21 PM, Dan Langille wrote:
>> On Dec 20, 2011, at 1:19 PM, David Newman wrote:
>>
>>> bacula 5.2.2, FreeBSD 8.2-RELEASE
>>>
>>> After upgrading bacula-server from 5.0.3 to 5.2.2 using FreeBSD ports
>>> and updating the (PostgreSQL
On 12/23/11 2:21 PM, Dan Langille wrote:
> On Dec 20, 2011, at 1:19 PM, David Newman wrote:
>
>> bacula 5.2.2, FreeBSD 8.2-RELEASE
>>
>> After upgrading bacula-server from 5.0.3 to 5.2.2 using FreeBSD ports
>> and updating the (PostgreSQL) bacula database, all jobs run fine except
>> for the final
On Dec 20, 2011, at 1:19 PM, David Newman wrote:
> bacula 5.2.2, FreeBSD 8.2-RELEASE
>
> After upgrading bacula-server from 5.0.3 to 5.2.2 using FreeBSD ports
> and updating the (PostgreSQL) bacula database, all jobs run fine except
> for the final one on the bacula server, the one that dumps the
bacula 5.2.2, FreeBSD 8.2-RELEASE
After upgrading bacula-server from 5.0.3 to 5.2.2 using FreeBSD ports
and updating the (PostgreSQL) bacula database, all jobs run fine except
for the final one on the bacula server, the one that dumps the catalog
before making a backup.
The error looks like this:
On 9/11/2011 3:50, ganiuszka wrote:
> 2011/11/8 Kenney, William P. (Information Technology Services):
>> Hello All,
>>
>>
>>
>> Have been running Bacula 5.0.3 without any major problems, but the
>> BackupCatalog job is failing.
>>
>> Bacula-director is running.
>>
>> MySql is up and I can log
2011/11/8 Kenney, William P. (Information Technology Services):
> Hello All,
>
>
>
> Have been running Bacula 5.0.3 without any major problems, but the
> BackupCatalog job is failing.
>
> Bacula-director is running.
>
> MySql is up and I can log in from the console with no problem.
>
Hi,
Did you
Hello All,
Have been running Bacula 5.0.3 without any major problems, but the
BackupCatalog job is failing.
Bacula-director is running.
MySql is up and I can log in from the console with no problem.
The error message follows:
**
Sorry to be asking so many questions, again.
I have the following catalog record for a file:
path: /backup/archive/datasnap0/etc/
file: wgetrc
md5 : 7rJlwjvesDfThOLanMuCog
After restoring this file I have run md5, and I get:
eeb265c23bdeb037d384e2da9ccb82a2 wgetrc
Aft
I'd like to write catalog information to CD/DVD to accompany my tape
sets, which will be between 1 and 15 LTO4 tapes. I'm using Postgres
for the Catalog. The idea is to make it easy for people to restore from
these tapes 10 years from now.
The idea is that someone can read the CD and learn about B
> To clarify, the 'etc' and Catalog backups still want to go into the Full
> pool even though I set the pool to Cat_Backup and the Catalog backup still
> forces me to manually Label a volume even though none of the other Jobs
> require it.
>
I would rework your config so that the catalog backup do
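One common shape for such a rework is to give the catalog pool automatic labeling, e.g. (a sketch, not necessarily what was being suggested here):

Pool {
  Name = Cat_Backup
  Pool Type = Backup
  Label Format = "CatVol-"   # the Director auto-creates CatVol-nnnn volumes
}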
To clarify, the 'etc' and Catalog backups still want to go into the Full
pool even though I set the pool to Cat_Backup and the Catalog backup still
forces me to manually Label a volume even though none of the other Jobs
require it.
Mark
On 22 April 2010 09:44, John Drescher wrote:
> On Thu, Apr
On Thu, Apr 22, 2010 at 9:42 AM, John Drescher wrote:
> 2010/4/22 Mark Coolen :
>> Full, Diff and Inc backups are working fine. The problem I have is that the
>> catalog backup won't automatically label a volume in the Full pool. Now I've
>> decided to have the catalog backup and the 'etc' backup
2010/4/22 Mark Coolen :
> Full, Diff and Inc backups are working fine. The problem I have is that the
> catalog backup won't automatically label a volume in the Full pool. Now I've
> decided to have the catalog backup and the 'etc' backup placed in the
> Cat_Backup pool, but it doesn't seem to want
Full, Diff and Inc backups are working fine. The problem I have is that the
catalog backup won't automatically label a volume in the Full pool. Now I've
decided to have the catalog backup and the 'etc' backup placed in the
Cat_Backup pool, but it doesn't seem to want to work. What do I have wrongly
http://articles.sitepoint.com/article/site-mysql-postgresql-1/2 and
http://www.xach.com/aolserver/mysql-to-postgresql.html
Cheers
Arne
On Wed, Feb 17, 2010 at 7:30 PM, Joseph L. Casale
wrote:
> I am trying to take a mysql version 11 db and import it into a postgre db for
> later
> update to ve
I am trying to take a MySQL version 11 DB and import it into a Postgres DB
for later update to version 12. I exported as per the manual,
`mysqldump -u root -p -f -t -n bacula >bacula_backup.dmp`
After manually creating the Postgres DB and verifying it, I attempted
`psql -Ubacula bacula < bacula_backup.dmp`
Hi
I would like to know if I can take part of my catalog offline.
At the moment I have 1.5 GB of data in MySQL, since I set the pruning period
to 4 years.
I was wondering if there is a procedure to take part of the catalog offline,
for example 3 months at a time, only for those jobs that have the 4 years
look into migrating it.
--Jeremy
-Original Message-
From: Martin Simmons [mailto:mar...@lispworks.com]
Sent: Friday, August 07, 2009 14:30
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Catalog too big / not pruning?
>>>>> On Fri, 7 Aug 2009 07:55:08
Wasn't sitting here the whole time, but it was 2-3 hours each run.
--Jeremy
-Original Message-
From: Alan Brown [mailto:a...@mssl.ucl.ac.uk]
Sent: Tuesday, August 11, 2009 12:13
To: Jeremy Koppel
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Catalog too big
On Thu, 6 Aug 2009, Jeremy Koppel wrote:
> I thought that meant it wasn't going to actually do anything, but it did
> reduce the DB size to 6.5GB. I had actually stopped bacula before running it
> this time, so perhaps that had an effect. After that, I went ahead and ran
> dbcheck (thanks, Jo
> On Fri, 7 Aug 2009 07:55:08 -0700, Jeremy Koppel said:
>
> I ended up running dbcheck 3 more times. The first time got another
> 10,000,000, the second another 8,000,000+, and the 3rd was trivial. Running
> it a fourth time came up all 0s. Running another full vacuum got the DB
> size dow
shut down Bacula during the standard vacuum? Is this
needed?
--Jeremy
-Original Message-
From: Martin Simmons [mailto:mar...@lispworks.com]
Sent: Thursday, August 06, 2009 13:11
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Catalog too big / not pruning?
> On Thu, 6 Aug 2009 05:59:24 -0700, Jeremy Koppel said:
>
> We're running Postgresql 8.0.8; we can't currently update this machine
> (we'll have to move Bacula to a newer box when we have one available). Ran
> that query, and the top 4 do have very large numbers:
>
>
> relname
The job table is probably not causing the bloat, unless you have millions of
rows. The space is usually consumed by the file table and its indexes.
Try running vacuumdb with the --analyze and --verbose options, which prints
info about the number of pages used by each table/indexes and also the nu
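For example (the database name and user are assumptions):

vacuumdb --analyze --verbose --username=postgres bacula
# and the aggressive space-reclaiming variant discussed in this thread:
vacuumdb --full --analyze --verbose --username=postgres bacula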
2009/8/4 Jeremy Koppel :
> Lately, I’ve been going through our file server looking for
> disk space to reclaim, and I’ve come across 14GB worth of data in the
> Postgres DB, used only by Bacula. Reading through the Bacula manual, I see
> that each file record is supposed to take up
Lately, I've been going through our file server looking for disk
space to reclaim, and I've come across 14GB worth of data in the Postgres DB,
used only by Bacula. Reading through the Bacula manual, I see that each file
record is supposed to take up 154 bytes in the DB, so I have
On Thu, Feb 26, 2009 at 8:55 AM, Berend Dekens wrote:
> Hi all,
>
> Because my Bacula setup keeps evolving over time, so do the specifics
> of the database. Currently I have 2 pools with volumes and jobs that I
> want to get rid of.
>
> I tried purging and then deleting them (it would be nice if
Hi all,
Because my Bacula setup keeps evolving over time, so do the specifics
of the database. Currently I have 2 pools with volumes and jobs that I
want to get rid of.
I tried purging and then deleting them (it would be nice if BAT allowed
the selection of multiple volumes and it would stop ju
> You might also want to keep the bootstrap file for the job which last
> backed up the catalog.
>
In the past I have found this to be very important if you put your
catalog on a volume with more than 1 job on that volume.
When I had a database corruption problem (bad hardware) it was more
difficu
Russell Sutherland wrote:
> I have perused through the Catalog Maintenance section in the documentation:
>
> http://www.bacula.org/en/rel-manual/Catalog_Maintenance.html
>
> looking for some guidance on where to store the Catalog data from the
> Catalog Job. (This is the data which gets generated
> I have perused through the Catalog Maintenance section in
> the documentation:
>
> http://www.bacula.org/en/rel-manual/Catalog_Maintenance.html
>
> looking for some guidance on where to store the Catalog
> data from the Catalog Job. (This is the data which gets
> generated by the make_catalog_
I have perused through the Catalog Maintenance section in the documentation:
http://www.bacula.org/en/rel-manual/Catalog_Maintenance.html
looking for some guidance on where to store the Catalog data from the
Catalog Job. (This is the data which gets generated by the
make_catalog_backup script.)