On 26/02/2011 11:50 AM, Dan Langille wrote:
> On 2/25/2011 10:54 AM, C M Reinehr wrote:
>> On Thu 24 February 2011 07:27:48 pm Dan Langille wrote:
>>> On 2/24/2011 10:08 AM, C M Reinehr wrote:
>>>> On Wed 23 February 2011 10:30:11 pm Dan Langille wrote:
>>>>> A recent bug report (http://bugs.bacula.org/view.php?id=1688)
>>>>> indicated that we are no longer supporting SQLite.
>>>>>
>>>>> I feel it's safe to stop regression testing against it.
>>>> I'm sorry to hear that. I've been using SQLite for the entire time that
>>>> I've been using Bacula -- ten years or so -- with absolutely no
>>>> difficulty. Admittedly, my needs are limited -- three servers & eight PCs
>>>> -- but it's been simple to administer & troubleshoot and reliable.
>>> What version of Bacula are you using now? What version of SQLite?
>> I stick with Debian Stable and still am running Lenny. The Bacula version is
>> 2.4.4 and the SQLite version is 3.5.9. In the near future I'll be upgrading
>> to Squeeze and it looks as if that will be Bacula v5.0.2 & SQLite v3.7.3.
>> (So, in any case, I guess I'm good for another two years or so, until the
>> next Debian Stable is released! ;-)
> I've been running daily regression tests for MySQL, PostgreSQL, and
> SQLite. The latter crashes far more than the other two. Look at
> http://regress.bacula.org/ and look for langille. You'll see.
I recently switched from SQLite3 to PostgreSQL on a Debian system.
I was already using the Bacula 5.0.2 packages, and moved from SQLite3 3.7.5 to
PostgreSQL 9.0.3.
Attached is a Perl script I wrote to do the migration.
The scripts in the Bacula source examples area did not work for me.
I have only tested it on my system, so it may not work for you.
In particular, my system hasn't been through lots of previous versions of
Bacula, so I can't say how the script copes with old unused tables that
earlier releases may have left in the database.
Use at your own risk.
The steps to do the conversion are:
- Make sure you have plenty of disk space somewhere.
You'll need enough for your SQLite3 dump; plus approximately that size again
for the data files produced by the conversion; plus enough space for the
PostgreSQL database.
My SQLite3 dump was 7GB, which produced an 11GB PostgreSQL database.
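For example, to check free space up front (the paths are only examples --
substitute wherever you plan to put the dump and wherever PostgreSQL keeps
its data):
eg. df -h /var/lib/bacula /var/lib/postgresql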
- Keep a safe copy of your configuration files in the /etc/bacula/ directory.
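For example (the destination is arbitrary -- anywhere outside /etc/bacula/
will do):
eg. cp -a /etc/bacula /root/bacula-etc-backup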
- As the bacula user, dump the SQLite3 database by running:
/etc/bacula/scripts/make_catalog_backup.pl MyCatalog
You can become the bacula user by running: su -s /bin/bash - bacula
- Move the /var/lib/bacula/bacula.sql dump file produced by the step above to
where you have plenty of disk space.
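For example, assuming a hypothetical /srv/migration directory on a filesystem
with enough free space:
eg. mv /var/lib/bacula/bacula.sql /srv/migration/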
- Remove (or purge) your bacula-common-sqlite3, bacula-director-sqlite3 and
bacula-sd-sqlite3 packages via apt-get, or aptitude, or
whatever tool you use to manage your Debian packages.
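For example, with apt-get (note that --purge removes the packages'
configuration files too, which is why you kept a copy of /etc/bacula/ in the
earlier step):
eg. apt-get remove --purge bacula-common-sqlite3 bacula-director-sqlite3 bacula-sd-sqlite3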
- Install the postgresql, bacula-common-pgsql, bacula-director-pgsql,
bacula-sd-pgsql packages.
This creates the bacula database for you with all the correct tables etc.
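For example, using apt-get again:
eg. apt-get install postgresql bacula-common-pgsql bacula-director-pgsql bacula-sd-pgsql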
- Shut down the newly installed bacula director and storage daemons.
eg. service bacula-director stop ; service bacula-sd stop
- In the directory where you moved the bacula.sql file to, as the bacula user,
run my attached script, passing the bacula.sql file as a parameter.
eg. ./bacula_sqlite3_to_pgsql.pl bacula.sql
- The conversion process will begin and it can take a long time.
- If successful, you shouldn't see any errors at all.
If not, you'll have to address the errors by fixing the script and running it
again.
Each time it runs, it regenerates the data files, truncates the appropriate
PostgreSQL tables, and loads the data into them again.
After the data is loaded into the tables, the serial type columns have their
sequence numbers updated so that newly inserted rows will receive serial
numbers that don't overlap the existing data.
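To illustrate with made-up numbers: if the highest JobId seen in the dump was
12345, the script appends a statement like the following to the job table's
SQL file, so the next insert gets JobId 12346:
ALTER SEQUENCE job_jobid_seq RESTART WITH 12346;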
- You'll need to merge your /etc/bacula/ configuration files back into place.
In my case, the only change I needed to make to my original configuration
files was to the Catalog {} section, setting the correct database parameters.
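For reference, a minimal Catalog {} resource for the pgsql backend looks
something like this (just a sketch -- the name and credentials are examples,
substitute your own):
Catalog {
  Name = MyCatalog
  dbname = bacula; dbuser = bacula; dbpassword = ""
}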
- Start up the bacula director and storage daemon.
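eg. service bacula-director start ; service bacula-sd start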
- Test.
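For example, a quick sanity check from bconsole to confirm the director is
answering and the migrated catalog is readable (assuming your bconsole
configuration is already in place):
eg. echo "list jobs" | bconsole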
I hope this is helpful to someone out there.
Obviously you should test this on a virtual machine or something before you do
it for real.
If it all goes bad, you could back out by reinstalling your bacula-*-sqlite3
packages and importing the bacula.sql dump into them, although I didn't try
this as my conversion worked okay.
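For the record, re-importing the dump should be something along the lines of
the following (untested, as noted, and the database path may differ on your
system):
eg. sqlite3 /var/lib/bacula/bacula.db < bacula.sql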
Regards,
----------
Jim Barber
DDI Health
#!/usr/bin/perl
=head1 DESCRIPTION
This program filters a Bacula SQLite3 database dump as produced by the
/etc/bacula/scripts/make_catalog_backup.pl script and turns
it into a format suitable for importing into a newly created PostgreSQL Bacula
database instance.
The data is then loaded into the PostgreSQL database.
The SEQUENCE settings are updated to reflect how far the SERIAL type columns
have advanced.
Any existing Bacula data in the PostgreSQL database will be removed, so this is
only suitable for use on a clean install of Bacula.
The dump filename is to be passed as a command line parameter, or its contents
piped to this program.
All DDL is thrown away, so the PostgreSQL Bacula database must be created
before the results from this script are imported into it.
Also some Bacula tables are ignored since their contents should have been
created correctly when the Bacula DB was created.
At this stage the "Log" table is also skipped because it can contain multi-line
entries and the code doesn't cater for that.
Old log messages are probably not that important anyway?
Files generated by this script are created in the current directory, so make
sure it has enough room.
You can clean up all generated table.* files once this script is complete.
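For example, from the directory the script was run in:

    rm -f table.*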
This script was tested on a Debian system with Bacula version 5.0.2; PostgreSQL
9.0.3; and a dump from SQLite3 3.7.5.
=cut
use warnings qw(all);
use strict;
# Flush output immediately.
#
$| = 1;
# Subroutine to open a file for a table if it isn't open already.
# It takes a file handle and table name as parameters.
# It'll add some statements to the top of the file to begin with.
#
sub open_table_file
{
    my ($fh, $table) = @_;

    # Return the existing file handle if it is already open and ready for use.
    #
    return $fh if defined $fh and fileno $fh;

    # Open the file.
    #
    die "No table name supplied" if not defined $table or $table eq "";
    my $filename = "table.$table";
    die "Couldn't open $filename for writing: $!" if not open $fh, "> $filename";

    # Write out statements to the start of the file.
    # - The 1st statement tells PostgreSQL to take backslashes in strings literally.
    # - The 2nd statement starts a transaction.
    # - The 3rd statement will clear out any entries in the table.
    #
    print $fh "set standard_conforming_strings=on;\n";
    print $fh "BEGIN;\n";
    print $fh "TRUNCATE $table;\n";

    return $fh;
}
#-----------------------------------------------------------------------
# Mainline
die "Must be run by the bacula user." if "bacula" ne (getpwuid($>))[0];
# Hashes to keep track of file handles and maximum sequence numbers for tables.
#
my %filehandle;
my %sequence;
print "Generating SQL files for each table...";
while (<>)
{
    chomp;

    # We are only interested in INSERT INTO statements.
    #
    next if ! /^\s*INSERT\s+INTO\s+/i;

    # Get the table name from the statement and convert it to lowercase.
    # Skip some tables.
    #
    (my $table = $_) =~ s/^\s*INSERT\s+INTO\s+"?([^"\s]+)"?\s+.*/\L$1/i;
    next if $table =~ /^(log|nextid|status|version)$/;

    # Open the table SQL file if necessary.
    #
    $filehandle{$table} = &open_table_file($filehandle{$table}, $table);
    my $fh = $filehandle{$table};

    # Fix up the statement so that the table name is not surrounded by quotes.
    #
    s/^\s*(INSERT\s+INTO)\s+"$table"\s+(.*)$/$1 $table $2/i;

    # Fix up badly formatted data.
    #
    if ($table eq "job")
    {
        # TIMESTAMP type columns cannot accept 0, so change to NULL instead.
        #
        s/^\s*(INSERT\s+INTO\s+Job\s+VALUES\s*\(([^,]+,){7})0,/$1NULL,/i;
        s/^\s*(INSERT\s+INTO\s+Job\s+VALUES\s*\(([^,]+,){8})0,/$1NULL,/i;
        s/^\s*(INSERT\s+INTO\s+Job\s+VALUES\s*\(([^,]+,){9})0,/$1NULL,/i;
        s/^\s*(INSERT\s+INTO\s+Job\s+VALUES\s*\(([^,]+,){10})0,/$1NULL,/i;
    }
    elsif ($table eq "jobmedia")
    {
        # The JobMedia table in the SQLite3 database contains a 'Copy' column
        # on the end while the PostgreSQL one doesn't.
        # If there are too many columns being inserted, remove the last one.
        #
        s/^\s*(INSERT\s+INTO\s+JobMedia\s+VALUES\s*\(([^,]+,){9}[^,]+)(,[^\)]+)?\s*(\)\s*;)\s*$/$1$4/i;
    }
    elsif ($table eq "media")
    {
        # TIMESTAMP type columns cannot accept 0, so change to NULL instead.
        #
        s/^\s*(INSERT\s+INTO\s+Media\s+VALUES\s*\(([^,]+,){7})0,/$1NULL,/i;
        s/^\s*(INSERT\s+INTO\s+Media\s+VALUES\s*\(([^,]+,){8})0,/$1NULL,/i;
        s/^\s*(INSERT\s+INTO\s+Media\s+VALUES\s*\(([^,]+,){9})0,/$1NULL,/i;
        s/^\s*(INSERT\s+INTO\s+Media\s+VALUES\s*\(([^,]+,){38})0,/$1NULL,/i;

        # An empty volume status needs to be fixed.
        # I'm not sure what to set this to, but Append seems like the best
        # choice? That way the tape will try to be written to, and if it is
        # Full it will be marked so, or if there is a problem then hopefully
        # it'll be marked in Error.
        #
        s/^\s*(INSERT\s+INTO\s+Media\s+VALUES\s*\(([^,]+,){19})'',/$1'Append',/i;
    }

    # Write the statement to the table file.
    #
    print $fh "$_\n";

    # Keep track of the highest sequence number we've encountered for certain
    # tables.
    # For Bacula, the serial type column is always the first one in the table.
    #
    next if $table !~ /^(basefiles|client|file|filename|fileset|job|jobmedia|location|log|media|path|pool)$/;
    (my $seq = $_) =~ s/^\s*INSERT\s+INTO\s+\S+\s+VALUES\s*\(\s*([^,]+),.*$/$1/i;
    $sequence{$table} = 0 if not defined $sequence{$table};
    $sequence{$table} = $seq if $seq > $sequence{$table};
}
print "\n";
# Some tables contain a serial type column.
# For these, add a SQL statement to make sure new rows don't duplicate the
# serial number of existing rows.
#
print "Adding SQL to update SEQUENCE numbers...";
foreach my $table (keys %sequence)
{
    # Get the file handle for the table and the sequence number to be used
    # for the next row inserted into the table.
    #
    my $fh = $filehandle{$table};
    my $seq = $sequence{$table} + 1;

    # The serial type column in all the tables is called ${table}id except
    # for the basefiles table.
    #
    my $column = $table . "id";
    $column = "baseid" if $table eq "basefiles";

    # The serial sequences are stored in ${table}_${column}_seq.
    # Add SQL to update the values.
    #
    print $fh "ALTER SEQUENCE $table" . "_" . "$column" . "_seq RESTART WITH $seq;\n";
}
print "\n";
# Add a COMMIT statement to the end of each file to finish the transaction;
# close them off; and process them as we go.
#
foreach my $table (sort keys %filehandle)
{
    my $fh = $filehandle{$table};
    print $fh "COMMIT;\n";
    close $filehandle{$table};

    print "Processing the $table table...";
    system "psql -q -d bacula -f table.$table";
    print "\n";
}
exit 0;