Hi
It seems it is all in this sentence:
"The pgsql driver is not currently installed"
Are you running your Bacula with PostgreSQL?
To access the Bacula catalog, the underlying Zend Framework and PHP also need
the pdo_pgsql extension
(which is quite easy to install: just activate it or get it
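For example, on a typical PHP setup it is one line in php.ini (a minimal
sketch only; the exact extension file name and the way your distribution
splits its PHP configuration may differ):

; php.ini (or a conf.d snippet loaded by PHP)
extension=pdo_pgsql.so

Restart Apache (or whatever serves your PHP) afterwards so the extension is
actually loaded.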
Hi:
I'm trying to install Webacula on my Ubuntu server, on which I have Bacula and
PostgreSQL running fine. I have this configuration in the file config.ini:
[general]
;db.adapter = PDO_MYSQL
db.adapter = PDO_PGSQL
db.config.host = localhost
db.config.username = postgres
db.config.password = postgr
I would have to say that Bob Hetzel is on to something. I have 769 clients
and the client list is unwieldy. The only way for me to find my client is
to cut and paste the list into an editor and use a search function. After
choosing a restore client I often restore to an alternate computer. The
rest
In the message dated: Thu, 13 Nov 2008 00:03:18 +0100,
The pithy ruminations from Arno Lehmann on
were:
=> Hello,
=>
=> 11.11.2008 18:15, Bob Hetzel wrote:
=> > I've currently got over 150 backup clients installed with bacula so when
=> > I want to do a restore and I have it list the clie
Hi,
13.11.2008 00:11, Joerg Wunsch wrote:
> As Russell Sutherland wrote:
>
>> When trying to write/append some data to an existing labelled and
>> mounted tape I get:
>>
>> *messages
>> 12-Nov 16:56 backup-sd JobId 12120: Job
>> backup-data.2008-11-12_16.56.30 waiting. Cannot find any appendable
As Russell Sutherland wrote:
> When trying to write/append some data to an existing labelled and
> mounted tape I get:
>
> *messages
> 12-Nov 16:56 backup-sd JobId 12120: Job
> backup-data.2008-11-12_16.56.30 waiting. Cannot find any appendable
> volumes.
Looks quite close to a similar problem I
Hello,
11.11.2008 22:22, Carlo Maesen wrote:
...
> After I create the 3 pools (with different retentions), I only have
> to create 1 client with a file/job retention of 1 year. When the
> volume retention of the incremental-pool expires (4 weeks), the
> corresponding files/jobs will be pruned from
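For reference, the kind of resources being discussed might look like this (a
sketch with placeholder names; required directives such as Address, Password
and Catalog are omitted, and only the retention values match the ones quoted
above):

Pool {
  Name = Incremental-Pool        # placeholder name
  Pool Type = Backup
  Volume Retention = 4 weeks     # volume records become prunable after this
  AutoPrune = yes
  Recycle = yes
}

Client {
  Name = client1-fd              # placeholder name
  File Retention = 1 year        # file records kept at most this long
  Job Retention = 1 year         # job records kept at most this long
  AutoPrune = yes
}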
Hello,
11.11.2008 10:00, Isabel Bermejo wrote:
> Hi,
> I'm using Bacula to back up servers into files (not tapes). One of the
> servers has to backup 30GB of information. It has been working fine for 2
> years but 3 weeks ago I received a Fatal Error message.
>
> Here I post a little bit of the me
Hello,
11.11.2008 18:15, Bob Hetzel wrote:
> I've currently got over 150 backup clients installed with bacula so when
> I want to do a restore and I have it list the clients by name the list
> is rather unwieldy. I'm thinking the list is ordered by when they were
> added?
>
> If it's doing a
Hi,
12.11.2008 23:06, subbustrato wrote:
> Is it possible to set an Ethernet Disk as storage daemon?
I assume that by "Ethernet disk" you mean a NAS device.
Yes, it is possible if the device actually runs an SD.
Yes, it is possible to use a NAS device as the final destination for
Bacula's volumes.
No,
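If the device only exports a share (NFS/CIFS) rather than running its own SD,
the usual approach is to mount it on the storage daemon host and point a
file-based Device at it. A minimal bacula-sd.conf sketch, with a placeholder
mount point:

Device {
  Name = NAS-FileStorage           # placeholder name
  Media Type = File
  Archive Device = /mnt/nas/bacula # assumed mount point of the NAS share
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}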
When trying to write/append some data to an existing labelled and
mounted tape I get:
*messages
12-Nov 16:56 backup-sd JobId 12120: Job
backup-data.2008-11-12_16.56.30 waiting. Cannot find any appendable
volumes.
Please use the "label" command to create a new Volume for:
Storage: "Dell-P
Is it possible to set an Ethernet Disk as storage daemon?
bye,
sub
>> On Fri, 7 Nov 2008 22:14:29 +0200, "Jari Fredriksson" <[EMAIL PROTECTED]> said:
>
>> Now wondering best practices with DVD backups.
>
> The biggest one for me was adding -dvd-compat to the
> "self.growparams" settings in /etc/bacula/dvd-handler
> (near line 112)
Ok, a happy campe
> On Fri, 7 Nov 2008 22:14:29 +0200, "Jari Fredriksson" <[EMAIL PROTECTED]>
> said:
JF> Now wondering best practices with DVD backups.
The biggest one for me was adding -dvd-compat to the "self.growparams"
settings in /etc/bacula/dvd-handler (near line 112)
We may have a problem with deadlock on batch insert.
I have not looked closely, but it appears to be two batch inserts
running at the same time.
Begin forwarded message:
> From: Jason Dixon <[EMAIL PROTECTED]>
> Date: November 12, 2008 7:39:00 AM PST
> To: Dan Langille <[EMAIL PROTECTED]>
> Cc:
On Nov 12, 2008, at 7:39 AM, Jason Dixon wrote:
> On Wed, Nov 12, 2008 at 07:07:13AM -0800, Dan Langille wrote:
>>
>> On Nov 11, 2008, at 2:32 PM, Jason Dixon wrote:
>>
>>> We have a new Bacula server (2.4.2 on Solaris 10 x86) that runs
>>> fine for
>>> most backup jobs. However, we've encount
Hello list!
I'm testing Bacula 2.5.19 (upcoming 3.0.0) and copying jobs from disk pools
to tape.
I'm getting some errors during the copy process. Has anyone else seen
these?:
bacula-sd JobId 2994: Start Copying JobId 2994,
Job=CopyPool3UncopiedToTape.2008-11-12_16.40.09.26
bacula-sd JobId 2994
There is no other related data for these jobs.
I made a mistake though: you need
delete from job where type in ('R') ...
to delete restore jobs. Type 'V' is verify and type 'D' is admin.
__Martin
> On Wed, 12 Nov 2008 10:47:11 +0800, Quanzhong Zhang said:
>
> Hello Martin,
>
> Thank yo
junior.listas wrote:
> 1) MySQL tables became huge (each backup adds a million six hundred
> thousand rows), so I split the configuration into 2 daemons with 2
> different databases, one for Mon/Tue/Wed and another for Thu/Fri (and a
> third for monthly backups); because between a backup starts
On Wed, Nov 12, 2008 at 07:07:13AM -0800, Dan Langille wrote:
>
> On Nov 11, 2008, at 2:32 PM, Jason Dixon wrote:
>
>> We have a new Bacula server (2.4.2 on Solaris 10 x86) that runs fine for
>> most backup jobs. However, we've encountered a particular job that
>> hangs indefinitely with the statu
On Nov 12, 2008, at 12:20 AM, Arno Lehmann wrote:
> Hi,
>
> 12.11.2008 03:00, Dan Langille wrote:
>> http://www.enterprisenetworkingplanet.com/netos/article.php/3784081
>>
>> "If you're looking for a darned good open-source backup solution,
>> this
>> may be your lucky day for an interview with
On Nov 11, 2008, at 2:32 PM, Jason Dixon wrote:
> We have a new Bacula server (2.4.2 on Solaris 10 x86) that runs fine
> for
> most backup jobs. However, we've encountered a particular job that
> hangs indefinitely with the status "Dir inserting attributes". It's
> important to note that all
Hi,
I'm using an LTO-4 autochanger with a bacula-sd (v2.4.2) that has been
configured for about one or two months with:
Maximum Block Size = 2097152
When I configured this setting I didn't get any SD or director warnings
or issues (or at least none that I could see).
Today I wanted to restore a full backup o
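For context, the directive lives in the tape Device resource of
bacula-sd.conf; a trimmed sketch with placeholder names (only the block size
matches the value quoted above):

Device {
  Name = LTO4-Drive                # placeholder name
  Media Type = LTO-4
  Archive Device = /dev/nst0       # assumed device node
  AutoChanger = yes
  Maximum Block Size = 2097152     # must match what was used when the volumes were written
}

Volumes written with a non-default block size can only be read back with the
same Maximum Block Size configured on the reading device.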
Jason Dixon-6 wrote:
>
> On Wed, Nov 12, 2008 at 12:56:21AM -0800, ebollengier wrote:
>>
>> Hello,
>>
>> The batch mode improves the speed with PostgreSQL by a factor of 10 (maybe
>> 20); also, using a very big job (15M files) with the standard mode won't work.
>
> I don't understand what yo
On Wed, Nov 12, 2008 at 12:56:21AM -0800, ebollengier wrote:
>
> Hello,
>
> The batch mode improves the speed with PostgreSQL by a factor of 10 (maybe
> 20); also, using a very big job (15M files) with the standard mode won't work.
I don't understand what you're saying. Are you suggesting that e
Bacula 2.4.1
I need to do a verify of the state of a filesystem at a particular date
vs what's in there now, as some files have been overwritten with nulls
whilst having their timestamps preserved (hardware problems).
I've tried using "Verify Disk to Catalog", but this only seems to say if a
fil
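For what it's worth, a DiskToCatalog verify only compares the attributes and
checksums selected by the FileSet's verify option, so the FileSet used by the
verify job has to request a signature or files rewritten in place (same size,
same timestamps) will not be flagged. A sketch with placeholder names; other
required Job directives are omitted:

Job {
  Name = "VerifyAgainstCatalog"    # placeholder name
  Type = Verify
  Level = DiskToCatalog
  Client = client1-fd              # placeholder
  FileSet = "VerifySet"
}

FileSet {
  Name = "VerifySet"
  Include {
    Options {
      signature = MD5
      verify = pins1               # permissions, inode, number of links, size, MD5
    }
    File = /data                   # placeholder path
  }
}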
Hi Berend,
I was thinking of doing it that way, a full backup for offsite followed by a
full to stay in the changer. It just seems a long way around it; a full backup
for each client takes 15 hours in total.
Thanks Kevin, but I thought I had already included that
"Add the option 'recycle curr
Hey John,
I have a similar problem at the moment. The problem originates from the
fact that Bacula seems to be unable to separate restore jobs based on
storage pools (so a system with files stored in multiple pools will
require volumes from all pools used).
As far as I know this is currently n
Hi everyone,
I have been running nightly backups using a single pool of 19 volumes. The
schedule is:
Schedule {
Name = "WeeklyCycle"
Run = Full 1st sat at 23:05
Run = Differential 2nd-5th sat at 23:05
Run = Incremental mon-fri at 23:05
}
What I want to do is take tapes offsite once a
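One common way to split off the offsite tapes is to give the full backups (or
a second run of them) their own pool via a pool override in the schedule. A
sketch, assuming an additional pool named OffsitePool exists (the name is a
placeholder):

Schedule {
  Name = "WeeklyCycle-Offsite"
  Run = Level=Full Pool=OffsitePool 1st sat at 23:05
  Run = Level=Differential 2nd-5th sat at 23:05
  Run = Level=Incremental mon-fri at 23:05
}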
Hello,
I have done several tests and I don't fully understand recycling with the
Scratch pool.
According to the documentation and my tests, the recycling occurs only when a
job needs a tape and there is no appendable one in the pool. In this case, it will
try to recycle a tape (from the sam
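For reference, the directives that drive this behaviour sit on the Pool
resources; a sketch with placeholder names and values (a tape is taken from
Scratch when the pool has no appendable volume, and RecyclePool sends purged
volumes back there):

Pool {
  Name = Scratch
  Pool Type = Backup
}

Pool {
  Name = Monthly                   # placeholder name
  Pool Type = Backup
  Volume Retention = 4 weeks       # placeholder value
  AutoPrune = yes
  Recycle = yes
  RecyclePool = Scratch            # purged volumes are moved back to Scratch
}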
Hello,
The batch mode improves the speed with PostgreSQL by a factor of 10 (maybe
20); also, using a very big job (15M files) with the standard mode won't work.
But, you will be able to cancel the job because the director checks the job
status between each insertion. With the batch mode, you have
Mikel Jimenez Fernandez wrote:
> Ronald Buder wrote:
>> James Harper wrote:
Mikel Jimenez Fernandez wrote:
> Hello
> I always back up my clients through the OpenVPN net (10.10.0.0/24) but I
> need to back up one or two clients through public IPs.
Hi,
12.11.2008 03:00, Dan Langille wrote:
> http://www.enterprisenetworkingplanet.com/netos/article.php/3784081
>
> "If you're looking for a darned good open-source backup solution, this
> may be your lucky day for an interview with this data-sucking vampire."
>
> It does a good job of introdu
Hi,
Thanks for your answers. I've been doing a lot of tests. I have tried to
catch the dump and I can see the progress while the dump runs. The problem
is that Bacula is still waiting for the dump to finish, so when the job tries to
read from the FIFO file I've created, the result is an error message: "Can
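For the archives: the usual pattern for backing up a dump through a FIFO is to
mark the file as a FIFO in the FileSet and to start the dump in the background
from a run-before script, so the writer is already feeding the FIFO when the
FD opens it. A sketch with placeholder names and paths:

FileSet {
  Name = "DumpViaFifo"             # placeholder name
  Include {
    Options {
      signature = MD5
      readfifo = yes               # read the data stream from the FIFO
    }
    File = /var/backup/dump.fifo   # placeholder path to the FIFO
  }
}

and in the Job resource, something like (placeholder command):

  ClientRunBeforeJob = "/usr/local/bin/start-dump-to-fifo.sh"

where the script writes the dump into the FIFO in the background and returns
immediately.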