Hemant Shah wrote:
>
>
> --- On Fri, 3/20/09, Jesper Krogh wrote:
>
>
>> From: Jesper Krogh
>> Subject: Re: [Bacula-users] Better way to garbage collect postgresql database
>> To: hj...@yahoo.com
>> Cc: "baculausers"
>> Date: Friday, March 20, 2009, 12:30 AM
>> Hemant Shah wrote:
>>
>>>
On Fri, Mar 20, 2009 at 04:11:55PM -0400, Jason Dixon wrote:
> On Fri, Mar 20, 2009 at 03:46:49PM -0400, Jason Dixon wrote:
> >
> > Just to be certain, I kicked off a few OS jobs just prior to the
> > transaction log backup. I also changed the Storage directive to use
> > "Maximum Concurrent Jobs
On Fri, Mar 20, 2009 at 04:54:01PM -0400, John Lockard wrote:
> On Fri, Mar 20, 2009 at 04:11:55PM -0400, Jason Dixon wrote:
> > >
> > > Running Jobs:
> > > JobId Level Name Status
> > > ==
> > > 11239 In
On Fri, Mar 20, 2009 at 03:46:49PM -0400, Jason Dixon wrote:
>
> Just to be certain, I kicked off a few OS jobs just prior to the
> transaction log backup. I also changed the Storage directive to use
> "Maximum Concurrent Jobs = 1" for FileStorage. This forces only one OS
> job at a time.
>
> I
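A minimal sketch of where such a directive typically lives, in the Director's
Storage resource, with hypothetical names and password:

    Storage {
      Name = FileStorage
      Address = backup.example.com      # hypothetical SD address
      SDPort = 9103
      Password = "storage-password"     # must match the SD's own config
      Device = FileDevice
      Media Type = File
      Maximum Concurrent Jobs = 1       # admit only one job at a time
    }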
Hi,
20.03.2009 11:24, James Harper wrote:
> I have half an idea for a feature request but it's not well defined
> yet...
>
> Basically, I have a bunch of clients to back up, some are on gigabit
> network and some are stuck on 100mbit. They are being backed up to a
> disk that has throughput of around 20-30mbytes/second.
On Fri, Mar 20, 2009 at 02:37:06PM -0400, Jason Dixon wrote:
> On Fri, Mar 20, 2009 at 10:36:16AM -0700, Kevin Keane wrote:
> > Jason Dixon wrote:
> > >
> > > Here is an example from yesterday. Job 11174 is the transaction logs.
> > > The others are OS jobs I ran manually from bconsole.
> > >
> >
On Friday 20 March 2009 18:56:55 Thomas Mueller wrote:
> On Fri, 20 Mar 2009 18:18:34 +0100, Kern Sibbald wrote:
> > It looks like you are trying to build Bacula using a Debian packaging
> > that was designed for 2.4. Version 2.5.x is significantly different and
> > will require a number of modifications.
The minimum setting I have on Max Concurrent Jobs is on the
Tape Library and that's set to 3. It appears that priority
trumps all, unless the priority is the same or better.
So, if I have one job that has priority of, say, 10, then
any job running on any other tape drive or virtual library
will s
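Priorities are set per Job; a sketch with hypothetical names (lower numbers
run first, and while a lower-numbered job runs, higher-numbered jobs wait
even if they target a different drive):

    Job {
      Name = "CatalogBackup"        # hypothetical
      Type = Backup
      Client = backup-fd
      FileSet = "Catalog Set"
      Storage = TapeLibrary
      Pool = MonthlyPool
      Messages = Standard
      Priority = 8                  # jobs with Priority > 8 will wait
    }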
On Fri, Mar 20, 2009 at 10:36:16AM -0700, Kevin Keane wrote:
> Jason Dixon wrote:
> >
> > Here is an example from yesterday. Job 11174 is the transaction logs.
> > The others are OS jobs I ran manually from bconsole.
> >
> > Running Jobs:
> > JobId Level Name Status
> > ==
I've decided to do some tests with Spool Attributes to see if it speeds up my
full backups to tape. I noticed that the documentation says I can set Spool
Attributes in the Job resource. It does not mention that I can set Spool
Attributes in the Schedule Resource, although it does have Spool
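For reference, a sketch of the documented form, hypothetical names; whether
an analogous override is honored in a Schedule Run line is exactly what is
in question here:

    Job {
      Name = "FullToTape"           # hypothetical
      Type = Backup
      Client = server1-fd
      FileSet = "Full Set"
      Storage = TapeLibrary
      Pool = FullPool
      Messages = Standard
      Spool Attributes = yes        # documented in the Job resource
    }

    Schedule {
      Name = "WeeklyCycle"
      # SpoolData is a documented Run-line override; a SpoolAttributes
      # override here is the undocumented case being asked about.
      Run = Level=Full SpoolData=yes 1st sun at 23:05
    }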
> I stand somewhat corrected. I was wrong in stating
> that priority of a job on a certain media blocked
> only jobs on that media. It actually blocks all other
> lower priority jobs from running no matter whether the
> lower priority job is on the same media or not.
>
I find this makes priorities
On Fri, 20 Mar 2009 18:18:34 +0100, Kern Sibbald wrote:
> It looks like you are trying to build Bacula using a Debian packaging
> that was designed for 2.4. Version 2.5.x is significantly different and
> will require a number of modifications.
I made modifications for 2.5. The package builds with
Jason Dixon wrote:
> On Fri, Mar 20, 2009 at 06:56:38AM -0700, Kevin Keane wrote:
>
>> Jason Dixon wrote:
>>
>>> They don't. Previously, the OS backups and the log backups each had
>>> their own pool on the same storage device (tape drive). Recently, the
>>> OS backups have used their own pool on a File device instead.
I stand somewhat corrected. I was wrong in stating
that priority of a job on a certain media blocked
only jobs on that media. It actually blocks all other
lower priority jobs from running no matter whether the
lower priority job is on the same media or not.
-John
On Fri, Mar 20, 2009 at 10:04:4
It looks like you are trying to build Bacula using a Debian packaging that was
designed for 2.4. Version 2.5.x is significantly different and will require
a number of modifications.
Regards,
Kern
On Thursday 19 March 2009 13:51:54 Thomas Mueller wrote:
> hi
>
> as one asked for ubuntu packag
Hemant Shah wrote:
>
> --- On Thu, 3/19/09, Kevin Keane wrote:
>
>
>> From: Kevin Keane
>> Subject: Re: [Bacula-users] Better way to garbage collect postgresql database
>> To:
>> Cc: "baculausers"
>> Date: Thursday, March 19, 2009, 8:30 PM
>> Hemant Shah wrote:
>>
>>> Folks,
>>>
>>> This is a database question, but I figured
Dang, as of this morning this doesn't seem to be the case. I
have split up the trouble server and am now checking for other issues. I am
also in the process of recreating the database in ASCII format so that we
can rule that out as an issue even though there are no logs in postgres that
--- On Fri, 3/20/09, Jesper Krogh wrote:
> From: Jesper Krogh
> Subject: Re: [Bacula-users] Better way to garbage collect postgresql database
> To: hj...@yahoo.com
> Cc: "baculausers"
> Date: Friday, March 20, 2009, 12:30 AM
> Hemant Shah wrote:
> > This is a database question, but I figured
--- On Thu, 3/19/09, Kevin Keane wrote:
> From: Kevin Keane
> Subject: Re: [Bacula-users] Better way to garbage collect postgresql database
> To:
> Cc: "baculausers"
> Date: Thursday, March 19, 2009, 8:30 PM
> Hemant Shah wrote:
> > Folks,
> >
> > This is a database question, but I figured
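The subject line points at routine catalog maintenance; a minimal sketch of
the usual PostgreSQL housekeeping for the stock Bacula catalog tables, not
necessarily what either poster settled on:

    -- Reclaim dead rows left behind by pruning and refresh planner stats.
    VACUUM ANALYZE file;
    VACUUM ANALYZE filename;
    VACUUM ANALYZE path;
    VACUUM ANALYZE job;

    -- Indexes on heavily-pruned tables can stay bloated; rebuild the big one.
    REINDEX TABLE file;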
On Fri, Mar 20, 2009 at 06:56:38AM -0700, Kevin Keane wrote:
> Jason Dixon wrote:
> >
> > They don't. Previously, the OS backups and the log backups each had
> > their own pool on the same storage device (tape drive). Recently, the
> > OS backups have used their own pool on a File device instead.
Martin Simmons wrote on 20/03/2009 11.59.10:
> > On Fri, 20 Mar 2009 09:42:36 +0100, Ferdinando Pasqualetti said:
> >
> > Does anybody know if there is a reason why the "prune" command, which is
> > not dangerous and is also automatically triggered in some cases, asks for
> > a confirmation
Hi All,
I have a mix of disk and tape backups. To disk I allow up to
20 jobs to run concurrently. On my tape library I have 3 tape
drives, so only allow a max of 3 jobs to run concurrently.
I run Full backups once a month, Differentials once a week
and incrementals most days of the week. I would
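That cycle maps onto the classic Schedule resource; a sketch with a
hypothetical name and run time:

    Schedule {
      Name = "StandardCycle"                        # hypothetical
      Run = Level=Full 1st sun at 23:05             # monthly full
      Run = Level=Differential 2nd-5th sun at 23:05
      Run = Level=Incremental mon-sat at 23:05
    }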
Jason Dixon wrote:
> On Fri, Mar 20, 2009 at 03:51:58AM -0700, Kevin Keane wrote:
>
>> Jason Dixon wrote:
>>
>>> On Thu, Mar 19, 2009 at 06:08:23PM -0700, Kevin Keane wrote:
>>>
>>>
Jason Dixon wrote:
> I've tried that. But since the scheduled OS
On Fri, Mar 20, 2009 at 03:51:58AM -0700, Kevin Keane wrote:
> Jason Dixon wrote:
> > On Thu, Mar 19, 2009 at 06:08:23PM -0700, Kevin Keane wrote:
> >
> >> Jason Dixon wrote:
> >>
> >>> I've tried that. But since the scheduled OS backup jobs are already
> >>> running, the client-initiated
> On Fri, 20 Mar 2009 09:42:36 +0100, Ferdinando Pasqualetti said:
>
> Does anybody know if there is a reason why the "prune" command, which is
> not dangerous and is also automatically triggered in some cases, asks for a
> confirmation before being executed, while the "purge" command, which
>
Jason Dixon wrote:
> On Thu, Mar 19, 2009 at 06:08:23PM -0700, Kevin Keane wrote:
>
>> Jason Dixon wrote:
>>
>>> I've tried that. But since the scheduled OS backup jobs are already
>>> running, the client-initiated transaction log jobs are forced to wait.
>>>
>>>
>> Then you prob
I have half an idea for a feature request but it's not well defined
yet...
Basically, I have a bunch of clients to back up, some are on gigabit
network and some are stuck on 100mbit. They are being backed up to a
disk that has throughput of around 20-30mbytes/second.
I am allowing 2 jobs to run a
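Pending such a feature, the available knobs are the per-resource concurrency
limits; a sketch with hypothetical names (required Director directives such
as Password and QueryFile omitted), which caps slots but is not speed-aware:

    Director {
      Name = bacula-dir
      Maximum Concurrent Jobs = 2   # the global ceiling of two jobs
    }

    Client {
      Name = slow-client-fd         # a 100mbit client
      Address = slow.example.com
      Password = "fd-password"
      Catalog = MyCatalog
      Maximum Concurrent Jobs = 1   # never run two jobs on this client
    }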
John Drescher schrieb:
> This is pbzip2, I use it for a custom build process with gentoo. I am
> not sure how hard it would be to add this to bacula.
I'm not willing to go through the Bacula code, but I think it might be easy
to write my own wrapper for pbzip2 if I know how Bacula calls the
compr
Hello,
I ask myself:
Is it possible to make backups work with concurrent jobs enabled and
compression (gzip) activated only on certain clients?
Could mixing a client that compresses with one that does not cause
problems during restoration?
Thanks.
Olivier Delestre
--
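Compression in Bacula is chosen per FileSet, so the usual way to compress
only certain clients is to reference a compressing FileSet from just those
clients' Jobs; restores decompress transparently either way. A sketch with
hypothetical names and paths:

    FileSet {
      Name = "CompressedSet"        # assign only to clients that should gzip
      Include {
        Options {
          signature = MD5
          compression = GZIP        # the FD compresses before sending
        }
        File = /etc
        File = /home
      }
    }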
Doug Forster schrieb:
> I have gone into the database and can see that the database is empty for the
> job in question. I think that there is an issue with the insertion of over a
> million entries all at once that is giving bacula a hard time. I have found
> a supporting post here:
> http://www.ba
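Bacula's batch-insert build option exists for exactly this kind of bulk
attribute insert; a sketch of enabling it at build time (whether it cures
this particular hang is unverified):

    ./configure --enable-batch-insert --with-postgresql
    make && make install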
Hello List,
maybe I am missing something, but I have had this question for a long time.
Does anybody know if there is a reason why the "prune" command, which is
not dangerous and is also automatically triggered in some cases, asks for a
confirmation before being executed, while the "purge" command, which
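For context, the two bconsole commands side by side, with a hypothetical
volume name; prune honors the configured retention periods while purge
ignores them:

    * prune volume=Vol-0001 yes   # respects retention; "yes" pre-answers
                                  #   the confirmation prompt
    * purge volume=Vol-0001       # ignores retention entirely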
Hi there,
we have a problem with the labeling of volumes. The volumes are labeled with
the name of the job plus the pool it used.
We use hard disks as media. The strange thing is that volumes are labeled
with the wrong name. We currently have 4 jobs,
and we are planning to implement it for more. It isn't
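Automatic labels come from the Label Format directive of the Pool a job
writes into, so a job landing in the wrong pool picks up that pool's format.
A sketch with a hypothetical pool name:

    Pool {
      Name = Job1Pool
      Pool Type = Backup
      Label Format = "job1-"      # Bacula appends a number, e.g. job1-0001
    }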
John Drescher schrieb:
>>> I
>>> have not seen a cpu that can do more than 20 MB/s. I know my 2.83GHz
>>> core2 quad is no way as fast as my LTO2 tape drive when it comes to
>>> compression.
>> there is a multi-threaded version of bzip2 - but I have no idea whether
>> bacula will be able to handle