On Wed, Jun 30, 2010 at 8:35 AM, Robert LeBlanc wrote:
> On Wed, Jun 30, 2010 at 1:06 AM, Kern Sibbald wrote:
>
>>
>> This seems to be a support issue. The dump that you posted shows no
>> indication of a crash, which means that your understanding of a crash
>> and mine are different.
>>
>> This
A complete backup takes several days so I didn't run the compressed backup to
completion. But if I use examined files as a gauge it wasn't as bad as 8x.
It took 16 hours to process 300,000+ files with compression enabled and only 4
with it disabled.
Derek
On Jul 1, 2010, at 18:57, "James
>
> I've seen a very significant slowdown in backup speed by enabling gzip
> compress, 32MB/s (without gzip) vs 4MB/s (with gzip). The server I'm
> backing up has lots of CPU 24x2.6ghz so the compression time shouldn't
> be a huge factor. Is this normal for bacula or is there an optimization
> I'm missing?
Hi,
Please disregard this thread. Bacula is doing everything right.
Everything started with the summary showing compression rate equals zero
and the final fd bytes just a bit bigger than the sum of the file
sizes... Now that I have disabled compression completely I've noticed
that the encrypti
Gustavo Gibson da Silva wrote:
> Hi there,
>
> I have a bacula 5.0.2 setup that uses compression based on file
> compression and it is working fine. Now I've decided to play with
> encryption and I have just noticed that the files are no longer
> compressed. Are encryption and compression mutually exclusive or do I
> need another setup i
Hi there,
I have a bacula 5.0.2 setup that uses compression based on file
compression and it is working fine. Now I've decided to play with
encryption and I have just noticed that the files are no longer
compressed. Are encryption and compression mutually exclusive or do I
need another setup i
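For what it's worth, the usual reason these two features interact badly is ordering: ciphertext is statistically random, so compressing after encrypting gains nothing, which is why a backup tool has to compress on the client before encrypting (and, as the follow-up in this thread notes, Bacula does get this right). A minimal sketch of the effect, using `os.urandom()` as a stand-in for encrypted output:

```python
# Sketch: why compression must happen before encryption. Encrypted
# data is statistically random, so a compressor can't shrink it;
# os.urandom() stands in for ciphertext here.
import os
import zlib

plaintext = b"the quick brown fox jumps over the lazy dog\n" * 10_000
ciphertext_like = os.urandom(len(plaintext))

ratio_plain = len(zlib.compress(plaintext)) / len(plaintext)
ratio_cipher = len(zlib.compress(ciphertext_like)) / len(ciphertext_like)

print(f"plaintext ratio:  {ratio_plain:.3f}")   # far below 1.0
print(f"ciphertext ratio: {ratio_cipher:.3f}")  # at or slightly above 1.0
```

So if compression appears to do nothing on an encrypted job, the first thing to check is what the ratio in the job summary is actually measuring, not the order of operations.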
Hi,
On Thu, 01 Jul 2010, Derek Harkness wrote:
> Sorry, I misspoke in the original post. I'm backing up a server which
> has 24x2.6ghz cpus and is barely using any of them.
Sorry, on reflection, you were quite clear. I misread :-)
Gavin
---
On 07/01/10 15:27, Derek Harkness wrote:
> Sorry, I misspoke in the original post. I'm backing up a server which has
> 24x2.6ghz cpus and is barely using any of them. My bacula server is much
> smaller, only 4 cpus. It looks like bacula has a single-threaded compression
> engine which appears to bottleneck the whole process.
Sorry, I misspoke in the original post. I'm backing up a server which has
24x2.6ghz cpus and is barely using any of them. My bacula server is much
smaller, only 4 cpus. It looks like bacula has a single-threaded compression
engine which appears to bottleneck the whole process. For most backup
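The single-threaded bottleneck described above is easy to see in miniature: one gzip/zlib stream runs on one core, so per-job throughput is capped at whatever that core can compress, no matter how many CPUs sit idle. A rough sketch (zlib level 6, which to my knowledge matches Bacula's default GZIP setting; the payload and figures are illustrative):

```python
# Sketch: measure single-stream zlib throughput. A client with
# gzip compression enabled pushes all file data through one stream
# like this, so one core's compression speed caps the whole job.
import time
import zlib

data = b"typical backup payload: logs, text, some structure...\n" * 400_000

start = time.perf_counter()
compressed = zlib.compress(data, 6)  # level 6 = gzip's default
elapsed = time.perf_counter() - start

mb_per_s = len(data) / elapsed / 1e6
print(f"one core compresses about {mb_per_s:.0f} MB/s at level 6")
```

Whatever number this prints on the file daemon's hardware is roughly the ceiling for one compressed job; the 24 cores on the client can't help a single stream.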
On Thu, 01 Jul 2010, Derek Harkness wrote:
> I've seen a very significant slowdown in backup speed by enabling gzip
> compress, 32MB/s (without gzip) vs 4MB/s (with gzip). The server I'm
> backing up has lots of CPU 24x2.6ghz so the compression time shouldn't be
> a huge factor. Is this normal for bacula or is there an optimization I'm
> missing?
On 07/01/2010 02:37 PM, Martin Simmons wrote:
> Does the FileSet have just one Include section and one Options section? If
> it is more complicated than that, please post it.
The complete FileSet is:
# List of files to be backed up
FileSet {
  Name = "FullSet"
  Include {
    Options {
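The excerpt above cuts off inside the Options block. For reference, a typical complete FileSet with software compression enabled looks something like the following; the path and signature choice here are hypothetical, not taken from the original post:

```
FileSet {
  Name = "FullSet"
  Include {
    Options {
      signature = MD5
      compression = GZIP   # the FD-side gzip discussed in this thread
    }
    File = /home           # hypothetical path
  }
}
```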
On 07/01/10 10:29, Jon Schewe wrote:
> Although the more annoying thing is question 2, any takers?
>
> 2) So while the long job was running the daily job was paused until the
> long job finished (as I explained in 1), the really annoying thing was
> that a second daily job was put in the schedule queue
On 07/01/2010 01:17 PM, Jonathan B Bayer wrote:
> Hello Andreas,
>
> Have you checked the drive?
The drive is a RAID-5 volume on an 8-disk Adaptec 5805 controller.
>
> Try copying the files to another location and see if the copies give
> the same error.
By copies you just mean cp, right? If so:
On Thu, Jul 1, 2010 at 10:29 AM, John Drescher wrote:
> On Thu, Jul 1, 2010 at 9:45 AM, Jon Schewe wrote:
>> On 7/1/10 8:29 AM, John Drescher wrote:
>>> On Thu, Jul 1, 2010 at 9:21 AM, Jon Schewe wrote:
>>>
So I've got a job that takes over 48 hours to complete (I'm working on
fixing this)
On 7/1/10 9:29 AM, John Drescher wrote:
> On Thu, Jul 1, 2010 at 9:45 AM, Jon Schewe wrote:
>
>> On 7/1/10 8:29 AM, John Drescher wrote:
>>
>>> On Thu, Jul 1, 2010 at 9:21 AM, Jon Schewe wrote:
>>>
>>>
So I've got a job that takes over 48 hours to complete (I'm working on
On Thu, Jul 1, 2010 at 9:45 AM, Jon Schewe wrote:
> On 7/1/10 8:29 AM, John Drescher wrote:
>> On Thu, Jul 1, 2010 at 9:21 AM, Jon Schewe wrote:
>>
>>> So I've got a job that takes over 48 hours to complete (I'm working on
>>> fixing this), but in the meantime I have a second job scheduled to
>>>
On 01.07.2010 / 10:05:06 -0400, Derek Harkness wrote:
> I've seen a very significant slowdown in backup speed by enabling gzip
> compress, 32MB/s (without gzip) vs 4MB/s (with gzip). The server I'm backing
> up has lots of CPU 24x2.6ghz so the compression time shouldn't be a huge
> factor. Is this normal for bacula or is there an optimization I'm missing?
I've seen a very significant slowdown in backup speed by enabling gzip
compress, 32MB/s (without gzip) vs 4MB/s (with gzip). The server I'm backing
up has lots of CPU 24x2.6ghz so the compression time shouldn't be a huge
factor. Is this normal for bacula or is there an optimization I'm missing?
Hi,
Running Bacula 3.0.2 on CentOS 5.4. I'm getting an error I haven't encountered
before while backing up a Windows 7 client.
From the director:
bconsole> status client
.
Connecting to Client xx-fd at 10.xx.xx.xx:9102
xx-fd Version: 3.0.3 (18 October 2009) VSS Linux Cross-compile Wi
On Thu, Jul 1, 2010 at 9:21 AM, Jon Schewe wrote:
> So I've got a job that takes over 48 hours to complete (I'm working on
> fixing this), but in the meantime I have a second job scheduled to
> happen daily writing to a different pool. 2 odd things happened:
> 1) The second job couldn't run at the
On 7/1/10 8:29 AM, John Drescher wrote:
> On Thu, Jul 1, 2010 at 9:21 AM, Jon Schewe wrote:
>
>> So I've got a job that takes over 48 hours to complete (I'm working on
>> fixing this), but in the meantime I have a second job scheduled to
>> happen daily writing to a different pool. 2 odd things
I'm a complete div; the scripts are in the source.
Download the tar.gz for the appropriate version of bacula you are using,
extract it and have a look in updatedb.
I couldn't get the scripts to actually run, so I opened them up and ran the
contents as a query in mysql; that seems to have done the job.
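For anyone following the same path, the sequence was roughly this (the version number is illustrative; pick the update script matching your old and new catalog versions, and back up the catalog first):

```
tar xzf bacula-5.0.2.tar.gz          # version is illustrative
cd bacula-5.0.2/updatedb
ls update_mysql_tables_*             # pick the one matching your versions
# the scripts are shell wrappers around SQL; if they won't run,
# paste the SQL statements into a mysql session against the
# bacula catalog, as described above
```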
On Thursday 01 July 2010 03:21:23 pm Jon Schewe wrote:
> So I've got a job that takes over 48 hours to complete (I'm working on
> fixing this), but in the meantime I have a second job scheduled to
> happen daily writing to a different pool. 2 odd things happened:
> 1) The second job couldn't run at
So I've got a job that takes over 48 hours to complete (I'm working on
fixing this), but in the meantime I have a second job scheduled to
happen daily writing to a different pool. 2 odd things happened:
1) The second job couldn't run at the same time as a long job. This
might be because bacula can't
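On question 1, the usual culprit is concurrency limits: Bacula's Maximum Concurrent Jobs directive defaults to 1, and it generally has to be raised in every resource on the path (Director, Job or Client, and Storage) before two jobs will actually overlap. A sketch of the relevant bacula-dir.conf fragments (names and values are illustrative):

```
Director {
  ...
  Maximum Concurrent Jobs = 20
}
Storage {
  ...
  Maximum Concurrent Jobs = 20
}
Client {
  ...
  Maximum Concurrent Jobs = 20
}
```

Note that even with these raised, two jobs writing to different pools through a single tape or file device will still serialize on the device itself unless separate devices are available.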
Hello Andreas,
Have you checked the drive?
Try copying the files to another location and see if the copies give
the same error.
JBB
Thursday, July 1, 2010, 7:10:38 AM, you wrote:
> On 06/30/2010 12:55 PM, Martin Simmons wrote:
>>> On Tue, 29 Jun 2010 18:06:41 +0200, Andreas Koch said:
Mogaroy wrote:
> Got it working finally !! Thanks Mike, for the detailed explanation,
> and also for updating the wiki. It will definitely save a lot of
> trouble for anyone intending to backup oracle with bacula using the
> steps mentioned in the earlier version of the wiki.
> Thanks to the origin
Mike Holden wrote:
> Mogaroy wrote:
>
> > Thanks Mike, for clearing up a few things. Appreciate your patience!
> >
> > I finally managed to identify the cause of the errors. The problem,
> > as I see it, is with the way in which the hotbackupscript.sql is
> > being created.
> > The host rsync co
> On Thu, 01 Jul 2010 13:10:38 +0200, Andreas Koch said:
>
> On 06/30/2010 12:55 PM, Martin Simmons wrote:
> >> On Tue, 29 Jun 2010 18:06:41 +0200, Andreas Koch said:
> >>
> >> Can no one help with this? I'm somewhat worried that the many lines of
> >>
>> Warning: Can't verify checksum for (filename...)
It is located at :\SIS Common Store. It is not being backed
up. Backing up this area doesn't really solve the problem. There are
several issues:
1. Each file stored in the common area is named by UUID (e.g.
4c045966-5142-a3ef-d8d385e4ab0c.sis ). As far as I can tell there is no
way of knowing which
On 06/30/2010 12:55 PM, Martin Simmons wrote:
>> On Tue, 29 Jun 2010 18:06:41 +0200, Andreas Koch said:
>>
>> Can no one help with this? I'm somewhat worried that the many lines of
>>
>> Warning: Can't verify checksum for (filename...)
>>
>> during backup indicate a configuration problem a
>
> We have two HP StorageWorks servers which are running Microsoft Storage
> Server Service Pack 2. For those who don't know, this is a version of
> Windows Server 2008 which is stripped out and optimized for serving
> files. It also has the option of using Microsoft's implementation of
> Single Instance Storage
Hi,
We have two HP StorageWorks servers which are running Microsoft Storage
Server Service Pack 2. For those who don't know, this is a version of
Windows Server 2008 which is stripped out and optimized for serving
files. It also has the option of using Microsoft's implementation of
Single Instance Storage (SIS).
Just for the sake of the archives, this was reported here and I did not see it
before:
http://bugs.bacula.org/view.php?id=1399
> It seems that just overwriting doesn't work. When I set NextPool = Tuesday in
> the pool I defined in the copy job def, everything works (on Tuesdays). So as
> it is now, I h