Hello Stephan,
Thanks for your feedback. Sometimes a few users have performance problems due
to bad settings and can give bad advice, or the impression that nothing can be
done, whereas it works well for tens of thousands of silent users.
In this case the performance problem is due to the presence of bad
Hello,
Graham Keeling wrote:
>
> Hello,
>
> I now believe that the 'taking hours' problem that I was having was
> down to having additional indexes on my File table, as Eric suggested.
>
> I am using mysql-5.0.45.
>
> I had these indexes:
> JobId
> JobId, PathId, FilenameId
> PathId
> FilenameId
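(For reference, a minimal sketch of how such extra indexes can be found and
removed in MySQL. The index names below are assumptions; check the names that
SHOW INDEX actually reports, and keep the indexes created by the standard
make_mysql_tables script.)

  -- list every index currently defined on the File table
  SHOW INDEX FROM File;

  -- hypothetical index names: drop only the redundant single-column ones
  DROP INDEX PathId ON File;
  DROP INDEX FilenameId ON File;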
Graham Keeling wrote:
>
>> > I don't understand this at all.
>> > If you cannot trust the JobIds or FileIds in the File table, then the
>> > postgres
>> > query is also broken. The postgres query doesn't even mention JobTDate.
>> > In fact, the postgres query is using StartTime to do the ordering.
Graham Keeling wrote:
>
> On Thu, Apr 08, 2010 at 07:44:14AM -0700, ebollengier wrote:
>>
>> Hello,
>>
>>
>> Graham Keeling wrote:
>> >
>> > Hello,
>> >
>> > I'm still waiting for my test database to fill up
Hello,
Graham Keeling wrote:
>
> Hello,
>
> I'm still waiting for my test database to fill up with Eric's data
> (actually,
> it's full now, but generating the right indexes is taking lots of time).
>
>
> But, I have another proposed solution, better than the last one I made.
>
> My previou
ebollengier wrote:
>
>
> Graham Keeling wrote:
>>
>> On Wed, Apr 07, 2010 at 08:22:09AM -0700, ebollengier wrote:
>>> I tweaked my test to compare both queries, and it shows no difference
>>> with
>>> and without base job part... If you want to
Graham Keeling wrote:
>
> On Wed, Apr 07, 2010 at 08:22:09AM -0700, ebollengier wrote:
>> I tweaked my test to compare both queries, and it shows no difference
>> with
>> and without base job part... If you want to test queries on your side
>> with
>>
ebollengier wrote:
>
>
>
> Graham Keeling wrote:
>>
>> On Tue, Apr 06, 2010 at 09:01:13AM -0700, ebollengier wrote:
>>> Hello Graham,
>>
>> Hello, thanks for your reply.
>>
>>> Graham Keeling wrote:
>>> >
>>
Graham Keeling wrote:
>
> On Tue, Apr 06, 2010 at 09:01:13AM -0700, ebollengier wrote:
>> Hello Graham,
>
> Hello, thanks for your reply.
>
>> Graham Keeling wrote:
>> >
>> > Hello,
>> > I'm using bacula-5.0.1.
>> > I ha
Hello Graham,
Graham Keeling wrote:
>
> Hello,
> I'm using bacula-5.0.1.
> I have a 2.33GHz CPU with 2G of RAM.
> I am using MySQL.
> I had a VirtualFull scheduled for my client.
>
> My log says the following:
>
> Apr 4 18:56:02 Start Virtual Backup JobId 56,
> Job=Linux:cvs.2010-04-04_18.5
Hi Anthony,
Avarca, Anthony wrote:
>
> All,
>
> I'm using bacula to backup desktop and laptop clients. The desktops work
> well with a schedule, but laptops are another story. Does anyone have a
> strategy to backup laptops? Is it possible to have the user trigger a
> backup?
>
> Any feedback
Hi Bruno,
I will be in Brussels on the 5th/6th/7th too, see you there!
Bye
Bruno Friedmann-2 wrote:
>
> Hi Bacula'ers
>
> Are any of you going to FOSDEM (www.fosdem.org) on 6-7 February 2010
> in Brussels?
> Don't miss the free-beer night event on the 5th, too.
>
>
> --
>
> Bruno Frie
Hello,
Sławomir Paszkiewicz wrote:
>
> Hello!
> I've been using Bacula and everything was running smoothly but now,
> when I must restore some *very, very* important data, bacula tells me:
>
> 25-lis 18:16 bls JobId 1114: Error: block.c:318 Volume data error at
> 0:684407952!
> Block checksum
Christian Schnelle-2 wrote:
>
> Hi,
>
> after an upgrade from bacula 1.38 to 2.2.4 I got trouble with one
> backup job. This job backs up roughly 2TB of data.
> After writing the data to the tapes bconsole shows:
> 'Dir inserting Attributes'.
> This inserting runs now for about 3 days.
>
> Po
e I need to remove.
> I
> don't want to go through the trouble of recompiling mysql since this is a
> test system.
>
>
> Paul
>
> -Original Message-
> From: ebollengier [mailto:e...@eb.homelinux.org]
> Sent: Saturday, October 17, 2009 10:45 AM
> To: ba
Paul A-2 wrote:
>
> Hi, trying to get webrestore to work; so far I can see the clients page but
> not jobs. I believe this is because I have not imported the bweb.sql.
>
>
>
> According to the install file it says, "3) Make sure that brestore cache
> tables are in your catalog (bweb-xxx.sql file
Sean M Clark wrote:
>
> We've got several machines where Symantec Antivirus appears to butt in
> and bog down file transfers severely as it apparently scans every single
> file before it's transferred (or at least this is what I believe is
> happening).
>
> Previously, we found that we could te
Hello,
Perhaps this picture can help you understand the different MaxTime
directives:
http://www.bacula.org/3.0.x-manuals/en/concepts/concepts/New_Features_in_3_0_0.html#SECTION0042414000
Bye
Joseph L. Casale wrote:
>
> I have a job that runs either fulls or diffs, sometimes when
>
Júlio Maranhão-2 wrote:
>
> On Mon, Sep 28, 2009 at 6:10 PM, ebollengier
> wrote:
>> How are you 100% sure that only Bacula will change this archive bit on
>> your
>> system? (It can
> lead to serious consistency problems if users run winzip behind your back)
>
Júlio Maranhão-2 wrote:
>
>> In theory, Accurate means that it should back up any file that is in any
>> way different to what it was when the backup was last done. I'm not sure
>> to what extent it works in practice.
>
> I will measure the memory cost and correctness.
>
>>
>> Bacula began in
Hello James,
James Harper wrote:
>
> I do a full backup once a week, and then incremental backups for the
> rest of the week.
>
> MSSQL is configured to back up transaction logs every day and keep the
> backup for 3 days (by which time it would have been backed up). It's
> Thursday today and I
Hello,
Clark Hartness wrote:
>
> I am attempting to send a list of files to Bacula from the Client using
> this syntax described in the manual:
>
> Any file-list item preceded by a less-than sign (<) will be taken to be
> a file. This file will be read on the Director's machine at the time t
Hello,
Daniel Holtkamp wrote:
>
> Hello everyone !
>
> Please correct me if I'm wrong, but I did notice that all temporary
> batch-insert tables created by bacula are disk-based.
>
To avoid problems with encoding, Bacula always uses BLOBs to store strings.
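(As an illustration only, a rough sketch of what such a definition looks like
in the MySQL catalog; the exact columns and index prefix are in the shipped
make_mysql_tables script, not copied here.)

  CREATE TABLE Filename (
     FilenameId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
     Name       BLOB NOT NULL,      -- stored as BLOB: no character-set conversion
     PRIMARY KEY (FilenameId),
     INDEX (Name(255))              -- prefix index, since BLOB indexes need a length
  );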
Daniel Holtkamp wrote:
>
> Af
Hello,
Vtape is used only for testing purposes, and the maximum size is defined at
compile time (see vtape.h).
I suggest you avoid backing up your critical data on a vtape device...
Bye
Shad L. Lords wrote:
>
> Is there a size limit to a vtape device? I've tried specifying tape sizes
Hello Craig,
Craig Ringer wrote:
>
> Hi all
>
> I'm wondering if there's any way to limit the memory used by `accurate'
> mode - say, by spooling the required data to an mmap()ped file instead
> of an in-memory block, or handling it in chunks.
>
Not at this time, mmap isn't portable and it
Hello,
You can also use the pure PL function from bweb for that (see
bweb/scripts/bweb-postgresql.sql).
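The kind of lookup it wraps can also be written as a plain query against the
catalog; a minimal sketch, using the standard schema of that era, where the
path and file name below are placeholders:

  SELECT Job.JobId, Job.Name, Job.StartTime, File.FileIndex
    FROM File
    JOIN Filename ON Filename.FilenameId = File.FilenameId
    JOIN Path     ON Path.PathId         = File.PathId
    JOIN Job      ON Job.JobId           = File.JobId
   WHERE Path.Path = '/etc/bacula/'
     AND Filename.Name = 'bacula-dir.conf'
   ORDER BY Job.StartTime DESC;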
Bye
DavidLeeLambert wrote:
>
> I was looking into making queries for a particular file in a Bacula
> Postgres database, and came up with the attached functions. If someone
> really wanted to
Bweb provides this indication in the "follow backup" window. It's based on a
least-squares-fit linear equation on postgresql, and a simpler function
on mysql.
You can see the progression of the number of files and the total bytes.
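For the curious, PostgreSQL's built-in regr_slope()/regr_intercept()
aggregates are one way to get such a least-squares fit; a rough sketch of the
technique (projecting JobBytes against time over a job's history, as an
illustration of the idea, not the exact bweb code):

  SELECT regr_slope(JobBytes, extract(epoch FROM StartTime))     AS slope,
         regr_intercept(JobBytes, extract(epoch FROM StartTime)) AS intercept
    FROM Job
   WHERE Name = 'backup-myclient'   -- placeholder job name
     AND Level = 'F'
     AND JobStatus = 'T';           -- only jobs that terminated OK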
Bye
sublayer wrote:
>
> Now with estimate command i can kn
Hello,
Arno Lehmann wrote:
>
> Hi,
>
> 13.11.2008 21:58, Dan Langille wrote:
>> On Nov 13, 2008, at 11:56 AM, Heitor Medrado de Faria wrote:
>>
>>> Heitor Faria
>>>
>>> Dan Langille wrote:
On Nov 13, 2008, at 11:16 AM, Heitor Medrado de Faria wrote:
> Guys,
>
> This is ur
ebollengier wrote:
>
>
>
> Jason Dixon-6 wrote:
>>
>> On Wed, Nov 12, 2008 at 12:56:21AM -0800, ebollengier wrote:
>>>
>>> Hello,
>>>
>>> The batch mode improve the speed with postgresql by a factor of 10
>>> (maybe
Jason Dixon-6 wrote:
>
> On Wed, Nov 12, 2008 at 07:07:13AM -0800, Dan Langille wrote:
>>
>> On Nov 11, 2008, at 2:32 PM, Jason Dixon wrote:
>>
>>> We have a new Bacula server (2.4.2 on Solaris 10 x86) that runs fine for
>>> most backup jobs. However, we've encountered a particular job that
>>
Jason Dixon-6 wrote:
>
> On Wed, Nov 12, 2008 at 12:56:21AM -0800, ebollengier wrote:
>>
>> Hello,
>>
>> The batch mode improves the speed with postgresql by a factor of 10 (maybe
>> 20), using
>> a very big job (15M files) with the standard mode wo
Hello,
The batch mode improves the speed with postgresql by a factor of 10 (maybe
20); a very big job (15M files) won't work with the standard mode either.
But you will be able to cancel the job, because the director checks the job
status between each insertion. With the batch mode, you have
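(Roughly, the idea behind the batch mode: attributes are bulk-loaded into one
temporary staging table, then folded into Path/Filename/File with a handful of
set-based statements instead of one INSERT per file. A simplified sketch of
that pattern, not the exact statements Bacula issues:)

  CREATE TEMPORARY TABLE batch (
      FileIndex INTEGER, JobId INTEGER,
      Path BLOB, Name BLOB, LStat TINYBLOB, MD5 TINYBLOB);
  -- ... all file attributes are bulk-loaded here in one pass ...

  -- insert only the paths not yet known (Filename is handled the same way)
  INSERT INTO Path (Path)
       SELECT DISTINCT Path FROM batch
        WHERE NOT EXISTS (SELECT 1 FROM Path p WHERE p.Path = batch.Path);

  -- build the File rows by joining the staging table back on the reference tables
  INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5)
       SELECT b.FileIndex, b.JobId, Path.PathId, Filename.FilenameId, b.LStat, b.MD5
         FROM batch b
         JOIN Path     ON Path.Path     = b.Path
         JOIN Filename ON Filename.Name = b.Name;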
Yes, for sure, but it's quite easy to find the right syntax:
*update volume=vol75 inchanger=0
Invalid value. It must be yes or no.
*update volume=vol75 inchanger=no
New InChanger flag is: no
LeJav wrote:
>
> even "update volume=xxx inchanger=0" does not work
>
>
>> Hello,
>>
>> Yes, it
Hello,
Yes, it seems to be a bug (or an unwanted feature). Please, can you open a
bug on bugs.bacula.org?
A simple workaround is to put another volume in the empty slot, or to update
the volume with something like "update volume=xxx inchanger=0".
Bye
LeJav wrote:
>
> Hello,
>
> I am using ba
If it's 40,000 rows of the filename or path table, each row is something
like 10 or 15 bytes long (so roughly 40,000 x 15 bytes, about 600 KB in total).
If you want to reduce the size of your database, you need to review your
retention period or to prune jobs (i.e., remove the backup content information
from the catalog).
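(Before pruning, it can help to check where the space actually goes; a minimal
sketch for MySQL, where the schema name 'bacula' is an assumption, use whatever
your catalog database is called:)

  SELECT table_name,
         ROUND(data_length  / 1024 / 1024) AS data_mb,
         ROUND(index_length / 1024 / 1024) AS index_mb,
         table_rows
    FROM information_schema.tables
   WHERE table_schema = 'bacula'
   ORDER BY data_length + index_length DESC;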
See the manual for more in
Hello,
Not yet in Bacula, but you can take a look at xdelta; this tool will
generate binary patches that you will be able to back up with Bacula.
Bye
Daniel Kis wrote:
>
> Hi everyone!
>
> I have a question regarding Bacula.
>
> I want to use Bacula to save for instance two 5 GB files