computes a differential between that
and the original dbname.sql.gz file. The differential is
dbname.MMDD.diff. Finally, it deletes dbname.YYYYMMDD.gz and keeps the
differential. I use "xdelta3" for generating diffs, since the default
"diff"
/backup/sede/samba04
/backup/sede/samba05
... creating a new file as needed, where each of the listed files is one
single Volume. For instructions, see the manual here:
http://www.bacula.org/rel-manual/Automatic_Volume_Recycling.html
--
Darien Hager
[EMAIL PROTECTED]
Does it create a new volume, or re-use/recycle
an existing volume? Is automatic labeling turned on?
Myself, I have three pools (full, diff, incr) and each is set to create
volumes (e.g. "FULL0005") as necessary with a Maximum Volume Bytes of
2GB, and it works pretty well, so I think what y
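For reference, a minimal sketch of one such pool, assuming 2.x syntax
(the name and the 2GB figure mirror the description above; automatic
labeling also needs Label Media = yes on the SD's Device):

Pool {
  Name = Full
  Pool Type = Backup
  Label Format = "FULL"       # auto-creates FULL0001, FULL0002, ...
  Maximum Volume Bytes = 2G
  Recycle = yes
  AutoPrune = yes
}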
not on the same
Filesystem as where the File= line begins. To fix that, set "onefs = no"
in your FileSet options.
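A minimal sketch of where that option goes (FileSet name and path are
illustrative):

FileSet {
  Name = "example-set"
  Include {
    Options {
      signature = MD5
      onefs = no        # cross mount points instead of stopping at them
    }
    File = /
  }
}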
--
Darien Hager
[EMAIL PROTECTED]
into the database. (With a lot of tapes, this may take a
while... but if you were using SQLite, I'm guessing it was a fairly
light set-up backing up to files?)
--Darien Hager
[EMAIL PROTECTED]
pictures to get the whole
scene. But the actors are constantly moving (database still running).
Even if each *individual* picture (file) is accurate, if they don't
match up at the seams anymore then your panorama (database backup of
several datab
"I get a Connection refused when connecting to my
Client"
and heading "My Windows Client Immediately Dies When I Start It"
Were it a wiki, I'd just rewrite it myself, but...
--Darien Hager
[EMAIL PROTECTED]
I don't seem to have any .trace files from using setdebug. Am I doing
something wrong, misreading the documentation, etc?
Running the command, everything appears okay...
> *setdebug
> Enter new debug level: 200
> Available daemons are:
> 1: Director
> 2: Storage
> 3: Client
>
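(One guess, in case it helps anyone searching later: as far as I can
tell the .trace file is only written when the trace flag is also set,
e.g.

*setdebug level=200 trace=1 client=example-fd

where "example-fd" is a placeholder client name.)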
It's a little
cleaner that way, and the pools will be correctly applied even if you
run the job manually.
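If the trimmed context above was about putting the pool selection in
the Job resource rather than in Schedule overrides (my guess), a
minimal sketch with illustrative names:

Job {
  Name = "example-backup"
  # ... Client, FileSet, Schedule, etc. ...
  Pool = Default
  Full Backup Pool = Full
  Differential Backup Pool = Diff
  Incremental Backup Pool = Incr
}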
--Darien Hager
[EMAIL PROTECTED]
setting its
status to Full or tweaking the Maximum Volume Jobs count manually. I'd
try the former just to be safe--I don't know what a volume does if
Maximum Volume Jobs is set to less than the number of jobs already on it.
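Marking it Full is a console one-liner, e.g. (volume name illustrative):

*update volume=FULL0005 volstatus=Full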
--Darien Hager
[EMAIL PROTECTED]
In this case the padding with underscores is done because "Level" might be
blank, and if it is it will cause the substring call of 0-4 to complain.
--Darien Hager
[EMAIL PROTECTED]
> oring for Bacula use. I've mentioned the reasons when this was suggested
> some time ago.
Oh. Uhm. I tried to find out your reasons back then, except that
SourceForge automatically puts "xml" into most messages as pa
Cross-posting this particular reply to the developers list as well...
On Apr 26, 2007, at 8:16 AM, Ryan Novosielski wrote:
> This has been rejected before, and the reason being that you really
> don't want to have to have a database in order to read your tapes. If
> you have an emergency that you
On Apr 24, 2007, at 11:38 PM, Falk Sauer wrote:
> On Wednesday 25 April 2007, Ross Boylan wrote:
>> Does anyone know of a tool/script that will remap one set of uid's
>> and
>> gid's to another? I.e., if sarah has id 1005 on the original system,
>> and I restore it to a system where sarah has
On Apr 19, 2007, at 7:18 AM, Mike Vasquez wrote:
>
> I forgot to mention the error in previous email:
> 18-Apr 22:00 director-di: hl72.2007-04-18_22.00.07 Fatal error:
> bnet.c:775
> Unable to connect to File daemon on 123.123.123.12:9102. ERR=No
> route to
> host
Check that your director ca
> It's annoying and can't be good for the tape. Eventually I
> 'released' the
> tape and ejected it.
>
> Any advice on how to calm it down?
All I can think of (I don't use a tape drive) is that you may have
this set for the storage daemon:
Volume Poll Interval = 15m
Of course, if all
On Apr 24, 2007, at 5:50 AM, Luca Ferrari wrote:
>
>
> Now if I run the first job (mammuth_uff_a_job) the volume with the
> label
> mammuth_uff_a_job_2007_ is created, but if I run the second
> job, that I'd
> like to be on a different volume, the system keeps using the
> previous volum
I'd just like to bump this (hopefully simple) question back into the
fray. In summary:
Question: What are the main features of the "Admin" job type?
Feedback: The documentation doesn't have very much on it, only
comments in passing.
Pre-formal-request suggestion: Admin jobs should be able
On Apr 23, 2007, at 6:25 AM, Damian Lubosch wrote:
> I have a machine to backup with about 1-2 million small files (~1kb).
> When I run a migration job for about 4 GB of such data the performance
> is going down. The tape rewinds very often and the overall performance
> is about 3MB/sec. I found
On Apr 18, 2007, at 1:15 PM, David Lebel wrote:
And /backup would be defined on the OS level as a symlink to the
current "slot", let assume for this example, to /drive
then, have the system just switch the /backup directory/symlink
around the 5 drives, let say, /library/slot2 where a hard d
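A minimal sketch of that rotation, assuming POSIX rename semantics so
the switch is atomic (paths illustrative):

import os

def point_backup_at(slot):
    # e.g. slot = "/library/slot2"
    tmp = "/backup.new"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(slot, tmp)      # build the new link off to the side
    os.rename(tmp, "/backup")  # atomically replace the old symlink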
d fictive wall clock times when each
> job starts
> and stops under the proposed two priority system.
=====
Item 1: Allow per-client job priority ordering
Origin: Darien Hager <[EMAIL PROTECTED]>
Date: April
On Apr 17, 2007, at 11:24 AM, Darien Hager wrote:
> On Apr 17, 2007, at 10:25 AM, Carlos Cristóbal Sabroe Yde wrote:
>> This is the scenario:
>> I needed this because we had problems with file permissions
>> on a file server (someone pressed the wrong combination of
>
Mucking around with the console I noticed I could set the level of a
backup job to "Base" or "Since". There is some mention of "Base"
backups as being vaporware as of 1.30 within the FAQ, but otherwise
the manual doesn't seem to have that much on these options.
Is Base some sort of varia
On Apr 17, 2007, at 10:25 AM, Carlos Cristóbal Sabroe Yde wrote:
> Thanks, I will try it.
>
> This is the scenario:
> I needed this because we had problems with file permissions on a
> file server
> (someone pressed the wrong combination of keys with a '-R' on it :-
> ( ) and I
> don't want to
On Apr 16, 2007, at 7:56 AM, Jerry Amundson wrote:
> On 4/16/07, Alan Brown <[EMAIL PROTECTED]> wrote:
>> On Sat, 14 Apr 2007, Kern Sibbald wrote:
>>> This decision is motivated by the fact that the number of emails
>>> has grown to
>>> be quite large, and hence to read them all requires a good
On Apr 14, 2007, at 5:03 PM, Arno Lehmann wrote:
> I've got no idea why the SD would need that much memory... usually I
> don't notice a remarkable memory consumption by the SD.
>
> Can you reproduce the problem?
Yes, it continues to happen--I'm just not sure how to check what code
is causing
I've searched through the documentation and there seems to be fairly
little detail on "Admin" beyond that they "don't actually do
anything" beyond launching admin scripts. It seems that I can run
scripts on the director using them, but if I try to run them on
clients (through various dire
On Apr 13, 2007, at 1:14 PM, Ryan Novosielski wrote:
>
> Darien Hager wrote:
>>
>>> spath-store: signal.c:140 exepath=/etc/bacula/bacula-sd
>>> Calling: /etc/bacula/btraceback /etc/bacula/bacula-sd 19091
>>> execv: /etc/bacula/btraceback failed: ERR=
I've got a problem where the SD is crashing and I'm not sure why.
There are two SDs on the network, and it's only happening with one.
Here's the debug (level 300) from the SD in its dying moments.
spath-store: reserve.c:694 MediaType device=File request=File
spath-store: reserve.c:718 Try
Item 1: Allow per-client job priority ordering
Origin: Darien Hager <[EMAIL PROTECTED]>
Date: Date submitted (e.g. 28 October 2005)
Status: Initial Request
What:
Allow an optional per-client priority setting which is applied AFTER
the current priority scheduling alg
Item 1: Allow Jobdefs to inherit from other Jobdefs
Origin: Darien Hager <[EMAIL PROTECTED]>
Date: Date submitted (e.g. 28 October 2005)
Status: Initial Request
What: Allow JobDefs to inherit/modify settings from other JobDefs
Why: Makes setting up jobs much easi
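To illustrate the request, hypothetical syntax (this does NOT work in
current Bacula; it's only what the inheritance might look like):

JobDefs {
  Name = "common-defaults"
  Type = Backup
  Messages = Standard
}

JobDefs {
  Name = "nightly-defaults"
  JobDefs = "common-defaults"   # proposed: inherit, then override below
  Level = Incremental
}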
On Apr 5, 2007, at 12:00 PM, Joseph Wright wrote:
> It seems that for my configuration, which uses file based backups
> instead of tape, if I want to have many concurrent jobs running I
> have to create separate storage directives in the director config
> for each client, and for each of these I
I know you can set up jobs with a across-director priority number,
but I'm wondering if it is possible to prioritize jobs on a per-
client basis. Specifically, I have job(defs) for job A and job B, and
I need A jobs to always run before B jobs for each.
Currently I have them set to different
I was originally going to post to ask if it was working for anyone
else, but I've solved my problem.
However, in case someone on the project team can escalate this
concern to SourceForge, some of the links to the list archives are
broken.
First, here's the trail of links that DOES work:
> I right click on the folders being backup on the windows 2003
> server and
> select properties. That gives me a combine size of the folders and
> it comes
> out to around 3.5g. How it comes out to be 6.9g during backup I do
> not know.
> I'm hoping someone can shed some light on this issue
On Mar 29, 2007, at 11:28 AM, [EMAIL PROTECTED] wrote:
> How is the
> File = "|"
> mechanism implemented in the FileSet resource? Is there any way for
> the external
> program to determine which client backup is causing the call to the
> program? For
> example, if Bacula set and ex
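For reference, the shape of the directive being discussed (script path
illustrative; as far as I know the program runs on the Director's host
unless the bar is escaped as \|, in which case it runs on the Client):

FileSet {
  Name = "generated-list"
  Include {
    Options { signature = MD5 }
    File = "|/usr/local/sbin/list-backup-files"  # stdout becomes the file list
  }
}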
> My question: Is there a way to set up the bacula-storage to dump files
> to a disk into a file system structure that could be shared using SMB,
> NFS, whatever? How can I achieve this effect, or is it not currently
> supported / thought of? Any reading pointers on that?
I don't think it's possib
I think I found one way to attack the "multiple USB backup drives"
issue.
Mounting NTFS disks to a folder
http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/dm_modify_access_path.mspx
Documentation for diskpart command-line tool
http://technet2.microsoft.com/Window
On Mar 19, 2007, at 10:38 AM, Jorj Bauer wrote:
>>> # bacula-dir -c '|/usr/local/sbin/generate-dir-config'
>>>
>>> ... would cause it to run /usr/local/sbin/generate-dir-config to
>>> generate a new configuration file.
>>
>>
>> Hey, that sounds pretty useful. When you say "globally", I assume
On Mar 19, 2007, at 6:59 AM, Jorj Bauer wrote:
> What: The ability to read a configuration file as stdout from an
> executable
>
> Why: The configuration files (particularly for the Director) are very
> complex. In my case I find it easier to have a program generate them
> from meta-i
On Mar 16, 2007, at 12:30 PM, C M Reinehr wrote:
> Sometimes I just can't help myself. Should it be 'Baculite' or
> 'Baculan'.
> Maybe we should hold another poll. ;-)
Well, it comes by night and sucks the vital essence from our
computers... If you extend the metaphor, we're the kings of our
I wonder if a small improvement would be to basically send them an e-
mail as part of the signup process which has some helpful links,
small manual TOC, etc.
I'm not sure to what extent you can customize the "Welcome to the
list" mailman message on sourceforge...
--
--Darien A. Hager
[EMAIL PROTECTED]
> On Thursday 01 February 2007 18:02, Jeronimo Zucco wrote:
> Someone pointed out how to modify the code recently on this list
> (it involves
> modifying the SD).
>
> Another approach would be to run a "dummy" job that does the
> ClientRunBefore -- it could be a Verify with an empty FileSet. Th
> Am I missing something? (Or does the SD split the file into chunks
> only at the very end of the process?)
Followup: No, the final humungous file wasn't broken up at the end of
the run.
Either "Maximum File Size" is broken, I'm reading the documentation
wrong, or I've missed some necessary
e volume and subsequent data are written into the next file."
Am I missing something? (Or does the SD split the file into chunks
only at the very end of the process?)
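For reference, the directive under discussion sits in the SD's Device
resource (values illustrative; whether it behaves as quoted is exactly
what I'm unsure about):

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /backup
  Maximum File Size = 2G   # per the manual, an EOF mark should be written here
}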
--Darien Hager
[EMAIL PROTECTED]
On Mar 1, 2007, at 6:50 AM, Ryan Novosielski wrote:
>>
>> On Thursday 01 March 2007 10:32:49 Kern Sibbald wrote:
>>
>> this item does indeed work very stably. I've been using this in
>> production for
>> exporting large Oracle databases (100+ GBytes each) once per week
>> for about
>> two yea
On Feb 9, 2007, at 8:40 AM, Brian Debelius wrote:
> Windows version 2.0.2 using MySQL
>
> My console crashes anytime I try to do anything with pools. Where
> and
> how would I enable debugging to get some information on what is
> happening?
Is it just the console itself, or the director as
On Feb 8, 2007, at 9:02 AM, Brian Debelius wrote:
> I am just curious as to what others are doing with backing up multiple
> servers to disk. Do you use a pool for each server?; or do you use
> just
> one pool for all?
>
> Currently I am using multiple pools that create individual back up
> fi
On Feb 7, 2007, at 2:37 PM, Jason King wrote:
> I've got my director communicating with ALL of my remote servers (file
> daemons) correctly. Doing a *stat and checking on the clients gives me
> the client information. I set up the jobs in the director's config
> file and
> tried to run the backup.
On Feb 7, 2007, at 6:44 AM, Jason King wrote:
> I have bacula-dir, bacula-sd and bacula-fd running on the same server
> for testing purposes. For some reason, when I do a status all, the
> director can NEVER connect to the file-daemon. From my reading it
> appears that I have my file-daemon confi
On Feb 4, 2007, at 4:12 AM, Pierre Bernhardt wrote:
> I have different pools created:
>
> Daily   for incremental backups
> Weekly  for differential backups
> Monthly for full backups
>
> If a Daily job executes and a full backup must be saved, the backup
> should go
On Feb 6, 2007, at 9:18 AM, Zeratul wrote:
> I'm wondering if there is any possibility to group more jobs under a
> generic
> name or to create any kind of hierarchy. I have a total (until now)
> of 30
> clients with 2 types of backup jobs, with 2 types of storage (disk
> and tape)
> and wit
On Feb 5, 2007, at 11:46 AM, Arno Lehmann wrote:
> Hi,
>
> On 2/5/2007 6:35 PM, Robert Nelson wrote:
>> Incremental backups are based solely on the modification date of
>> the file.
>> If the file modification date is later than the last full or
>> differential
>> backup then the file will be
I'm wondering if anyone has advice for backing up databases. I have
some python scripts working to do per-database backup/restore over
FIFOs (pg_dump, pg_restore, nonblocking fifo polling), but the nature
of the method means that there is almost no such thing as a
differential or incrementa
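For the curious, the FIFO half of that setup looks roughly like this
(path illustrative; readfifo=yes is what makes the FD read the pipe's
contents rather than saving the pipe itself):

FileSet {
  Name = "pg-dump-fifo"
  Include {
    Options {
      signature = MD5
      readfifo = yes
    }
    File = /var/lib/bacula/pgdump.fifo
  }
}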
On Feb 1, 2007, at 8:57 AM, Richard White wrote:
> Can anyone tell me where to look for the cause of this error?
>
> 29-Jan 16:07 lbackup-dir: Start Backup JobId 1462,
> Job=ArcIMS.2007-01-29_16.07.46
> 29-Jan 18:07 lbackup-dir: ArcIMS.2007-01-29_16.07.46 Fatal error:
> Network error with FD duri
> Kern, if you are reading this, what are the chances that a
> heartbeat could be implemented between the director and the storage
> daemon?
Would there be any significant downsides to a global heartbeat
directive in the director? When the director initially connects to an
FD/SD it could m
On Jan 31, 2007, at 11:07 AM, Brian Debelius wrote:
> Config error: Cannot open included config file : No such file or
> directory
After "config file" it should list the path it attempted to open.
From what you've written it appears blank, suggesting some sort of
syntax or input issue rathe
On Jan 31, 2007, at 10:46 AM, Brian Debelius wrote:
> I am trying to split up a config file on Windows, using the @
> directive
> as shown in http://www.bacula.org/dev-manual/Customizin_Configurat_Files.html but it
> returns an error that it cannot find the file. Has anyone done
> this in
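In case it helps: a guess at a safer form of the include on Windows
(path illustrative; forward slashes and a space-free path sidestep the
usual backslash-escaping trouble):

# at the top of bacula-dir.conf
@C:/bacula/clients.conf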
(Looks like my last message was misaddressed. Resending.)
On Jan 30, 2007, at 1:09 PM, Kern Sibbald wrote:
> Another thing to check for is HP printers on the network [...]
I don't think that's the problem in my case. While we have an HP
server on our internal office network, the two servers I w
On Jan 30, 2007, at 1:02 AM, Arno Lehmann wrote:
> As far as I know, the heartbeat is not sent during despooling and / or
> attribute despooling. So, if you have that enabled, try turning it off
> to see what happens.
I haven't enabled spooling--I'm backing up to a plain old file on the
server,
Maybe I should be actually posting this to the dev list on account of
the version being fresh out of the oven...
Anyway, I have a weird problem going on. Both the storage daemon and
the sole client are set up with a heartbeat interval (30 seconds),
but the backup always dies five minutes
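For anyone comparing notes, these are the directives involved as I
understand them (names and values illustrative; each daemon sets its
own):

# bacula-fd.conf
FileDaemon {
  Name = example-fd
  Heartbeat Interval = 30
}

# bacula-sd.conf
Storage {
  Name = example-sd
  Heartbeat Interval = 30
}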