Dear Bacula experts,
I am using the latest Bacula open-source version:
backup2-dir Version: 9.4.1 (20 December 2018) x86_64-pc-linux-gnu ubuntu 18.04
My experience is that when I run a copy job (copying volumes from file storage to
tape storage) and the tape is not mounted and labelled and bacula is
Apparently so, since the job(s) completed normally last night. I do
appreciate the feedback, though.
In trying to get all the parameters to match between both systems, I had
set the device name in the bacula-sd.conf file incorrectly.
On the primary, this is from bacula-dir.conf:
Storage {
Na
On 9/6/2018 1:17 PM, Brendan Martin wrote:
I have configured a copy job on my primary backup server. The copy
volumes will be on a secondary system running another storage daemon.
Both systems are running Bacula 7.4.4 on Debian 9.
From the primary console, this command:
status storage=Remote-File
successfully returns version, daemo
Hello,
I'd like to implement Copy Jobs for offsite backups and would like to know
whether there is any particular wisdom about doing so, also in the context
of Volume sizes. I'm also interested in bandwidth limiting for said
Copy Jobs, which I see Bacula can do. Has that worked well in practi
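On the bandwidth question: Bacula has a per-job throttle directive. A minimal sketch, assuming a recent Bacula; the job name and the rate here are invented for illustration:

```conf
# Hypothetical Director Job resource throttling an offsite copy.
# "Maximum Bandwidth Per Job" accepts rates such as 512kb/s or 5mb/s.
Job {
  Name = "OffsiteCopy"                 # hypothetical name
  Type = Copy
  Selection Type = PoolUncopiedJobs
  Maximum Bandwidth Per Job = 5mb/s
  # ... Pool, Storage, Messages, etc. as usual
}
```

The same directive can also be set in the Client resource, and the limit can be changed at run time with the bconsole setbandwidth command.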
Thanks; I suspected that would eventually be the case.
On 10/22/2015 8:24 PM, Heitor Faria wrote:
[snip]
I am running Bacula 5.2.6 on Debian 8.1. My backups are disk-based.
I have a local primary backup server and a remote secondary backup
server. The secondary is intended to contain copies of the backups on
the primary. Each is running its own storage daemon, which, according
to the documentat
Hello Lukas,
I forgot to mention that you can have two copy jobs running, each sending to
a different pool, by setting the next pool in the schedule resource. In this
case you would need more than one pool per client, but this way you will have
your backups replicated to two different groups of volumes.
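A sketch of what Ana describes, assuming a Bacula recent enough to support the NextPool override in a Schedule Run line (it appeared around 7.0); the schedule and pool names are invented:

```conf
# One copy job, two schedule runs, each overriding the Next Pool
Schedule {
  Name = "CopyCycle"
  Run = Level=Full NextPool=OffsitePoolA 1st sun at 04:00
  Run = Level=Full NextPool=OffsitePoolB 3rd sun at 04:00
}
```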
Hello Lukas,
Regarding your first post, could you give more details about "is it
possible to setup copy job for a single client so that all the jobs data
is stored on two different volumes
"? Do you mean the same data being replicated into two different volumes
in the same pool? Sorry, I cannot s
On Tue, Aug 11, 2015 at 01:01:35AM -0300, Ana Emília M. Arruda wrote:
> Hello Lukas,
>
> I was wondering if this could be solved using cloned copy jobs :)
Can you give me some pointers on what cloned copy jobs are and how to set them up?
Otherwise, the correct solution is to have separate pools and cop
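For reference, a "cloned" job is started with a Run directive inside a Job resource: when the job starts, it launches a second job reading the same client data, optionally to a different pool or storage. A rough sketch; the pool names and the cloned job's overrides are hypothetical:

```conf
# When NightlySave starts, it clones a second run of itself writing to
# another pool. %l and %s pass on the level and the since time.
Job {
  Name = "NightlySave"
  Type = Backup
  Pool = PrimaryPool               # hypothetical
  Run = "NightlySave level=%l since=\"%s\" pool=SecondPool"
  # ... Client, FileSet, Storage, Messages as usual
}
```

Note that a clone backs the client up a second time, unlike a Copy job, which re-reads already-written volumes.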
Hello Lukas,
I was wondering if this could be solved using cloned copy jobs :)
Best regards,
Ana
On Mon, Aug 10, 2015 at 11:36 AM, Lukas Hejtmanek
wrote:
Hello,
is it possible to set up a copy job for a single client so that all the job's
data is stored on two different volumes? But without specifying an extra pool
for such a client. If I specify an extra pool, I can use PoolUncopiedJobs.
That works, but not for a single client only.
If I specify e
Hello Brendan,
In each pool you are going to copy jobs from, you need a "Next Pool"
directive pointing to the pool you are going to copy jobs to. I do
not see this configuration in your CSG Daily Copy, for example. Also,
regarding the "No Next Pool for pool File" message, you need to
conf
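A sketch of the pair of Pool resources Ana describes; the names and the Storage resources are placeholders:

```conf
Pool {
  Name = File                  # pool the original backups go to
  Pool Type = Backup
  Storage = FileStorage        # hypothetical Storage resource
  Next Pool = TapeCopy         # where Copy jobs from this pool write
}

Pool {
  Name = TapeCopy
  Pool Type = Backup
  Storage = TapeStorage        # hypothetical Storage resource
}
```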
I'm trying to get some copy jobs set up but so far all test runs have
failed. I'm not sure how to handle my pool setup relative to the copy
jobs. I may be trying to make it too complicated.
I have separate pools defined for Full, Differential and Incremental
backups. I have defined Copy pool
On 2014-07-03 04:03 PM, Luis Aparicio wrote:
> 2014-07-03 15:51 GMT-03:00 Dan Langille :
>
>> On 2014-07-02 07:35 PM, Brady, Mike wrote:
>>
>>> I can't view the Issue. Can you change the permissions on it
>>> please.
>>
>> Luis:
>>
>> Before you unlock that ticket: be aware that you have posted
Ok.
That's because I posted it as private.
Don't worry, I'll change the passwords.
Can you unlock the ticket, please?
Thanks
Luis
2014-07-03 15:51 GMT-03:00 Dan Langille :
On 2014-07-02 07:35 PM, Brady, Mike wrote:
> I can't view the Issue. Can you change the permissions on it please.
Luis:
Before you unlock that ticket: be aware that you have posted your
configuration files in that ticket. It appears you have included your
passwords in the uploaded files. If y
I can't view the Issue. Can you change the permissions on it please.
On 2014-07-02 09:36, Luis Aparicio wrote:
Ready!
Issue ID 0002072
Thanks
Luis
2014-06-14 7:10 GMT-03:00 Kern Sibbald :
Hello,
Please submit a bug report on this, but be sure to include your
bacula-dir.conf file, and the two emails that were sent, and
finally the complete job report output if you can.
Best regards,
Kern
On 06/11/2014 06:22 PM, Luis
On 2014-06-12 04:22, Luis Aparicio wrote:
I run around 150 backup jobs at night, and in the daytime I copy those jobs to
another media pool.
Since upgrading to 7.x, every time a Copy job finishes it sends 2 (two) e-mails
instead of 1 (one), as in 5.2.13.
In 5.2.13 only one email arrived, saying "Copy Job: OK Job_name", but now in 7.x
another e-mail arrived per
On Sun, Jun 16, 2013 at 08:23:11PM +0100, Gary Cowell wrote:
I think th
I'm using a DTDTT scenario, with the copy jobs being sent to USB disk via
vchanger.
This works fine, but I have a question.
When my COPY job runs in the schedule, I get a COPY job, and also an
Incremental job.
I only have my FULL jobs from my FULL pool set to copy, so I get one copy
job per FULL j
Hello everyone.
I know that Copy (and Migrate) jobs must occur in the same SD, but if that
SD has a single tape library with 2 (or more) drives, can I run a Copy Job
within the same tape library (and the same SD)?
Today, I have two different tape libraries with 1 drive in each, both
controlled by th
Hi folks,
I've been running copy jobs for a couple of months now and I'm
wondering why they vary so much in speed. Sometimes they zoom along
just fine around 60MB / sec, at other times transfer rates never go
beyond 5MB / sec. Only one copy job is running at a time.
We're copying from a disk bas
On 30 Jan 2012, at 16:05, Adrian Reyer wrote:
On Mon, Jan 30, 2012 at 12:18:30PM +, Joe Nyland wrote:
> When run manually this morning, I was given a little more information:
> run job=FileServer1_Copy pool=FileServer1_Full storage=FileServer1_Full
> 30-Jan 06:55 FileServer1-dir JobId 0: Fatal error: No Next Pool specification
> found in
Hello everyone,
I emailed the list previously about some issues I was having with copy jobs in my Bacula setup. I appear to have resolved my previous problems; however, I'm now faced with the following issue: this morning, I noticed that in my backup reports from Bacula, I had the following logg
I just started working with Copy jobs, and while they are working (mostly)
exactly as expected I am seeing the following strangeness...
When I run my defined Copy job which is set with:
--
  Type = Copy
  Selection Type = Job
  Selection Pattern = "Zimbra"
--
It spawns the "CopyMustHaves" job and the
On Aug 15, 2011, at 12:16 PM, Jérôme Blion wrote:
> Hello,
>
> I'm trying to copy jobs from one server to another.
> I defined 2 storage daemons:
>
>
[snip]
>
> In this link:
> http://www.bacula.org/manuals/en/concepts/concepts/Migration_Copy.html
> It's written that "Migration is only im
I would also like to know the answer to Jerome's initial question of whether
the limitation still exists, which stops us from being able to copy between
storage daemons.
Copying data between SD's would be the 'icing on the cake' to my new Bacula
system!
Kind regards,
Joe
On 15 Aug 2011, at
Hello,
I'm trying to copy jobs from one server to another.
I defined 2 storage daemons:
==
# Definition of file storage device
Storage {
  Name = "File"
  Address = tucana.domain    # N.B. Use a fully qualified name
I looked into the duplicate jobs options and according to the
documentation the default is to not allow duplicate jobs. So why am I
seeing duplicate jobs queued? In this case I have a copy to tape job:
Job {
  Name = "CopyToTape"
  Type = Copy
  #Schedule = "WeeklyCycleAfterBackup"
  Priori
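One possible explanation, hedged: a Copy job with a Selection Type spawns one new job per selected JobId, and those spawned jobs may not count as duplicates of the control job. If the goal is simply to stop a second CopyToTape control job from queueing, a sketch using the duplicate-job directives (values illustrative) would be:

```conf
Job {
  Name = "CopyToTape"
  Type = Copy
  Allow Duplicate Jobs = no        # refuse a second CopyToTape
  Cancel Queued Duplicates = yes   # drop the queued one, keep the running one
  # ... Selection Type, Pool, Storage as before
}
```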
Hi,
On Sunday we run an end-of-week script which sets all tapes in the loader
to status Used and runs release on both drives:
release storage=Tape drive=0
release storage=Tape drive=1
On Monday morning, copy jobs are scheduled:
CopyDiskToTape into pool DiskCopy
CopyDiskToExtClone into pool E
Hello,
We are using a 2-drive autochanger.
Both drives are configured almost identically
(except Name and Device); config see below.
We are running daily copy disk-to-tape jobs into
different copy pools.
Problem:
Bacula automatically chooses only Drive-1
and loads/unloads tapes every day.
Only if I mo
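Two directives that influence drive choice, sketched from the standard documentation; whether they solve this exact case is a guess:

```conf
# bacula-sd.conf: a Device with Autoselect = no is never picked
# automatically, so one drive can be reserved for manual/specific use.
Device {
  Name = Drive-2
  # ... Archive Device, Changer Device etc. as in your config
  Autoselect = yes      # must be yes for the changer to ever pick it
}
```

On the Director side, `Prefer Mounted Volumes = no` in the Job resource makes a job more willing to use an idle second drive instead of waiting for the drive that already has a volume mounted.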
Hello, I am able to perform a copy job to tape, but I am unable to find any
information on how to restore this data. Bacula's documentation talks more
about migration than copy. I didn't want to migrate the data, I wanted to
copy so I have it. Is there any detailed information on restoring copy
2010/5/6
Hi,
I'm having a problem copying jobs. I labeled a new volume in the pool
"DailyScratch" with the label command. Then I ran a job using the pool
"ThursdayOD". As expected, bacula took the volume from the Scratch Pool
and moved it to the other pool. Backup ran without errors, "list volumes"
show
Hi folks,
I have two pools, namely "LocalDisk" and "OffSite". I run daily backups
on my LocalDisk pool, and once weekly I copy the last full backup to the
OffSite pool, for offsite storage. All works well there, and I'm using
Bacula 5.0.1.
To keep human interaction to a minimum, I wrote a scri
On Mon, 29 Mar 2010 12:18:18 +0200, My Name wrote:
On 03/29/10 06:18, My Name wrote:
> At
> http://bacula.git.sourceforge.net/git/gitweb.cgi?p=bacula/bacula;a=blob;f=bacula/projects;hb=HEAD
> I have read about "Item 36: Job migration between different SDs". Will
> this also be possible with "Copy Jobs"? Would this be the proper
> solution? It woul
Hi,
What: What I want to do is to replicate specific backup jobs to
another bacula system (other Director & Storage). Is there a good
solution to do this?
Why: Very important backups should be replicated to another backup
system so that one system (director & storage) may crash (->
redundancy for
Hello,
we are using bacula configured for a disk-to-disk-to-tape backup as
described in the v3.0-documentation
(http://bacula.org/3.0.x-manuals/en/concepts/concepts/New_Features_in_3_0_0.html#SECTION0052).
That means we have the newest backup on a "disk-pool" and older versions
I was wondering...
Could I achieve what I want if I were to change the pools to refer to the
individual drives in the auto-changer tape library instead?
eg.
# Default pool definition used by incremental backups.
# We wish to be able to restore files for any day for at least 2 weeks, so
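If the idea is to pin pools to individual drives, Pool resources do accept a Storage directive, so a sketch could look like this; the Storage resource names are invented, and each would be a Director Storage resource pointing at one drive of the autochanger:

```conf
# bacula-dir.conf: point each pool at a specific drive.
Pool {
  Name = Inc-Pool
  Pool Type = Backup
  Storage = Library-Drive-0    # hypothetical Storage resource
}
Pool {
  Name = VFull-Pool
  Pool Type = Backup
  Storage = Library-Drive-1    # hypothetical Storage resource
}
```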
Hi.
I now have weekday incremental backups to a tape library working well.
Then at the end of the week the incremental backups are consolidated into full
backups via the VirtualFull feature.
All of this is happening using the one auto-changer tape library with two
drives in it.
Now I am trying
I updated it like you said...
Take a look:
Director's conf - http://pastebin.ca/1601756
Thank you
2009/10/7 Nicolae Mihalache
Why does it say "Read Pool: "FullBackupsVirtualPool" (From
Job resource)"?
Are you sure you updated the job to use the Default pool? Can you please
send your config again?
nicolae
Pedro Bordin Hoffmann wrote:
Hello again!
I'm getting the same error as before the change:
You have messages.
*mes
07-Out 11:44 bacula.belgamatrizes.com.br-d JobId 484: No JobIds found to
copy.
*mes
You have no messages.
*mes
07-Out 11:44 bacula.belgamatrizes.com.br-d JobId 484: Bacula
bacula.belgamatrizes.com.br-d 3.0.2 (1
Wow!! I'll give that a try!! Thank you so much!!!
Can you explain why I have to use this on my default pool (the one that I
use for file backup): NextPool = FullBackupsTapePool
Thanks!!
[]s
2009/10/6 Nicolae Mihalache
Maybe something like this.
JobDefs {
  Name = CopyDiskToTape
  Type = Copy
  Messages = Standard
  Client = None
  FileSet = None
  Selection Type = PoolUncopiedJobs
  Maximum Concurrent Jobs = 10
  SpoolData = No
  Allow Duplicate Jobs = Yes
  Allow Higher Duplicates = No
  Cancel Queued Duplica
What I'd like to do is this.
I have a file backup using the Default pool.
Then I need to put this backup (from Default) on tape, using the copy
job function.
What should the Copy Job look like?
Do I have to make changes to my Default pool? To my File Backup?
Thanks so much for your h
Pedro Bordin Hoffmann wrote:
> There is no Volumes in the Pool: FullBackupsVirtualPool
> *list media Pool=FullBackupsVirtualPool
> No results to list.
>
> Should I try to list the Default pool ?
Your Copy Job is copying files from this Pool, that's why it doesn't
copy anything.
If you want it in
There are no volumes in the pool FullBackupsVirtualPool:
*list media Pool=FullBackupsVirtualPool
No results to list.
Should I try to list the Default pool?
Thanks!
2009/10/6 Nicolae Mihalache
Sorry, I made a mistake, please first list the Volumes in the Pool using
list media Pool=FullBackupsVirtualPool
and then use the query to see the jobs written on those Volumes.
nicolae
Pedro Bordin Hoffmann wrote:
I got this:
Enter a period to cancel a command.
*query
The defined Catalog resources are:
1: MyCatalog
2: BackupDB
Select Catalog resource (1-2): 1
Using Catalog "MyCatalog"
Available queries:
1: List up to 20 places where a File is saved regardless of the directory
2: List whe
If you run query in bconsole, then select option "14: List Jobs stored
for a given Volume name" and then enter FullBackupsVirtualPool, what do
you get?
Pedro Bordin Hoffmann wrote:
Follows my director's conf:
http://pastebin.ca/1597631
I don't need to specify a job to copy, right? Can it copy all of one pool?
Thanks for the help!
Regards
Pedro
2009/10/6 Nicolae Mihalache
Show us your job definition to see why bacula hasn't selected any job to
copy.
Pedro Bordin Hoffmann wrote:
I'm using the new Bacula Copy jobs function... but I'm getting this
when I run the job. What can it be? How do I fix it?
Thanks
05-Out 11:34 bacula.belgamatrizes.com.br-d JobId 429: No JobIds found to
copy.
05-Out 11:34 bacula.belgamatrizes.com.br-d JobId 429: Bacula
bacula.belgamatrizes.com.br
I wouldn't complain about the documentation.
"Migration is only implemented for a single Storage daemon. You cannot
read on one Storage daemon and write on another."
Is a direct quote from the docs. See link below.
On Tue, 2009-06-16 at 22:54 -0400, Phil Stracchino wrote:
> Dirk Bartley wrot
On Tue, 2009-06-16 at 17:37 -0400, Phil Stracchino wrote:
> I'm trying to set up my first Copy job, and running into a problem. I
> don't know whether it's a configuration issue, a documentation
> shortfall, a Bacula limitation, or a combination of the three.
>
>
> I have two SDs on two differen
I'm trying to set up my first Copy job, and running into a problem. I
don't know whether it's a configuration issue, a documentation
shortfall, a Bacula limitation, or a combination of the three.
I have two SDs on two different machines. Altogether, four pools exist,
three disk pools on one mac
Hi,
27.04.2009 19:39, Michael Lang wrote:
Hi,
I didn't find any restriction on copying jobs (Type "C") between two
storage resources located on different SDs.
I tried, but the job always gets canceled, the SD saying it is not aware
of the resource (original pool storage).
I already found that the copy jobs can't be on the same SD with
Hello,
I've been discussing with Eric how we might handle Copy jobs in our
development version. Currently, Copy jobs are implemented, and they work
much like Migration jobs (share 99% of the code). The difference is that
Migration jobs purge the original backup job and keep only the Migrated