On 5/20/25 20:59, Eric Bollengier wrote:
Hello Andrea,
Hello.
Maybe you can share the job output where the errors and warnings are
printed; it will help us understand the situation.
Just today I got the log I'm pasting at the end of the message (with
sensitive data redacted).
The original
Hello Andrea,
Maybe you can share the job output where the errors and warnings are printed; it
will help us understand the situation.
Thanks,
Best Regards,
Eric
On 4/9/25 12:38, Andrea Venturoli wrote:
Hello.
I'm seeing a behaviour I'm not sure is correct; at least I cannot find any
motiv
Apparently so, since the job(s) completed normally last night. I do
appreciate the feedback, though.
In trying to get all the parameters to match between both systems, I had
set the device name in the bacula-sd.conf file incorrectly.
On the primary, this is from bacula-dir.conf:
Storage {
Na
On 9/6/2018 1:17 PM, Brendan Martin wrote:
I have configured a copy job on my primary backup server. The copy
volumes will be on a secondary system running another storage daemon.
Both systems are running Bacula 7.4.4 on Debian 9.
From the primary console, this command:
status storage=
Thanks; I suspected that would eventually be the case.
On 10/22/2015 8:24 PM, Heitor Faria wrote:
>> I am running Bacula 5.2.6 on Debian 8.1. My backups are disk-based.
>>
>> I have a local primary backup server and a remote secondary backup
>> server. The secondary is intended to contain copies
> I am running Bacula 5.2.6 on Debian 8.1. My backups are disk-based.
>
> I have a local primary backup server and a remote secondary backup
> server. The secondary is intended to contain copies of the backups on
> the primary. Each is running its own storage daemon, which, according
> to the
Hello Lukas,
I forgot to mention that you can have two copy jobs running, each sending to a
different pool, by setting the next pool in the schedule resource (in this
case, you would need more than one pool per client, but this way your backups
are replicated to two different groups of volumes).
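The Schedule-level override Ana describes might look roughly like the sketch below. This is only an illustration: the schedule and pool names are hypothetical, and the Next Pool override in a Run directive is only available in newer Bacula releases.

```conf
# Hypothetical sketch: one copy job run twice a day, each run
# overriding Next Pool so the copies land in a different group of volumes.
Schedule {
  Name = "TwoTargetCopies"
  Run = Level=Full NextPool=CopyPoolA daily at 06:00
  Run = Level=Full NextPool=CopyPoolB daily at 18:00
}
```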
Hello Lukas,
Regarding your first post, could you give more details about "is it
possible to setup copy job for a single client so that all the jobs data
is stored on two different volumes
"? Do you mean the same data being replicated into two different volumes
in the same pool? Sorry, I cannot s
On Tue, Aug 11, 2015 at 01:01:35AM -0300, Ana Emília M. Arruda wrote:
> Hello Lukas,
>
> I was wondering if this could be solved using cloned copy jobs :)
Can you give me some pointers on what cloned copy jobs are and how to set them up?
Otherwise, the correct solution is to have separate pools and cop
Hello Lukas,
I was wondering if this could be solved using cloned copy jobs :)
Best regards,
Ana
On Mon, Aug 10, 2015 at 11:36 AM, Lukas Hejtmanek
wrote:
> Hello,
>
> is it possible to setup copy job for a single client so that all the jobs
> data
> is stored on two different volumes? But with
Hello Brendan,
In each pool you are going to copy jobs from, you need a "Next Pool"
directive pointing to the pool to which you are going to copy jobs. I do
not see this configuration in your CSG Daily Copy, for example. Also,
regarding the "No Next Pool for pool File" message, you need to
conf
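A minimal sketch of the pairing Ana describes (the pool and storage names here are hypothetical placeholders, not taken from Brendan's configuration):

```conf
# Source pool: original backups are written here; Next Pool tells
# copy jobs where to send the copies.
Pool {
  Name = File
  Pool Type = Backup
  Next Pool = OffsiteCopy
}

# Destination pool, served by the second storage daemon.
Pool {
  Name = OffsiteCopy
  Pool Type = Backup
  Storage = RemoteSD   # hypothetical Storage resource for the copy SD
}
```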
On 2014-07-03 04:03 PM, Luis Aparicio wrote:
> 2014-07-03 15:51 GMT-03:00 Dan Langille :
>
>> On 2014-07-02 07:35 PM, Brady, Mike wrote:
>>
>>> I can't view the Issue. Can you change the permissions on it
>>> please.
>>
>> Luis:
>>
>> Before you unlock that ticket: be aware that you have posted
Ok.
That's because I posted it as private.
Don't worry, I'll change the passwords.
Can you unlock the ticket, please?
Thanks
Luis
2014-07-03 15:51 GMT-03:00 Dan Langille :
> On 2014-07-02 07:35 PM, Brady, Mike wrote:
>
>> I can't view the Issue. Can you change the permissions on it please.
>>
>
>
On 2014-07-02 07:35 PM, Brady, Mike wrote:
> I can't view the Issue. Can you change the permissions on it please.
Luis:
Before you unlock that ticket: be aware that you have posted your
configuration files in that ticket. It appears you have included your
passwords in the uploaded files. If y
I can't view the Issue. Can you change the permissions on it please.
On 2014-07-02 09:36, Luis Aparicio wrote:
> Ready! Issue ID 0002072
>
> Thanks Luis
>
> 2014-06-14 7:10 GMT-03:00 Kern Sibbald :
>
> Hello,
>
> Please submit a bug report on this, but be sure to include your
> bacula-
Ready!
Issue ID 0002072
Thanks
Luis
2014-06-14 7:10 GMT-03:00 Kern Sibbald :
> Hello,
>
> Please submit a bug report on this, but be sure to include your
> bacula-dir.conf file, and the two emails that were sent, and finally the
> complete job report output if you can.
>
> Best regards,
> Kern
Hello,
Please submit a bug report on this, but be sure to include your
bacula-dir.conf file, and the two emails that were sent, and
finally the complete job report output if you can.
Best regards,
Kern
On 06/11/2014 06:22 PM, Luis
On 2014-06-12 04:22, Luis Aparicio wrote:
> I run around 150 bkp jobs at night and at daytime I copy those jobs to
> another media pool.
>
> Since upgrading to 7.x, every time a Copy job finishes it sends 2 (two) e-mails
> instead of 1 (one), unlike in 5.2.13.
>
> In 5.2.13 only one email saying "C
On Sun, Jun 16, 2013 at 08:23:11PM +0100, Gary Cowell wrote:
> I'm using a DTDTT scenario, with the copy jobs being sent to USB disk via
> vchanger.
>
> This works fine, but I have a questio
>
> When my COPY job runs in the schedule, I get a COPY job, and also an
> Incremental job.
>
I think th
On 30 Jan 2012, at 16:05, Adrian Reyer wrote:
> On Mon, Jan 30, 2012 at 12:18:30PM +, Joe Nyland wrote:
>> When run manually this morning, I was given a little more information:
>> run job=FileServer1_Copy pool=FileServer1_Full storage=FileServer1_Full
>> 30-Jan 06:55 FileServer1-dir JobId 0:
On Mon, Jan 30, 2012 at 12:18:30PM +, Joe Nyland wrote:
> When run manually this morning, I was given a little more information:
> run job=FileServer1_Copy pool=FileServer1_Full storage=FileServer1_Full
> 30-Jan 06:55 FileServer1-dir JobId 0: Fatal error: No Next Pool specification
> found in
On Aug 15, 2011, at 12:16 PM, Jérôme Blion wrote:
> Hello,
>
> I'm trying to copy jobs from one server to another.
> I defined 2 storage daemons:
>
>
[snip]
>
> In this link:
> http://www.bacula.org/manuals/en/concepts/concepts/Migration_Copy.html
> It's written that "Migration is only im
I would also like to know the answer to Jerome's initial question of whether
the limitation still exists, which stops us from being able to copy between
storage daemons.
Copying data between SD's would be the 'icing on the cake' to my new Bacula
system!
Kind regards,
Joe
On 15 Aug 2011, at
Hi,
on Sunday we run an end-of-week script
which sets all tapes in the loader to status Used
and runs release on both drives.
release storage=Tape drive=0
release storage=Tape drive=1
on monday morning copy jobs are scheduled:
CopyDiskToTape into pool DiskCopy
CopyDiskToExtClone into pool E
>Hello, I am able to perform a copy job to tape, but I am unable to find any
>information on how to restore this data. Bacula's documentation talks more
>about migration than copy. I didn't want to migrate the data, I wanted to copy
>so I have it. Is there any detailed information on restoring
2010/5/6
> Hi,
> I'm having a problem copying jobs. I labeled a new volume in the pool
> "DailyScratch" with the label command. Then I ran a job using the pool
> "ThursdayOD". As expected, bacula took the volume from the Scratch Pool and
> moved it to the other pool. Backup ran without errors, "l
On Mon, 29 Mar 2010 12:18:18 +0200, My Name wrote:
> Hi,
>
> What: What I want to do is to replicate specific backup jobs to another
> bacula system (other Director & Storage). Is there a good solution to do
> this?
>
> Why: Very important backups should be replicated to another backup
> system
On 03/29/10 06:18, My Name wrote:
> At
> http://bacula.git.sourceforge.net/git/gitweb.cgi?p=bacula/bacula;a=blob;f=bacula/projects;hb=HEAD
> I have read about "Item 36: Job migration between different SDs". Will
> this also be possible with "Copy Jobs"? Would this be the proper
> solution? It woul
I was wondering...
Could I achieve what I want if I were to change the pools to refer to the
individual drives in the auto-changer tape library instead?
eg.
# Default pool definition used by incremental backups.
# We wish to be able to restore files for any day for at least 2 weeks, so
I updated it like you said...
Take a look.
Director's conf - http://pastebin.ca/1601756
Thank you
2009/10/7 Nicolae Mihalache
> Why does it say "Read Pool: "FullBackupsVirtualPool" (From
> Job resource)"?
> Are you sure you updated the job to use the Default pool? Can you please
> send
Why does it say "Read Pool: "FullBackupsVirtualPool" (From
Job resource)"?
Are you sure you updated the job to use the Default pool? Can you please
send your config again.
nicolae
Pedro Bordin Hoffmann wrote:
> Hello again!
>
> I'm getting the same error as before changing:
>
> Y
Hello again!
I'm getting the same error as before changing:
You have messages.
*mes
07-Out 11:44 bacula.belgamatrizes.com.br-d JobId 484: No JobIds found to
copy.
*mes
You have no messages.
*mes
07-Out 11:44 bacula.belgamatrizes.com.br-d JobId 484: Bacula
bacula.belgamatrizes.com.br-d 3.0.2 (1
Wow!! I'll give that a try!! Thank you so much!!!
Can you explain why I have to use this on my default pool (the one that I use
for file backups)? NextPool = FullBackupsTapePool*
Thanks!!
[]s
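The directive Nicolae suggested goes on the pool the file backups are written to, since a copy job follows the source pool's Next Pool to find its destination. A minimal sketch (only the NextPool line is the suggested change; the rest is placeholder context):

```conf
Pool {
  Name = Default
  Pool Type = Backup
  # Copies of jobs written to Default are sent to this pool:
  NextPool = FullBackupsTapePool
}
```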
2009/10/6 Nicolae Mihalache
> Maybe something like this.
>
>
> JobDefs {
> Name = CopyDiskToTape
> Type
Maybe something like this.
JobDefs {
  Name = CopyDiskToTape
  Type = Copy
  Messages = Standard
  Client = None
  FileSet = None
  Selection Type = PoolUncopiedJobs
  Maximum Concurrent Jobs = 10
  SpoolData = No
  Allow Duplicate Jobs = Yes
  Allow Higher Duplicates = No
  Cancel Queued Duplica
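A Job resource using that JobDefs might look like the following sketch. The job and schedule names are hypothetical; with Selection Type = PoolUncopiedJobs, the Pool directive names the source pool whose not-yet-copied jobs get picked up.

```conf
Job {
  Name = "CopyDiskToTape-Default"
  JobDefs = "CopyDiskToTape"
  Pool = Default           # source pool; its Next Pool must point at the tape pool
  Schedule = "DailyCopy"   # hypothetical schedule
}
```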
What I'd like to do is this.
I have a file backup... using the Default pool.
Then I need to put this backup (from Default) on the tape, using the copy
job function.
What should the Copy Job look like?
Do I have to make changes to my Default pool? To my File Backup?
Thanks so much for your h
Pedro Bordin Hoffmann wrote:
> There is no Volumes in the Pool: FullBackupsVirtualPool
> *list media Pool=FullBackupsVirtualPool
> No results to list.
>
> Should I try to list the Default pool ?
Your Copy Job is copying files from this Pool; that's why it doesn't
copy anything.
If you want it in
There are no Volumes in the Pool: FullBackupsVirtualPool
*list media Pool=FullBackupsVirtualPool
No results to list.
Should I try to list the Default pool ?
Thanks!
2009/10/6 Nicolae Mihalache
> Sorry, I made a mistake, please first list the Volumes in the Pool using
> list media Pool=FullBacku
Sorry, I made a mistake, please first list the Volumes in the Pool using
list media Pool=FullBackupsVirtualPool
and then use the query to see the jobs written on those Volumes.
nicolae
Pedro Bordin Hoffmann wrote:
> I got this:
>
>
> Choose a query (1-16): 14
> Enter Volume name: FullBackupsVi
I got this:
Enter a period to cancel a command.
*query
The defined Catalog resources are:
1: MyCatalog
2: BackupDB
Select Catalog resource (1-2): 1
Using Catalog "MyCatalog"
Available queries:
1: List up to 20 places where a File is saved regardless of the
directory
2: List whe
If you run query in bconsole, then select option "14: List Jobs stored
for a given Volume name" and then enter FullBackupsVirtualPool, what do
you get?
Pedro Bordin Hoffmann wrote:
> Follows my director's conf:
> http://pastebin.ca/1597631
>
> I don't need to specify a job to copy, right? It can
Follows my director's conf:
http://pastebin.ca/1597631
I don't need to specify a job to copy, right? It can copy all of one pool?
Thanks for the help!
Regards
Pedro
2009/10/6 Nicolae Mihalache
> Show us your job definition to see why bacula hasn't selected any job to
> copy.
>
>
> Pedro Bordin
Show us your job definition to see why bacula hasn't selected any job to
copy.
Pedro Bordin Hoffmann wrote:
> I'm using the new Bacula Copy jobs function... but I'm getting this
> when I run the job:
> What can that be? How do I fix it?
> Thanks
>
>
> 05-Out 11:34 bacula.belgamatrizes.com.br-d JobI
I wouldn't complain about the documentation.
"Migration is only implemented for a single Storage daemon. You cannot
read on one Storage daemon and write on another."
is a direct quote from the docs. See the link below.
On Tue, 2009-06-16 at 22:54 -0400, Phil Stracchino wrote:
> Dirk Bartley wrot
On Tue, 2009-06-16 at 17:37 -0400, Phil Stracchino wrote:
> I'm trying to set up my first Copy job, and running into a problem. I
> don't know whether it's a configuration issue, a documentation
> shortfall, a Bacula limitation, or a combination of the three.
>
>
> I have two SDs on two differen
Hi,
27.04.2009 19:39, Michael Lang wrote:
> Hi,
>
> I didn't find any restriction on copying Jobs (Type "C") between two
> storage resources located on different SDs.
> I tried, but the job always gets canceled, saying the SD is not aware
> of the resource (original pool storage)
>
> i a