Thanks Bill, that's a good plan that I'll implement.
Chris
On Sat, 6 Apr 2024, 22:17 Bill Arlofski via Bacula-users, <
bacula-users@lists.sourceforge.net> wrote:
> On 4/6/24 10:53 AM, Chris Wilkinson wrote:
> I am attempting to write a copy job to copy uncopied jobs from one SD to
> another. It seems that the Client and FileSet directives are required or
> the syntax check will fail. The documentation (v9) is not explicit on this
> point.
> Since the client is not involved in a copy job, it seems that these cl
I can second the 'copy job' route---it's great for disk-to-disk-to-tape
scenarios and means the data transfer is only between SDs.
Just use console command "list jobs copies" to see their IDs.
The manual's description of the "Copy" type is pretty good:
https://www.bacula.org/13.0.x-manuals/en/
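For reference, the copy bookkeeping can be inspected from bconsole; the jobid below is only an example:

```
*list jobs copies
*list copies jobid=1234
```
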
On 5/18/23 18:37, Chris Wilkinson wrote:
I'm not sure I'm getting the motivation for using a copy job in preference
to a duplicate job to a second SD. This would also create a second backup.
The only reason I can think of is that a duplicate job might be different
if the files changed in between. That shouldn't be an issue.
I read that a
Thank you Bill, a comprehensive reply as always 🙏. I'll need to study this
some more.
-Chris-
On Thu, 18 May 2023, 16:45 Bill Arlofski via Bacula-users, <
bacula-users@lists.sourceforge.net> wrote:
On 5/18/23 08:07, Chris Wilkinson wrote:
I have not used a copy job before so I thought I would try one. I used
Baculum v11 to set one up that copies a job from a local USB drive to a
NAS. The setup was straightforward. I created a new pool to receive the
copy, defined the source and destination job, pool and SD/storage. This
worked just
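A minimal sketch of that kind of disk-to-NAS copy setup (all resource and pool names here are invented; the Client/FileSet lines only satisfy the parser and are not used by a copy job):

```
Pool {
  Name = USB-Pool
  Pool Type = Backup
  Storage = usb-sd
  Next Pool = NAS-Copy-Pool    # destination pool for the copies
}

Pool {
  Name = NAS-Copy-Pool
  Pool Type = Backup
  Storage = nas-sd
}

Job {
  Name = "CopyToNAS"
  Type = Copy
  Selection Type = PoolUncopiedJobs
  Pool = USB-Pool              # source pool; its Next Pool is the target
  Client = any-fd              # placeholder, not contacted by a copy job
  FileSet = "Full Set"         # placeholder
  Messages = Standard
}
```
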
On 5/27/22 05:27, Pierre Bernhardt wrote:
Hello,
I create a full backup each 1st Sunday, diff backups on the rest of the
Sundays, and incremental backups each day.
I would create a copy after the first backup of the month has been made.
The data should be copied from the last backup, however it has been made.
So for the moment I create a rest
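That rotation can be expressed in a Schedule resource along these lines (the name and run times are only illustrative):

```
Schedule {
  Name = "MonthlyCycle"
  Run = Level=Full 1st sun at 23:05
  Run = Level=Differential 2nd-5th sun at 23:05
  Run = Level=Incremental mon-sat at 23:05
}
```
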
On 12/25/21 12:54, Phil Stracchino wrote:
Aha! There WAS a permissions problem! *GROUP* root had only RO access.
I can now restore.
And at this moment I have Bacula compiling under Solaris 11.4.
--
Phil Stracchino
Babylon Communications
ph...@caerllewys.net
p...@co.ordinate.org
On Saturday 2021-12-25 00:12:09 Phil Stracchino wrote:
For the first time I'm trying to run a restore directly from a copy job,
after a complete failure of my NAS. I've got a second storage daemon
temporarily installed on my workstation, I have the external disk
chassis that holds my rotating archive copy sets attached to a temporary
server runnin
On 30/05/2021 13:44, Bill Arlofski via Bacula-users wrote:
--- Original Message ---
On Friday, May 28, 2021 2:09 AM, Diogo Neves wrote:
> I have raised and commented out the maximum concurrent jobs in the copy job
> resource and I still couldn't connect to bconsole.
Hello Diogo,
You need to make sure the MaximumConcurrentJobs is also set in the "Di
I have raised and commented out the maximum concurrent jobs in the copy job
resource and I still couldn't connect to bconsole.
On Fri, 28 May 2021 at 07:08, Radosław Korzeniewski <
rados...@korzeniewski.net> wrote:
> Hello,
> On Thu, 27 May 2021 at 14:00, Diogo Neves wrote:
Hi,
I am a newbie to Bacula. I tried to find out what is happening and could not
get right answer. Probably I am not searching with the right question or
looking at the right place.
Bacula works just fine for me, but I´ve tried to add a copy job and I no
longer can connect to bconsole. If I comment
Hello Heitor,
2018-07-18 19:42 GMT+02:00 Heitor Faria :
> Dear Users,
>
> I'm planning to deploy a Copy Job for Geographical Redundancy Disaster
> Recovery (Site A, Site B).
> Failover site (B) has a secondary Bacula Director, Catalog and Storage
> Daemon.
>
So far, so good. :)
> Do you think
Hello Jari, Dan,
>> I'm planning to deploy a Copy Job for Geographical Redundancy Disaster
>> Recovery (Site A, Site B).
>> Failover site (B) has a secondary Bacula Director, Catalog and Storage
>> Daemon.
>> Do you think it is possible to perform Cop
Kern Sibbald wrote on 21.7.2018 at 8.02:
Dan,
Oops. It looks like I made a major improvement but the documentation
never made it back into the manual. I will correct that one --
thanks for pointing it out Dan. I wonder how many more of these new
features have not been put back into the main body of the ma
Dear Users,
I'm planning to deploy a Copy Job for Geographical Redundancy Disaster Recovery
(Site A, Site B).
Failover site (B) has a secondary Bacula Director, Catalog and Storage Daemon.
Do you think it is possible to perform Copy jobs from Site A to Site B, using
the failover Catalog as th
Never mind. Problem was caused by an error in my firewall settings :P
Job {
  Name = Copy_ell-bacula-sd01
  Type = Copy
  Pool = Full-Pool
  Selection Type = PoolUncopiedJobs
  Client = ns-bacula-fd
  FileSet = none
  Storage = ns-bacula-sd01
  Maximum Concurrent Jobs = 5
  Messa
> Make sense...Sorry!
> Now i have another problem... My SD cannot communicate with the fake Client.
> (Copy Job Copy_ell-bacula-sd01.2015-12-08_20.08.39_40 waiting for Client
> connection.)
Hello Luc: I've noticed that your copy job lacks the source and destination
Storage definitions. Normally
I'm having an issue when I execute a Copy Job that copies a Volume to an
offsite Storage Daemon.
If I run the command "status dir" after starting the Copy Job, the following
is shown:
58 Copy Full 0 0 Copy_ell-bacula-sd01 is waiting on max Job jobs
What does this mean?
Here is
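A job "waiting on max Job jobs" usually means a Maximum Concurrent Jobs limit is being hit somewhere; it generally has to be raised in every resource the copy passes through. A sketch of the relevant spots (names and values only illustrative):

```
# bacula-dir.conf
Director {
  Name = bacula-dir
  Maximum Concurrent Jobs = 20    # director-wide cap
  # ...password, addresses etc. unchanged
}

Storage {
  Name = offsite-sd
  Maximum Concurrent Jobs = 10    # per-storage cap in the Director's Storage resource
  # ...
}

Job {
  Name = "CopyOffsite"
  Type = Copy
  Maximum Concurrent Jobs = 5     # per-job cap
  # ...
}
```
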
On 2015-10-30 02:38 PM, Jerry Lowry wrote:
> Hi,
>
> Centos 5.11 64bit OS
> Bacula 5.2.6 on all directors and clients
>
> I have run across a problem with one of my copy jobs. The job is
> setup with the PoolUncopiedJobs parameter. The jobs are failing with
> the following:
> 30-Oct 11:05 distr
Hello Jerry,
On Fri, Oct 30, 2015 at 6:38 PM, Jerry Lowry wrote:
> Hi,
>
> Centos 5.11 64bit OS
> Bacula 5.2.6 on all directors and clients
>
> I have run across a problem with one of my copy jobs. The job is setup
> with the PoolUncopiedJobs parameter. The jobs are failing with the
> followin
Hi,
Centos 5.11 64bit OS
Bacula 5.2.6 on all directors and clients
I have run across a problem with one of my copy jobs. The job is setup
with the PoolUncopiedJobs parameter. The jobs are failing with the
following:
30-Oct 11:05 distress JobId 28325: Error: block.c:291 Volume data
error at 3:8
> I'm testing Bacula for off-site backup and I found that after executing
> Copy Job for Full or Diff pool volumes are created on off-site storage
> but all of them have Incremental Label. Is it normal behavior or I
> missed something in configuration?
Hello Jakubek: the "incremental label" comes
Hello Bacula Users,
I'm testing Bacula for off-site backup and I found that after executing
Copy Job for Full or Diff pool volumes are created on off-site storage
but all of them have Incremental Label. Is it normal behavior or I
missed something in configuration?
fd configuration: http://paste.d
Hi all,
I want to use a copy job to copy volumes to a secondary site used for disaster
recovery.
Can I use this copy job to restore a server at the second site?
Can I use a Copy Job to create a Virtual Full Backup?
Thank you
Hi,
I'm trying to use copy jobs with a neo200s (2 drives, with logical partitioning:
2 jukeboxes with one drive each).
Bacula is version 5.2.10.
If I run a copy job with 4.8 TB, the job fails during the volume change on
the normal (source) backup jukebox.
Both jobs failed at the same time (2:00):
### c
Technical Chemical Company
For support, please email us at supp...@technicalchemical.com.
From: Kern Sibbald [mailto:k...@sibbald.com]
Sent: Saturday, January 11, 2014 1:16 AM
To: Paul De Audney
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Copy job limit is 100?
Hello,
The reason this artificial limit is there is because I saw several
cases of copy/migrate jobs starting on the order of 600 jobs, which
caused the systems in question to totally choke up. It was probably
a combination of insufficient hardware for 600 jobs and much too
high limits plac
On 9 January 2014 12:54, Steven Hammond wrote:
I missed a couple of days (holidays) backing up from disk to tape (we backup
disk to disk every night), so when I went to run the job to copy disk to tape
it only grabbed 100 jobs. This seems sort of artificial (what if I had more
than 100 workstations/servers I was backing up?). I was wonderin
On Tue, Dec 10, 2013 at 04:45:55PM, Steven Hammond wrote:
We back our servers up to disk, then follow up with a job to copy them to
tape. The backup to disk uses compression. My question is this: when the COPY
job runs, does it decompress the job on disk before sending it to the tape, or
does it just copy it straight to tape (which means the hardware
Hello folks
I have a copy job which copies my full backups to a USB drive using
vchanger once a week.
Recently, the backups have been exceeding the 1TB size of a single USB
drive, which would be fine, I'll just use more than one drive.
What isn't fine is that bacula is recycling its volumes too
On 11/9/2012 3:23 PM, Chris Adams wrote:
I am setting up a disk-to-disk backup system with a weekly copy-to-tape
job (for offsite backups). I have a couple of questions:
- I have a tape library with 2 drives. Is there an easy way to get the
copy job to use both? I have set them up in an autochanger resource,
but it looks like a si
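For the two-drive question, the usual shape is one Autochanger resource wrapping both Device resources in bacula-sd.conf, with the Director pointing at the changer. A sketch (device paths and names are invented):

```
# bacula-sd.conf
Autochanger {
  Name = "LTO-Changer"
  Device = Drive-0, Drive-1
  Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/sg3
}

Device {
  Name = Drive-0
  Drive Index = 0
  Media Type = LTO
  Archive Device = /dev/nst0
  Autochanger = yes
}

Device {
  Name = Drive-1
  Drive Index = 1
  Media Type = LTO
  Archive Device = /dev/nst1
  Autochanger = yes
}
```
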
On Thu, Jul 26, 2012 at 08:09:38PM -0600, NetCetera Lists wrote:
I am working on setting up a Copy job to a tape autoloader and am
having an issue with the Next Pool directive requirement.
I am picking the jobs to copy (the last Full for a number of existing
clients) using an SQL query.
The query is working fine and providing the correct job ids as input.
The
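Selection Type = SQLQuery with a Selection Pattern is one way to do that kind of selection. The query below is only a sketch against the standard catalog schema (it picks the newest successful Full per job name); the resource names are invented, and the source pool's Next Pool must point at the tape pool:

```
Job {
  Name = "CopyLastFulls"
  Type = Copy
  Selection Type = SQLQuery
  Selection Pattern = "SELECT MAX(Job.JobId) FROM Job WHERE Job.Type='B' AND Job.Level='F' AND Job.JobStatus='T' GROUP BY Job.Name"
  Pool = File-Pool        # its Next Pool is the tape pool
  Client = any-fd         # syntactic placeholder
  FileSet = "Full Set"    # syntactic placeholder
  Messages = Standard
}
```
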
Hi,
i've configured Bacula with a copy job that will copy our backups
to a tape.
I've made this configuration:
Job {
  ...
  Type = Backup
  Level = Incremental
  Full Backup Pool = x-FullPool
  Incremental Backup Pool = x-IncrementalPool
  Differential Backup Pool = x-DifferentialPool
  S
On Wed, 21 Mar 2012 07:35:26 +0530, Rushdhi Mohamed said:
Hi,
I have previously tested copy jobs and they worked fine, but now I am
getting this error when running a copy job:
Run Copy job
JobName: Copy-TueVol-1
Bootstrap: *None*
Client: fd-backup01.hnbassurance.com
FileSet: FileSet-test
Pool: DailyTue (From Job resource)
Read
On 20/03/2012 22:55, Tim Krieger wrote:
> Hey All,
Hi Tim,
> I am attempting to configure a copy job to create a set of backup tapes
> I can ship offsite, but every time a job starts, the base offsite job sits
> waiting for the Tape resource, and the child copy job sits waiting for
Hey All,
I am attempting to configure a copy job to create a set of backup tapes I can
ship offsite, but every time a job starts, the base offsite job sits waiting
for the Tape resource, and the child copy job sits waiting for the File
resource... it seems like they are deadlocking each other b
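One way this kind of deadlock is commonly avoided is to keep the reading (File) and writing (Tape) sides as separate Storage/Device resources, with enough concurrency on the File side that the backup and the copy read can overlap. A sketch under those assumptions (all names invented):

```
# Director side: two distinct Storage resources
Storage {
  Name = file-sd
  Device = FileStorage
  Media Type = File
  Maximum Concurrent Jobs = 10   # let backups and copy reads overlap
  # ...address/password as usual
}

Storage {
  Name = tape-sd
  Device = LTO-Changer
  Media Type = LTO
  Maximum Concurrent Jobs = 1
  # ...
}

Pool {
  Name = File-Pool
  Pool Type = Backup
  Storage = file-sd
  Next Pool = Offsite-Tape       # copies are written via the tape storage
}
```
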
> Notice the Incremental Level of the Job? Why is that?
> That's not so good for me because while the Copy Job is running, I have
> other Incremental Jobs that can be run because they don't use either of the
> Pools used by this Copy Job...
>
BTW, the "normal" Incremental Backups that run after
Hello everyone.
I'm running Copy Jobs from my Full Backups, here's the config (the parts
that matter):
Pool {
  Name = pool.full
  Pool Type = Backup
  Storage = st.tpc
  Volume Use Duration = 1 month
  Volume Retention = 6 months
  Scratch Pool = scratch.tpc
  RecyclePool = scratc
Hello,
You don't restore from Copy jobs. If somehow your original volume is lost,
bacula will automatically restore from the copy volume (thus, never simply
delete your copied volumes).
I don't have any ideas about the copy job size issue, and I can only check this
tomorrow, maybe someone can
Hello everyone.
I'm testing with Copy Jobs and I want to check if my results are actually
the expected ones.
First, the Bacula log when running the Copy Job:
01-Abr 16:06 dir.ptibacula-dir JobId 2109: The following 1 JobId was chosen
to be copied: 2107
01-Abr 16:06 dir.ptibacula-dir JobId 21
Hello list,
I am trying to create a copy job that runs every day after all the other
backups, and copies all the backups from our on-site storage to an
off-site storage. When typing "messages" in the console, I see this:
01-Apr 10:17 brokernet-director JobId 6802: The following 1 JobId was
chosen
This comes up periodically on the list. Check this thread for more
settings you'll need to tweak to get more write speed:
http://marc.info/?t=12899980386&r=1&w=2
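The usual suspects in those write-speed threads are data spooling and block sizes. A sketch of the relevant directives (names and values are only illustrative, and check that your version supports spooling for copy jobs):

```
Job {
  Name = "CopyToTape"
  Type = Copy
  Selection Type = PoolUncopiedJobs
  Pool = File-Pool
  Spool Data = yes          # despool to tape in large sequential chunks
  Spool Size = 50G
  Client = any-fd           # placeholder, not contacted
  FileSet = "Full Set"      # placeholder
  Messages = Standard
}

# In the SD Device resource, a larger block size can also help:
Device {
  Name = LTO4-Drive
  Media Type = LTO4
  Archive Device = /dev/nst0
  Maximum Block Size = 262144
}
```
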
2010/12/7 Christoph Litauer
>
> I can see no bottleneck at all ... but "status storage" still reports
> 40MB/s write performance.
>
>
AFAIK, when bacula-sd runs a copy job it performs it in a single thread,
synchronously.
So bacula-sd reads a block from the source volume, then writes it into the
destination v
Dear bacula users,
I have a problem concerning the speed of copy jobs. My setup is:
- bacula server is OpenSuSE 11.3, version 5.0.3 using postgres, 2 Xeons (4
cores) and 8 GB memory.
- Attached is an iSCSI-RAID containing File devices
- Copy Jobs run to a Quantum Scalar 50 Tapelibrary with 2 LTO4
An interesting situation arose today. I had the Copy Job and the
original Job running on the same schedule but with different priorities.
My goal: copy the original jobs to tape right after the original jobs run.
However, with duplicate job control:
Allow Higher Duplicates = no
Allow D
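The duplicate-control directives involved look like this (a sketch only; the exact behavior of these directives varies by version, and the names besides the directives are invented):

```
Job {
  Name = "CopyAfterBackup"
  Type = Copy
  Selection Type = PoolUncopiedJobs
  Pool = File-Pool
  Allow Duplicate Jobs = no
  Cancel Queued Duplicates = yes
  Cancel Running Duplicates = no
  Client = any-fd         # placeholder
  FileSet = "Full Set"    # placeholder
  Messages = Standard
}
```
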
On 9/20/10 8:03 AM, Yuri Timofeev wrote:
> Nobody knows? I also need this feature.
>
This is my solution for now.
I've got a JobDefs for my copy job:
JobDefs {
  Name = "CopyJob"
  Type = Copy
  Priority = 40      # after catalog
  Pool = Full-Pool   # ignored when using SQLQuery
  Maximum Concurrent Job
On 23/08/2010 7:39 PM, James Harper wrote:
If I schedule a copy job immediately after a backup job (e.g. 1 second
after), when does the selection actually get done? I want the copy to be
of the job that is running now, but I think that the selection happens
when the job is queued, not when it is ready to run, so it would not see
the job th
I want to setup bacula so that when a job finishes (one that writes to
disk) it then spawns a job that copies itself to tape. I know about
PoolUncopiedJobs and I don't want to do that, as it can cause the same
job to be queued multiple times and will pull all of my old backup jobs,
which I really
Hello!
Currently, in the evenings my backup system backs up first to disk, and
then in the mornings I have scheduled a copy job to copy the jobs to tape (LTO2).
It has worked fine so far, but now I have a new machine and have to do a
full backup of ~330GB. (The other machines are <200GB.)
Bacul
Hi All!
I'm trying to create a Copy Job, but some copies end in error.
The CJ reads from "Tape0" (/dev/nst0) of "ChangerA" and writes to "Tape0" of
"ChangerB" (/dev/nst6). All changers are virtual ones on FC (I have no
physical device).
Looking in /var/log/messages I found the following msg that
bacula-2.5.28-b1
2009/2/10 Yuri Timofeev :
Hi.
The test was conducted on a clean, empty database, using an empty tape.
Run the job. Job Type = Copy, from disk to tape (without autochangers).
The label is recorded on the new, clean tape, but the record never appears
in the database.
Running with option -d100, see below:
10-Фев 22:44 main.