Hi there,
To follow the 3-2-1 backup strategy, I need to create a second
copy-type job to send backups to S3. The current client definition has two
jobs: one for the main backup (bacula server), and a copy-type job that,
using the Next Pool directive in the original Pool, sends the back
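For reference, the relevant pieces look roughly like this (simplified,
placeholder resource names, other required directives omitted):

Pool {
  Name = MainPool
  Pool Type = Backup
  Storage = MainFileStorage        # disk storage on the bacula server
  Next Pool = CopyPool             # destination used by the copy-type job
}

Job {
  Name = CopyFromMain
  Type = Copy
  Selection Type = PoolUncopiedJobs
  Pool = MainPool                  # read from here, write to its Next Pool
}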
Hi there,
I'm getting errors on the jobs configured with cloud S3:
06-Jan 09:03 mainbackupserver-sd JobId 4030: Error:
serverJob-CopyToS3-0781/part.16 state=error retry=1/10 size=2.064 MB
duration=0s msg= S3_put_object ERR=Content-MD5 OR x-amz-checksum- HTTP
header is required for Put Object
I have confirmed this behaviour: with the PoolUncopiedJobs selection type for
Copy jobs, the second copy job returns no JobIds to copy.
1) Backup job (Backup on the main backup server's bacula SD) does a backup.
2) First copy job (Copy to the 2nd backup server's bacula SD) does the copy.
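In configuration terms, the two copy jobs look roughly like this (placeholder
names, other required directives omitted):

Job {
  Name = CopyTo2ndSD
  Type = Copy
  Selection Type = PoolUncopiedJobs
  Pool = MainPool                  # MainPool's Next Pool points at the 2nd SD
}

Job {
  Name = CopyToS3
  Type = Copy
  Selection Type = PoolUncopiedJobs
  Pool = MainPool                  # by the time this runs, the first copy has
                                   # already marked the jobs as copied, so
                                   # PoolUncopiedJobs selects no JobIds
}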
lem of the two copies of one job as well.
Two solutions in one shot.
Hope this helps.
Thank you all!
On Tue, Jan 17, 2023 at 5:47 PM Bill Arlofski via Bacula-users <
bacula-users@lists.sourceforge.net> wrote:
> On 1/17/23 06:05, Ivan Villalba via Bacula-users wrote:
Hi,
I'm having issues with the volume configuration. We're currently sending all
files in the bacula-storage directory to S3 with Object Lock, using the aws cli
in a bash script called from the Run After Job directive of the server's
self-backup copy job. So when every job is finished, it uploads everything in
the dire
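For context, the relevant part of that job looks more or less like this (the
job and bucket names are just examples; Object Lock retention is assumed to be
set as a bucket default, so plain uploads get locked; in our case the aws call
is wrapped in a small bash script):

Job {
  Name = ServerSelfCopy
  ...
  # once the job finishes, push the whole storage directory to S3
  Run After Job = "aws s3 sync /backup/bacula-storage s3://example-bucket/bacula-storage"
}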
Hi,
Sorry, I forgot to add the error I get from Backup jobs after we reach the
31-volume limit.
22_05.00.02_57 is waiting. Cannot find any appendable volumes.
> Please use the "label" command to create a new Volume for:
> Storage: "FileChgr1-Dev6" (/backup/bacula-storage)
>
My apologies
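If I understand it correctly, that limit comes from the Pool definition, so I
guess the change would be along these lines (not tested; a sketch assuming the
cap is a Maximum Volumes setting, with the device name taken from the error
above):

Pool {
  Name = File                      # whatever the affected pool is called
  Pool Type = Backup
  Maximum Volumes = 100            # raise the cap (currently 31 in our case)
  Label Format = "Vol-"            # let the Director label new volumes itself
  Maximum Volume Bytes = 50G
  Volume Retention = 30 days
  Recycle = yes
  AutoPrune = yes
}

# the matching SD Device also needs automatic labelling enabled:
Device {
  Name = FileChgr1-Dev6
  ...
  LabelMedia = yes
}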