Hi there, I am struggling with the Amazon cloud driver. I have set Maximum 
Volume Bytes to 350G and have not set Maximum Part Size, so with a volume size 
of 350G I suppose no part will be larger than 350G. Whether that makes sense 
when transferring over the Internet is another story… what would be a 
recommended part size?
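For what it's worth, a standard S3 multipart upload allows at most 10,000 parts per object (an assumption here is that MinIO enforces the same limit as AWS, which it generally does), so a very large part file puts a floor on the multipart chunk size the uploader must use. A rough sketch of that arithmetic:

```python
# Back-of-the-envelope check of S3 multipart limits for a given part file.
# Assumes the standard S3 constraints (10,000 chunks max per object, 5 MiB
# minimum chunk size); verify these against your MinIO deployment.
MAX_CHUNKS = 10_000
MIN_CHUNK = 5 * 1024**2  # 5 MiB

def min_chunk_size(part_file_bytes: int) -> int:
    """Smallest multipart chunk size keeping the upload under 10,000 chunks."""
    needed = -(-part_file_bytes // MAX_CHUNKS)  # ceiling division
    return max(needed, MIN_CHUNK)

part_file = 200 * 1024**3  # a ~200G part file, as in the failing backup
print(min_chunk_size(part_file))  # roughly 20 MiB per chunk at minimum
```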

No problems occur with small backups / part sizes (say, smaller than 1G).

The problem occurs with a backup whose part size is slightly over 200G. In the 
cache folder the part file is called “part.3”, but during the transfer I can 
see on the S3 backend (MinIO, self-hosted) that the part's filename is 
“part<timestamp>.3”, where the timestamp consists of year, month, day, and 
time, concatenated without any separators. After the transfer I find the 
following error message in the log:

"An error occurred (InvalidArgument) when calling the UploadPart operation: 
Part number must be an integer between 1 and 10000, inclusive Child exited with 
code 2”

It seems that Bacula does not like “part<timestamp>.3” and wants “part.3” - but 
why - why on earth - would the file being transferred use a filename that 
includes the timestamp? Or is this a temporary name used during the transfer, 
with the file renamed once the transfer completes?

Does anyone have an idea what is going wrong here?




_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users