The implicit default of 1G for Maximum File Size has no effect on its own; it
needs to be set explicitly in the cloud device resource. Once it is set
explicitly, it is actually used.
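
For concreteness, this is roughly what the relevant resources look like here
(a sketch only; resource names, hostnames, paths and credentials below are
placeholders, not my actual values):

  # bacula-sd.conf (sketch)
  Cloud {
    Name = "S3Cloud"
    Driver = "S3"
    Host Name = "minio.example.org"       # placeholder MinIO endpoint
    Bucket Name = "bacula"
    Access Key = "xxxx"
    Secret Key = "xxxx"
    Protocol = HTTPS
    Uri Style = Path                      # MinIO typically wants path-style URIs
    Upload = EachPart
    Truncate Cache = AfterUpload
  }

  Device {
    Name = "CloudDev1"
    Device Type = Cloud
    Cloud = "S3Cloud"
    Archive Device = /var/lib/bacula/cloud-cache   # local part cache
    Maximum Part Size = 1G    # size of each part.N uploaded to the bucket
    Maximum File Size = 1G    # set explicitly; the implicit 1G default was ignored
    Media Type = CloudType1
    Label Media = yes
    Random Access = yes
    Automatic Mount = yes
    Removable Media = no
    Always Open = no
  }

  # bacula-dir.conf, Pool resource (sketch)
  Pool {
    Name = CloudPool
    Pool Type = Backup
    Maximum Volume Bytes = 10G   # so each volume splits into a handful of parts
  }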

> On 30. Oct 2024, at 01:21, Justin Case <jus7inc...@gmail.com> wrote:
> 
> Despite Maximum Part Size now being 1G and Maximum File Size using the 
> default of 1G, the part files in the volumes are of size 10G. Since the pool 
> is defined to have Maximum Volume Bytes 10G, there are just 2 parts per 
> volume, part 1 contains the volume metadata and part 2 contains the backup 
> data.
> 
> I am really wondering why a lot of this does not seem to work as documented,
> or what mistake I am making on my end.
> 
> 
>> On 29. Oct 2024, at 23:10, Justin Case <jus7inc...@gmail.com> wrote:
>> 
>> MinIO has a part limit of 5G, but a part in MinIO lingo is very likely not 
>> the same as a part in Bacula lingo. Apart from that, the limits are crazy 
>> high considering this will be transferred over the public Internet.
>> 
>> I now have set Max Part Size to 1G and Max Volume Bytes to 10G and will see 
>> how well that works for me.
>> 
>> Any further advice and ideas are of course still welcome!
>> 
>>> On 29. Oct 2024, at 22:31, Robert Heller <hel...@deepsoft.com> wrote:
>>> 
>>> I am using 9.6.7-7 (adapted from the Debian 12 distributed version) on both
>>> Debian 12/ARM64 (Raspberry Pi 5) and Debian 12/AMD64 (a VPS), with a max
>>> part size of 1G and a max vol size of 5G.  This works well for me.
>>> 
>>> It would not surprise me if the Amazon S3 API has some upper limit on file
>>> size.  I know that Amazon Glacier has a limit on the transfer size for large
>>> files (they must be uploaded in parts).
>>> 
>>> At Tue, 29 Oct 2024 21:35:35 +0100 Justin Case <jus7inc...@gmail.com> wrote:
>>> 
>>>> 
>>>> Hi there, I am struggling with the Amazon cloud driver. I have defined 
>>>> Maximum Volume Bytes as 350G and not defined Maximum Part Size, so no part 
>>>> will be bigger than 350G, I suppose. Whether this makes sense when 
>>>> transferring over the Internet is another story… what would be a 
>>>> recommended part size?
>>>> 
>>>> No problems occur with small backups / part sizes (say, smaller than 1G).
>>>> 
>>>> The problem I am encountering is with a backup whose part size is slightly 
>>>> over 200G. In the cache folder the part file is called “part.3”, but during 
>>>> the transfer I can see on the S3 backend (MinIO, self-hosted) that the 
>>>> filename of the part is “part<timestamp>.3”, where the timestamp consists 
>>>> of year, month, day and time, all concatenated without any separators. 
>>>> After the transfer I find the following error message in the log:
>>>> 
>>>> "An error occurred (InvalidArgument) when calling the UploadPart 
>>>> operation: Part number must be an integer between 1 and 10000, inclusive 
>>>> Child exited with code 2”
>>>> 
>>>> It seems that Bacula does not like “part<timestamp>.3” and wants 
>>>> “part.3” - but why, why on earth, would the file being transferred be 
>>>> uploaded under this filename including the timestamp? Or is this a 
>>>> transitory filename used during the transfer that is renamed once the 
>>>> transfer completes?
>>>> 
>>>> Has anyone an idea what is going wrong here?
>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>> --
>>> Robert Heller             -- Cell: 413-658-7953 GV: 978-633-5364
>>> Deepwoods Software        -- Custom Software Services
>>> http://www.deepsoft.com/  -- Linux Administration Services
>>> hel...@deepsoft.com       -- Webhosting Services
>>> 
>>> 
>> 
> 



_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
