I’m currently working to reduce the memory footprint of the S3 device, and (hopefully) to clean up and speed up upload streaming.

Because we don’t know what users tend to want, our ZMC product currently relies mostly on creating “cloud files” and splitting backups across them as needed when they grow too large.  (The size limit exists because AWS caps a multipart upload at 10,000 parts, and each part’s checksum must be computed in memory before it is uploaded… so part size, and therefore object size, is bounded by buffer memory.)
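To illustrate the arithmetic behind that limit, here is a small sketch. The 10,000-part cap and the 5 MiB minimum part size are S3’s documented limits; the helper names and the use of MD5 as the per-part checksum are my assumptions, not necessarily what the device code does.

```python
import hashlib
import math

S3_MAX_PARTS = 10_000       # S3's multipart-upload part-count cap
S3_MIN_PART = 5 * 1024**2   # 5 MiB minimum part size (except the last part)

def min_part_size(total_bytes):
    """Smallest part size that fits total_bytes within 10,000 parts."""
    return max(S3_MIN_PART, math.ceil(total_bytes / S3_MAX_PARTS))

def part_checksum(buf):
    """Per-part checksum over the full in-memory buffer -- this is the
    buffer-sized memory cost described above (MD5 assumed here)."""
    return hashlib.md5(buf).hexdigest()

# A 1 TiB backup forces parts (and thus buffers) of roughly 105 MiB each:
print(min_part_size(1024**4))
```

So the bigger the backup a single cloud file must hold, the bigger each in-memory part buffer has to be.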

How many users depend on the non-multipart style of cloud storage?  Show of hands?  Who uses the mode where Amanda creates blobs named “f000000001-b000000000000000001234.data” to enumerate fixed-size chunks in order?  I’ve been working on letting this mode upload without allocating a full-size memory buffer every time, but it seems our ZMC product hardly ever uses it.
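For anyone unsure which mode that is, here is a rough sketch of the fixed-chunk naming scheme. The field widths are guessed from the example name above and may not match the real device code; the function names are mine.

```python
def blob_name(file_num, block_num, f_width=9, b_width=21):
    """Name for one fixed-size chunk, in the style of the example above.
    Zero-padded field widths are inferred from that example (assumption)."""
    return f"f{file_num:0{f_width}d}-b{block_num:0{b_width}d}.data"

def block_for_offset(offset, chunk_size):
    """Which chunk (blob) a given byte offset of the file falls into."""
    return offset // chunk_size
```

Each chunk is a separate, independently named object, so a chunk can in principle be streamed without buffering the whole file.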

Does anyone?

