Andrew Olson created HADOOP-16900:
-------------------------------------

             Summary: Very large files can be truncated when written through S3AFileSystem
                 Key: HADOOP-16900
                 URL: https://issues.apache.org/jira/browse/HADOOP-16900
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs/s3
            Reporter: Andrew Olson
If a written file's size exceeds 10,000 * {{fs.s3a.multipart.size}}, the S3 object will be silently truncated and corrupted: the S3 API caps a multipart upload at 10,000 parts, and there is an apparent bug where exceeding that limit is not treated as a fatal error.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
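For illustration only (not code from the S3A source): a minimal sketch of the size arithmetic described above, showing the largest object a multipart upload can hold for a given part size. The 64 MiB part size below is just an example value for {{fs.s3a.multipart.size}}; the 10,000-part cap is the S3 API's documented limit.

```java
public class S3aPartLimit {
    // Hard cap imposed by the S3 multipart upload API
    static final int MAX_PARTS = 10_000;

    // Largest object size (bytes) writable with the given part size,
    // e.g. the configured value of fs.s3a.multipart.size
    static long maxUploadSize(long partSizeBytes) {
        return MAX_PARTS * partSizeBytes;
    }

    public static void main(String[] args) {
        long partSize = 64L * 1024 * 1024; // example 64 MiB part size (assumption)
        // 10,000 parts * 64 MiB = 671,088,640,000 bytes (~625 GiB);
        // anything written beyond this point would be lost.
        System.out.println(maxUploadSize(partSize)); // prints 671088640000
    }
}
```

Any bytes written past that threshold belong to parts beyond the 10,000th, which S3 rejects, hence the truncation when the failure is swallowed rather than propagated.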