gerlowskija commented on code in PR #1545:
URL: https://github.com/apache/solr/pull/1545#discussion_r1219631985
##########
solr/modules/s3-repository/src/test/com/adobe/testing/s3mock/util/AwsChunkedDecodingInputStream.java:
##########

@@ -0,0 +1,144 @@
+/*
+ * Copyright 2017-2022 Adobe.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.adobe.testing.s3mock.util;
+
+import java.io.BufferedInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+
+/**
+ * Skips V4 style signing metadata from input streams.
+ * <p>The original stream looks like this (newlines are CRLF):</p>
+ *
+ * <pre>
+ * 5;chunk-signature=7ece820edcf094ce1ef6d643c8db60b67913e28831d9b0430efd2b56a9deec5e
+ * 12345
+ * 0;chunk-signature=ee2c094d7162170fcac17d2c76073cd834b0488bfe52e89e48599b8115c7ffa2
+ * </pre>
+ *
+ * <p>The format of each chunk of data is:</p>
+ *
+ * <pre>
+ * [hex-encoded-number-of-bytes-in-chunk];chunk-signature=[sha256-signature][crlf]
+ * [payload-bytes-of-this-chunk][crlf]
+ * </pre>
+ *
+ * @see
+ * <a href="http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/AwsChunkedEncodingInputStream.html">
+ *     AwsChunkedEncodingInputStream</a>
+ */
+public class AwsChunkedDecodingInputStream extends InputStream {
+
+  /**
+   * That's the max chunk buffer size used in the AWS implementation.
+   */
+  private static final int MAX_CHUNK_SIZE = 256 * 1024;
+
+  private static final byte[] CRLF = "\r\n".getBytes(StandardCharsets.UTF_8);
+
+  private static final byte[] DELIMITER = ";".getBytes(StandardCharsets.UTF_8);
+
+  private final InputStream source;
+
+  private int remainingInChunk = 0;
+
+  private final ByteBuffer byteBuffer = ByteBuffer.allocate(MAX_CHUNK_SIZE);
+
+  /**
+   * Constructs a new {@link AwsChunkedDecodingInputStream}.
+   *
+   * @param source The {@link InputStream} to wrap.
+   */
+  public AwsChunkedDecodingInputStream(final InputStream source) {
+    // Remove this class after TODO open issue with s3mock
+    // Buffer the source InputStream since this class only implements read() so
+    // pass off the actual buffering to the BufferedInputStream to read bigger
+    // chunks at once. This avoids a lot of single byte reads.
+    this.source = new BufferedInputStream(source);

Review Comment:
   No change from last month, AFAICT. Kevin's s3mock PR is still outstanding. There are a few S3Mock-related failures on fucit, but it's pretty far down the list in terms of our flaky tests. Given that the fucit failures haven't flared up, it seems fine to keep waiting on s3mock if we don't want to duplicate the code locally. I'll set another reminder for a month from now to check back in...
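For context on the framing this class deals with: per its javadoc, each chunk is "[hex-encoded-number-of-bytes-in-chunk];chunk-signature=[sha256-signature]" followed by CRLF, then the payload bytes, then another CRLF, and a zero-length chunk terminates the stream. The hunk quoted above stops before read(), so the sketch below is not the PR's implementation; it is a minimal illustration of that decoding technique under the framing just described, with a made-up class name (ChunkedDecodingSketch) and a hypothetical readLine() helper, and it skips chunk-signature verification entirely.

import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;

/** Minimal sketch: strips aws-chunked framing and returns only payload bytes. */
class ChunkedDecodingSketch extends InputStream {

  private final InputStream source;
  private int remainingInChunk = 0; // payload bytes left in the current chunk
  private boolean finished = false; // set once the zero-length terminal chunk is seen

  ChunkedDecodingSketch(InputStream source) {
    // Buffer the raw stream; this sketch reads it one byte at a time.
    this.source = new BufferedInputStream(source);
  }

  @Override
  public int read() throws IOException {
    if (finished) {
      return -1;
    }
    if (remainingInChunk == 0) {
      // Chunk header, e.g. "5;chunk-signature=7ece82..." terminated by CRLF.
      String header = readLine();
      if (header == null || header.isEmpty()) {
        finished = true;
        return -1;
      }
      int delimiter = header.indexOf(';');
      String hexSize = delimiter >= 0 ? header.substring(0, delimiter) : header;
      remainingInChunk = Integer.parseInt(hexSize.trim(), 16);
      if (remainingInChunk == 0) { // zero-length chunk marks the end of the payload
        finished = true;
        return -1;
      }
    }
    int b = source.read();
    if (b < 0) {
      return -1;
    }
    remainingInChunk--;
    if (remainingInChunk == 0) {
      readLine(); // consume the CRLF that trails each chunk's payload
    }
    return b;
  }

  // Reads one CRLF-terminated line and returns it without the CRLF; null at EOF.
  private String readLine() throws IOException {
    StringBuilder line = new StringBuilder();
    int b;
    while ((b = source.read()) >= 0) {
      if (b == '\r') {
        source.read(); // swallow the '\n' that follows
        return line.toString();
      }
      line.append((char) b);
    }
    return line.length() == 0 ? null : line.toString();
  }
}

Wrapping an upload body in it, e.g. new ChunkedDecodingSketch(rawRequestBody), would then yield only the payload bytes ("12345" in the javadoc example above).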