ConfX created HADOOP-19339:
------------------------------

             Summary: OutOfBoundsException due to assumption about buffer size in BlockCompressorStream
                 Key: HADOOP-19339
                 URL: https://issues.apache.org/jira/browse/HADOOP-19339
             Project: Hadoop Common
          Issue Type: Bug
          Components: common
    Affects Versions: 3.4.1
            Reporter: ConfX
            Assignee: ConfX


h3. What Happened: 

An ArrayIndexOutOfBoundsException is thrown when io.compression.codec.snappy.buffersize is set to 7. BlockCompressorStream assumes that the buffer size is always greater than the compression overhead, and consequently that MAX_INPUT_SIZE is always greater than or equal to 0.
h3. Buggy Code: 

When io.compression.codec.snappy.buffersize is set to 7, SnappyCodec computes compressionOverhead as bufferSize/6 + 32 = 33, so MAX_INPUT_SIZE becomes 7 - 33 = -26.

 
{code:java}
public BlockCompressorStream(OutputStream out, Compressor compressor,
                             int bufferSize, int compressionOverhead) {
  super(out, compressor, bufferSize);
  // Assumes bufferSize > compressionOverhead, i.e. that MAX_INPUT_SIZE
  // is non-negative; no validation is performed.
  MAX_INPUT_SIZE = bufferSize - compressionOverhead;
}
{code}
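The arithmetic is easy to confirm in isolation. The sketch below reproduces the overhead formula that SnappyCodec passes into this constructor (bufferSize/6 + 32); the class name is illustrative only and not part of Hadoop.

{code:java}
public class SnappyOverheadDemo {
  // Same formula SnappyCodec uses to size the compression overhead.
  static int compressionOverhead(int bufferSize) {
    return (bufferSize / 6) + 32;
  }

  public static void main(String[] args) {
    int bufferSize = 7; // io.compression.codec.snappy.buffersize
    int overhead = compressionOverhead(bufferSize);   // 33
    int maxInputSize = bufferSize - overhead;         // -26
    System.out.println(overhead + " " + maxInputSize);
  }
}
{code}

With the default buffer size (256 KB) the overhead is tiny relative to the buffer, which is why the negative case only surfaces with unusually small configured values.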
h3. Stack Trace: 
{code:java}
java.lang.ArrayIndexOutOfBoundsException
        at org.apache.hadoop.io.compress.snappy.SnappyCompressor.setInput(SnappyCompressor.java:86)
        at org.apache.hadoop.io.compress.BlockCompressorStream.write(BlockCompressorStream.java:112)
{code}
h3. How to Reproduce: 

(1) Set io.compression.codec.snappy.buffersize to 7

(2) Run test: org.apache.hadoop.io.compress.TestCodec#testSnappyMapFile
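One possible hardening (a sketch only, not an actual patch) is to validate the arguments up front, so a misconfigured buffer size fails fast with a clear message instead of a later ArrayIndexOutOfBoundsException deep in the compressor:

{code:java}
// Illustrative guard; class and method names are ours, not Hadoop's.
public class BufferSizeCheck {
  static int maxInputSize(int bufferSize, int compressionOverhead) {
    if (bufferSize <= compressionOverhead) {
      throw new IllegalArgumentException(
          "bufferSize (" + bufferSize + ") must be greater than"
          + " compressionOverhead (" + compressionOverhead + ")");
    }
    return bufferSize - compressionOverhead;
  }
}
{code}

An equivalent check could live in the BlockCompressorStream constructor or in SnappyCodec before the stream is created; either way the configuration error is reported at setup time rather than on the first write.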

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
