Hi,

Usage of ensureBufferSize() seems to be 'asymptotically' safe, despite the fact that a line separator might be much bigger than DEFAULT_BUFFER_SIZE (see below), and that nobody else uses the function as its description instructs.

    /**
     * Increases our buffer by the {@link #DEFAULT_BUFFER_RESIZE_FACTOR},
     * or to {@code size} if that is larger.
     */
    private void resizeBuffer(int size) {
        if (buffer == null) {
            int nSize = getDefaultBufferSize();
            if (nSize < size) {
                nSize = size;
            }
            buffer = new byte[nSize];
            pos = 0;
            readPos = 0;
        } else {
            int nSize = buffer.length * DEFAULT_BUFFER_RESIZE_FACTOR;
            if (nSize < 0) { // int overflow; should be tested with huge data
                nSize = Integer.MAX_VALUE;
            } else if (nSize < size) {
                nSize = size;
            }
            byte[] b = new byte[nSize];
            System.arraycopy(buffer, 0, b, 0, pos);
            buffer = b;
        }
    }

    /**
     * Ensure that the buffer has room for <code>size</code> bytes
     *
     * @param size minimum spare space required
     */
    protected void ensureBufferSize(int size) {
        if ((buffer == null) || (buffer.length < pos + size)) {
            resizeBuffer(pos + size);
        }
    }
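For illustration, here is a standalone reduction of the patched growth logic (my own sketch; GrowthDemo, RESIZE_FACTOR and newCapacity are invented names, not the Commons Codec internals):

```java
public class GrowthDemo {
    static final int RESIZE_FACTOR = 2; // stand-in for DEFAULT_BUFFER_RESIZE_FACTOR

    // Standalone reduction of the patched logic: grow by the factor,
    // but never below the requested capacity, clamping on int overflow.
    static int newCapacity(int oldCapacity, int required) {
        int nSize = oldCapacity * RESIZE_FACTOR;
        if (nSize < 0) {                // int overflow
            nSize = Integer.MAX_VALUE;
        } else if (nSize < required) {
            nSize = required;           // growing by the factor alone is not enough
        }
        return nSize;
    }

    public static void main(String[] args) {
        // Request from the test case below: 4 * 8192 - 3 = 32765 bytes.
        System.out.println(newCapacity(8192, 8192 * 4 - 3));           // 32765
        // Doubling past Integer.MAX_VALUE clamps instead of going negative.
        System.out.println(newCapacity(Integer.MAX_VALUE / 2 + 1, 0)); // 2147483647
    }
}
```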

The test case is


    /**
     * lineSeparator much bigger than DEFAULT_BUFFER_SIZE.
     */
    @Test
    public void testDEFAULT_BUFFER_SIZE() {
        byte[] baLineSeparator = new byte[BaseNCodec_DEFAULT_BUFFER_SIZE * 4 - 3];
        Base64 b64 = new Base64(Base64_BYTES_PER_ENCODED_BLOCK, baLineSeparator);
        String strOriginal = "Hello World";
        String strDecoded = new String(b64.decode(b64.encode(StringUtils.getBytesUtf8(strOriginal))));
        assertTrue("testDEFAULT_BUFFER_SIZE", strOriginal.equals(strDecoded));
    }

    private int BaseNCodec_DEFAULT_BUFFER_SIZE = 8192;
    private int Base64_BYTES_PER_ENCODED_BLOCK = 4;

It's astonishing how big the lineSeparator must be before the error shows. I hope I have not overlooked anything ('verschlimmbessern', i.e. made things worse while trying to improve them). The code might work with huge data (not tested).
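To see why the separator must be so large, a tiny sketch of the arithmetic (hard-coded values taken from the test above, assuming the unpatched resizeBuffer only multiplies by the factor; a factor-only resize fails only once a single write needs more than twice the current capacity):

```java
public class DoublingGap {
    public static void main(String[] args) {
        int capacity = 8192;           // DEFAULT_BUFFER_SIZE
        int separator = 8192 * 4 - 3;  // line separator length from the test case
        int doubled = capacity * 2;    // what a factor-only resizeBuffer yields
        // The doubled buffer is still far too small for the separator:
        System.out.println(doubled < separator); // true
    }
}
```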


My personal view is that the code is quite complicated. The loop

            for (int i = 0; i < inAvail; i++) {
                ensureBufferSize(encodeSize);
                modulus = (modulus+1) % BYTES_PER_UNENCODED_BLOCK;
                int b = in[inPos++];
                if (b < 0) {
                    b += 256;
                }
                bitWorkArea = (bitWorkArea << 8) + b; //  BITS_PER_BYTE
                if (0 == modulus) { // 3 bytes = 24 bits = 4 * 6 bits to extract

with the constant 'encodeSize' re-checks the buffer capacity for every input byte, which might slow down the algorithm. The estimated length is calculated

        long len = b64.getEncodedLength(binaryData);
        if (len > maxResultSize) {

but the value is not used any further. The result is duplicated into a second array

        byte[] buf = new byte[pos - readPos];
        readResults(buf, 0, buf.length);
        return buf;

which wastes memory in case the input is very large.
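One way to avoid both issues (the unused length and the trailing copy) would be to allocate the exact output size up front; java.util.Base64 (JDK 8+) does this, and the length formula is simple. This is my own sketch, and encodedLength below is an invented helper mirroring what getEncodedLength computes without line separators:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PresizeDemo {
    // Exact Base64 output length without line separators: 4 * ceil(n / 3).
    // Hypothetical helper, not part of Commons Codec.
    static int encodedLength(int n) {
        return 4 * ((n + 2) / 3);
    }

    public static void main(String[] args) {
        byte[] in = "Hello World".getBytes(StandardCharsets.UTF_8);
        // java.util.Base64 allocates the result array exactly once:
        byte[] out = Base64.getEncoder().encode(in);
        System.out.println(out.length == encodedLength(in.length)); // true
        System.out.println(new String(out, StandardCharsets.UTF_8)); // SGVsbG8gV29ybGQ=
    }
}
```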

Regards

Andreas

--


Andreas Menke

Diplom-Informatiker Univ.
Software-Entwicklung

Fon  0151 5081 1173
Mail asme...@snafu.de

