On Mon, Apr 24, 2017 at 3:30 AM, Marcel Hylkema <
marcel.hylkema....@gmail.com> wrote:

> Hi,
>
> We are loading SSTable files of different sizes (from 1 record to a few
> hundred thousand records each) into Cassandra (v3.9 on CentOS 7) using JMX bulkload.
>
> Part of the files result in the following root cause exception:
>
> Caused by: java.lang.AssertionError
>         at org.apache.cassandra.cache.ChunkCache$CachingRebufferer.<init>(ChunkCache.java:223)
>         at org.apache.cassandra.cache.ChunkCache.wrap(ChunkCache.java:176)
>         at org.apache.cassandra.cache.ChunkCache.maybeWrap(ChunkCache.java:184)
>         at org.apache.cassandra.io.util.BufferedSegmentedFile.createRebufferer(BufferedSegmentedFile.java:42)
>         at org.apache.cassandra.io.util.BufferedSegmentedFile.<init>(BufferedSegmentedFile.java:27)
>         at org.apache.cassandra.io.util.BufferedSegmentedFile$Builder.complete(BufferedSegmentedFile.java:50)
>         at org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:181)
>         at org.apache.cassandra.io.util.SegmentedFile$Builder.buildIndex(SegmentedFile.java:207)
>         at org.apache.cassandra.io.sstable.format.SSTableReader.openForBatch(SSTableReader.java:441)
>
>
> Following the stack trace I see the following:
> - In org.apache.cassandra.io.util.SegmentedFile$Builder.bufferSize (line 226),
> Cassandra derives the buffer size from the Index file size by rounding it
> up to the next multiple of 4 KiB, capped at 64 KiB.
> - At org.apache.cassandra.cache.ChunkCache$CachingRebufferer.<init>
> (ChunkCache.java:223), there is an assertion that the buffer size is a
> power of two.
>
> When trying to load an SSTable with an Index file that is, say, between
> 9000 and 10000 bytes, the buffer size becomes 12 KiB (12288 bytes).
> That is a multiple of 4 KiB but not a power of two, hence the AssertionError.
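>
> The interaction can be sketched as follows (a minimal illustration; the
> class and method names below are my own, not Cassandra's actual code,
> and the rounding/assertion logic is as described above):

```java
// Sketch of the described buffer sizing and the power-of-two assertion.
public class BufferSizeCheck
{
    // Round the index file length up to the next multiple of 4 KiB,
    // capped at 64 KiB (mirroring the SegmentedFile$Builder behaviour above).
    static int bufferSize(long fileLength)
    {
        return (int) Math.min(65536L, (fileLength + 4095) & ~4095L);
    }

    // The ChunkCache constructor asserts the chunk size is a power of two.
    static boolean isPowerOfTwo(int n)
    {
        return n > 0 && Integer.bitCount(n) == 1;
    }

    public static void main(String[] args)
    {
        int size = bufferSize(9500);            // index file of ~9.5 KB
        System.out.println(size);               // 12288 (12 KiB)
        System.out.println(isPowerOfTwo(size)); // false -> AssertionError in ChunkCache
    }
}
```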
>
> Does anyone have a resolution for this or should I report a bug?
>
> Regards,
> Marcel
>
