Hi Felipe,

It looks like the fetch response may, in some cases, contain a null
ByteBuffer for a partition instead of the expected empty byte buffer. This
code changed a lot in trunk, so the problem may already have been fixed.
Any chance you could test against trunk to see if it persists? In any
case, please file a JIRA ticket so we can track it.
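For illustration, here is a minimal sketch of why a null buffer triggers the NPE while an empty one does not. The `read` method below only mimics the shape of `ByteBufferInputStream.read()` from the trace; it is not the actual Kafka code, and the class name is made up for the example:

```java
import java.nio.ByteBuffer;

public class NullBufferDemo {
    // Shaped like ByteBufferInputStream.read(): dereferencing a null
    // ByteBuffer throws NullPointerException before any data is read.
    static int read(ByteBuffer buffer) {
        if (!buffer.hasRemaining()) {  // NPE here when buffer is null
            return -1;                 // empty buffer: clean end-of-stream
        }
        return buffer.get() & 0xFF;
    }

    public static void main(String[] args) {
        // An empty buffer behaves as expected: end of stream.
        System.out.println(read(ByteBuffer.allocate(0))); // prints -1

        // A null buffer reproduces the reported crash.
        try {
            read(null);
        } catch (NullPointerException e) {
            System.out.println("NullPointerException, as in the report");
        }
    }
}
```

So the consumer's decompression path (GZIPInputStream reading the partition's buffer) crashes as soon as it touches the null buffer, which matches the top frames of your trace.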

Thanks,
Ismael

On Mon, Dec 26, 2016 at 4:09 PM, Felipe Santos <felip...@gmail.com> wrote:

> I am using Kafka 0.10.1.0. Sometimes on the client I get a
> NullPointerException:
>
> java.lang.NullPointerException
>     at org.apache.kafka.common.record.ByteBufferInputStream.read(org/apache/kafka/common/record/ByteBufferInputStream.java:34)
>     at java.util.zip.CheckedInputStream.read(java/util/zip/CheckedInputStream.java:59)
>     at java.util.zip.GZIPInputStream.readUByte(java/util/zip/GZIPInputStream.java:266)
>     at java.util.zip.GZIPInputStream.readUShort(java/util/zip/GZIPInputStream.java:258)
>     at java.util.zip.GZIPInputStream.readHeader(java/util/zip/GZIPInputStream.java:164)
>     at java.util.zip.GZIPInputStream.<init>(java/util/zip/GZIPInputStream.java:79)
>     at java.util.zip.GZIPInputStream.<init>(java/util/zip/GZIPInputStream.java:91)
>     at org.apache.kafka.common.record.Compressor.wrapForInput(org/apache/kafka/common/record/Compressor.java:280)
>     at org.apache.kafka.common.record.MemoryRecords$RecordsIterator.<init>(org/apache/kafka/common/record/MemoryRecords.java:247)
>     at org.apache.kafka.common.record.MemoryRecords$RecordsIterator.makeNext(org/apache/kafka/common/record/MemoryRecords.java:316)
>     at org.apache.kafka.common.record.MemoryRecords$RecordsIterator.makeNext(org/apache/kafka/common/record/MemoryRecords.java:222)
>     at org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(org/apache/kafka/common/utils/AbstractIterator.java:79)
>     at org.apache.kafka.common.utils.AbstractIterator.hasNext(org/apache/kafka/common/utils/AbstractIterator.java:45)
>     at org.apache.kafka.clients.consumer.internals.Fetcher.parseFetchedData(org/apache/kafka/clients/consumer/internals/Fetcher.java:679)
>     at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(org/apache/kafka/clients/consumer/internals/Fetcher.java:425)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(org/apache/kafka/clients/consumer/KafkaConsumer.java:1021)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(org/apache/kafka/clients/consumer/KafkaConsumer.java:979)
>     at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)
>     at RUBY.thread_runner(/opt/logstash/vendor/local_gems/90fefca7/logstash-input-kafka-6.2.0/lib/logstash/inputs/kafka.rb:246)
>     at java.lang.Thread.run(java/lang/Thread.java:745)
>
> --
> Felipe Santos
>
