This problem was solved by upgrading from 0.10 to 0.11 (broker + client).
Thanks for your feedback.
On Thu, Nov 30, 2017 at 10:03 AM, Tom van den Berge <
tom.vandenbe...@gmail.com> wrote:
> The consumers are using default settings, which means that
> enable.auto.commit [...]
I'm using a (java) consumer with default configuration, so auto-commit is
enabled. The consumer is reading from 5 partitions of a single topic. The
consumer processes one message at a time (synchronously). Sometimes, large
numbers of messages are posted to the topic, and the consumer will have to
[...]
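
For reference, the consumer is roughly the following sketch (broker list, topic name, group id and the handle() method are placeholders, not the real values; everything else is left at the client defaults):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SyncConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka1:9092,kafka2:9092"); // placeholder
        props.put("group.id", "my-group");                         // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Everything else stays at the defaults, so enable.auto.commit=true and
        // auto.commit.interval.ms=5000: offsets are committed in the background
        // from poll(), not after each processed record.

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    handle(record); // one message at a time, synchronously
                }
            }
        } finally {
            consumer.close();
        }
    }

    private static void handle(ConsumerRecord<String, String> record) {
        // placeholder for the actual (slow) per-message processing
    }
}
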
> [...] manually or setting
> enable.auto.commit and auto.commit.interval.ms?
>
> On Wed, Nov 29, 2017 at 11:15 PM, Tom van den Berge <
> tom.vandenbe...@gmail.com> wrote:
>
> > I'm using Kafka 0.10.0.
> >
> > I'm reading messages from a single topic (20 partitions) [...]
t" that seems to decrease, I have no idea.
>
> Isabelle Giguère
> Computational Linguist and Java Developer
>
> Open Text
> The Content Experts
>
> -----Original Message-----
> From: Tom van den Berge [...]
I'm using Kafka 0.10.0.
I'm reading messages from a single topic (20 partitions), using 4 consumers
(one group): a standard java consumer with default configuration, except for
the key and value deserializers and a group id; no other settings.
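
Concretely, that configuration amounts to the following sketch (the broker list and group id are made-up placeholders, and StringDeserializer just stands in for whatever deserializers we actually use):

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerConfigSketch {

    // Builds a consumer as described: defaults everywhere except the
    // deserializers and the group id.
    public static KafkaConsumer<String, String> newConsumer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka1:9092,kafka2:9092"); // placeholder
        props.put("group.id", "my-group");                         // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // no other settings: auto-commit, fetch sizes, timeouts etc. stay at defaults
        return new KafkaConsumer<>(props);
    }
}
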
We've been experiencing a serious problem a few [...]
Hi,
I'm running a two-node kafka cluster. When I (gracefully) shut down one of
the kafka servers, the application that publishes messages to the cluster
keeps giving this error message:
org.apache.kafka.common.errors.TimeoutException: Batch containing 1
record(s) expired due to timeout while requesting metadata from brokers for [...]
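
For context, the publishing application boils down to something like this sketch (broker addresses, topic and payload are placeholders; I'm assuming both nodes of the cluster are listed in bootstrap.servers). The TimeoutException above is delivered to the send() callback, or thrown from get() on the returned Future:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class Publisher {

    public static void main(String[] args) {
        Properties props = new Properties();
        // assumption: both nodes of the two-node cluster are listed here
        props.put("bootstrap.servers", "kafka1:9092,kafka2:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        try {
            for (int i = 0; i < 100; i++) {
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("my-topic", "message-" + i); // placeholders
                // send() is asynchronous; batches that sit in the client too long
                // (here: while metadata is being re-requested after the broker
                // shutdown) are expired, and the TimeoutException shows up in
                // this callback rather than being thrown from send() itself.
                producer.send(record, (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    }
                });
            }
        } finally {
            producer.close();
        }
    }
}

Running something like this and then gracefully shutting down one of the two brokers should show the same error for partitions that were led by that broker, assuming nothing else changes server-side.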