We are using the Kafka version bundled with Confluent 3.0.0, so I think it
should be 0.10.0.0-cp1.
We need to get Flume out of the timeout state so it can resume working.
Any advice?
On Fri, Sep 29, 2017 at 10:34 PM, Matt Sicker wrote:
What version of Kafka broker are you using? Up until one of the 0.10.x
releases (I forget which), you had to use the same version or earlier of the
client library, from what I remember. Compatibility is getting better from
0.11 onward (especially by the 1.0 release), but it's still rather
confusing.
By the way, according to https://issues.apache.org/jira/browse/KAFKA-3409 ,
we tried to upgrade the Kafka client package to 0.10.0.0, but Confluent
failed to start up.
It seems to be a compatibility issue.
On Fri, Sep 29, 2017 at 11:37 AM, wenxing zheng
wrote:
Thanks to Ferenc.
We have made various adjustments to those settings, and we found that the
issue was due to saturation of the network bandwidth; no matter what we set,
it would still time out.
But the problem is that after the network recovered, Flume did not resume
working.
On Thu, Sep 28, 2017 at 8:40 PM
Dear Wenxing,
If I guess correctly, you have time periods with very few messages, and that
is when the issue happens.
If that is the case:
try increasing
kafka.consumer.heartbeat.interval.ms
and
kafka.consumer.session.timeout.ms
(session.timeout.ms has to be greater than the heartbeat interval)
or lower
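For reference, those consumer properties are set on the Flume Kafka source and passed through to the underlying Kafka consumer via the kafka.consumer.* prefix. A minimal sketch of an agent configuration is below; the agent name a1, source name r1, and the timeout values are illustrative assumptions, not values from this thread:

```properties
# Hypothetical Flume agent config (names a1/r1 and values are placeholders).
# Properties under kafka.consumer.* are forwarded to the Kafka consumer.
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.kafka.bootstrap.servers = localhost:9092
a1.sources.r1.kafka.topics = mytopic
# Heartbeat must be sent more often than the session timeout expires,
# so session.timeout.ms is kept well above heartbeat.interval.ms here.
a1.sources.r1.kafka.consumer.heartbeat.interval.ms = 10000
a1.sources.r1.kafka.consumer.session.timeout.ms = 60000
```

Raising session.timeout.ms gives the consumer more slack before the broker considers it dead during quiet periods or transient network saturation.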