Hi,
I would like some help/information on which client versions are compatible
with which broker versions in Kafka.
Some table like this would be good:

                 server
client      0.8     0.9     0.10    0.11
0.8         yes     ?       ?       ?
0.9         ?       yes     ?       ?
0.10        ?       ?       ?       ?
Have you seen this: http://kafka.apache.org/documentation.html#upgrade
Starting with version 0.10.2, Java clients (producer and consumer)
have acquired the ability to communicate with older brokers. Version
0.11.0 clients can talk to version 0.10.0 or newer brokers. However,
if your brokers ar
Hi,
This gives me some information but still not the complete picture.
It says:
Starting with 0.10.2, Java clients have acquired the ability to communicate
with older brokers.
It also says:
Version 0.11.0 brokers support 0.8.x and newer clients.
The question is: does a 0.10.2 broker support 0.8.x clients?
This may so
All broker versions support all older client versions
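As a rough sketch of the matrix asked for above (one reading of the upgrade
notes; please double-check them before relying on this):

client \ broker    0.8.x   0.9.x   0.10.x   0.11.x
0.8.x              yes     yes     yes      yes
0.9.x              no      yes     yes      yes
0.10.x             no      no      yes      yes
0.11.x             no      no      yes*     yes

* 0.10.2 and 0.11.0 clients need brokers on 0.10.0 or newer; before 0.10.2,
clients had to be the same version as the broker or older.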
On Tue, Jul 18, 2017 at 10:15 AM, Sachin Mittal wrote:
> Hi,
> This gives me some information but still not the complete picture.
>
> It says:
> 0.10.2, Java clients have acquired the ability to communicate with older
> brokers.
>
> It also s
OK.
Just a doubt I have: my broker is 0.10.2 and the producer, also on the same
version, writes to a topic.
Then I have a 0.8.2 client trying to fetch these messages, and what I see is
that all messages are getting dropped.
However, since this is an older client, I tried to fetc
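One broker-side setting that may be relevant here (an assumption, not verified
against this setup) is log.message.format.version: the upgrade notes describe
how the on-disk message format changed in 0.10.0 and how the broker
down-converts messages for older consumers, so in principle a 0.8.2 client
should still see the data.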
Hi Everyone,
I personally found that the 0.8.x clients do not work with 0.10.0. We
upgraded our clients (KafkaSpout and custom consumers) to 0.9.0.1 and then
Kafka produce/consume worked fine.
--John
On Tue, Jul 18, 2017 at 6:36 AM, Sachin Mittal wrote:
> OK.
>
> Just a doubt I have is that my
Hi all,
0.8.x clients should work with 0.9.x, 0.10.x and 0.11.x brokers. We have
system tests for all the relevant combinations. One thing to be careful
about is that Scala consumers and Java consumers store offsets and group
management information differently and the Java consumer was only
introd
This is a really common issue with Windows. You can try turning off
backups/virus scanning of that folder; sometimes this relieves the issue. (I
assume there is a file attribute you can alter.) It's probably the case that
Kafka itself has the file open, as Linux does not have issues if you mo
Hi,
Sorry for the delay, I couldn't get to this earlier. I do understand your
point perfectly.
I just have a different perspective on what is going on. The most crucial
piece of abstraction, the KTable, is falling apart, and that materializes (no
pun intended) into many problems.
1. T
[2017-07-15 08:45:19,071] WARN [ReplicaFetcherThread-0-3], Error in fetch
kafka.server.ReplicaFetcherThread$FetchRequest@60192273
(kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to 3 was disconnected before the response was
read
at kafka.utils.NetworkClientBlockingOps$$anon
I saw this recently as well. This could result from either really long GC
pauses or slow ZooKeeper responses. The former can result from too large a
heap or a sub-optimal GC algorithm/configuration.
--John
On Tue, Jul 18, 2017 at 3:18 AM, Mackey star wrote:
> [2017-07-15 08:45:19,071]
Hi,
This is not really a crash; it just means that the connection to the leader
was dropped. The follower will try to reconnect periodically. If the leader is
really down, the Controller will elect a new leader and the follower will stop
trying to reconnect to the old leader.
Hope this helps.
Is this from 0.10.2.1? I have been running on both Windows and Linux but
cannot see any issues.
Anyone else?
On Tue, 18 Jul 2017 at 3:31 pm, John Yost wrote:
> I saw this recently as well. This could result from either really long GC
> pauses or slow Zookeeper responses. The former can result fr
IIUC, we are having similar issues with 0.10.2.1.
I already asked on another thread.
On 18/07/17 16:49, M. Manna wrote:
Is this from 0.10.2.1? I have been running on both Windows and Linux but
cannot see any issues.
Anyone else?
On Tue, 18 Jul 2017 at 3:31 pm, John Yost wrote:
I saw this rec
Hi
On a 3-broker cluster, when one of the brokers comes back after a restart, a
group rebalance happens on our 2 consumers, which makes them restart consuming
from an old offset which is not the earliest. Looking at the consumer offsets
through the kafka tools, the commits look good while running, but on rebalan
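A sketch of one way to make the commits explicit around rebalances
(illustrative only; the group id and topic name are placeholders):

import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RebalanceSafeConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");            // placeholder group id
        props.put("enable.auto.commit", "false");     // commit explicitly below
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("test"), new ConsumerRebalanceListener() {
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // Commit what has been processed so far before the partitions move,
                // so the next owner resumes from the committed position.
                consumer.commitSync();
            }
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Nothing special: the consumer resumes from the last committed offset.
            }
        });

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("%s-%d@%d: %s%n", record.topic(), record.partition(),
                        record.offset(), record.value());
            }
            consumer.commitSync();  // commit only after the batch has been processed
        }
    }
}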
Hi
One of our Kafka brokers was running out of disk space, and when we checked
the file sizes in the Kafka log dir we observed the following:
$ du -h . --max-depth=2 | grep '__consumer_offsets'
4.0K    ./kafka-logs/__consumer_offsets-16
4.0K    ./kafka-logs/__consumer_offsets-40
35G     ./kafka-logs
Hi,
I agree with some of Jan's points here. Interactive queries are a
nice-to-have, but not worth sacrificing clean interfaces over.
It's not the main use case of Kafka Streams and implementing it via a
getQueryHandle on KTables means the related logic doesn't spread everywhere but
instead trul
Dear Team
I am trying to implement exactly-once semantics for one of our bank use cases.
I have written code referring to the Javadoc on the Kafka website.
The program hangs while calling producer.initTransactions() and doesn't
proceed to the next steps.
I have asked this question here:
https://stackoverflow.com/questions/45
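A minimal sketch of the transactional-producer pattern from the Javadoc (the
topic name and transactional.id are placeholders, not taken from the actual
application):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.ProducerFencedException;

public class TransactionalProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("transactional.id", "bank-producer-1");  // placeholder; must be unique per producer instance
        props.put("enable.idempotence", "true");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();  // registers the transactional.id with the transaction coordinator
        try {
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("test", "key", "value"));
            producer.commitTransaction();
        } catch (ProducerFencedException e) {
            producer.close();             // fatal: another producer took over this transactional.id
        } catch (KafkaException e) {
            producer.abortTransaction();  // abortable error: roll back and retry if desired
        }
        producer.close();
    }
}

One guess (not verified): if the cluster has fewer brokers than the default
transaction.state.log.replication.factor of 3, the __transaction_state topic
cannot be created and initTransactions() keeps waiting, so lowering that
setting and transaction.state.log.min.isr on a test cluster might help.
Transactions also require 0.11.0 or newer brokers.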
This is similar to a problem I am also grappling with. We store the processed
offset for each partition in a state store. After restarts we see that
sometimes the start offset that Kafka Streams uses is a few thousand to a
couple of million messages behind per partition. To compound it, this is not
repeatab
Hi -
I have a Kafka Streams application that generates Avro records in a topic,
which is being read by a Kafka Connect process that uses the HDFS Sink
connector. The topic has around 1.6 million messages. The Kafka Connect
command is as follows:
bin/connect-standalone
> etc/schema-registry/conne
Hi,
This issue was reported earlier in this post, for Kafka 0.9:
https://stackoverflow.com/questions/41177614/kafka-0-9-java-consumer-skipping-offsets-during-application-restart
However, I see the same issue with Kafka 0.10 as well. In summary:
1. We start our consumer app, and in the eve
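A direction worth testing (an assumption, not a confirmed fix) is to disable
auto commit and only commit after records are actually processed. A minimal
sketch of the relevant consumer settings (group and topic names are
placeholders):

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "my-app");             // placeholder
props.put("enable.auto.commit", "false");    // no background auto-commit
props.put("auto.offset.reset", "earliest");  // with no valid committed offset, start from the oldest retained data
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("my-topic"));   // placeholder topic
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    // ... process records here ...
    consumer.commitSync();   // the committed position never runs ahead of processing
}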
Hello,
I am having trouble getting the data from old offsets. I'm using version
0.10.2.1, and I would appreciate any assistance recovering this data.
This is my consumer class:
String topicName = "test";
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
Hi Jason,
I updated the KIP based on your earlier suggestions:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand
The only thing I am wondering at this point is whether it's worth having
a `--describe --offsets` option that be
Hi Debasish,
> flush.size=3
this means that every 3 messages in that topic end up in a new HDFS file,
which is probably why you end up with so many files that ls hurts.
You should flush bigger batches, or flush after a high enough time interval.
> tasks.max=1
Unless you have a single partition topic, you
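To put rough numbers on it: with ~1.6 million messages and flush.size=3, you
end up with on the order of 1,600,000 / 3 ≈ 533,000 files in HDFS, which is
exactly the kind of file count that makes a plain ls struggle. Something like
flush.size=100000 (an illustrative value, not a recommendation) would bring
that down to roughly 16 files, and rotate.schedule.interval.ms then bounds how
long a not-yet-full file stays open.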
Look into these 2 props:
rotate.schedule.interval.ms
flush.size
On Tue, Jul 18, 2017 at 2:46 PM, Abdoulaye Diallo
wrote:
> Hi Debasish,
>
>
> > flush.size=3
> this means every 3 messages in that topic will end up in its own HDFS
> file, which is probably why you end up with so many files that ls
It's possible that the log-cleaning thread has crashed. That is the thread that
implements log compaction.
Look in the log-cleaner.log file in the directory where the broker writes its
own debug logs to see if there is any indication that it has crashed (error
messages, stack traces, etc).
What version of kafka are you u
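If the cleaner thread did die, it is not restarted automatically on these
versions as far as I know, so a broker restart is the usual way to get
compaction going again; it is also worth confirming that log.cleaner.enable is
set to true, since very old releases (before 0.9.0.1) shipped with it off by
default.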
Hi,
I was integrating Kafka with Spark using DirectStream. When my authentication
fails, the stream just blocks: no logs, no exceptions are thrown. Could
someone help me address this situation?
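For reference, the kind of settings involved (an illustrative sketch with
placeholder values, assuming a SASL/PLAIN listener and clients new enough,
0.10.2+, to support sasl.jaas.config), passed in via the kafkaParams map:

Map<String, Object> kafkaParams = new HashMap<String, Object>();
kafkaParams.put("bootstrap.servers", "localhost:9092");
kafkaParams.put("security.protocol", "SASL_PLAINTEXT");
kafkaParams.put("sasl.mechanism", "PLAIN");
kafkaParams.put("sasl.jaas.config",
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
        + "username=\"user\" password=\"secret\";");   // placeholder credentials

Raising the log level for org.apache.kafka.clients to DEBUG should also make
authentication failures visible instead of the stream silently blocking.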
Thanks .. it worked!
On Wed, Jul 19, 2017 at 3:17 AM, Abdoulaye Diallo
wrote:
> Look into these 2 props:
> rotate.schedule.interval.ms
> flush.size
>
> On Tue, Jul 18, 2017 at 2:46 PM, Abdoulaye Diallo
> wrote:
>
>> Hi Debasish,
>>
>>
>> > flush.size=3
>> this means every 3 messages in that top