Hi,
I have two doubts about changes across the different Kafka versions available:
*Doubt 1:*
I am using Kafka from the version kafka-0.7.1-incubating-src.tgz.
The latest stable version is kafka_2.8.0-0.8.0.tar.gz.
But there are property-name differences in the config/server.properties file of
different versions.
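One concrete example of such a rename (quoted from memory of the two releases' default configs, so verify against your own copies): the broker id and log directory properties changed between 0.7 and 0.8.

```
# kafka-0.7.1 config/server.properties
brokerid=0
log.dir=/tmp/kafka-logs

# kafka_2.8.0-0.8.0 config/server.properties
broker.id=0
log.dirs=/tmp/kafka-logs
```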
Monika, changes to Kafka are tracked in JIRA, and you will almost always
see this in the commit log (so you can find the file, review its history, do
a git blame, and often (if not always) see the JIRA ticket number that
caused the change... then go to JIRA and see why).
e.g., the answer to your first question
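The workflow described above can be demonstrated end to end. The snippet below fabricates a tiny stand-in repo for the Kafka source tree (the file path matches, but the commit messages and ticket number KAFKA-0123 are illustrative only), then shows how `git log` and `git blame` surface the ticket behind a config change:

```shell
set -e
# Build a throwaway repo standing in for a Kafka checkout.
dir=$(mktemp -d)
cd "$dir"
git init -q .
git config user.email demo@example.com
git config user.name demo
mkdir config
echo "brokerid=0" > config/server.properties
git add . && git commit -qm "KAFKA-0000: initial server config"
# A later commit renames the property, citing its JIRA ticket.
echo "broker.id=0" > config/server.properties
git add . && git commit -qm "KAFKA-0123: rename brokerid to broker.id"
# The path-scoped log (and blame) now reveals which ticket made the change:
git log --oneline -- config/server.properties
git blame -s config/server.properties
```

Running this against a real Kafka checkout, the same two commands point you at the actual ticket numbers to look up in JIRA.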
Hi Joel - The kind of error I am thinking about is when there is a
networking issue where the consumer is completely cut off from the
cluster. In that scenario, the consuming application has no way of knowing
whether there is an actual problem or there are simply no messages to
consume. In the case o
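The ambiguity described above can be made concrete with a small, framework-agnostic sketch (none of this is Kafka API; the class name and threshold are assumptions for illustration). A consumer-side watchdog can only observe silence; it cannot distinguish a partitioned network from an idle topic:

```java
import java.util.concurrent.atomic.AtomicLong;

/** Liveness watchdog for a consumer thread. It records when the last
 *  message arrived and flags prolonged silence. Note the limitation the
 *  thread above describes: "silent" covers both "topic is empty" and
 *  "we are cut off from the cluster" -- the watchdog cannot tell which. */
class ConsumerWatchdog {
    private final AtomicLong lastMessageMillis;
    private final long silenceThresholdMillis;

    ConsumerWatchdog(long silenceThresholdMillis, long nowMillis) {
        this.silenceThresholdMillis = silenceThresholdMillis;
        this.lastMessageMillis = new AtomicLong(nowMillis);
    }

    /** Call from the consumer loop on every message received. */
    void onMessage(long nowMillis) {
        lastMessageMillis.set(nowMillis);
    }

    /** True when no message has arrived within the threshold. */
    boolean isSilent(long nowMillis) {
        return nowMillis - lastMessageMillis.get() > silenceThresholdMillis;
    }
}
```

Distinguishing the two cases requires an out-of-band signal, e.g. a separate metadata request to the brokers, rather than anything observable from the message stream itself.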
Does not look like it has been updated for 0.8, but you may want to
check with the author directly.
On Tue, Jan 07, 2014 at 08:38:04PM -0500, Ray Rodriguez wrote:
> Will the current kafka-s3-consumer (
> https://github.com/razvan/kafka-s3-consumer) work with 0.8.0?
>
> Ray Rodriguez
> Medidata So
Yes, it's happening continuously at the moment (although I'm expecting the
consumer to catch up soon).
It seemed to start happening after I refactored the consumer app to use
multiple consumer connectors in the same process (each one has a separate
topic filter, so there should be no overlap between
Thinking/typing out loud here - not sure if this is the problem, but it could
be, so I figure I'll throw it out there:
the ZookeeperConsumerConnector has a messageStreamCreated atomic boolean
that stops more than one message stream by filter being created on a single
connector at once...
do you have separate
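The guard Joe describes can be modelled in isolation (this is a standalone sketch, not the actual Kafka source; the field name matches his description, but the rest is illustrative): a connector that permits only one by-filter stream creation per instance, which is why each topic filter needs its own connector.

```java
import java.util.concurrent.atomic.AtomicBoolean;

/** Standalone model of the one-stream-per-connector guard. */
class ConnectorModel {
    private final AtomicBoolean messageStreamCreated = new AtomicBoolean(false);

    void createMessageStreamsByFilter(String topicFilter) {
        // compareAndSet flips false -> true exactly once per instance;
        // any second call on the same connector fails here.
        if (!messageStreamCreated.compareAndSet(false, true))
            throw new IllegalStateException(
                "Each connector can create message streams only once");
        // ... a real connector would register the filter and return streams ...
    }
}
```

Under this model, the pattern the original poster describes (one connector per filter, each with its own stream and thread pool) is exactly the arrangement the guard forces.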
Thank you for your answers.
@Guozhang: I can't find the "ack value" in my console...
@Marc: I'm testing some things on 0.8 before migrating from 0.7 to 0.8; that's
why I'm killing it instead of doing a controlled shutdown.
@Jun: I create it using this command: *bin/kafka-create-topic.sh
--zookeeper localhost:
Hi there,
in case anyone is interested, we created a Node.js binding using librdkafka [1].
Publish only so far, for kafka_2.8.0-0.8.0-beta1+.
For more info, check out the GitHub page [2], and for an example of how to use
the library, see [3].
Happy to answer questions, or accept contributions.
Thanks
Joe,
I'm creating separate connectors, and creating separate streams, with a
separate thread pool to process them, for each connector.
This appears to be working well (e.g. each connector seems to be correctly
processing data).
The only difference is the extra ERROR log message I'm seeing on the
Hi,
I would like to check whether other people are seeing duplicate records with
Kafka 0.7. I read the JIRAs, and I believe that duplicates are still possible
when using message compression on Kafka 0.7. I'm seeing duplicate records in
the range of 6-13%. Is this normal?
If you're using K
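Because redelivery in this range is plausible with 0.7 compression (a retry or rebalance can redeliver a whole compressed message set), the usual mitigation is to make consumption idempotent. A minimal sketch, assuming the application can attach its own message id (the class is hypothetical, not Kafka API):

```java
import java.util.HashSet;
import java.util.Set;

/** Consumer-side deduplication keyed on an application-level message id. */
class Deduplicator {
    private final Set<String> seen = new HashSet<>();

    /** Returns true the first time an id is observed; false on duplicates.
     *  A production version would bound memory, e.g. with an LRU window
     *  or a time-based expiry, rather than an unbounded set. */
    boolean firstDelivery(String messageId) {
        return seen.add(messageId);
    }
}
```

With this in front of the processing step, a 6-13% redelivery rate costs only wasted fetches, not duplicated side effects.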