The consumer has a config property called consumer.timeout.ms. If you set it
to a positive integer, the consumer throws a timeout exception when no
message is available for consumption within the specified timeout.
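
For illustration, a minimal sketch of catching that exception, assuming the
0.8 high-level Java consumer API (in 0.7 the property keys differ, e.g.
zk.connect/groupid); the hosts, group id, and topic below are hypothetical:

import java.util.Collections;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class TimeoutExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");  // hypothetical ensemble
        props.put("group.id", "example-group");            // hypothetical group
        props.put("consumer.timeout.ms", "5000");          // throw if idle for 5s

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        KafkaStream<byte[], byte[]> stream = connector
            .createMessageStreams(Collections.singletonMap("example-topic", 1))
            .get("example-topic").get(0);

        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        try {
            while (it.hasNext()) {
                byte[] payload = it.next().message();
                // process payload ...
            }
        } catch (ConsumerTimeoutException e) {
            // no message arrived within consumer.timeout.ms
        } finally {
            connector.shutdown();
        }
    }
}
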
Thanks,
Jun
On Fri, Aug 9, 2013 at 9:25 AM, Jan Rudert wrote:
Hi Ken,
I am also working on making Camus handle non-Avro messages for our
requirements.
I see you mentioned this patch
(https://github.com/linkedin/camus/commit/87917a2aea46da9d21c8f67129f6463af52f7aa8)
which supports a custom data writer for Camus. But this patch is not pulled into
Have you read the docs? They are well written. It's all there, including the
paths.
Philip
On Aug 9, 2013, at 3:24 PM, Vadim Keylis wrote:
> I am trying to set up a Kafka service and connect it to a ZooKeeper ensemble
> that would be shared with other projects. Can someone advise how to configure a namespace
I am trying to set up a Kafka service and connect it to a ZooKeeper ensemble
that would be shared with other projects. Can someone advise how to configure a
namespace in Kafka and ZooKeeper?
Thanks so much
Sent from my iPad
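
One common approach, sketched below as an assumption rather than from the
docs Philip mentions: append a chroot path to the ZooKeeper connection string
so all of Kafka's znodes live under their own namespace. The hosts, group id,
and "/kafka" path here are hypothetical.

import java.util.Properties;

public class SharedZkConfig {
    public static Properties consumerProps() {
        // The trailing "/kafka" is a ZooKeeper chroot: Kafka's znodes are
        // created under /kafka instead of the ensemble's root, keeping them
        // separate from other projects sharing the same ZooKeeper cluster.
        // Brokers must use the same chroot in their zookeeper.connect setting
        // (zk.connect in 0.7), and the path may need to be created beforehand.
        Properties props = new Properties();
        props.put("zookeeper.connect",
                  "zk1.example.com:2181,zk2.example.com:2181/kafka");
        props.put("group.id", "example-group");
        return props;
    }
}
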
I just checked and that patch is in the 0.8 branch. Thanks for working on
backporting it, Andrew. We'd be happy to commit that work to master.
As for the kafka contrib project vs. Camus, they are similar but not quite
identical. Camus is intended to be a high-throughput ETL for bulk
ingestion of Kaf
For the last 6 months, we've been using this:
https://github.com/wikimedia-incubator/kafka-hadoop-consumer
In combination with this wrapper script:
https://github.com/wikimedia/kraken/blob/master/bin/kafka-hadoop-consume
It's not great, but it works!
On Aug 9, 2013, at 2:06 PM, Felix GV wrote:
I think the answer is that there is currently no strong community-backed
solution to consume non-Avro data from Kafka to HDFS.
A lot of people do it, but I think most people adapted and expanded the
contrib code to fit their needs.
--
Felix
On Fri, Aug 9, 2013 at 1:27 PM, Oleg Ruchovets wrote:
Yes, I am definitely interested in such capabilities. We are also using
Kafka 0.7.
Guys, I already asked but nobody answered: what is the community using to
consume from Kafka to HDFS?
My assumption was that if Camus supports only Avro it will not be suitable
for everyone, but people transfer from Kafka to ha
Hi,
I have a consumer application with one message stream per topic and
one thread per stream.
I will call commitOffsets() when a global shared message counter
reaches a limit.
I think I need to make sure that no thread is consuming while I call
commitOffsets() to ensure that no concu
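
One way to arrange that, sketched here under the assumption of the 0.8
high-level consumer (the CommitCoordinator class and the commit threshold are
made up for illustration): worker threads hold a shared read lock while they
fetch and handle a message, and the committing thread takes the write lock so
commitOffsets() never overlaps with in-flight processing.

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

import kafka.javaapi.consumer.ConsumerConnector;

/**
 * Hypothetical coordinator for N stream threads sharing one ConsumerConnector.
 * Each thread wraps the fetch-and-process of a single message in
 * processOneMessage(); the thread whose message hits the counter limit takes
 * the write lock, so commitOffsets() only runs while no other thread is inside
 * the guarded section.
 */
public class CommitCoordinator {
    private final ConsumerConnector connector;
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final AtomicLong counter = new AtomicLong();
    private final long commitEvery;

    public CommitCoordinator(ConsumerConnector connector, long commitEvery) {
        this.connector = connector;
        this.commitEvery = commitEvery;
    }

    public void processOneMessage(Runnable fetchAndHandle) {
        lock.readLock().lock();
        try {
            fetchAndHandle.run();          // it.next() + handling goes in here
        } finally {
            lock.readLock().unlock();
        }
        if (counter.incrementAndGet() % commitEvery == 0) {
            lock.writeLock().lock();
            try {
                connector.commitOffsets(); // no thread is in the guarded section now
            } finally {
                lock.writeLock().unlock();
            }
        }
    }
}

Since the high-level consumer's commitOffsets() commits whatever the iterators
have already returned, keeping the it.next() call inside the guarded section
avoids committing a message that a paused thread has fetched but not yet handled.
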
Dibyendu,
According to the pull request: https://github.com/linkedin/camus/pull/15 it
was merged into the camus-kafka-0.8 branch. I have not checked if the code
was subsequently removed; however, at least one of the important files
from this patch
(camus-api/src/main/java/com/linkedin/camus/etl/Re