I agree with you. We are looking for a simple solution to move data from Kafka to Hadoop. I tried using Camus earlier (non-Avro), but the documentation is too sparse to make it work correctly, and we do not want to introduce another component into the solution. In the meantime, can the Kafka Hadoop Consumer…
com" ;
> "ao...@wikimedia.org" ; Felix GV ;
> Cosmin Lehene ; "users@kafka.apache.org"
>
> Sent: Tuesday, August 13, 2013 1:03 PM
> Subject: Re: Kafka/Hadoop consumers and producers
>
> > What installs all the kafka dependencies under /usr/sh
"dibyendu.bhattacha...@pearson.com" ;
> "camus_...@googlegroups.com" ;
> "ao...@wikimedia.org" ; Felix GV ;
> Cosmin Lehene ; "users@kafka.apache.org"
>
> Sent: Monday, August 12, 2013 7:00 PM
> Subject: Re: Kafka/Hadoop consumers and pr
_...@googlegroups.com" ;
> "ao...@wikimedia.org" ; Felix GV ;
> Cosmin Lehene ; "users@kafka.apache.org"
>
> Sent: Monday, August 12, 2013 7:00 PM
> Subject: Re: Kafka/Hadoop consumers and producers
>
> We've done a bit of work over at Wikimedia to
yendu.bhattacha...@pearson.com" ;
"camus_...@googlegroups.com" ;
"ao...@wikimedia.org" ; Felix GV ;
Cosmin Lehene ; "users@kafka.apache.org"
Sent: Monday, August 12, 2013 7:00 PM
Subject: Re: Kafka/Hadoop consumers and producers
We've done a bit of work over at
Sent: Monday, August 12, 2013 8:20 PM
Subject: Re: Kafka/Hadoop consumers and producers

Kam,
I am perfectly fine if you pick this up. After thinking about it for a while, we are going to up…
Sent: Saturday, August 10, 2013 3:30 PM
Subject: Re: Kafka/Hadoop consumers and producers
So guys, just to throw my 2 cents in:
1. We aren't deprecating anything. I just noticed that the Hadoop contrib package wasn't getting as much attention as it should.
2. Andrew or anyone: if there is anyone using the contrib package who would be willing to volunteer to kind of adopt it, that would…
Hi Ken,
I am also working on making Camus fit for non-Avro messages for our requirements.
I see you mentioned this patch
(https://github.com/linkedin/camus/commit/87917a2aea46da9d21c8f67129f6463af52f7aa8),
which adds support for a custom data writer in Camus. But this patch is not pulled into…
I just checked, and that patch is in the .8 branch. Thanks for working on backporting it, Andrew. We'd be happy to commit that work to master.
As for the Kafka contrib project vs. Camus, they are similar but not quite identical. Camus is intended to be a high-throughput ETL for bulk ingestion of Kafka…
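For anyone following along, the custom data writer hook that the patch adds is an interface along these lines. This is a rough sketch assuming the camus-api RecordWriterProvider interface from that patch; the method signatures are approximated from memory, so treat the camus-api sources as authoritative:

import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;

import com.linkedin.camus.coders.CamusWrapper;
import com.linkedin.camus.etl.IEtlKey;
import com.linkedin.camus.etl.RecordWriterProvider;

// Sketch of a plain-text writer provider for non-Avro Camus output.
// Interface and signatures approximated from the camus-api patch above.
public class TextRecordWriterProvider implements RecordWriterProvider {

    @Override
    public String getFilenameExtension() {
        return ".txt";
    }

    @Override
    public RecordWriter<IEtlKey, CamusWrapper> getDataRecordWriter(
            TaskAttemptContext context, String fileName, CamusWrapper data,
            FileOutputCommitter committer) throws IOException {

        // One file per task attempt, created where the committer expects it.
        Path path = new Path(committer.getWorkPath(), fileName + getFilenameExtension());
        final FSDataOutputStream out =
                path.getFileSystem(context.getConfiguration()).create(path);

        return new RecordWriter<IEtlKey, CamusWrapper>() {
            @Override
            public void write(IEtlKey key, CamusWrapper value) throws IOException {
                // Write each decoded record as one UTF-8 line.
                out.write(value.getRecord().toString().getBytes("UTF-8"));
                out.write('\n');
            }

            @Override
            public void close(TaskAttemptContext c) throws IOException {
                out.close();
            }
        };
    }
}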
For the last 6 months, we've been using this:
https://github.com/wikimedia-incubator/kafka-hadoop-consumer
In combination with this wrapper script:
https://github.com/wikimedia/kraken/blob/master/bin/kafka-hadoop-consume
It's not great, but it works!
On Aug 9, 2013, at 2:06 PM, Felix GV wrote:
I think the answer is that there is currently no strong community-backed
solution to consume non-Avro data from Kafka to HDFS.
A lot of people do it, but I think most people adapted and expanded the
contrib code to fit their needs.
--
Felix
On Fri, Aug 9, 2013 at 1:27 PM, Oleg Ruchovets wrote:
Yes, I am definitely interested in such capabilities. We are also using
Kafka 0.7.
Guys, I already asked, but nobody answered: what is the community using to
consume from Kafka to HDFS?
My assumption was that if Camus supports only Avro, it will not be suitable
for everyone, but people transfer from Kafka to Ha…
Dibyendu,
According to the pull request (https://github.com/linkedin/camus/pull/15), it
was merged into the camus-kafka-0.8 branch. I have not checked whether the code
was subsequently removed; however, at least one of the important files
from this patch
(camus-api/src/main/java/com/linkedin/camus/etl/Re…
Felix,
The Camus route is the direction I have headed, for a lot of the reasons
that you described. The only wrinkle is that we are still on Kafka 0.7.3, so I am
in the process of backporting this patch:
https://github.com/linkedin/camus/commit/87917a2aea46da9d21c8f67129f6463af52f7aa8
that is described…
The contrib code is simple and probably wouldn't require too much work to
fix, but it's a lot less robust than Camus, so you would ideally need to do
some work to make it solid against all edge cases, failure scenarios and
performance bottlenecks...
I would definitely recommend investing in Camus
We also have a need today to ETL from Kafka into Hadoop, and we do not currently
use Avro, nor do we have any plans to.
So is the official direction, based on this discussion, to ditch the Kafka
contrib code and direct people to use Camus without Avro as Ken described, or
are both solutions going to surv…
Hi Andrew,
Camus can be made to work without Avro. You will need to implement a message
decoder and a data writer. We need to add a better tutorial on how to do
this, but it isn't that difficult. If you decide to go down this path, you can
always ask questions on this list. I try to make…
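To make the "message decoder" half concrete, here is a minimal sketch of a plain-string decoder. It assumes the MessageDecoder and CamusWrapper classes from camus-api; the byte[] payload type and the protected props/topicName fields are approximations of the 0.8-era API, so check the camus-api sources before relying on it:

import java.io.UnsupportedEncodingException;
import java.util.Properties;

import com.linkedin.camus.coders.CamusWrapper;
import com.linkedin.camus.coders.MessageDecoder;

// Minimal non-Avro decoder sketch for Camus: treats each Kafka message
// as a UTF-8 string record. API details approximated from camus-api.
public class UTF8StringMessageDecoder extends MessageDecoder<byte[], String> {

    @Override
    public void init(Properties props, String topicName) {
        this.props = props;
        this.topicName = topicName;
    }

    @Override
    public CamusWrapper<String> decode(byte[] payload) {
        try {
            return new CamusWrapper<String>(new String(payload, "UTF-8"));
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e); // UTF-8 is always available
        }
    }
}

The decoder would then be selected in the Camus job properties (via the camus.message.decoder.class property, if I remember the config name correctly), with the data writer handling the on-HDFS output format.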
I am also interested in Hadoop+Kafka capabilities. I am using Kafka 0.7,
so my question is: what is the best way to consume content from Kafka and
write it to HDFS? At this time I only need the consuming functionality.
thanks
Oleg.
On Wed, Aug 7, 2013 at 7:33 PM, wrote:
Hi all,
Over at the Wikimedia Foundation, we're trying to figure out the best way to do
our ETL from Kafka into Hadoop. We don't currently use Avro, and I'm not sure
if we are going to. I came across this post.
If the plan is to remove the hadoop-consumer from Kafka contrib, do you think
we s…
Vadim,
The advantages of Camus compared to the contrib consumer are the following
(but perhaps I'm forgetting some):
- The ability to fetch all/many topics in one job (MapReduce can
otherwise introduce a lot of overhead for small topics).
- Smarter load balancing of topic partitions ac…
We can easily make a Camus configuration that would mimic the functionality
of the Hadoop consumer in contrib. It may require the addition of a
BinaryWritable decoder and a couple of minor code changes. As for the
producer, we don't have anything in Camus that does what it does. But
maybe we shoul…
I guess I am more concerned about the long term than the short term. I
think if you guys want to have all the Hadoop+Kafka stuff, then we should
move the producer there, and it sounds like it would be possible to get
similar functionality from the existing consumer code. I am not in a rush, I
just wan…
IMHO, I think Camus should probably be decoupled from Avro before the
simpler contribs are deleted.
We don't actually use the contribs, so I'm not saying this for our sake,
but it seems like the right thing to do to provide simple examples for this
type of stuff, no...?
--
Felix
On Wed, Jul 3,…
If the Hadoop consumer/producer use case will remain relevant for Kafka
(I assume it will), it would make sense to have the core components (Kafka
input/output format at least) as part of Kafka so that they could be built,
tested and versioned together to maintain compatibility.
This would also make…
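For illustration, the "Kafka input format" component being discussed here amounts to a Hadoop InputFormat whose splits are topic partitions. A hypothetical skeleton follows; only the Hadoop MapReduce API is real, while the class itself and its split strategy are invented placeholders, not Kafka's actual contrib code:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Hypothetical skeleton of a Kafka InputFormat, to illustrate the kind of
// core component discussed above. KafkaInputFormat and its split strategy
// are invented placeholders; only the Hadoop mapreduce API is real.
public abstract class KafkaInputFormat extends InputFormat<LongWritable, BytesWritable> {

    @Override
    public List<InputSplit> getSplits(JobContext context)
            throws IOException, InterruptedException {
        // One split per topic partition: each map task would consume one
        // partition from its last committed offset up to the current head.
        List<InputSplit> splits = new ArrayList<InputSplit>();
        // ... query the brokers for partitions/offsets and add one split each ...
        return splits;
    }

    // The record reader would wrap a simple consumer and emit
    // (offset, message bytes) pairs until the split's end offset.
    @Override
    public abstract RecordReader<LongWritable, BytesWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException;
}

Keeping a class like this inside the Kafka tree is what would let it be built and versioned against each Kafka release, which is the compatibility argument made above.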
Jay,
What is the difference between this project and Camus? What are the advantages
of using it for loading log entries from Kafka into Hadoop?
Vadim
Sent from my iPhone
On Jul 2, 2013, at 5:01 PM, Jay Kreps wrote:
We currently have a contrib package for consuming and producing messages
from MapReduce
(https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tree;f=contrib;h=e53e1fb34893e733b10ff27e79e6a1dcbb8d7ab0;hb=HEAD).
We keep running into problems (e.g. KAFKA-946) that are basically due to
the fact tha…