We will be using Kafka to transport access logs from frontend cache servers to 
HDFS.  But!  We aren't doing this in production yet, so I wouldn't quite yet 
feel comfortable saying we are powered by Kafka.  I hope to get this into 
production in a few weeks*, and at that time we'll write a nice little 
blogpost describing the whole thing.

-Andrew

*a few weeks could really mean anything in busy nonprofit time :)

On Nov 6, 2013, at 9:37 AM, Joe Stein <joe.st...@stealth.ly> wrote:

> Yes, thank you Andrew this is great stuff.
> 
> May I ask how Wikimedia is using Kafka so I can update the powered by
> https://cwiki.apache.org/confluence/display/KAFKA/Powered+By
> 
> /*******************************************
> Joe Stein
> Founder, Principal Consultant
> Big Data Open Source Security LLC
> http://www.stealth.ly
> Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
> ********************************************/
> 
> 
> On Wed, Nov 6, 2013 at 9:29 AM, Neha Narkhede <neha.narkh...@gmail.com> wrote:
> 
>> Cool, thanks for sharing this!
>> 
>> -Neha
>> On Nov 6, 2013 6:16 AM, "Andrew Otto" <o...@wikimedia.org> wrote:
>> 
>>> Hi,
>>> 
>>> I just got jmxtrans set up with Kafka 0.8 over at Wikimedia.  We're using
>>> puppet, and the https://github.com/wikimedia/puppet-kafka module to
>>> install and set up Kafka.  This module now comes with a
>>> kafka::server::jmxtrans class which will automatically set up jmxtrans
>>> with some nice Kafka queries.
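>>>
>>> Enabling it is just a matter of including the class on your broker
>>> nodes.  A minimal sketch (the node name here is made up, and I'm
>>> assuming you already include the module's base kafka::server class):
>>>
>>> ```
>>> node 'kafka-broker01.example.org' {
>>>   include kafka::server
>>>   include kafka::server::jmxtrans
>>> }
>>> ```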
>>> 
>>> If you aren't using puppet, you can use the rendered example jmxtrans
>>> JSON config here:
>>>
>>> https://github.com/wikimedia/puppet-kafka/blob/master/kafka-jmxtrans.json.md
>>> 
>>> The exact queries will probably change as we get more operational
>>> experience and think of more things we want to monitor.
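>>>
>>> To give a rough idea of the shape of those queries, a single jmxtrans
>>> server entry looks something like the following sketch.  The host names,
>>> JMX port, and the exact MBean/attribute names here are illustrative
>>> only; the real ones are in the rendered JSON linked above:
>>>
>>> ```
>>> {
>>>   "servers": [{
>>>     "host": "kafka-broker01.example.org",
>>>     "port": "9999",
>>>     "queries": [{
>>>       "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
>>>       "attr": ["Count", "MeanRate"],
>>>       "outputWriters": [{
>>>         "@class": "com.googlecode.jmxtrans.model.output.GraphiteWriter",
>>>         "settings": { "host": "graphite.example.org", "port": 2003 }
>>>       }]
>>>     }]
>>>   }]
>>> }
>>> ```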
>>> 
>>> Hope this is useful to someone!
>>> 
>>> -Andrew Otto
>>> 
>>> 
>>> 
>> 
