memory requirement for kafka - how to estimate

2014-02-26 Thread Arjun
Hi, can I know what I should consider while calculating the memory requirements for the Kafka broker? I tried to search the net about this, but could not find anything. If anyone can suggest or point to a link, that would be helpful. Thanks, Arjun Narasimha Kota
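One common rule of thumb (general Kafka operations guidance, not an official formula; every number below is purely hypothetical) is that the broker's JVM heap stays modest, and the dominant memory consumer is the OS page cache, which you size to hold the data written during the window you want consumers to be able to read without hitting disk:

```java
// Back-of-the-envelope sizing sketch. Assumptions: broker heap is small
// and constant; page cache dominates; consumers should be servable from
// cache while lagging up to cacheWindowSec behind the head of the log.
public class KafkaMemoryEstimate {
    public static void main(String[] args) {
        double writeMBPerSec = 50.0;  // hypothetical aggregate write rate
        int cacheWindowSec = 30;      // tolerated consumer lag served from cache
        double pageCacheMB = writeMBPerSec * cacheWindowSec;
        System.out.println("page cache needed (MB): " + pageCacheMB);
    }
}
```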

Puppet module for deploying Kafka released

2014-02-26 Thread Michael G. Noll
Hi everyone, I have released a Puppet module to deploy Kafka 0.8 in case anyone is interested. The module uses Puppet parameterized classes and as such decouples code (Puppet manifests) from configuration data -- hence you can use Puppet Hiera to configure the way Kafka is deployed without having

Re: Puppet module for deploying Kafka released

2014-02-26 Thread pushkar priyadarshi
I have been using one from here: https://github.com/whisklabs/puppet-kafka but had to fix a few small problems, e.g. when it starts Kafka as an upstart service it does not provide a log path, so Kafka logs never appear since, as a service, they don't have a default terminal. Thanks for sharing. Will start usi

Question about "00000000.index" file on kafka

2014-02-26 Thread 손정민
Hello, I'm a Kafka user. As far as I know, the "00000000.index" file is created by Kafka while it is running. When I open the file, there seems to be no way to understand what it contains. Additionally, I don't know the spec of the file well. Briefly described, I'd like to know what the fil

Re: Puppet module for deploying Kafka released

2014-02-26 Thread Neha Narkhede
Thanks for sharing! I added this to our ecosystem page - https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem On Wed, Feb 26, 2014 at 2:36 AM, pushkar priyadarshi < priyadarshi.push...@gmail.com> wrote: > i have been using one from here. > > https://github.com/whisklabs/puppet-kafka > but

Re: Question about "00000000.index" file on kafka

2014-02-26 Thread Neha Narkhede
The doc at the beginning of OffsetIndex.scala explains the format of an index file. To inspect the contents of an index file, you can use the DumpLogSegments tool. On Tue, Feb 25, 2014 at 11:42 PM, 손정민 wrote:
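For reference, a sketch of invoking that tool in a 0.8 install (the log directory, topic name, and segment file name here are made-up examples):

```shell
# Hypothetical paths; requires a Kafka 0.8 installation on this host.
# Prints the (relative offset, file position) entries of the index.
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /var/kafka-logs/my-topic-0/00000000000000000000.index
```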

Re: Kafka-0.8 Log4j Appender

2014-02-26 Thread Neha Narkhede
I think this is a bug with the KafkaLog4jAppender, triggered when a message send logs an error, which in turn tries to resend the message and gets into an infinite loop. Could you file a JIRA? On Tue, Feb 25, 2014 at 9:51 PM, 김동경 wrote: > Dear all. > > Are there anyone who tried runnin

KafkaStream hasNext() blocking and I'm not sure why

2014-02-26 Thread Josh
Hi all, I have created 6 Kafka topics, each with 2 partitions, and now I want to consume from all partitions in a Java app. I have created an Executors.newScheduledThreadPool(12) for this, and then submit my KafkaConsumer implementations to this thread pool. In KafkaConsumer, I do: ConsumerIterato

Re: Consumer group ID for high level consumer

2014-02-26 Thread Martin Kleppmann
Hi Binita, The consumer group (group.id) is a mechanism for sharing the load of consuming a high-volume topic between multiple consumers. If you don't set a group ID, each consumer consumes all the partitions of a topic. If you set several consumers to the same group ID, the partitions of the t
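To make that concrete, a minimal 0.8 high-level consumer configuration might look like the fragment below (the ZooKeeper address and group name are placeholders):

```properties
zookeeper.connect=localhost:2181
# Consumers that share this group.id split the topic's partitions among
# themselves; consumers with different group.ids each receive every message.
group.id=my-consumer-group
```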

Re: Puppet module for deploying Kafka released

2014-02-26 Thread Andrew Otto
Oh so many puppet modules! https://github.com/wikimedia/puppet-kafka This one requires a Kafka .deb built from https://github.com/wikimedia/operations-debs-kafka/tree/debian/debian, which can be found prebuilt here: http://apt.wikimedia.org/wikimedia/pool/universe/k/kafka/ :) On Feb 26,

Re: How to add scala libraries to path

2014-02-26 Thread Jun Rao
This just means that you exceeded the max request size. You can change the config on the broker. Thanks, Jun On Tue, Feb 25, 2014 at 10:23 PM, David Montgomery < davidmontgom...@gmail.com> wrote: > In the kafka logs I get this entry from the client... is this a client or > kafka server issue? i
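For what it's worth, the relevant 0.8 broker-side settings in server.properties look roughly like this (the values are illustrative, not recommendations):

```properties
# Largest message (or compressed message set) the broker will accept.
message.max.bytes=2000000
# Replica fetchers must be able to fetch at least one max-size message.
replica.fetch.max.bytes=2097152
```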

Re: KafkaStream hasNext() blocking and I'm not sure why

2014-02-26 Thread Jun Rao
Could it be that one of those threads died? Java threadpool is notorious for eating exceptions. So, you probably want to try/catch the whole iteration block. Thanks, Jun On Wed, Feb 26, 2014 at 3:56 AM, Josh wrote: > Hi all, > > I have created 6 Kafka topics each with 2 partitions, and now I
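Jun's point about thread pools eating exceptions can be demonstrated with plain JDK classes, no Kafka involved (the class name and messages below are invented for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SwallowedExceptions {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1);

        // Without a try/catch, this failure vanishes: submit() stores the
        // exception in the returned Future, and nothing is ever printed.
        pool.submit((Runnable) () -> {
            throw new RuntimeException("silently swallowed");
        });

        // Wrapping the entire task body in try/catch makes the death
        // visible, which is the suggested pattern for an iteration loop.
        pool.submit(() -> {
            try {
                throw new RuntimeException("consumer loop died");
            } catch (Exception e) {
                System.out.println("caught: " + e.getMessage());
            }
        });

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```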

Producer fails when old brokers are replaced by new

2014-02-26 Thread Christofer Hedbrandh
Hi all, I ran into a problem with the Kafka producer when attempting to replace all the nodes in a 0.8.0 Beta1 Release Kafka cluster with 0.8.0 Release nodes. I started a producer/consumer test program to measure the cluster's performance during the process, I added new brokers, I ran kafka-reassi

Re: Puppet module for deploying Kafka released

2014-02-26 Thread Michael G. Noll
Andrew, I am actually referencing the WM puppet module (notably because it targets Debian whereas ours is currently focused on RHEL). I really like that your module already supports Kafka mirroring and jmxtrans. :-) --Michael On 02/26/2014 03:41 PM, Andrew Otto wrote: > Oh so many puppet modul

Re: Producer fails when old brokers are replaced by new

2014-02-26 Thread Guozhang Wang
Hello Chris, The metadata.broker.list, once read in at startup time, will not be changed. In other words, during the lifetime of a producer it has two lists of brokers: 1. The current brokers in the cluster, which are returned in the metadata request response; this list is dynamic. 2. The broker list
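In 0.8 producer config terms, the two lists Guozhang describes correspond roughly to these settings (the broker hostnames are placeholders):

```properties
# Static bootstrap list, read once at startup and never re-read.
metadata.broker.list=broker1:9092,broker2:9092
# How often the dynamic cluster metadata (list 1) is refreshed.
topic.metadata.refresh.interval.ms=600000
```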

Re: New Consumer API discussion

2014-02-26 Thread Robert Withers
Neha, what does the use of the RebalanceBeginCallback and RebalanceEndCallback look like? thanks, Rob On Feb 25, 2014, at 3:51 PM, Neha Narkhede wrote: > How do you know n? The whole point is that you need to be able to fetch the > end offset. You can't a priori decide you will load 1m message

Strange issue with Camel Kafka integration

2014-02-26 Thread Raj Vaida
Hi, I am trying to integrate Camel with Kafka. I have implemented the following route: kafka.serializer.StringEncoder org.vaida.esb.camel.orderdispatcher.OrderDispatcher.SimplePartitioner localhost:9093,localhost:9094 http://camel.apache.

Re: Producer fails when old brokers are replaced by new

2014-02-26 Thread Christofer Hedbrandh
Thanks for your response Guozhang. I did make sure that new meta data is fetched before taking out the old broker. I set the topic.metadata.refresh.interval.ms to something very low, and I confirm in the producer log that new meta data is actually fetched, after the new broker is brought up, and b

Re: Puppet module for deploying Kafka released

2014-02-26 Thread Andrew Otto
Ah cool, missed that, thanks! I’m equally envious of the Hiera support in yours :) On Feb 26, 2014, at 12:00 PM, Michael G. Noll wrote: > Andrew, > > I am actually referencing the WM puppet module (notably because it > targets Debian whereas ours is currently focused on RHEL). I really > l

Unable to consume Snappy compressed messages with Simple Consumer

2014-02-26 Thread Dan Hoffman
Publisher (using librdkafka C api) has sent both gzip and snappy compressed messages. I find that the java Simple Consumer ( https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example#) is unable to read the snappy ones, while the High Level one is. Is this expected? Is ther

Problems consuming snappy compressed messages via SimpleConsumer

2014-02-26 Thread Dan Hoffman
Publisher (using librdkafka C api) has sent both gzip and snappy compressed messages. I find that the java Simple Consumer ( https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example#) is unable to read the snappy ones, while the High Level one is. Is this expected? Is ther

Re: Problems consuming snappy compressed messages via SimpleConsumer

2014-02-26 Thread Neha Narkhede
Do you see the same issue if you send snappy data using the console producer instead of librdkafka? On Wed, Feb 26, 2014 at 5:58 PM, Dan Hoffman wrote: > Publisher (using librdkafka C api) has sent both gzip and snappy compressed > messages. I find that the java Simple Consumer ( > > https://c

Re: Producer fails when old brokers are replaced by new

2014-02-26 Thread Guozhang Wang
kafka-preferred-replica-election.sh is only used to move leaders between brokers. As long as a broker in metadata.broker.list (i.e. the second broker list I mentioned in the previous email) is still alive, the producer can learn the leader change from it. In terms of broker discovery, I think

Re: Problems consuming snappy compressed messages via SimpleConsumer

2014-02-26 Thread Dan Hoffman
I haven't tried that yet. But since the high level consumer can consume it, should it matter who published it? On Wednesday, February 26, 2014, Neha Narkhede wrote: > Do you see the same issue if you send snappy data using the console > producer instead of librdkafka? > > > On Wed, Feb 26, 2014

Re: Problems consuming snappy compressed messages via SimpleConsumer

2014-02-26 Thread Neha Narkhede
Actually I meant the simple consumer shell that ships with kafka in the bin directory. On Wed, Feb 26, 2014 at 6:17 PM, Dan Hoffman wrote: > I haven't tried that yet. But since the high level consumer can consume it, > should it matter who published it? > > On Wednesday, February 26, 2014, Neha

Re: Problems consuming snappy compressed messages via SimpleConsumer

2014-02-26 Thread Dan Hoffman
The kafka-console-consumer is fine. On Wednesday, February 26, 2014, Neha Narkhede wrote: > Actually I meant the simple consumer shell that ships with kafka in the bin > directory. > > > On Wed, Feb 26, 2014 at 6:17 PM, Dan Hoffman wrote: > > > I haven't tried that yet. But since the high level

Re: Unable to consume Snappy compressed messages with Simple Consumer

2014-02-26 Thread Jun Rao
Are you using a fetch size larger than the whole compressed unit? Thanks, Jun On Wed, Feb 26, 2014 at 5:40 PM, Dan Hoffman wrote: > Publisher (using librdkafka C api) has sent both gzip and snappy compressed > messages. I find that the java Simple Consumer ( > > https://cwiki.apache.org/conf

Re: Unable to consume Snappy compressed messages with Simple Consumer

2014-02-26 Thread Dan Hoffman
I'm not sure what you mean - could you be more specific in terms of what I might need to adjust in the simple consumer example code? On Thu, Feb 27, 2014 at 12:24 AM, Jun Rao wrote: > Are you using a fetch size larger than the whole compressed unit? > > Thanks, > > Jun > > > On Wed, Feb 26, 2014 a

Re: Kafka-0.8 Log4j Appender

2014-02-26 Thread 김동경
Actually, I am quite a newbie to this. What exactly do you want me to do? Do you want me to raise an issue for this? Then which JIRA can I access, and what should I do? 2014-02-26 20:48 GMT+09:00 Neha Narkhede : > I think this is a bug with the KafkaLog4jAppender that is triggered when > the message s

Reg Partition and Replica?

2014-02-26 Thread Balasubramanian Jayaraman (Contingent)
Hi, I have a doubt regarding partitions and replicas. What is the difference between them? I created a topic "test1" with 5 partitions and 3 replicas. I am sending a message to the topic "test1". I see that different partitions are present on different Kafka brokers. Some questions on this
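One way to see the partition/replica layout for yourself (tool name and flags as in later 0.8 releases; the ZooKeeper address is an assumption):

```shell
# Requires a running 0.8.x cluster; shows, per partition, the leader
# broker and the full replica assignment for the topic.
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test1
```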

Re: Unable to consume Snappy compressed messages with Simple Consumer

2014-02-26 Thread Jun Rao
Try making the last parameter in the following call larger (say to 1,000,000). .addFetch(a_topic, a_partition, readOffset, 10) Thanks, Jun On Wed, Feb 26, 2014 at 9:32 PM, Dan Hoffman wrote: > I'm not sure what you mean - could you be more specific in terms what I > might need to adjust