Re: Kafka 0.8.0 in maven

2013-11-05 Thread Chris Bedford
Hi, Edward... yup, you are correct: when we get to a little over 1000 messages the program fails with the exception stack trace I included below. I fixed the test so it passes as long as the consumer gets all messages sent by the producer, even if an exception is thrown during shutdown

Re: Handling Scala exceptions from ListTopicsCommand in Java

2013-11-05 Thread Jun Rao
We don't have an official client API for creating topics yet. You can try instantiating kafka.admin.CreateTopicCommand$ and using the following method: createTopic(zkClient: ZkClient, topic: String, numPartitions: Int = 1, replicationFactor: Int = 1, replicaAssignmentStr: String = "") To instantiat

Re: kafka and log retention policy when disk is full

2013-11-05 Thread Kane Kane
I've checked it; it's per partition. What I'm talking about is more of a global log size limit: if I have only 200GB, I want to set a global limit on log size, not per partition, so I won't have to change it later if I add topics/partitions. Thanks. On Tue, Nov 5, 2013 at 8:37 PM, Neha Narkhede

Re: kafka and log retention policy when disk is full

2013-11-05 Thread Neha Narkhede
You are probably looking for log.retention.bytes. Refer to http://kafka.apache.org/documentation.html#brokerconfigs On Tue, Nov 5, 2013 at 3:10 PM, Kane Kane wrote: > Hello, > > What would happen if disk is full? Does it make sense to have > additional variable to set the maximum size for all l
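
In 0.8, log.retention.bytes applies per partition, so a global disk budget has to be divided up by hand. A minimal sketch of that arithmetic (the function name and headroom factor are illustrative, not a Kafka setting):

```python
def per_partition_retention(total_disk_bytes, num_partitions, headroom=0.8):
    """Divide a global disk budget across partitions, leaving headroom
    because a segment can exceed the limit before the cleaner runs."""
    return int(total_disk_bytes * headroom) // num_partitions

# e.g. a 200 GB disk shared by 50 partitions, keeping 20% free
limit = per_partition_retention(200 * 10**9, 50)
```

Note that adding partitions later means recomputing and lowering the per-partition limit by hand, which is exactly the maintenance burden Kane is describing.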

Re: Topic creation on restart

2013-11-05 Thread Neha Narkhede
> Could a fetch request from a consumer cause a Topic creation request (seems implausible)? Yes, that seems like a way the broker can get into this situation. Thanks, Neha On Tue, Nov 5, 2013 at 4:58 PM, Jason Rosenberg wrote: > I don't know if I have a way to see the access logs on the

Re: Topic creation on restart

2013-11-05 Thread Jason Rosenberg
I don't know if I have a way to see the access logs on the LB (still trying to track that down). One thing I do see, though, is that there are fetch requests from consumers that are then followed by these Topic creation log messages, e.g. (replaced some of the specific strings in this log li

kafka and log retention policy when disk is full

2013-11-05 Thread Kane Kane
Hello, What would happen if disk is full? Does it make sense to have additional variable to set the maximum size for all logs combined? Thanks.

Re: Message Width

2013-11-05 Thread Guozhang Wang
Shafaq is right, except the default value is 1000000 bytes, so a little bit less than 1MB. On Tue, Nov 5, 2013 at 2:31 PM, Shafaq wrote: > The message is sent as serialized over wire so there is no concept of width > but only length. > > The message size is specified in the broker config and de
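
The broker only checks the serialized length of a message against message.max.bytes (1000000 bytes by default in 0.8); the number of fields ("width") is invisible to it. A sketch of that check, simulating the broker-side limit rather than calling any Kafka API:

```python
DEFAULT_MESSAGE_MAX_BYTES = 1_000_000  # 0.8 broker default (message.max.bytes)

def fits_on_broker(payload: bytes, limit: int = DEFAULT_MESSAGE_MAX_BYTES) -> bool:
    """Only the serialized length matters -- how many fields the payload
    encodes makes no difference to the broker."""
    return len(payload) <= limit

assert fits_on_broker(b"x" * 999_999)
assert not fits_on_broker(b"x" * 1_000_001)
```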

Re: 0.8 High-level consumer commit offset question

2013-11-05 Thread Guozhang Wang
Hi Shafaq, "Yes" to your first question. For second question that depends on what would you like to do when your request's offset is not valid. This is possible, for example, if your logs gets deleted according to retention policy and your last committed offset in ZK is within that deleted range.
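
When the committed offset in ZK falls inside a retention-deleted range, the consumer has to pick an end of the log to restart from (the choice the auto.offset.reset config expresses). A sketch of that decision logic with illustrative names, not the Kafka client API:

```python
def resolve_offset(committed, log_start, log_end, reset="smallest"):
    """If the committed offset no longer exists in the log (retention
    deleted it), fall back to the oldest or newest available offset."""
    if log_start <= committed <= log_end:
        return committed
    return log_start if reset == "smallest" else log_end

assert resolve_offset(500, 1000, 2000) == 1000            # expired -> oldest
assert resolve_offset(500, 1000, 2000, "largest") == 2000  # expired -> newest
assert resolve_offset(1500, 1000, 2000) == 1500            # still valid
```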

Re: Message Width

2013-11-05 Thread Shafaq
The message is sent as serialized over wire so there is no concept of width but only length. The message size is specified in the broker config and default is 10 MB off the top of my head to prevent OOM. On Nov 5, 2013 1:06 PM, "Jason Pohl" wrote: > I am new to pub-sub architecture and am looki

Re: Handling Scala exceptions from ListTopicsCommand in Java

2013-11-05 Thread sgg
I tried catching RuntimeException as well as catching TopicExistsException, but the problem seems to be that the catch block is not being invoked at all. Is it possible that the Scala code needs to have an @throws annotation? sgg On Nov 5, 2013, at 2:24 PM, wrote: > Hi All: > I am trying t

Re: Handling Scala exceptions from ListTopicsCommand in Java

2013-11-05 Thread Edward Capriolo
Actually that will not help. The main method probably swallows the exception. On Tue, Nov 5, 2013 at 4:13 PM, Edward Capriolo wrote: > Maybe try catching RuntimeException > > > On Tue, Nov 5, 2013 at 2:24 PM, wrote: > >> Hi All: >> I am trying to programmatically create Topics from a Java client. >> >
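
The failure mode Edward describes is generic to calling a CLI entry point from code: if the command's main catches and logs its own errors, the caller's try/catch never fires. A minimal Python illustration of the pattern (the names are stand-ins, not Kafka code):

```python
def command_main(args):
    """Stand-in for a CLI entry point that swallows its own errors."""
    try:
        raise RuntimeError("topic already exists")
    except RuntimeError as e:
        print("ERROR:", e)  # logged and swallowed, never re-raised

caught = False
try:
    command_main(["my-topic"])
except RuntimeError:
    caught = True  # never reached: main already handled the error

assert not caught
```

This is why catching RuntimeException (or any subclass) around CreateTopicCommand.main() cannot work: nothing ever propagates out of main.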

Re: compiling with 2.10

2013-11-05 Thread Ngu, Bob
Thanks for clarifying, I see what you are saying. I am able to cross-publish all the listed Scala versions to my local Maven repo with "sbt +publish": crossScalaVersions := Seq("2.8.0", "2.8.2", "2.9.1", "2.9.2", "2.10.1"), Bob On 11/5/13, 11:05 AM, "Joe Stein" wrote: >When we are talking ab

Re: Handling Scala exceptions from ListTopicsCommand in Java

2013-11-05 Thread Edward Capriolo
Maybe try catching RuntimeException On Tue, Nov 5, 2013 at 2:24 PM, wrote: > Hi All: > I am trying to programmatically create Topics from a Java client. > > I am using a suggestion from > http://stackoverflow.com/questions/16946778/how-can-we-create-a-topic-in-kafka-from-the-ide-using-api/18480

Message Width

2013-11-05 Thread Jason Pohl
I am new to pub-sub architecture and am looking at Kafka to be a data hub. My question is: is there any limit to how wide a message should be in terms of number of fields? Not so much an absolute limit or a theoretical one, but more of a practical limit. What is the "widest" message that you typi

Handling Scala exceptions from ListTopicsCommand in Java

2013-11-05 Thread sggraham
Hi All: I am trying to programmatically create Topics from a Java client. I am using a suggestion from http://stackoverflow.com/questions/16946778/how-can-we-create-a-topic-in-kafka-from-the-ide-using-api/18480684#18480684 Essentially invoking the CreateTopicCommand.main(). String [] ar

Re: JMXTrans not sending kafka 0.8 metrics to Ganglia

2013-11-05 Thread Paul Mackles
In our case, jmxtrans returns: ReplicaManager.ISRShrinksPerSec.Count ReplicaManager.ISRShrinksPerSec.MeanRate ReplicaManager.ISRExpandsPerSec.OneMinuteRate ReplicaManager.ISRExpandsPerSec.Count ReplicaManager.ISRExpandsPerSec.MeanRate ReplicaManager.ISRExpandsPerSec.OneMinuteRate ReplicaManager.

Re: Why would one choose a partition when producing?

2013-11-05 Thread Niek Sanders
Using a custom partitioner lets you do a "gather" step and exploit data locality. Example use case: a consumer splits topic messages by customer id, and each customer id has its own database table. With a custom partitioner, you can send all data for a given customer id to the same partition and
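
Niek's "gather" pattern boils down to a partitioner that maps the message key (here, the customer id) to a stable partition number, so one consumer sees a customer's full stream. A sketch of the contract such a partitioner satisfies (illustrative, not the Kafka Partitioner interface):

```python
def partition_for(customer_id: str, num_partitions: int) -> int:
    """All messages with the same key map to the same partition, so the
    owning consumer can batch writes to that customer's table."""
    # any stable hash works; Python's builtin hash() is process-salted,
    # so use a deterministic one here
    return sum(customer_id.encode()) % num_partitions

p = partition_for("customer-42", 8)
assert all(partition_for("customer-42", 8) == p for _ in range(100))
assert 0 <= p < 8
```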

Why would one choose a partition when producing?

2013-11-05 Thread Philip O'Toole
We use 0.7.2 -- I am not sure if this matters with 0.8. Why would one choose a specific partition, as opposed to a random partition choice? What design pattern(s) would call for choosing a partition? When is it a good idea? Any feedback out there? Thanks, Philip

Re: compiling with 2.10

2013-11-05 Thread Joe Stein
When we are talking about the setting in Build.scala there are two uses for it and we need to think of them separately. The first is building and running the broker. The default for this is 2.8.0 The second is for building your client library (for producers/consumers) to work with the Kafka brok
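
The cross-publishing Joe and Bob discuss is driven by the crossScalaVersions setting; a Build.scala fragment along the lines of Bob's mail (treat the exact version list as an example):

```scala
// Build.scala -- cross-publish client artifacts for several Scala versions
scalaVersion := "2.8.0",  // default used for building/running the broker
crossScalaVersions := Seq("2.8.0", "2.8.2", "2.9.1", "2.9.2", "2.10.1"),
// then run: sbt +publish  (the '+' repeats publish once per listed version)
```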

Re: compiling with 2.10

2013-11-05 Thread Kane Kane
Attached patch to upgrade sbt-assembly. On Tue, Nov 5, 2013 at 10:59 AM, Ngu, Bob wrote: > Just to clarify, I also already have it working with 2.10.2 via this > setting change in Build.scala > scalaVersion := "2.10.2" > > My question is if this should be the default setting going on forth. A

Re: compiling with 2.10

2013-11-05 Thread Ngu, Bob
Just to clarify, I also already have it working with 2.10.2 via this setting change in Build.scala: scalaVersion := "2.10.2" My question is whether this should be the default setting going forward. Also, since I am using Maven and not sbt, I am not sure that the cross publishing works as expected, d

Re: Flush producer queue

2013-11-05 Thread Neha Narkhede
The 0.7.2 producer should already do that on a clean shutdown of the producer. On Tue, Nov 5, 2013 at 9:09 AM, Sriram Ramarathnam wrote: > I am using kafka 0.7.2. I want to make sure that the producer queue is > flushed before I can call a close on the producer. Is there a way to > achieve this?
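
The guarantee Neha describes (the async producer drains its queue on a clean close) is the drain-then-close pattern. A sketch of the logic with a plain in-memory queue, not the Kafka producer itself:

```python
import queue

class SketchProducer:
    """Illustrative async producer: send() enqueues, close() drains."""
    def __init__(self):
        self.q = queue.Queue()
        self.sent = []  # stands in for messages written to the wire

    def send(self, msg):
        self.q.put(msg)

    def close(self):
        # clean shutdown: flush everything still queued before returning
        while not self.q.empty():
            self.sent.append(self.q.get())

p = SketchProducer()
for i in range(5):
    p.send(i)
p.close()
assert p.sent == [0, 1, 2, 3, 4]  # nothing left behind in the queue
```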

Flush producer queue

2013-11-05 Thread Sriram Ramarathnam
I am using kafka 0.7.2. I want to make sure that the producer queue is flushed before I can call a close on the producer. Is there a way to achieve this? Thanks, Sriram

0.8 High-level consumer commit offset question

2013-11-05 Thread Shafaq
Hi, I wanted to control the commitOffset signal to ZK from the high-level Kafka consumer. This will enable the consumer to process the message and, in case of failure, the offset is not moved. If I set auto.commit.enable=false and then use the commitOffsets API of the high-level consumer, is that a way to a
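
The pattern Shafaq describes -- auto.commit.enable=false, committing only after processing succeeds -- is at-least-once delivery. A sketch of the loop's logic with stand-in names (process and commit are hypothetical callbacks, not the 0.8 consumer API):

```python
def consume_loop(messages, process, commit):
    """Commit the offset only after process() succeeds, so a crash
    replays the uncommitted message instead of losing it."""
    for offset, msg in messages:
        process(msg)        # may raise; offset then stays uncommitted
        commit(offset + 1)  # next offset to read after a restart

committed = []
consume_loop(enumerate(["a", "b", "c"]),
             process=lambda m: None,
             commit=committed.append)
assert committed == [1, 2, 3]
```

The trade-off is possible duplicate processing after a failure, which is usually what you want when losing a message is worse.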

Re: kafka in production environment

2013-11-05 Thread Neha Narkhede
You might find this helpful - http://kafka.apache.org/documentation.html#operations On Mon, Nov 4, 2013 at 11:00 PM, Oleg Ruchovets wrote: > Hello. > We are planning to go production in a couple of months. > > Can the community share the best practices. > What should be configured? What we ne

Re: Commit Offset per topic

2013-11-05 Thread Roman Garcia
Thanks Neha! I guess auto-commit it is for now... On Tue, Nov 5, 2013 at 5:08 AM, Neha Narkhede wrote: > Currently, the only way to achieve that is to use the SimpleConsumer API. > We are considering the feature you mentioned for the 0.9 release - > > https://cwiki.apache.org/confluence/display/

Re: High Level Consumer commit offset

2013-11-05 Thread Vadim Keylis
That I can manage. Thanks so much. On Tue, Nov 5, 2013 at 6:46 AM, Neha Narkhede wrote: > Yes, it will commit offsets only for the partitions that the consumer owns. > But over time, the set of partitions that a consumer owns can change. > > Thanks, > Neha > On Nov 5, 2013 12:17 AM, "Vadim Keyli

Re: compiling with 2.10

2013-11-05 Thread Jun Rao
Kane, Would you mind attaching a patch to https://issues.apache.org/jira/browse/KAFKA-1116 to enable cross compilation to scala 2.10.2? Thanks, Jun On Mon, Nov 4, 2013 at 9:53 PM, Kane Kane wrote: > I'm using it with scala 2.10.2. > > On Mon, Nov 4, 2013 at 9:41 PM, Jun Rao wrote: > > Do yo

Re: Topic creation on restart

2013-11-05 Thread Neha Narkhede
> Ok, so this can happen, even if the node has not been placed back into rotation, at the metadata vip? Hmmm... if the broker is not placed in the metadata vip, how did it end up receiving metadata requests? You may want to investigate that by checking the public access logs. Thanks, Neha On Nov 4,

Re: Topic creation on restart

2013-11-05 Thread Neha Narkhede
> Can this unintentional topic creation be avoided by setting auto.create.topics.enable=false? Yes, but that is not the right fix. Thanks, Neha On Nov 4, 2013 10:21 PM, "Priya Matpadi" wrote: > Can this unintentional topic creation be avoided by setting > auto.create.topics.enable=false? > > > On
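
The setting Priya asks about is a broker-side config; a server.properties fragment for reference (as Neha notes, this masks the symptom rather than fixing the stale-metadata cause):

```properties
# server.properties -- with this off, a fetch/produce for an unknown
# topic fails instead of implicitly creating the topic
auto.create.topics.enable=false
```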

Re: High Level Consumer commit offset

2013-11-05 Thread Neha Narkhede
Yes, it will commit offsets only for the partitions that the consumer owns. But over time, the set of partitions that a consumer owns can change. Thanks, Neha On Nov 5, 2013 12:17 AM, "Vadim Keylis" wrote: > I am using creating Consumer.createJavaConsumerConnector(kafka 0.8) for > each topic/p

Re: kafka-reassign-partitions.sh --status-check-json-file not working

2013-11-05 Thread Joseph Lawson
Guozhang, I'll try to come up with some steps to replicate today. I didn't notice anything obvious in the logs. Joe Sent from my Droid Charge on Verizon 4G LTE Guozhang Wang wrote: Hello Joe, Do you see any exceptions in the controller or state-change logs? Where are these two topics original

Re: High Level Consumer commit offset

2013-11-05 Thread Vadim Keylis
I am creating a Consumer.createJavaConsumerConnector (kafka 0.8) for each topic/partition. Would it be safe to assume that the commit offset will apply only to the stream/partition managed by that connector? Thanks, Vadim On Mon, Nov 4, 2013 at 8:43 PM, Neha Narkhede wrote: > You need to set "auto