Re: Stuck consumer with new consumer API in 0.9

2016-01-25 Thread Han JU
Ok I'll create a JIRA issue on this. Thanks! 2016-01-23 21:47 GMT+01:00 Bruno Rassaerts : > +1 here > > As a workaround we seek to the current offset which resets the current > client's internal state and everything continues. > > Regards, > Bruno Rassaerts | Freelance Java Developer > > Novazon

Re: SimpleConsumer.getOffsetsBefore() in 0.9 KafkaConsumer

2016-01-25 Thread Robert Metzger
Thank you for your replies Gwen and Jason. Let's continue the discussion in the JIRA. On Fri, Jan 22, 2016 at 8:37 PM, Jason Gustafson wrote: > The offset API is one of the known gaps in the new consumer. There is a > JIRA (KAFKA-1332), but we might need a KIP to make that change now that the >

re-consuming last offset

2016-01-25 Thread Franco Giacosa
Hi, I am facing the following issue: When I start my consumer I get that the offset for one of the partitions is going to be reset to the last committed offset 14:55:28.788 [pool-1-thread-1] DEBUG o.a.k.c.consumer.internals.Fetcher - Resetting offset for partition t1-4 to the committed offset 20

Re: Stuck consumer with new consumer API in 0.9

2016-01-25 Thread Han JU
Hi Bruno, Can you tell me a little bit more about that? A seek() in the `onPartitionsAssigned`? Thanks. 2016-01-25 10:51 GMT+01:00 Han JU : > Ok I'll create a JIRA issue on this. > > Thanks! > > 2016-01-23 21:47 GMT+01:00 Bruno Rassaerts : > >> +1 here >> >> As a workaround we seek to the curren
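
The workaround Bruno describes (seek back to the current position inside the rebalance callback) could be sketched roughly as below. This is an assumption-laden illustration, not code from the thread: the class name is invented, and it presumes an already-configured `KafkaConsumer` and a running broker.

```java
import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

// Sketch of the "seek to current offset" workaround: seeking to the
// consumer's own current position is logically a no-op, but it resets the
// client's internal fetch state after a rebalance.
public class SeekOnAssignListener implements ConsumerRebalanceListener {
    private final KafkaConsumer<byte[], byte[]> consumer;

    public SeekOnAssignListener(KafkaConsumer<byte[], byte[]> consumer) {
        this.consumer = consumer;
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        for (TopicPartition tp : partitions) {
            // position() is the offset of the next record to fetch.
            consumer.seek(tp, consumer.position(tp));
        }
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // Nothing needed on revocation for this workaround.
    }
}
```

The listener would be passed to `consumer.subscribe(topics, new SeekOnAssignListener(consumer))`.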

exception while reading offsets

2016-01-25 Thread Shushant Arora
Hi I have kafka version 0.8.2 installed on my cluster. I am able to create topics and write messages to it. But when I fetched latest offsets of brokers using kafka-run-class kafka.tools.GetOffsetShell --broker-list brokerAddr:9092 --topic topicname --time -1 I got offsets of a few partitions and t

Re: Stuck consumer with new consumer API in 0.9

2016-01-25 Thread Han JU
Issue created: https://issues.apache.org/jira/browse/KAFKA-3146 2016-01-25 16:07 GMT+01:00 Han JU : > Hi Bruno, > > Can you tell me a little bit more about that? A seek() in the > `onPartitionAssigned`? > > Thanks. > > 2016-01-25 10:51 GMT+01:00 Han JU : > >> Ok I'll create a JIRA issue on this.

Re: Stuck consumer with new consumer API in 0.9

2016-01-25 Thread Ismael Juma
Thanks! Ismael On Mon, Jan 25, 2016 at 4:03 PM, Han JU wrote: > Issue created: https://issues.apache.org/jira/browse/KAFKA-3146 > > 2016-01-25 16:07 GMT+01:00 Han JU : > > > Hi Bruno, > > > > Can you tell me a little bit more about that? A seek() in the > > `onPartitionAssigned`? > > > > Thanks

Behaviour of KafkaConsumer.poll(long)

2016-01-25 Thread Peter Schrott
Hi Kafka-Gurus, Using Kafka in one of my projects, the question arose, how the records are provided using KafkaConsumer.poll(long). Is the entire map of records copied into the client's memory, or does poll(..) work on an iterator-based model? I am asking this, as I face the following scenario: The

Re: Stuck consumer with new consumer API in 0.9

2016-01-25 Thread Jason Gustafson
Apologies for the late arrival to this thread. There was a bug in the 0.9.0.0 release of Kafka which could cause the consumer to stop fetching from a partition after a rebalance. If you're seeing this, please check out the 0.9.0 branch of Kafka and see if you can reproduce this problem. If you can,

Re: Stuck consumer with new consumer API in 0.9

2016-01-25 Thread Rajiv Kurian
Hi Jason, Was this a server bug or a client bug? Thanks, Rajiv On Mon, Jan 25, 2016 at 11:23 AM, Jason Gustafson wrote: > Apologies for the late arrival to this thread. There was a bug in the > 0.9.0.0 release of Kafka which could cause the consumer to stop fetching > from a partition after a

Re: Stuck consumer with new consumer API in 0.9

2016-01-25 Thread Guozhang Wang
Han, From your logs it seems the thread which cannot fetch more data is rebalance-1, which is assigned partitions [balance-11, balance-10, balance-9]; From your consumer-group command the partitions that are lagging are [balance-6, balance-7, balance-8], which are not assigned to this process

Re: Stuck consumer with new consumer API in 0.9

2016-01-25 Thread Jason Gustafson
Hey Rajiv, the bug was on the client. Here's a link to the JIRA: https://issues.apache.org/jira/browse/KAFKA-2978. -Jason On Mon, Jan 25, 2016 at 11:42 AM, Rajiv Kurian wrote: > Hi Jason, > > Was this a server bug or a client bug? > > Thanks, > Rajiv > > On Mon, Jan 25, 2016 at 11:23 AM, Jason

Re: Stuck consumer with new consumer API in 0.9

2016-01-25 Thread Rajiv Kurian
Thanks Jason. We are using an affected client I guess. Is there a 0.9.0 client available on maven? My search at http://mvnrepository.com/artifact/org.apache.kafka/kafka_2.10 only shows the 0.9.0.0 client which seems to have this issue. Thanks, Rajiv On Mon, Jan 25, 2016 at 11:56 AM, Jason Gusta

Re: Memory records is not writable in MirrorMaker

2016-01-25 Thread Meghana Narasimhan
I have created a JIRA, https://issues.apache.org/jira/browse/KAFKA-3147 Thanks, Meghana On Tue, Jan 19, 2016 at 5:07 AM, Ismael Juma wrote: > Can you please file an issue in JIRA for this? > > Ismael > > On Tue, Jan 12, 2016 at 2:40 PM, Meghana Narasimhan < > mnarasim...@bandwidth.com> wrote: >

Re: Stuck consumer with new consumer API in 0.9

2016-01-25 Thread Han JU
Hi Guozhang, Sorry for that example. They do not come from the same run; I just pasted them to illustrate the problem. I'll try out what Jason suggests tomorrow and also retry the 0.9.0 branch. 2016-01-25 21:03 GMT+01:00 Rajiv Kurian : > Thanks Jason. We are using an affected client I guess. > >

how to set the Java heap of brokers

2016-01-25 Thread Florian Hussonnois
Hi, I'm looking for any recommendations on how to set the Java heap of brokers. I have found details on GC configuration but 6g looks very high for me. -Xms6g -Xmx6g -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMe
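
For reference, broker heap size is usually set through the `KAFKA_HEAP_OPTS` environment variable picked up by the startup scripts. A minimal sketch, assuming a standard Kafka installation layout (the 6g value just mirrors the flags quoted above; the right number depends on the workload, and since Kafka leans heavily on the OS page cache, very large heaps are often unnecessary):

```shell
# Heap flags are read from KAFKA_HEAP_OPTS by kafka-server-start.sh;
# GC flags like those quoted above go in KAFKA_JVM_PERFORMANCE_OPTS.
export KAFKA_HEAP_OPTS="-Xms6g -Xmx6g"
bin/kafka-server-start.sh config/server.properties
```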

Re: Memory records is not writable in MirrorMaker

2016-01-25 Thread Ismael Juma
Thanks. Ismael On Mon, Jan 25, 2016 at 8:28 PM, Meghana Narasimhan < mnarasim...@bandwidth.com> wrote: > I have created a JIRA, https://issues.apache.org/jira/browse/KAFKA-3147 > > Thanks, > Meghana > > On Tue, Jan 19, 2016 at 5:07 AM, Ismael Juma wrote: > > > Can you please file an issue in JI

Re: re-consuming last offset

2016-01-25 Thread Franco Giacosa
When doing poll() when there is no current position on the consumer, the position returned is the one of the last offset then? (I thought that it would return that position + 1 because it was already committed) 2016-01-25 15:07 GMT+01:00 Franco Giacosa : > Hi, > > I am facing the following is

Re: Controlled shutdown not relinquishing leadership of all partitions

2016-01-25 Thread Luke Steensen
Ok, I've reproduced this again and made sure to grab the broker logs before the instances are terminated. I posted a write-up with what seemed like the relevant bits of the logs here: https://gist.github.com/lukesteensen/793a467a058af51a7047 To summarize, it looks like Gwen was correct and the broke

Re: Missing ISR On Some Partitions

2016-01-25 Thread Jason J. W. Williams
> > This is not an expected behavior. I think you may need to check > replica fetcher threads to see why partitions are not getting replicated. > I've been looking at the server.out logs, is there a different place I should be looking for the replica fetcher threads? > Can you confirm that IS

Re: Missing ISR On Some Partitions

2016-01-25 Thread Jason J. W. Williams
There does appear to be a recurring issue with the node logging closed connections from the other 2 nodes: https://gist.github.com/williamsjj/0481600566f10e5593c4 The other nodes though show no errors. -J On Mon, Jan 25, 2016 at 2:35 PM, Jason J. W. Williams < jasonjwwilli...@gmail.com> wrote:

Re: re-consuming last offset

2016-01-25 Thread Jason Gustafson
It might be a little unintuitive, but the committed position should be the offset of the next message to consume. -Jason On Mon, Jan 25, 2016 at 1:26 PM, Franco Giacosa wrote: > When doing poll() when there is no current position on the consumer, the > position returned is the one of the last o
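
The convention Jason describes can be shown with plain arithmetic, no broker needed (the class and method names here are made up for illustration): the value committed is the offset of the *next* record to consume, i.e. the last processed offset plus one.

```java
// Demonstrates the committed-offset convention: committing "last processed
// offset + 1" means a restarted consumer resumes at the next unread record
// rather than re-consuming the last one.
public class CommittedOffsetConvention {
    // Offset to commit after finishing the record at lastProcessedOffset.
    static long offsetToCommit(long lastProcessedOffset) {
        return lastProcessedOffset + 1;
    }

    public static void main(String[] args) {
        long lastProcessed = 19; // we just processed the record at offset 19
        long committed = offsetToCommit(lastProcessed);
        // After a restart, fetching resumes at the committed position, so the
        // first record polled has offset 20; offset 19 is not re-consumed.
        System.out.println("commit " + committed + ", resume at " + committed);
    }
}
```

This also explains the log line in the thread above: "Resetting offset for partition t1-4 to the committed offset 20" means the next record fetched is offset 20, not that offset 20 was the last one processed.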

Tool to get consumer offset

2016-01-25 Thread Yifan Ying
Hi All, For Kafka 0.8.2, I was using kafka-consumer-offset-checker.sh to get a consumer group's offset. Now I am testing against Kafka 0.9, but the same tool always gave me Exiting due to: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /consumers/ I saw "C

Re: Tool to get consumer offset

2016-01-25 Thread Guozhang Wang
Yifan, Offset checker has been deprecated in 0.9.0 (search for "kafka-consumer-groups.sh"): http://kafka.apache.org/documentation.html#upgrade If you are using the new consumer, then its metadata is not registered in ZK, so you should use the --bootstrap-server option instead of --zookeeper: ./

Re: Tool to get consumer offset

2016-01-25 Thread Yifan Ying
Thanks for the reply, Guozhang. I got 'Missing required argument "[zookeeper]"' after running that. Yifan On Mon, Jan 25, 2016 at 6:05 PM, Guozhang Wang wrote: > Yifan, > > Offset checker has been deprecated in 0.9.0 (search for > "kafka-consumer-groups.sh"): > > http://kafka.apache.org/documentat

Re: Tool to get consumer offset

2016-01-25 Thread Guozhang Wang
I forgot to mention you need to add "--new-consumer" as well. Guozhang On Mon, Jan 25, 2016 at 7:11 PM, Yifan Ying wrote: > Thanks for reply, Guozhang. I got 'Missing required argument "[zookeeper]"' > after running that. > > > Yifan > > On Mon, Jan 25, 2016 at 6:05 PM, Guozhang Wang wrote: >
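
Putting Guozhang's two replies together, the invocation for the 0.9 new consumer (whose offsets live in Kafka, not ZooKeeper) would look roughly like this; the broker address and group name are placeholders:

```shell
# --new-consumer plus --bootstrap-server replaces the old --zookeeper option
bin/kafka-consumer-groups.sh --new-consumer \
    --bootstrap-server localhost:9092 \
    --describe --group my-group
```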

Getting very poor performance from the new Kafka consumer

2016-01-25 Thread Rajiv Kurian
We are using the new Kafka consumer with the following config (as logged by Kafka) metric.reporters = [] metadata.max.age.ms = 30 value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer group.id = myGroup.id partition.assignmen

Re: Getting very poor performance from the new Kafka consumer

2016-01-25 Thread Rajiv Kurian
Wanted to add that we are not using auto commit since we use custom partition assignments. In fact we never call consumer.commitAsync() or consumer.commitSync(). My assumption is that since we store our own offsets these calls are not necessary. Hopefully this is not responsible for the poor
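
The pattern Rajiv describes (manual assignment, offsets kept outside Kafka, no commit calls) might look like the sketch below. Everything here is illustrative: the topic name is a placeholder, and `loadOffset`/`saveOffset`/`process` are hypothetical stand-ins for whatever external store and processing the application uses.

```java
import java.util.Arrays;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

// Manual offset management: assign() instead of subscribe(), seek() to an
// externally stored offset, and no commitSync()/commitAsync() anywhere.
public class ManualOffsetConsumer {
    public static void run(KafkaConsumer<byte[], byte[]> consumer) {
        TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder
        consumer.assign(Arrays.asList(tp));
        consumer.seek(tp, loadOffset(tp)); // resume from our own store

        while (true) {
            ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
            for (ConsumerRecord<byte[], byte[]> record : records) {
                process(record);
                // Store offset + 1: the position of the next record to read.
                saveOffset(tp, record.offset() + 1);
            }
        }
    }

    static long loadOffset(TopicPartition tp) { return 0L; }  // hypothetical
    static void saveOffset(TopicPartition tp, long offset) {} // hypothetical
    static void process(ConsumerRecord<byte[], byte[]> r) {}  // hypothetical
}
```

With this pattern the broker-side committed offsets are never used, so skipping the commit calls is indeed safe and should not affect fetch performance.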

Re: Getting very poor performance from the new Kafka consumer

2016-01-25 Thread Rajiv Kurian
The exception seems to be thrown here https://github.com/apache/kafka/blob/0.9.0/clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java#L236 Is this not expected to be hit often? On Mon, Jan 25, 2016 at 9:22 PM, Rajiv Kurian wrote: > Wanted to add that we are not using auto commit

Re: Tool to get consumer offset

2016-01-25 Thread Yifan Ying
That works, thanks. Yifan On Mon, Jan 25, 2016 at 7:27 PM, Guozhang Wang wrote: > I forgot to mention you need to add "--new-consumer" as well. > > Guozhang > > On Mon, Jan 25, 2016 at 7:11 PM, Yifan Ying wrote: > > > Thanks for reply, Guozhang. I got 'Missing required argument > "[zookeeper]"