Java WebServer based on high level consumer not consuming messages, Socket Closure issue

2016-04-11 Thread NISHANT BULCHANDANI
Hi Everyone, We are using the Kafka high-level consumer in our web server to consume messages. Sometimes, when I am running the message producer, Kafka, and the webserver, all three on my localhost, the webserver does not receive any messages, and I see the logs below in the *kafka.log* file: [2016-04-11
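
For context, the 0.8 high-level consumer is typically wired up along the lines below (a minimal sketch; the ZooKeeper address, group id, and topic name are placeholders, not taken from the thread):

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class HighLevelConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // placeholder
            props.put("group.id", "webserver-group");         // placeholder
            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("my-topic", 1));
            // hasNext() blocks until a message arrives; if consumer.timeout.ms is
            // set, it throws ConsumerTimeoutException instead of blocking forever
            ConsumerIterator<byte[], byte[]> it =
                streams.get("my-topic").get(0).iterator();
            while (it.hasNext()) {
                System.out.println(new String(it.next().message()));
            }
        }
    }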

Minor GC time becomes longer and longer

2016-04-11 Thread jinhong lu
Minor GC times in my cluster become longer and longer, and the broker loses its session with ZooKeeper: 2016-04-11T17:33:06.559+0800: 875.917: [GC2016-04-11T17:33:06.559+0800: 875.917: [ParNew: 3363139K->10287K(3774912K), 0.0858010 secs] 3875141K->522289K(31037888K), 0.0860890 secs] [Times: user=1.46
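
For reference, GC log lines in that format come from HotSpot flags along these lines (a sketch; the sizes are only inferred from the capacities in the log line above, roughly a 30G heap with a 4G new generation, and the log path is a placeholder):

    -Xmx30g -Xmn4g
    -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
    -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
    -Xloggc:/path/to/kafka-gc.log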

Re: Minor GC time becomes longer and longer

2016-04-11 Thread Jakub Neubauer
Hi, Did you consider the G1 collector? http://docs.oracle.com/javase/7/docs/technotes/guides/vm/G1.html Jakub N. On 11.4.2016 12:08, jinhong lu wrote: Minor GC times in my cluster become longer and longer, and the broker loses its session with ZooKeeper: 2016-04-11T17:33:06.559+0800: 875.917: [GC2016-04-1
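
Switching to G1 is a JVM-flag change. A commonly cited starting point for Kafka brokers looks roughly like this (the pause target and occupancy threshold are illustrative, not tuned for this cluster):

    -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35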

Re: Kafka Connect concept question

2016-04-11 Thread Uber Slacker
Thanks for the explanations guys. It would be cool to see a section in the documentation that explicitly compares and contrasts Kafka Connect versus working directly with the producer and consumer APIs. That's just my perspective as a newb - perhaps it's clear to others. Thanks again! On Thu, Apr

Re: Minor GC time becomes longer and longer

2016-04-11 Thread Alexis Midon
Why such a gigantic heap? 30G. In my experience, the Kafka broker does not have to deal with long-lived objects; it's all about many small, ephemeral objects. Most of the data is kept off-heap. We've been happy with a 5G heap, 2G of which is for the new generation. The server has 8 cores and 60GB of RAM. Her
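
In JVM-flag terms, the sizing described above would be roughly (a sketch of the setup as described, not a general recommendation):

    -Xms5g -Xmx5g -Xmn2g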

Re: Minor GC time becomes longer and longer

2016-04-11 Thread Alexis Midon
Any experience with G1 for Kafka? I didn't get a chance to try it out. On Mon, Apr 11, 2016 at 3:31 AM Jakub Neubauer wrote: > Hi, > Did you consider G1 collector? > http://docs.oracle.com/javase/7/docs/technotes/guides/vm/G1.html > Jakub N. > > On 11.4.2016 12:08, jinhong lu wrote: > > Minor G

Re: Issue with Kafka Broker

2016-04-11 Thread Tushar Agrawal
We are having the same issue. Also, I noticed a "too many open files" exception in one of the brokers that we bounced after a crash. Thanks, Tushar (Sent from iPhone) > On Apr 11, 2016, at 12:27 AM, Mudit Agarwal > wrote: > > Hi Guys, > I have a 3 node kafka setup. Version is 0.9.0.1. I bounced the kafk
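
On the "too many open files" point, the usual remedy is to raise the file-descriptor limit for the user running the broker, e.g. in /etc/security/limits.conf (the user name and limit are placeholders; size the limit to your partition and connection counts):

    kafka   soft   nofile   100000
    kafka   hard   nofile   100000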

Re: KafkaProducer block on send

2016-04-11 Thread Oleg Zhurakousky
Dana, Thanks for the explanation, but it sounds more like a workaround, since everything you describe could be encapsulated within the Future itself. After all, it "represents the result of an asynchronous computation": executor.submit(new Callable() { @Override public RecordMetadata call
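
A minimal sketch of the pattern under discussion, assuming an already-configured producer and record; it bounds the caller's wait, but the send itself is not cancelled when the timeout fires:

    import java.util.concurrent.*;
    import org.apache.kafka.clients.producer.*;

    class SendWithTimeout {
        static RecordMetadata send(final Producer<String, String> producer,
                                   final ProducerRecord<String, String> record,
                                   ExecutorService executor) throws Exception {
            Future<RecordMetadata> result =
                executor.submit(new Callable<RecordMetadata>() {
                    @Override
                    public RecordMetadata call() throws Exception {
                        // send() can block fetching cluster metadata before it
                        // even returns its own Future; get() waits for the ack
                        return producer.send(record).get();
                    }
                });
            return result.get(5, TimeUnit.SECONDS); // caller-side timeout
        }
    }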

"Close the consumer, waiting indefinitely for any needed cleanup."

2016-04-11 Thread Oleg Zhurakousky
The subject line is from the javadoc of the new KafkaConsumer. Is this for real? I mean, I am hoping the use of 'indefinitely' is a typo. In any event, if it is indeed true, how does one break out of an indefinitely blocking consumer.close() invocation? Cheers, Oleg

Re: "Close the consumer, waiting indefinitely for any needed cleanup."

2016-04-11 Thread Dana Powers
Not a typo. This happens because the consumer closes the coordinator, and the coordinator attempts to commit any pending offsets synchronously in order to avoid duplicate message delivery. The Coordinator method commitOffsetsSync will retry indefinitely unless a non-recoverable error is encountered

Re: "Close the consumer, waiting indefinitely for any needed cleanup."

2016-04-11 Thread Oleg Zhurakousky
Dana, Everything you are saying does not answer my question of how to interrupt a potential deadlock artificially forced upon users of the KafkaConsumer API. I may be OK with duplicate messages, I may be OK with data loss, and I am OK with doing extra work of all kinds. I am NOT OK with

Re: "Close the consumer, waiting indefinitely for any needed cleanup."

2016-04-11 Thread Dana Powers
If you wanted to implement a timeout, you'd need to wire it up in commitOffsetsSync and plumb the timeout from Coordinator.close() and Consumer.close(). That's your answer. Code changes required. -Dana On Mon, Apr 11, 2016 at 1:17 PM, Oleg Zhurakousky wrote: > Dana > Everything you are saying d
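
Short of patching the client, the caller-side workaround is the same executor trick again (a sketch; it assumes no other thread is using the consumer, and a timed-out close() still leaves the executor thread blocked):

    import java.util.concurrent.*;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    class CloseWithTimeout {
        static void close(final KafkaConsumer<?, ?> consumer,
                          ExecutorService executor) {
            Future<?> closing = executor.submit(new Runnable() {
                @Override
                public void run() {
                    consumer.close(); // may retry offset commits indefinitely
                }
            });
            try {
                closing.get(10, TimeUnit.SECONDS);
            } catch (InterruptedException | ExecutionException | TimeoutException e) {
                // give up waiting; pending offset commits may be lost
            }
        }
    }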

Re: "Close the consumer, waiting indefinitely for any needed cleanup."

2016-04-11 Thread Oleg Zhurakousky
Dana, I am sorry, but I can’t accept that as an answer. Regardless, the API exposed to the end user must never “block indefinitely”. And saying you have to move a few mountains to work around what most would perceive to be a design issue is not an acceptable answer. I’ll raise a JIRA. Cheers

Consumers disappearing from __consumer_offsets

2016-04-11 Thread Morellato, Wanny
Hi, I am trying to figure out why some of my consumers disappear from the list of active consumers… This is happening in my QA environment, where sometimes no messages get published over the weekend. I am wondering if it is related to the default 24-hour log.cleaner.delete.retention.ms. If tha

Re: "Close the consumer, waiting indefinitely for any needed cleanup."

2016-04-11 Thread Dana Powers
If you pay me, I might write the code for you, too ;) -Dana On Mon, Apr 11, 2016 at 1:34 PM, Oleg Zhurakousky wrote: > Dana, I am sorry, but I can’t accept that as an answer. > Regardless, the API exposed to the end user must never “block indefinitely”. > And saying you have to move a few mount

Spikes in kafka bytes out (while bytes in remain the same)

2016-04-11 Thread Jorge Rodriguez
We are running a Kafka cluster for our real-time pixel processing pipeline. The data is produced from our pixel servers into Kafka and then consumed by a Spark Streaming application. Based on this, I would expect bytes in and bytes out to be roughly equal, as each message should be c

Re: Consumers disappearing from __consumer_offsets

2016-04-11 Thread James Cheng
This may be related to offsets.retention.minutes ("Log retention window in minutes for offsets topic"). It defaults to 1440 minutes = 24 hours. -James > On Apr 11, 2016, at 1:36 PM, Morellato, Wanny > wrote: > > Hi, > > I am trying to figure out why some of my consumers
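
Raising it is a broker-side change, e.g. in server.properties (the value here is an illustrative one week, not a recommendation):

    offsets.retention.minutes=10080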

Re: Consumers disappearing from __consumer_offsets

2016-04-11 Thread Morellato, Wanny
Thanks James, That was exactly what I was looking for. Wanny On 4/11/16, 2:16 PM, "James Cheng" wrote: >This may be related to offsets.retention.minutes. > >offsets.retention.minutes >Log retention window in minutes for offsets topic > >It defaults to 1440 minutes = 24 hours. > >-James >

Re: Consumers disappearing from __consumer_offsets

2016-04-11 Thread Tom Brown
Related: Can the __consumer_offsets topic be configured to retain offsets forever no matter how the rest of the server is configured? --Tom On Mon, Apr 11, 2016 at 3:19 PM, Morellato, Wanny < wanny.morell...@concur.com> wrote: > Thanks James, That was exactly what I was looking for. > > Wanny >

New consumer: OutOfMemoryError: Direct buffer memory

2016-04-11 Thread Kanak Biscuitwala
Hi, I'm running Kafka's new consumer with message handlers that can sometimes take a long time to return, combined with manual offset management (to get at-least-once semantics). Since poll() is the only way to heartbeat with the consumer, I have a thread that runs every 500 millise
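
For reference, the manual-commit half of that setup typically looks like the sketch below (handle() is a hypothetical stand-in for the slow message handler):

    import java.util.Collections;
    import org.apache.kafka.clients.consumer.*;
    import org.apache.kafka.common.TopicPartition;

    class AtLeastOnceSketch {
        static void process(KafkaConsumer<String, String> consumer,
                            ConsumerRecords<String, String> records) {
            for (ConsumerRecord<String, String> record : records) {
                handle(record); // hypothetical slow handler
                // commit the *next* offset only after processing succeeds, so a
                // crash before commitSync() replays this record: at-least-once
                consumer.commitSync(Collections.singletonMap(
                    new TopicPartition(record.topic(), record.partition()),
                    new OffsetAndMetadata(record.offset() + 1)));
            }
        }
        static void handle(ConsumerRecord<String, String> record) { /* ... */ }
    }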

coding hangs there

2016-04-11 Thread Jiang Jacky
Hi, I just installed Kafka 0.9, and my code is stuck on the following commands. For the producer, it is stuck on producer.flush(); for the consumer, it is stuck on ConsumerRecords records = consumer.poll(200); I downloaded the code directly from GitHub, and tried some other resources for 0.9, nothin

Re: dumping JMX data

2016-04-11 Thread Christian Posta
Yeah, +1. I was considering making it an option, and wrapping it with https://github.com/fabric8io/agent-bond if you want to run it alongside other agents. On Thu, Mar 31, 2016 at 9:21 PM, Gerard Klijs wrote: > Don't know if adding it to Kafka is a good thing. I assume you need some > java opts

Re: Java WebServer based on high level consumer not consuming messages, Socket Closure issue

2016-04-11 Thread NISHANT BULCHANDANI
Does anybody have any clues? Please let me know if more details are required. The problem is that the Java web server based on the high-level consumer does not always receive messages, but the publish/consume path itself is fine, because the console consumer consumes okay. Can it be something

Why my consumer sometimes not listening messages?

2016-04-11 Thread Ratha v
I use the older 0.8 Kafka consumer. *Steps* - Start the listener. - Send 10 messages. The listener receives around 4 of them. - Send a single message. The listener receives nothing. - Publish a single message again. The listener still receives nothing. Can anyone explain this behaviour? -- -Ratha http://vvratha.