Re: KafkaSpout forceFromStart Issue

2015-12-02 Thread Rakesh Surendra
What happened to this issue? Any updates? I seem to be facing the same issue. Regards, Raki

Re: New consumer not fetching as quickly as possible

2015-12-02 Thread tao xiao
Increasing the poll timeout to Long.MAX_VALUE does help. I got messages in every poll, but the time between each poll is long. That is how I discovered it was a network issue between consumer and broker. I believe it will have the same effect as long as I set the poll timeout high enough,
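A minimal sketch of the setup Tao describes, assuming the 0.9 Java consumer; the broker address, group id, and topic name are placeholders:

```java
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LongPollConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "test-group");              // placeholder group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("my-topic")); // placeholder topic
            while (true) {
                // With a very large timeout, poll() blocks until data arrives, so a
                // slow consumer-to-broker link shows up as a long gap between polls
                // rather than as polls that return empty.
                ConsumerRecords<String, String> records = consumer.poll(Long.MAX_VALUE);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```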

Kafka Summit Registration and CFP

2015-12-02 Thread Jay Kreps
Hey Everyone, As you may have heard, Confluent is hosting the first ever Kafka Summit. It'll be in San Francisco on Tuesday, April 26, 2016. We'll be announcing open registration tomorrow, but I wanted to let everyone here know first, and also let you know there is a $50 community discount. To ge

Kafka unclean leader election (0.8.2)

2015-12-02 Thread Pablo Fischer
Howdy folks, If a host gets into an unclean leader election, Kafka (via ZK) will assign a new leader to each partition/topic. However, is there a metric that shows how the replication is doing (i.e., what is going on behind the scenes)? Thanks! -- Pablo
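One metric that tracks this is the broker's UnderReplicatedPartitions gauge. A sketch of reading it over JMX from Java, assuming the broker was started with a JMX port; the host, port, and attribute name follow the usual JMX exposure and are assumptions, not something from this thread:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class UnderReplicatedCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder JMX endpoint; the broker must have been started with JMX
        // enabled (e.g. JMX_PORT=9999).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName gauge = new ObjectName(
                    "kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions");
            // 0 means every follower has caught back up to its leader; a non-zero
            // value means replication is still lagging somewhere.
            Object value = mbsc.getAttribute(gauge, "Value");
            System.out.println("UnderReplicatedPartitions = " + value);
        } finally {
            connector.close();
        }
    }
}
```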

Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?

2015-12-02 Thread Jason Gustafson
Hey Marina, My mistake, I see you're using 0.8.2.1. Are you also providing the formatter argument when using console-consumer.sh? Perhaps something like this: bin/kafka-console-consumer.sh --formatter kafka.server.OffsetManager\$OffsetsMessageFormatter --zookeeper localhost:2181 --topic __consume

Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?

2015-12-02 Thread Jason Gustafson
Looks like you need to use a different MessageFormatter class, since it was renamed in 0.9. Instead use something like "kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter". -Jason On Wed, Dec 2, 2015 at 10:57 AM, Dhyan Muralidharan < d.muralidha...@yottaa.com> wrote: > I have this s

Re: Trying to understand 0.9.0 producer and Consumer design

2015-12-02 Thread Jason Gustafson
The major changes in 0.9 are for the new consumer. At the moment, the design is spread across a couple documents: https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Consumer+Rewrite+Design https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal I'm trying

Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?

2015-12-02 Thread Dhyan Muralidharan
I have this same problem. Can someone help? --Dhyan On Wed, Nov 25, 2015 at 3:31 PM, Marina wrote: > Hello, > > I'm trying to find out which offsets my current High-Level consumers are > working off. I use Kafka 0.8.2.1, with **no** "offset.storage" set in the > server.properties of Kafka - w

Re: New consumer not fetching as quickly as possible

2015-12-02 Thread Guozhang Wang
Thanks for the updates Tao. Just wanted to make sure that there are no other potential issues when consumer and broker are remote, which is also quite common in practice: if you increase the timeout value in poll(timeout) to even larger values (say two times the average latency in your network) and
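One way to investigate this kind of hypothesis is to time each poll() call, so that time spent waiting inside poll() can be separated from delays elsewhere in the consuming loop. A small sketch, assuming the 0.9 Java consumer; the 10-second timeout is an arbitrary choice:

```java
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollTimer {
    // Times a single poll() so the wait inside poll() can be compared with the
    // observed gap between polls.
    static ConsumerRecords<String, String> timedPoll(KafkaConsumer<String, String> consumer) {
        long start = System.currentTimeMillis();
        ConsumerRecords<String, String> records = consumer.poll(10000);
        long elapsed = System.currentTimeMillis() - start;
        System.out.printf("poll() returned %d records after %d ms%n", records.count(), elapsed);
        return records;
    }
}
```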

Re: New Consumer API + Reactive Kafka

2015-12-02 Thread Guozhang Wang
In the new API commitSync() handles retries and reconnecting, and will only throw an exception if it encounters a non-retriable error (e.g. it has been told that the partitions it wants to commit no longer belong to it) or a timeout has elapsed. You can find possible exceptions thrown from this
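A sketch of what that looks like in calling code, assuming the 0.9 consumer API; process() and the poll timeout are placeholders:

```java
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.KafkaException;

public class CommitExample {
    // Processes one batch and commits synchronously. commitSync() retries
    // retriable failures internally, so an exception here means the commit
    // genuinely cannot succeed.
    static void pollAndCommit(KafkaConsumer<String, String> consumer) {
        ConsumerRecords<String, String> records = consumer.poll(1000); // placeholder timeout
        for (ConsumerRecord<String, String> record : records) {
            process(record); // placeholder for application logic
        }
        try {
            consumer.commitSync();
        } catch (CommitFailedException e) {
            // The group rebalanced and these partitions were revoked before the
            // commit completed; the records will be redelivered to their new owner.
            System.err.println("Commit failed after rebalance: " + e.getMessage());
        } catch (KafkaException e) {
            // Any other non-retriable error.
            System.err.println("Commit failed: " + e.getMessage());
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // placeholder
    }
}
```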

Re: New Consumer API + Reactive Kafka

2015-12-02 Thread Krzysztof Ciesielski
I see, that’s actually a very important point, thanks Jay. I think that we are very optimistic about updating Reactive Kafka now after getting all these details :) I have one more question: in the new client we only have to call commitSync(offsets). This is a ‘void’ method so I suspect that it co
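Since commitSync(offsets) returns void, the only signal of a failed commit is the exception it throws. A sketch under that assumption, with a hypothetical helper that turns the outcome into a boolean:

```java
import java.util.Collections;
import java.util.Map;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;

public class ExplicitCommit {
    // Commits an explicit offset for one partition; success or failure is
    // reported through the return value because commitSync() itself is void.
    static boolean commitUpTo(KafkaConsumer<String, String> consumer,
                              TopicPartition partition, long nextOffset) {
        Map<TopicPartition, OffsetAndMetadata> offsets =
                Collections.singletonMap(partition, new OffsetAndMetadata(nextOffset));
        try {
            consumer.commitSync(offsets);
            return true;
        } catch (KafkaException e) {
            // Non-retriable failure (retriable ones are retried inside commitSync).
            return false;
        }
    }
}
```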

Re: New Consumer API + Reactive Kafka

2015-12-02 Thread Jay Kreps
It's worth noting that both the old and new consumer are identical in the number of records fetched at once, and this is bounded by the fetch size and the number of partitions you subscribe to. The old consumer held these in memory internally and waited for you to ask for them; the new consumer imme
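A sketch of the setting that controls that per-partition bound in the new consumer, assuming the 0.9 property names; broker address and group id are placeholders:

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BoundedFetchConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "reactive-group");          // placeholder group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // Upper bound on the data fetched from one partition per request; the
        // worst case a single poll() hands back is roughly this value multiplied
        // by the number of partitions the consumer is assigned.
        props.put("max.partition.fetch.bytes", "1048576"); // 1 MB, the default
        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        // ... subscribe and poll as usual, then:
        consumer.close();
    }
}
```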

Re: New Consumer API + Reactive Kafka

2015-12-02 Thread Gwen Shapira
On Wed, Dec 2, 2015 at 10:44 PM, Krzysztof Ciesielski < krzysztof.ciesiel...@softwaremill.pl> wrote: > Hello, > > I’m the main maintainer of Reactive Kafka - a wrapper library that > provides Kafka API as Reactive Streams ( > https://github.com/softwaremill/reactive-kafka). > I’m a bit concerned a

New Consumer API + Reactive Kafka

2015-12-02 Thread Krzysztof Ciesielski
Hello, I’m the main maintainer of Reactive Kafka - a wrapper library that provides Kafka API as Reactive Streams (https://github.com/softwaremill/reactive-kafka). I’m a bit concerned about switching to Kafka 0.9 because of the new Consumer API which doesn’t seem to fit well into this paradigm, c

Re: flush() vs close()

2015-12-02 Thread Muqtafi Akhmad
Hello Ewen, so the flush() operation is already included in close()? How about the implementation before version 0.9.0? Thank you, On Wed, Dec 2, 2015 at 2:43 PM, Ewen Cheslack-Postava wrote: > Kashif, > > The difference is that close() will also shut down the producer such that > it can no longe
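A short sketch contrasting the two calls with the 0.9 Java producer; the broker address, topic, keys, and values are placeholders. (flush() was only added to the Java producer in 0.9.0, which is likely why the question about earlier versions comes up.)

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FlushVsClose {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("my-topic", "k1", "v1")); // placeholder topic

        // flush() blocks until everything buffered so far has been sent, but the
        // producer stays usable afterwards.
        producer.flush();
        producer.send(new ProducerRecord<>("my-topic", "k2", "v2"));

        // close() also waits for buffered records to be sent, then releases the
        // producer's threads and sockets; no further sends are possible.
        producer.close();
    }
}
```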

Re: New consumer not fetching as quickly as possible

2015-12-02 Thread tao xiao
It turned out it was due to network latency between consumer and broker. Once I moved the consumer to the same box as the broker, messages were returned in every poll. Thanks for all the help. On Wed, 2 Dec 2015 at 15:38 Gerard Klijs wrote: > Another possible reason which comes to my mind is that you