We have a couple open tickets to address these issues (see KAFKA-1894 and
KAFKA-2168). It's definitely something we want to fix.
On Wed, Jun 17, 2015 at 4:21 AM, Jan Stette wrote:
> Adding some more details to the previous question:
>
> The indefinite wait doesn't happen on calling subscribe() o
Hi Srividhya,
I'm a little confused about your setup. You have both clusters pointed to
the same zookeeper, right? You don't appear to be using the zookeeper
chroot option, so I think they would just form a single cluster.
-Jason
On Mon, Jun 22, 2015 at 3:50 PM, Srividhya Anantharamakrishnan <
s
zookeeper running.
>
> Datacenter B has the same set up.
>
> Now, I am trying to publish message from one of the nodes in A to the ZK in
> A and make one of the nodes in B consume the message by connecting to A's
> ZK.
>
>
>
> On Mon, Jun 22, 2015 at 4:25 PM, Jaso
Hey Mohit,
Unfortunately, I don't think there's any such configuration.
By the way, there are some pretty cool things you can do with keys in Kafka
(such as semantic partitioning and log compaction). I don't know if they
would help in your use case, but it might be worth checking out
http://kafka
Hi Shushant,
Write throughput on zookeeper can be a problem depending on your commit
policy. Typically you can handle this by just committing less frequently
(with the obvious tradeoff). The consumer also supports storing offsets in
Kafka itself through the "offsets.storage" option (see
http://ka
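For reference, a sketch of what that might look like in the old high-level
consumer's configuration (the connection string, group name, and interval
are made-up values):

import java.util.Properties;

Properties props = new Properties();
props.put("zookeeper.connect", "localhost:2181"); // made-up address
props.put("group.id", "my-group");                // made-up group
// Store offsets in Kafka's __consumer_offsets topic instead of zookeeper.
props.put("offsets.storage", "kafka");
// Commit less frequently to reduce write load; the tradeoff is more
// reprocessing after a failure.
props.put("auto.commit.interval.ms", "60000");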
Hey Kashif, to subscribe, send a message to users-subscr...@kafka.apache.org
.
-Jason
On Tue, Jun 30, 2015 at 1:16 AM, Kashif Hussain wrote:
> Hi,
> I want to subscribe Kafka users mailing list.
>
> Regards,
> Kashif
>
Hey Rajasekar,
Are you updating zookeeper itself or just the image? Either way, it's
probably best to preserve the data if possible. Usually people update
zookeeper using a rolling reboot to make sure no data is lost. You just
have to make sure the rebooted host has enough time to rejoin
There is also kafkacat (https://github.com/edenhill/kafkacat), which
exposes a few more knobs than the console producer.
-Jason
On Sat, Jul 11, 2015 at 6:40 PM, tsoli...@gmail.com
wrote:
> Thank you, Shayne.
>
> On Sat, Jul 11, 2015 at 6:35 PM, Shayne S wrote:
>
> > The console producer will r
Hey Stefan,
I only see a commit in the failure case. Were you planning to use
auto-commits otherwise? You'd probably want to handle all commits directly
or you'd always be left guessing. But even if you did, I think the main
problem is that your process could fail before a needed commit is sent to
Hey Stevo,
The new consumer doesn't have any threads of its own, so I think
construction should be fairly cheap.
-Jason
On Sun, Jul 19, 2015 at 2:13 PM, Stevo Slavić wrote:
> Hello Guozhang,
>
> It would be enough if consumer group could, besides at construction time,
> be set once only after
Hey Stevo,
I think ConsumerRecords only contains the partitions which had messages.
Would you mind creating a jira for the feature request? You're welcome to
submit a patch as well.
-Jason
On Tue, Jul 21, 2015 at 2:27 AM, Stevo Slavić wrote:
> Hello Apache Kafka community,
>
> New HLC poll ret
Hey Stevo,
That's a good point. I think the javadoc is pretty clear that this could
return no partitions when the consumer has no active assignment, but it may
be a little unintuitive to have to call poll() after subscribing before you
can get the assigned partitions. I can't think of a strong rea
Hey Stevo,
Thanks for the early testing on the new consumer! This might be a bug. I
wonder if it could also be explained by partition rebalancing. In the
current implementation, a rebalance will clear the old positions (including
those that were seeked to). I think it's debatable whether this beha
Hey Stevo,
I agree that it's a little unintuitive that what you are committing is the
next offset that should be read from and not the one that has already been
read. We're probably constrained in that we already have a consumer which
implements this behavior. Would it help if we added a method on
Hi Bhavesh,
I'm not totally sure I understand the expected behavior, but I think this
can work. Instead of seeking to the start of the range before the poll
loop, you should probably provide a ConsumerRebalanceCallback to get
notifications when group assignment has changed (e.g. when one of your
n
Hey Valibhav,
With only one partition, all of the consumers will end up hitting a single
broker (since partitions cannot be split). Whether it is possible to get
that number of consumers on a single broker may depend on the message load
through the topic. I think there has been some interest in al
Hey Simon,
The new consumer has the ability to forego group management and assign
partitions directly. Once assigned, you can seek to any offset you want.
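For example, a rough sketch (the topic, partition, offset, and server
address are all made up):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // made-up address
props.put("key.deserializer",
    "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer",
    "org.apache.kafka.common.serialization.StringDeserializer");
// No group is used here, so don't try to auto-commit to one.
props.put("enable.auto.commit", "false");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
TopicPartition partition = new TopicPartition("my-topic", 0);
consumer.assign(Collections.singletonList(partition)); // no group management
consumer.seek(partition, 42L); // start fetching from offset 42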
-Jason
On Tue, Aug 4, 2015 at 5:08 AM, Simon Cooper <
simon.coo...@featurespace.co.uk> wrote:
> Reading on the consumer docs, there's no men
I couldn't find a jira for this, so I added KAFKA-2403.
-Jason
On Tue, Aug 4, 2015 at 9:36 AM, Jay Kreps wrote:
> Hey James,
>
> You are right the intended use of that was to have a way to capture some
> very small metadata about your state at the time of offset commit in an
> atomic way.
>
> T
end offset for each
> partition.
>
>
>
> Please do let us know your preference to support above simple use-case.
>
>
> Thanks,
>
>
> Bhavesh
>
> On Thu, Jul 30, 2015 at 1:23 PM, Jason Gustafson
> wrote:
>
> > Hi Bhavesh,
> >
> > I'm no
Hey Neville, I tried just now and the artifact seems accessible. Perhaps
you can post your full pom to the mailing list that Grant linked to above
and we can investigate a bit more?
-Jason
On Wed, Aug 5, 2015 at 3:36 PM, Grant Henke wrote:
> It looks like your usage lines up with the docs:
>
>
It looks like you might have bootstrap servers pointed to zookeeper. It
should point to the brokers instead since the new consumer doesn't use
zookeeper.
As for the hanging, it is a known bug that we're still working on.
-Jason
On Tue, Aug 18, 2015 at 3:02 AM, Krogh-Moe, Espen
wrote:
> Hi,
>
>
Hey Shashank,
If you'd like to get started with the new consumer, I urge you to checkout
trunk and take it for a spin. The API is still a little unstable, but I
doubt that changes from here on will be too dramatic. If you have any
questions or run into any issues, this mailing list is a great plac
Hey Hema,
I'm not too familiar with ZkClient, but I took a look at the code and it
seems like there may still be a race condition around reconnects which
could cause the NPE you're seeing. I left a comment on the github issue on
the slim chance I'm not wrong:
https://github.com/sgroschupf/zkclient
temporary workaround for this until its fixed? For now, we
> just restart the app server having this issue, but we keep seeing this
> issue time and again.
>
>
> -----Original Message-----
> From: Jason Gustafson [mailto:ja...@confluent.io]
> Sent: Thursday, September 2
I added KAFKA-2691 as well, which improves client handling of authorization
errors.
-Jason
On Mon, Nov 2, 2015 at 10:25 AM, Becket Qin wrote:
> Hi Jun,
>
> I added KAFKA-2722 as a blocker for 0.9. It fixes the ISR propagation
> scalability issue we saw.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
Hey Zhuo,
I suspect the authorization errors are occurring when the producer tries to
fetch topic metadata. Since authorization wasn't supported in 0.8.2, it
probably ignores the errors silently and retries. I think this has been
fixed in the 0.9.0 branch if you want to give it a try.
Thanks,
Jas
Hey Luke,
I agree the null check seems questionable. I went ahead and created
https://issues.apache.org/jira/browse/KAFKA-2805. At least we should have a
comment clarifying why the check is correct.
-Jason
On Tue, Nov 10, 2015 at 2:15 PM, Luke Steensen <
luke.steen...@braintreepayments.com> wrot
Hi Siyuan,
Your understanding about assign/subscribe is correct. We think of topic
subscription as enabling automatic assignment as opposed to doing manual
assignment through assign(). We don't currently allow them to be mixed.
Can you elaborate on your findings with respect to using one thread per
bro
reads(consuming from 2
> different brokers concurrently). That seems a more optimal solution than
> another, right?
>
> On Tue, Nov 17, 2015 at 2:54 PM, Jason Gustafson
> wrote:
>
> > Hi Siyuan,
> >
> > Your understanding about assign/subscribe is correct. We think
Hi Martin,
Thanks for reporting this problem. I think maybe we're just not doing a
very good job of handling auto-commit errors internally and they end up
spilling into user logs. I added a JIRA to address this issue:
https://issues.apache.org/jira/browse/KAFKA-2860.
-Jason
On Wed, Nov 18, 2015
Hi Anatoly,
I spent a little time this afternoon updating the request types and error
codes. This wiki is getting a little difficult to manage, especially in
regard to error codes, so I opened KAFKA-2865 to hopefully improve the
situation. Probably we need to pull this documentation into the proje
Hey Siyuan,
The commit API should work the same regardless whether subscribe() or
assign() was used. Does this not appear to be working?
Thanks,
Jason
On Wed, Nov 18, 2015 at 4:40 PM, hsy...@gmail.com wrote:
> In the new API, the explicit commit offset method call only works for
> subscribe co
group.
-Jason
On Fri, Nov 20, 2015 at 3:41 PM, Jason Gustafson wrote:
> Hey Siyuan,
>
> The commit API should work the same regardless whether subscribe() or
> assign() was used. Does this not appear to be working?
>
> Thanks,
> Jason
>
> On Wed, Nov 18, 2015 a
The consumer metadata request was renamed to group coordinator request
since the coordinator plays a larger role in 0.9 for managing groups, but
its protocol format is exactly the same on the wire.
As Gwen suggested, I would recommend trying the new consumer API which
saves the trouble of accessin
Can you provide some more detail? What version of Kafka are you using?
Which consumer are you using? Are you getting errors in the consumer logs?
It would probably be helpful to see your consumer configuration as well.
-Jason
On Tue, Nov 24, 2015 at 7:18 AM, Kudumula, Surender <
surender.kudum...
Hey Martin,
At a glance, it looks like your consumer's session timeout is expiring.
This shouldn't happen unless there is a delay between successive calls to
poll which is longer than the session timeout. It might help if you include
a snippet of your poll loop and your configuration (i.e. any ove
opics.
>
> That is all fine, but it doesn't really explain why increasing poll timeout
> made the problem go away :-/
>
> Martin
>
> On 30 November 2015 at 19:30, Jason Gustafson wrote:
>
> > Hey Martin,
> >
> > At a glance, it looks like your consume
at 10:06 AM, Jason Gustafson wrote:
> Hi Martin,
>
> I'm also not sure why the poll timeout would affect this. Perhaps the
> handler is still doing work (e.g. sending requests) when the record set is
> empty?
>
> As a general rule, I would recommend longer poll timeouts
Hey Tao, other than high latency between the brokers and the consumer, I'm
not sure what would cause this. Can you turn on debug logging and run
again? I'm looking for any connection problems or metadata/fetch request
errors. And I have to ask a dumb question, how do you know that more
messages are
I commit
> offset manually so the lag should accurately reflect how many messages
> remaining.
>
> I will turn on debug logging and test again.
>
> On Wed, 2 Dec 2015 at 07:17 Jason Gustafson wrote:
>
> > Hey Tao, other than high latency between the brokers and the consumer,
The major changes in 0.9 are for the new consumer. At the moment, the
design is spread across a couple documents:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Consumer+Rewrite+Design
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal
I'm trying
Looks like you need to use a different MessageFormatter class, since it was
renamed in 0.9. Instead use something like
"kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter".
-Jason
On Wed, Dec 2, 2015 at 10:57 AM, Dhyan Muralidharan <
d.muralidha...@yottaa.com> wrote:
> I have this s
topic __consumer_offsets --from-beginning
You may also want to confirm that your consumers are using Kafka instead of
Zookeeper for offset storage. If you still don't see anything, we can
always look into the partition data directly...
-Jason
On Wed, Dec 2, 2015 at 11:13 AM, Jason Gustafson wrote
Hi Li,
I think reducing the client's complexity and improving performance were two
of the main reasons for the change. The rebalance protocol on top of
Zookeeper was difficult to implement correctly, and I think a number of
Kafka clients never actually got it working. Removing it as a dependence
a
Hi Kevin,
At the moment, the timeout parameter in poll() really only applies when the
consumer has an active partition assignment. In particular, it will block
indefinitely to get that assignment. If there are no brokers to connect to
or if you accidentally point it to an 0.8 cluster, it will prob
On Thu, Dec 10, 2015, 2:18 PM Jason Gustafson wrote:
>
> > Hi Kevin,
> >
> > At the moment, the timeout parameter in poll() really only applies when
> the
> > consumer has an active partition assignment. In particular, it will block
> > indefinitely to get tha
On Thu, Dec 10, 2015, 2:37 PM Jason Gustafson wrote:
>
> > And just to be clear, the broker is on 0.9? Perhaps you can enable debug
> > logging and send a snippet?
> >
> > -Jason
> >
> > On Thu, Dec 10, 2015 at 12:22 PM, Kevin Carr wrote:
> &
Hey Brian,
I think we've made these methods public again in trunk, but that won't help
you with 0.9. Another option would be to write a parser yourself since the
format is fairly straightforward. This would let you remove a dependence on
Kafka internals which probably doesn't have strong compatibi
At the moment, there is no direct way to do this, but you could use the
commit API to include metadata with each committed offset:
public void commitSync(final Map<TopicPartition, OffsetAndMetadata> offsets);
public OffsetAndMetadata committed(TopicPartition partition);
The OffsetAndMetadata object contains a metadata string field
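For example, a sketch assuming an existing KafkaConsumer named consumer
(the partition, offset, and metadata values are invented):

import java.util.Collections;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Attach a small piece of application state to the committed offset.
TopicPartition partition = new TopicPartition("my-topic", 0);
consumer.commitSync(Collections.singletonMap(partition,
    new OffsetAndMetadata(1000L, "app-checkpoint-42")));

// Read it back later, possibly from a different consumer instance.
OffsetAndMetadata committed = consumer.committed(partition);
String metadata = committed.metadata();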
Hey Jens,
I'm not sure I understand why increasing the session timeout is not an
option. Is the issue that there's too much uncertainty about processing
time to set an upper bound for each round of the poll loop?
One general workaround would be to move the processing into another thread.
For exam
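A rough sketch of that pattern, assuming an existing consumer and an
application-defined process() method:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

// Keep the poll loop fast by handing records to a worker thread; the
// session timeout then only has to cover the hand-off, not the processing
// itself. Note that offset commits need care with this pattern, since
// records may still be in flight when poll() returns.
ExecutorService worker = Executors.newSingleThreadExecutor();
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        worker.submit(() -> process(record));
    }
}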
es inline:
>
> On Tuesday, December 15, 2015, Jason Gustafson wrote:
>
> > Hey Jens,
> >
> > I'm not sure I understand why increasing the session timeout is not an
> > option. Is the issue that there's too much uncertainty about processing
> > time to s
Hey Rajiv,
I agree the Set/List inconsistency is a little unfortunate (another
annoying one is pause() which uses a vararg). I think we should probably
add the following variants:
assign(Collection<TopicPartition>)
subscribe(Collection<String>)
pause(Collection<TopicPartition>)
I can open a JIRA to fix this. As for returning the unmod
s = new ArrayList<>(yetAnotherCopy);
> consumer.assign(wayTooManyCopies);
>
> Thanks,
> Rajiv
>
>
> On Tue, Dec 15, 2015 at 2:35 PM, Jason Gustafson
> wrote:
>
> > Hey Rajiv,
> >
> > I agree the Set/List inconsistency is a little unfortunate (anot
able to set the session timeout according to
the expected time to handle a single message. It'd be a bit more work to
implement this, but if the use case is common enough, it might be
worthwhile.
-Jason
On Tue, Dec 15, 2015 at 10:31 AM, Jason Gustafson
wrote:
> Hey Jens,
>
> The
many queueing solutions really seem like
> the absolute best solution to our problem as long we can overcome this
> issue.
>
> Thanks,
> Jens
>
> On Tuesday, December 15, 2015, Jason Gustafson wrote:
>
> > I was talking with Jay this afternoon about this use case. T
Hey Pradeep,
Can you include the output from one of the ConsumerDemo runs?
-Jason
On Mon, Dec 21, 2015 at 9:47 PM, pradeep kumar
wrote:
> Can someone please help me on this.
>
> http://stackoverflow.com/questions/34405124/kafka-0-9-0-new-java-consumer-api-fetching-duplicate-records
>
> Thanks,
I took your demo code and ran it locally. So far I haven't seen any
duplicates. In addition to the output showing duplicates, it might be
helpful to include your producer code.
Thanks,
Jason
On Tue, Dec 22, 2015 at 11:02 AM, Jason Gustafson
wrote:
> Hey Pradeep,
>
> Can you incl
Hey Tao,
Interesting that you're seeing a lot of overhead constructing the new
consumer instance each time. Granted it does have to fetch topic metadata
and lookup the coordinator, but I wouldn't have expected that to be a big
problem. How long is it typically taking?
-Jason
On Mon, Jan 4, 2016
for the particular partitions and close the consumer. Is this solution
> viable?
>
> On Tue, 5 Jan 2016 at 09:56 Jason Gustafson wrote:
>
> > Hey Tao,
> >
> > Interesting that you're seeing a lot of overhead constructing the new
> > consumer instance each tim
> The reason we put the reset offset outside of the consumer process is that
> we can keep the consumer code as generic as possible since the offset reset
> process is not needed for all consumer logics.
>
> On Tue, 5 Jan 2016 at 11:18 Jason Gustafson wrote:
>
> > Ah, that m
Hi Ben,
The new consumer is single-threaded, so each instance should be given a
dedicated thread. Using multiple consumers in the same thread won't really
work as expected because poll() blocks while the group is rebalancing. If
both consumers aren't actively calling poll(), then they won't be both b
rienced this yesterday and was wondering why Kafka allows commits to
> partitions from other consumers than the assigned one. Does any one know of
> the reasoning behind this?
>
> Martin
> On 5 Jan 2016 18:29, "Jason Gustafson" wrote:
>
> > Yes, in this case you s
Hi Rajiv,
Answers below:
i) How do I get the last log offset from the Kafka consumer?
To get the last offset, first call seekToEnd() and then use position().
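For example (the topic and partition are made up, and consumer is an
existing instance):

import org.apache.kafka.common.TopicPartition;

TopicPartition tp = new TopicPartition("my-topic", 0);
consumer.seekToEnd(tp);                 // lazily sets position to the log end
long endOffset = consumer.position(tp); // forces the lookup and returns it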
ii) If I ask the consumer to seek to the beginning via the consumer
> .seekToBeginning(newTopicPartition) call, will it handle the case
eout parameter. I only
> use manual assignments so I am hoping that there is no consequence of
> infrequent heart beats etc through poll starvation.
>
> Thanks,
> Rajiv
>
>
>
> On Wed, Jan 6, 2016 at 1:58 PM, Jason Gustafson
> wrote:
>
> > Hi Rajiv,
Hi Howard,
The offsets are persisted in the __consumer_offsets topic indefinitely.
Since you're using manual commit, have you ensured that auto.offset.reset
is disabled? It might also help if you provide a little more detail on how
you're verifying that offsets were lost.
-Jason
On Mon, Jan 11,
Sorry, wrong property, I meant enable.auto.commit.
-Jason
On Mon, Jan 11, 2016 at 9:52 AM, Jason Gustafson wrote:
> Hi Howard,
>
> The offsets are persisted in the __consumer_offsets topic indefinitely.
> Since you're using manual commit, have you ensured that auto.offset.re
.common.serialization.StringDeserializer");
> consumer = new KafkaConsumer<>(props);
>
>
>
>
> Thanks.
>
> Howard
>
> On 1/11/16, 12:55 PM, "Jason Gustafson" wrote:
>
> >Sorry, wrong property, I meant enable.auto.commit.
> >
>
Looks like you might have bootstrap.servers pointed at Zookeeper. It should
point to the Kafka brokers instead. The behavior of poll() currently is to
block until the group's coordinator is found, but sending the wrong kind of
request to Zookeeper probably results in a server-side disconnect. In th
Hey Richard,
Yeah, I think you're right. I think this is the same issue from KAFKA-2478,
which appears to have been forgotten about. I'll see if we can get the
patch merged.
-Jason
On Mon, Jan 11, 2016 at 4:27 PM, Richard Lee wrote:
> Apologies if this has been discussed already...
>
> The ‘Ma
Hi Franco,
The new consumer combines the functionality of the older simple and
high-level consumers. When used in simple mode, you have to assign the
partitions that you want to read from using assign(). In this case, the
consumer works alone and not in a group. Alternatively, if you use the
subsc
; does it renew the token?
> (2) What happens to the coordinator if all consumers die?
>
> Franco.
>
>
>
>
> 2016-01-15 19:30 GMT+01:00 Jason Gustafson :
>
> > Hi Franco,
> >
> > The new consumer combines the functionality of the older simple an
Hi Krzysztof,
This is definitely weird. I see the data in the broker's send queue, but
there's a delay of 5 seconds before it's sent to the client. Can you create
a JIRA?
Thanks,
Jason
On Thu, Jan 21, 2016 at 11:30 AM, Samya Maiti
wrote:
> +1, facing same issue.
> -Sam
> > On 22-Jan-2016, at
The offset API is one of the known gaps in the new consumer. There is a
JIRA (KAFKA-1332), but we might need a KIP to make that change now that the
API is released. For now, Gwen's suggestion is the only way to do it.
-Jason
On Thu, Jan 21, 2016 at 8:22 PM, Gwen Shapira wrote:
> Hi Robert!
>
>
finitely picking up messages with some delay.
>
> -Sam
>
>
> > On 22-Jan-2016, at 11:54 am, Jason Gustafson wrote:
> >
> > Hi Krzysztof,
> >
> > This is definitely weird. I see the data in the broker's send queue, but
> > there's a delay of
Apologies for the late arrival to this thread. There was a bug in the
0.9.0.0 release of Kafka which could cause the consumer to stop fetching
from a partition after a rebalance. If you're seeing this, please checkout
the 0.9.0 branch of Kafka and see if you can reproduce this problem. If you
can,
Hey Rajiv, the bug was on the client. Here's a link to the JIRA:
https://issues.apache.org/jira/browse/KAFKA-2978.
-Jason
On Mon, Jan 25, 2016 at 11:42 AM, Rajiv Kurian wrote:
> Hi Jason,
>
> Was this a server bug or a client bug?
>
> Thanks,
> Rajiv
>
> On Mon, Ja
It might be a little unintuitive, but the committed position should be the
offset of the next message to consume.
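In code, that means committing record.offset() + 1 after processing (a
sketch; consumer and record are assumed to exist):

import java.util.Collections;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Commit the position of the *next* message to consume, not the offset
// of the record that was just read.
consumer.commitSync(Collections.singletonMap(
    new TopicPartition(record.topic(), record.partition()),
    new OffsetAndMetadata(record.offset() + 1)));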
-Jason
On Mon, Jan 25, 2016 at 1:26 PM, Franco Giacosa wrote:
> When doing poll() when there is no current position on the consumer, the
> position returned is the one of the last o
Hey Krzysztof,
So far I haven't had any luck figuring out the cause of the 5 second pause,
but I've reproduced it with the old consumer on 0.8.2, so that rules out
anything specific to the new consumer. Can you tell me which os/jvm you're
seeing it with? Also, can you try changing the "receive.buf
Hey Rajiv,
Thanks for the detailed report. Can you go ahead and create a JIRA? I do
see the exceptions locally, but not nearly at the rate that you're
reporting. That might be a factor of the number of partitions, so I'll do
some investigation.
-Jason
On Wed, Jan 27, 2016 at 8:40 AM, Rajiv Kuria
Krzysztof
> On 26 January 2016 at 19:04:58, Jason Gustafson (ja...@confluent.io)
> wrote:
>
> Hey Krzysztof,
>
> So far I haven't had any luck figuring out the cause of the 5 second pause,
> but I've reproduced it with the old consumer on 0.8.2, so that rules out
>
Hi Pierre,
Thanks for your persistence on this issue. I've gone back and forth on this
a few times. The current API can definitely be annoying in some cases, but
breaking compatibility still sucks. We do have the @Unstable annotation on
the API, but it's unclear what exactly it means and I'm guess
Hey Tom,
Yes, it is possible that the poll() will rebalance and resume fetching for
a previously paused partition. First thought is to use a
ConsumerRebalanceListener to re-pause the partitions after the rebalance
completes. The rebalance listener offers two hooks: onPartitionsRevoked() is
called b
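A sketch of that approach, assuming existing consumer and topics variables
(shouldStayPaused() stands in for application logic):

import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

consumer.subscribe(topics, new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // commit or checkpoint here if needed
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // The rebalance resumed fetching everywhere, so re-pause whatever
        // should stay paused.
        for (TopicPartition tp : partitions) {
            if (shouldStayPaused(tp))
                consumer.pause(tp); // 0.9 pause() takes varargs
        }
    }
});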
Hey Yifan,
As far as how the consumer works internally, there's not a big difference
between using a long timeout or a short timeout. Which you choose really
depends on the needs of your application. Typically people use a short
timeout in order to be able to break from the loop with a boolean fla
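For example, one common shutdown pattern (a sketch; the flag and consumer
are assumed to be shared with another thread):

import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.kafka.clients.consumer.ConsumerRecords;

AtomicBoolean running = new AtomicBoolean(true);
while (running.get()) {
    // A short timeout lets the loop notice the flag quickly.
    ConsumerRecords<String, String> records = consumer.poll(100);
    // ... handle records ...
}
consumer.close();

// From another thread: running.set(false), plus consumer.wakeup() to
// interrupt a poll() that is currently blocked.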
That is correct. KIP-19 has the details:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-19+-+Add+a+request+timeout+to+NetworkClient
.
-Jason
On Fri, Jan 29, 2016 at 3:08 AM, tao xiao wrote:
> Hi team,
>
> I want to understanding the meaning of request.timeout.ms that is used in
> produce
Most of the use cases of pause/resume that I've seen work only on single
partitions (e.g in Kafka Streams), so the current varargs method is kind of
nice. It would also be nice to be able to do the following:
consumer.pause(consumer.assignment());
Both variants seem convenient in different situat
Hey Rajiv,
Just to be clear, when you received the empty fetch response, did you check
the error codes? It would help to also include some more information (such
as broker and topic settings). If you can come up with a way to reproduce
it, that will help immensely.
Also, would you mind updating K
changes to the topic configuration while running
> >>> these tests. All the changes I have made are to the settings of my
> fetch
> >>> request i.e. min_bytes_per_fetch, max_wait_ms and
> max_bytes_per_partition.
> >>> I haven't exactly noted all
Hey Alexey,
The API of the new consumer is designed around an event loop in which all
IO is driven by the poll() API. To make this work, you need to call poll()
in a loop (see the javadocs for examples). So in this example, when you
call commitAsync(), the request is basically just queued up to be
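A sketch of how that looks (the error handling is illustrative, and
consumer is assumed):

import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.consumer.OffsetCommitCallback;
import org.apache.kafka.common.TopicPartition;

// commitAsync() only enqueues the request; subsequent poll() calls drive
// the network I/O and eventually invoke the callback.
consumer.commitAsync(new OffsetCommitCallback() {
    @Override
    public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets,
                           Exception exception) {
        if (exception != null) {
            // log it and decide whether to retry or fall back to commitSync()
        }
    }
});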
tion of current status of resume/pause for
> client. Am I wrong? What about having such API?
>
> Also, you told about "one option". Is it another?
>
> Thanks!
>
> On Sat, Feb 6, 2016 at 6:13 AM, Jason Gustafson
> wrote:
>
> > Hey Alexey,
> >
>
Hey Jens,
The heartbeat response is used by the coordinator to tell group members
that the group needs to rebalance. For example, if a new member joins the
consumer group, then the coordinator will wait for the heartbeat from each
member and set a REBALANCE_NEEDED error code in the response. Hence
The new Java consumer in 0.9.0 will not work with 0.8.2 since it depends on
the group management protocol built into Kafka, but the older consumer
should still work.
-Jason
On Thu, Feb 11, 2016 at 2:44 AM, Joe San wrote:
> I have a 0.9.0 version of the Kafka consumer. Would that work against th
We have them in the Confluent docs:
http://docs.confluent.io/2.0.0/kafka/monitoring.html#new-consumer-metrics.
-Jason
On Thu, Feb 11, 2016 at 4:40 AM, Avi Flax wrote:
> On Thursday, December 17, 2015 at 18:08, Guozhang Wang wrote:
> > We should add a section for that. Siyuan can you file a JIRA
Hi Robin,
It would be helpful if you posted the full code you were trying to use. How
to seek largely depends on whether you are using new consumer in "simple"
or "group" mode. In simple mode, when you know the partitions you want to
consume, you should just be able to do something like the follow
Woops. Looks like Alex got there first. Glad you were able to figure it out.
-Jason
On Thu, Feb 18, 2016 at 9:55 AM, Jason Gustafson wrote:
> Hi Robin,
>
> It would be helpful if you posted the full code you were trying to use.
> How to seek largely depends on whether you a
Hi Gary,
The coordinator is a special broker which is chosen for each consumer group
to manage its state. It facilitates group membership, partition assignment
and offset commits. If the coordinator is shutdown, then Kafka will choose
another broker to assume the role. The log message might be a l
The consumer is single-threaded, so we only trigger commits in the call to
poll(). As long as you consume all the records returned from each poll
call, the committed offset will never get ahead of the consumed offset, and
you'll have at-least-once delivery. Note that the implication is that "
auto.c
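A minimal sketch of that loop, assuming enable.auto.commit is false and
that consumer and process() already exist:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(1000);
    for (ConsumerRecord<String, String> record : records) {
        process(record); // application code
    }
    // Only commit after the full batch is processed: a crash can cause
    // re-delivery, but never a skipped record (at-least-once).
    consumer.commitSync();
}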
Hi Venkatesan,
Autocreation of topics happens when the broker receives a topic metadata
request. That should mean that both topics get created when the consumer
does the initial poll() since that is the first time that topic metadata
would be fetched (fetching topic metadata allows the consumer an
Tough to answer. Definitely the rate of reported bugs has fallen. Other
than the one Becket found a few weeks back, I haven't seen anything major
since the start of the year. My advice would probably be "proceed with
caution."
-Jason
On Fri, Feb 19, 2016 at 1:06 PM, allen chan
wrote:
> My compa
To clarify, the bug I mentioned has been fixed in 0.9.0.1.
-Jason
On Fri, Feb 19, 2016 at 1:33 PM, Ismael Juma wrote:
> Even though we did not remove the beta label, all significant bugs we are
> aware of have been fixed (thanks Jason!). I'd say you should try it out. :)
>
> Ismael
>
> On Fri,