The old implementation takes up to 2.5 seconds.
The new implementation takes 10 seconds (10 polls, with each poll returning 500
records).
Does authentication happen internally every time? Is this delay expected? How
can I explain this to management, please?
On Thu, Jan 16, 2025 at 10:42 PM giri mu wrote:
To: Namboodiri, Vishnu; Mudlapur, Rajesh; Reddy, Thanmai
Subject: RE: Kafka consumer group crashing and not able to consume once service
is up
Hi Philip,
Are you expecting logs from the subscriber in our service? Please do let me know.
Thanks,
Santhosh Aditya
From: Marigowda, Santhosh Aditya
To: ...; Reddy, Thanmai
Subject: RE: Kafka consumer group crashing and not able to consume once service
is up
Hi Philip, please find the service logs (subscriber logs) attached. We don't see
any issues with the consumer in the logs.
Kafka consumer configuration
kafka-consumer = {
server= "10.221.10
> ...@in.unisys.com>; Mudlapur, Rajesh <rajesh.mudla...@au.unisys.com>;
> Reddy, Thanmai; Marigowda, Santhosh Aditya
> Subject: RE: Kafka consumer group crashing and not able to consume once
> service is up
>
> Hi Philip,
>
> Thanks for your queries. Please
; Namboodiri, Vishnu <vishnu.nambood...@in.unisys.com>; Mudlapur, Rajesh
> <rajesh.mudla...@au.unisys.com>; Reddy, Thanmai
> Subject: RE: Kafka consumer group crashing and not able to consume once
> service is up
>
> Thanks Matthias for your response. Sorry
From: Matthias J. Sax <mj...@apache.org>
Sent: Wednesday, January 31, 2024 12:13 AM
To: dev@kafka.apache.org
Subject: Re: Kafka consumer group crashing and not able to consume once service
is up
I am not sure I can follow completely.
From the figures you show, you have a topic with 4 partitions, and 4
consumer groups. Thus, each consumer group should read all 4 partitions,
but the figures indicate that each group would read a single partition only?
Can you clarify? Are you using
Thank you, Sophie. It helps a lot.
On Tue, May 11, 2021 at 11:58 PM Sophie Blee-Goldman
wrote:
Hey Vipul,
There's no "rebalance.protocol" config, as the protocol is determined by
the supported partition assignors. The assignors can be configured with the
config "partition.assignment.strategy". I suspect the "rebalance.protocol"
comment was a leftover from an older version of the proposal.
I
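(A minimal sketch of the config Sophie describes, assuming the standard Java
client; the cooperative assignor is just one illustrative choice.)

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;

public class AssignorConfigSketch {
    public static Properties assignorProps() {
        Properties props = new Properties();
        // The rebalance protocol follows from the configured assignors;
        // there is no separate "rebalance.protocol" setting.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                  CooperativeStickyAssignor.class.getName());
        return props;
    }
}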
Hi,
Maybe it would be helpful if you could attach the logs for your consumers
from when you notice they have stopped consuming from some of your partitions.
On Tue, 23 Jul 2019 at 16:56, Sergey Fedorov wrote:
> Hello. I was using Kafka 2.1.1 and facing a problem where our consumers
> sometimes intermittently
You should read the message value as a byte array rather than a string.
Another approach: while producing, you can use Kafka compression (GZIP) to
achieve similar results.
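(A rough sketch of both suggestions against the standard Java clients; the
broker address and topic name are made up for illustration.)

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class GzipByteArraySketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("compression.type", "gzip");            // batches are sent compressed
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", "payload".getBytes()));
        }
        // On the consumer side, pair this with ByteArrayDeserializer so the
        // value is read as a byte array rather than a String.
    }
}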
-Original Message-
From: mayur shah
Sent: Monday, May 21, 2018 1:50 AM
To: us...@kafka.apache.org; dev@kafka.apache
Hi,
Any help here would be highly appreciated :)
Sagar.
On Wed, 27 Dec 2017 at 07:53, Sagar wrote:
> We have a use case wherein we want to assign partitions manually for a
> set of topics, to allow fine-grained control of the records we are fetching.
>
> Basically what we are trying to achieve
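(A sketch of manual assignment with the current Java consumer, for anyone
reading along; partition numbers and the topic name are illustrative, and the
poll(Duration) signature is from newer client versions.)

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ManualAssignSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("enable.auto.commit", "false");         // no group, so commit manually
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // assign() takes specific partitions directly: no group management,
            // no rebalancing, full control over what is fetched.
            consumer.assign(Arrays.asList(
                new TopicPartition("example-topic", 0),
                new TopicPartition("example-topic", 1)));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            System.out.println("fetched " + records.count() + " records");
        }
    }
}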
I don't see values for the Consumer Properties.
Can you try out 0.11.0.1?
See
http://search-hadoop.com/m/Kafka/uyzND1qxYr5prjxv?subj=Incorrect+consumer+offsets+after+broker+restart+0+11+0+0
On Wed, Oct 11, 2017 at 11:37 AM, SenthilKumar K
wrote:
> Hi All, recently we started seeing Kafka Co
Can anyone please check this one?
Thanks
Achintya
-Original Message-
From: Ghosh, Achintya (Contractor)
Sent: Monday, August 08, 2016 9:44 AM
To: us...@kafka.apache.org
Cc: dev@kafka.apache.org
Subject: RE: Kafka consumer getting duplicate message
Thank you, Ewen, for your response.
-Original Message-
From: Ewen Cheslack-Postava [mailto:e...@confluent.io]
Sent: Saturday, August 06, 2016 1:45 AM
To: us...@kafka.apache.org
Cc: dev@kafka.apache.org
Subject: Re: Kafka consumer getting duplicate message
Achintya,
1.0.0.M2 is not an official release, so this version number is not
particularly meaningful to people on this list. What platform/distribution
are you using and how does this map to actual Apache Kafka releases?
In general, it is not possible for any system to guarantee exactly-once
semantics
Hi Sunny,
It would be helpful to know the processing logic after the records are
returned by poll(). How do you make sure that the polling happens
within the session timeout? Did you try reducing the number of messages
returned by setting max.poll.records to a smaller value?
Thanks,
Liquan
On
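(For reference, a sketch of the knob Liquan mentions; the values are purely
illustrative, with session.timeout.ms shown alongside since the two interact.)

import java.util.Properties;

public class PollTuningSketch {
    public static Properties pollingProps() {
        Properties props = new Properties();
        // Fewer records per poll() means less processing time between polls,
        // making it easier to poll again before the session timeout expires.
        props.put("max.poll.records", "100");
        props.put("session.timeout.ms", "30000");
        return props;
    }
}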
Thanks for the added info. In the meantime we'll rely on the older
ConsumerOffsetChecker for our monitoring tools.
Thanks,
Cliff
On Fri, Jan 29, 2016 at 10:56 AM, Guozhang Wang wrote:
Tao,
You are right, ConsumerOffsetChecker can still get offsets from the offset
manager in Kafka.
Guozhang
On Thu, Jan 28, 2016 at 9:36 PM, tao xiao wrote:
It first issues an offset fetch request to the broker and checks whether the
offset is stored in the broker; if not, it queries ZK.
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala#L171
On Fri, 29 Jan 2016 at 13:11 Guozhang Wang wrote:
Tao,
Hmm, that is a bit weird, since ConsumerOffsetChecker itself does not talk to
the brokers at all, but only goes through ZK.
Guozhang
On Thu, Jan 28, 2016 at 6:07 PM, tao xiao wrote:
Guozhang,
The old ConsumerOffsetChecker works for the new consumer too, with offsets
stored in Kafka. I tested it with mirror maker with the new consumer enabled;
it is able to show offsets while mirror maker is running and after its shutdown.
On Fri, 29 Jan 2016 at 06:34 Guozhang Wang wrote:
Once the offset is written to the log it is persistent and hence should
survive broker failures. And its retention policy is configurable.
It may have been a bit misleading to say "in-memory cache" in my previous
email: the brokers just keep an in-memory map of [group, partition] ->
latest_offset, which
Hi Guozhang,
That looks like it might help but feels like there might be some gaps.
Would it be able to survive restarts of the kafka broker? How long would
it stay in the cache (and is that configurable)? If it expires from the
cache, what's the cache-miss operation look like? (yes, a lot of t
Hi Cliff,
Short answer to your question is it is just the current implementation.
The coordinator stores the offsets as messages in an internal topic and
also keeps the latest offset values in memory. It answers
ConsumerGroupRequests using its cached offsets, and upon the consumer group
being re
Just following up on this concern. Is there a constraint that prevents
ConsumerGroupCommand from reporting offsets on a group if no members are
connected, or is this just the current implementation?
Thanks,
Cliff
On Mon, Jan 25, 2016 at 3:50 PM, Cliff Rhyne wrote:
> I'm running into a few chal
No, the purpose of pause/resume was to be able to implement prioritization
of processing by partition, i.e., if you want to prioritize a given subset
of partitions without rebalancing them to another consumer, you pause the
others and continue reading.
-Jay
On Fri, Jul 31, 2015 at 4:55 PM, Jun Rao
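(A sketch of the prioritization pattern Jay describes, using the pause/resume
calls on the current Java consumer; the topic and partition choices are
illustrative.)

import java.time.Duration;
import java.util.Arrays;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PauseResumeSketch {
    static void drainUrgentFirst(KafkaConsumer<String, String> consumer) {
        TopicPartition urgent = new TopicPartition("example-topic", 0);
        TopicPartition other  = new TopicPartition("example-topic", 1);
        // Pausing stops fetching from a partition while keeping it assigned
        // to this consumer, so no rebalance is triggered.
        consumer.pause(Arrays.asList(other));
        consumer.poll(Duration.ofMillis(100)); // returns records from "urgent" only
        // Resume the low-priority partition once the backlog is drained.
        consumer.resume(Arrays.asList(other));
    }
}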
Hi Jun,
This is still debatable, but I think it makes the most sense to keep
pause/resume independent of assignment. Otherwise we still get into the
weird ordering problems that we were trying to resolve before. To me,
pause/resume expresses clearly the intent to suppress consumption from a
set of
Jason,
I guess that with the new setAssignment() api, we will also be getting rid
of pause() and resume()?
Thanks,
Jun
On Fri, Jul 31, 2015 at 11:29 AM, Jason Gustafson
wrote:
> I was thinking a little bit this morning about the subscription API and I
> have a few ideas on how to address some
For those who are interested in the ticket:
https://issues.apache.org/jira/browse/KAFKA-2388
On Fri, Jul 31, 2015 at 1:14 PM, Jiangjie Qin wrote:
> I like the idea as well. That is much clearer.
> Also agree with Jay on the naming.
>
> Thanks, Jason. I'll update the Jira ticket.
>
> Jiangjie (Be
Great ideas Jason!
On Fri, Jul 31, 2015 at 12:19 PM, Jay Kreps wrote:
> I like all these ideas.
>
> Our convention is to keep method names declarative so it should probably be
> subscribe(List<String> topics, Callback c)
> assign(List
> The javadoc would obviously have to clarify the relationship be
I like the idea as well. That is much clearer.
Also agree with Jay on the naming.
Thanks, Jason. I'll update the Jira ticket.
Jiangjie (Becket) Qin
On Fri, Jul 31, 2015 at 12:19 PM, Jay Kreps wrote:
I like all these ideas.
Our convention is to keep method names declarative so it should probably be
subscribe(List<String> topics, Callback c)
assign(List
wrote:
I was thinking a little bit this morning about the subscription API and I
have a few ideas on how to address some of the concerns about intuitiveness
and exception handling.
1. Split the current notion of topic/partition subscription into
subscription of topics and assignment of partitions. These
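(For context: this split is essentially what shipped in the final consumer API,
as subscribe() for topics and assign() for partitions. A sketch with the names
as they landed:)

import java.util.Arrays;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SubscribeVsAssignSketch {
    static void demo(KafkaConsumer<String, String> consumer) {
        // Topic subscription: group-managed; partitions arrive via rebalance.
        consumer.subscribe(Arrays.asList("example-topic"));
        // ...or manual assignment: the caller picks partitions directly.
        // (The two modes are mutually exclusive, hence the unsubscribe.)
        consumer.unsubscribe();
        consumer.assign(Arrays.asList(new TopicPartition("example-topic", 0)));
    }
}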
Hey Becket,
Yeah the high-level belief here is that it is possible to give something as
high level as the existing "high level" consumer, but this is not likely to
be the end-all be-all of high-level interfaces for processing streams of
messages. For example neither of these interfaces handles the
Thanks for the comments Jason and Jay.
Jason, I had the same concern about the producer's callback before, but
it seems to be fine based on some callbacks I wrote - a user can always pass
an object into the constructor if necessary for synchronization.
Jay, I agree that the current API might be fine fo
Hi Jay,
Good points. A few remarks below.
On Wed, Jul 29, 2015 at 11:16 PM, Jay Kreps wrote:
>
> I suggest we focus on threading and the current event-loop style of api
> design since I think that is really the crux.
>
Agreed.
I think ultimately though what you need to think about is, does an
Some comments on the proposal:
I think we are conflating a number of things that should probably be
addressed individually because they are unrelated. My past experience is
that this always makes progress hard. The more we can pick apart these
items the better:
1. threading model
2. blocking
I think this proposal matches pretty well with what users intuitively
expect the implementation to be. At a glance, I don't see any problems with
doing the liveness detection in the background thread. It also has the
advantage that the frequency of heartbeats (which controls how long
rebalancing t
Works now. Thanks Becket!
On Wed, Jul 29, 2015 at 1:19 PM, Jiangjie Qin wrote:
> Ah... My bad, forgot to change the URL link for pictures.
> Thanks for the quick response, Neha. It should be fixed now, can you try
> again?
>
> Jiangjie (Becket) Qin
>
> On Wed, Jul 29, 2015 at 1:10 PM, Neha Narkh
The images load fine for me.
On Wed, Jul 29, 2015 at 1:10 PM, Neha Narkhede wrote:
> Thanks Becket. Quick comment - there seem to be a bunch of images that the
> wiki refers to, but none loaded for me. Just making sure whether it's just
> me or everyone cannot see the pictures.
>
> On Wed, Jul 29, 20
Ah... My bad, forgot to change the URL link for pictures.
Thanks for the quick response, Neha. It should be fixed now, can you try
again?
Jiangjie (Becket) Qin
On Wed, Jul 29, 2015 at 1:10 PM, Neha Narkhede wrote:
Thanks Becket. Quick comment - there seem to be a bunch of images that the
wiki refers to, but none loaded for me. Just making sure whether it's just me
or everyone cannot see the pictures.
On Wed, Jul 29, 2015 at 12:00 PM, Jiangjie Qin wrote:
> I agree with Ewen that a single threaded model will be
I agree with Ewen that a single-threaded model will make it tricky to
implement the conventional semantics of async calls or Futures. We just
drafted the following wiki, which explains our thoughts at LinkedIn on the new
consumer API and threading model.
https://cwiki.apache.org/confluence/display/KAFKA/Ne
On Tue, Jul 28, 2015 at 5:18 PM, Guozhang Wang wrote:
I think Ewen has proposed these APIs for using callbacks along with
returning future in the commit calls, i.e. something similar to:
public Future<Void> commit(ConsumerCommitCallback callback);
public Future<Void> commit(Map<TopicPartition, Long> offsets,
ConsumerCommitCallback callback);
At that time I was slightly intending no
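(For comparison, the API that eventually shipped is callback-based rather than
Future-based; a sketch of commitAsync as it landed, with illustrative offsets:)

import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommitCallbackSketch {
    static void commit(KafkaConsumer<String, String> consumer) {
        Map<TopicPartition, OffsetAndMetadata> offsets = Collections.singletonMap(
            new TopicPartition("example-topic", 0), new OffsetAndMetadata(42L));
        // The callback is invoked when the commit completes; no Future is returned.
        consumer.commitAsync(offsets, (committed, exception) -> {
            if (exception != null) {
                exception.printStackTrace(); // illustrative error handling
            }
        });
    }
}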
Hey Adi,
When we designed the initial version, the producer API was still changing.
I thought about adding the Future and then just didn't get to it. I agree
that we should look into adding it for consistency.
Thanks,
Neha
On Tue, Jul 28, 2015 at 1:51 PM, Aditya Auradkar
wrote:
Great discussion everyone!
One general comment on the sync/async APIs in the new consumer. I think
the producer tackles sync vs. async APIs well. For APIs that can be either
sync or async, can we simply return a future? That seems more elegant for
the APIs that make sense in both flavors
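(The producer pattern Adi is referring to: send() returns a Future and also
accepts a callback, so callers choose sync or async per call. A sketch:)

import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SyncAsyncSendSketch {
    static void demo(KafkaProducer<String, String> producer) throws Exception {
        ProducerRecord<String, String> record =
            new ProducerRecord<>("example-topic", "value"); // illustrative topic
        // Async: pass a callback and continue.
        producer.send(record, (metadata, exception) -> { /* handle result */ });
        // Sync: block on the returned Future.
        Future<RecordMetadata> pending = producer.send(record);
        System.out.println("offset: " + pending.get().offset());
    }
}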
I think if we recommend a longer session timeout, then we should expose the
heartbeat frequency in configuration since this generally controls how long
normal rebalances will take. I think it's currently hard-coded to 3
heartbeats per session timeout. It could also be nice to have an explicit
Leave
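(The two settings in question, with illustrative values; heartbeat.interval.ms
did eventually become an explicit consumer config, as suggested here.)

import java.util.Properties;

public class HeartbeatConfigSketch {
    public static Properties heartbeatProps() {
        Properties props = new Properties();
        props.put("session.timeout.ms", "30000");    // member declared dead after this
        props.put("heartbeat.interval.ms", "10000"); // roughly 3 heartbeats per session
        return props;
    }
}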
Kartik, on your second point about timeouts with poll() and heartbeats, the
consumer now handles this properly. KAFKA-2123 introduced a
DelayedTaskQueue and that is used internally to handle processing events at
the right time even if poll() is called with a large timeout. The same
mechanism is use
Hey Kartik,
Totally agree we don't want people tuning timeouts in the common case.
However there are two ways to avoid this:
1. Default the timeout high
2. Put the heartbeat in a separate thread
When we were doing the consumer design we discussed this tradeoff and I
think the conclusion we came
Adding the open-source alias. This email started off as a broader
discussion around the new consumer; I was zooming in only on the aspect of
poll() being the only mechanism for driving the heartbeats.
Yes, the lag is the effect of the problem (not the problem itself). Monitoring
the lag is important as
Hi,
1. Not sure I understand your question. Could you elaborate?
2. Yes, and then the data for that topic will be distributed at the
granularity of partitions to your consumers.
3. The default value is set to 60 seconds I believe. You can read the
config docs for its semantics here:
http://ka
I didn't realize there was a commitOffset() method on the high level consumer
(the code is abstracted by the Spring Integration classes).
Yes, this actually suits my needs and I was able to get it to work for my use
case.
Thank you very much - that was extremely helpful.
In case it's of any use
Yes, the new consumer API will solve your problem better. Before that's
ready, another option is to use the commitOffset() API in the high-level
consumer. It doesn't take any offset, though. So, to prevent message loss
during consumer failure, you will need to make sure all iterated messages
are fully processed
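(A sketch against the old high-level consumer Jun mentions; the method on
ConsumerConnector is commitOffsets(), and it commits the offsets of everything
iterated so far, which is why each iterated message must be fully processed
first. Connection details are placeholders.)

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class HighLevelCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "example-group");           // hypothetical group id
        props.put("auto.commit.enable", "false");         // commit manually instead
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... create streams, consume, and fully process each message ...
        connector.commitOffsets(); // commits offsets for all messages iterated so far
        connector.shutdown();
    }
}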
Hi Jun,
Thanks for taking a look at my issue and also for updating the future release
plan Wiki page.
My use case is to use Kafka as if it were a JMS provider (messaging use case).
I'm currently using Kafka 0.8.1.1 with Java and specifically the Spring
Integration Kafka Inbound Channel Adapter
The transactional work is currently just a prototype. The release wiki is a
bit outdated and I just updated it.
Do you have a particular transactional use case in mind?
Thanks,
Jun
On Mon, Nov 10, 2014 at 3:23 PM, Falabella, Anthony <
anthony.falabe...@citi.com> wrote:
> I've tried to search a