Hello, I have encountered an issue and need your help checking it. Thanks.
Problem description:
When the number of @KafkaListeners in my Spring Boot code reaches a certain
level, rebalancing keeps occurring.
Error log:
2024-08-30 08:35:14.507 INFO 5000 --- [tainer#27-0-C-1]
o.a.k.c.
Hi,
I'm using Kafka 2.5.1 broker and Kafka Connect Confluent image 7.1.1. We
are using a sink connector to read from Kafka.
We occasionally see Fetch Position OutOfRange error like this
[2024-07-19 00:54:59,456] INFO [Consumer
> clientId=connector-consumer-CPSSectorRouterTestEventSinkConnector-1
Hi.
JoinGroup request is sent from the polling/user thread.
In your example, the consumer instance will be removed from the group
because it didn't join the group within the timeout.
So the partition will be assigned to another consumer and be processed.
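For reference, a minimal sketch (not from this thread; values are only illustrative) of the two timeouts involved: max.poll.interval.ms applies to the polling/user thread, session.timeout.ms to the background heartbeat thread.

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");   // placeholder address
props.put("group.id", "my-group");                  // placeholder group id
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
// If the gap between poll() calls exceeds this, the member leaves the group
// and its partitions are reassigned (the situation described above).
props.put("max.poll.interval.ms", "300000");
// If heartbeats from the background thread stop for this long, the broker
// also evicts the member.
props.put("session.timeout.ms", "45000");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);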
On Sun, Nov 26, 2023 at 18:09 Debraj Manna wrote:
> Can
Can someone let me know if the JoinRequest is sent by the consumer from the
polling/user thread or from the background heart-beat thread?
If JoinRequest is being sent from the polling/user thread then in this case
if the poll/user thread takes more than max.poll.interval.ms then the
consumer wil
Right now I am sometimes observing that after the above log is printed on
both the consumer instances, the machine on which the consumer
instances are running stops consuming any new messages. My understanding
was that after the above log is printed the consumer instances will be
removed fr
Hi
Can someone let me know how a consumer is expected to behave after the
below log? Will the consumer be considered dead and a new instance be
spawned due to consumer group rebalancing? How does this behaviour compare between
RangeAssignor and CooperativeStickyAssignor?
consumer poll timeout has expired
://stackoverflow.com/questions/76458064/apache-kafka-consumer-consumes-messages-with-partition-option-but-not-with-g),
where I didn't get any answer yet. Searching the web did not yield any helpful
results either. Hence, I am addressing this mailing list:
I am running a plain Apache Kafka s
This is a copy of a topic I posted in stackoverflow
(https://stackoverflow.com/questions/76458064/apache-kafka-consumer-consumes-messages-with-partition-option-but-not-with-g),
where I didn't get any answer yet. Searching the web did not yield any helpful
results either. Hence, I am addre
Hi Team,
We have a scenario where our application is processing messages at a slower
rate. We want the consumer to stop fetching messages from the broker and
re-fetch when an application is ready to process again.
We have fetch.max.bytes but that doesn't manage the buffer memory. If my
understand
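One option worth noting (hedged, since the question above is truncated): the consumer's pause()/resume() API stops fetching for the assigned partitions without leaving the group, so the application can keep calling poll() to stay alive while it catches up. A rough sketch, where 'consumer' and 'applicationReady()' are assumed to exist in your service:

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    if (!applicationReady()) {
        consumer.pause(consumer.assignment());   // stop fetching, keep group membership
    } else {
        consumer.resume(consumer.assignment());  // start fetching again
    }
    records.forEach(r -> { /* hand off to the slow processing path */ });
}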
I'm not sure I follow you:
either mine
- https://github.com/ut0mt8/yakle or
- https://github.com/danielqsj/kafka_exporter or
- https://github.com/redpanda-data/kminion
export consumer-group lag metrics. All of them work.
best,
--
Raphael Mazelier
On 10/05/2023 22:47, Akshay Kumar wrote:
Hello te
as well. Sent from my Galaxy
> Original message From: Akshay Kumar
> Date: 5/9/23 20:14 (GMT+01:00) To:
> users@kafka.apache.org Subject: Kafka Consumer Lag Monitoring Hello
> team,I am using Zookeeper less Kafka (Kafka Kraft - version 3.3.1). I
> wanted to monitor consume
m.sun.management.jmxremote.local.only=false
> -Djava.rmi.server.hostname=
>
>
>1. Replace with the hostname or IP address of the Kafka broker
>or consumer machine.
>2. Start a monitoring system that supports JMX, such as Prometheus,
>
.only=false
-Djava.rmi.server.hostname=
1. Replace with the hostname or IP address of the Kafka broker
or consumer machine.
2. Start a monitoring system that supports JMX, such as Prometheus,
Grafana, or Datadog.
3. Configure the monitoring system to collect Kafka con
@kafka.apache.org Subject: Kafka Consumer Lag Monitoring Hello team,I am
using Zookeeper less Kafka (Kafka Kraft - version 3.3.1). I wanted to monitor
consumer lag, so I was using Burrow for that, but I am unable to use Burrow
without Zookeeper. Does Burrow work without Zookeeper? Or what is the better or
best
Hello team,
I am using ZooKeeper-less Kafka (Kafka KRaft - version 3.3.1). I wanted to
monitor consumer lag, so I was using Burrow for that, but I am unable to
use Burrow without ZooKeeper.
Does Burrow work without ZooKeeper?
Or what is the better or best way to monitor consumer lag and lag hist
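I can't speak for Burrow, but as a hedged alternative sketch, the Java AdminClient can compute lag directly by comparing the group's committed offsets with the log end offsets (bootstrap address and group name below are placeholders):

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

static void printLag() throws Exception {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");   // placeholder
    props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    try (AdminClient admin = AdminClient.create(props);
         KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
        // Offsets the group has committed, per partition.
        Map<TopicPartition, OffsetAndMetadata> committed =
            admin.listConsumerGroupOffsets("my-group")   // placeholder group name
                 .partitionsToOffsetAndMetadata().get();
        // Latest offsets in the log for the same partitions.
        Map<TopicPartition, Long> end = consumer.endOffsets(committed.keySet());
        committed.forEach((tp, om) ->
            System.out.printf("%s lag=%d%n", tp, end.get(tp) - om.offset()));
    }
}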
Hi all,
Is there a standardized way to implement a health check for a Kafka Consumer?
I.e. for an application that runs in Kubernetes w/liveness probes. There does
not seem to be an exposed API method for the Consumer’s current state or
anything similar.
The example issue we ran into was with
I want my consumers to process large batches, so I aim to have the consumer
listener "wake up", say, on 1800 MB of data or every 5 min, whichever comes
first.
Mine is a kafka-springboot application, the topic has 28 partitions, and
this is the configuration I explicitly change:
| Parameter
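Since the parameter table above is cut off, here is a hedged sketch of the two plain consumer properties that implement "wake up on enough data or after a timeout, whichever comes first": fetch.min.bytes and fetch.max.wait.ms. The values are illustrative only, and a single fetch is still capped by fetch.max.bytes and max.partition.fetch.bytes.

// Plain Kafka consumer properties; with Spring Boot these can usually be passed
// through spring.kafka.consumer.properties.* (an assumption, check your version).
props.put("fetch.min.bytes", String.valueOf(64 * 1024 * 1024)); // broker waits for ~64 MB...
props.put("fetch.max.wait.ms", "30000");                        // ...or 30 s, whichever comes first
props.put("max.poll.records", "5000");                          // let poll() return large batches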
UG:kafka.conn: [IPv4 ('127.0.0.1', 9094)]>: reconnect backoff
0.055656264781808004 after 1 failures
Traceback (most recent call last):
File "consumer-scram-ssl.py", line 6, in
consumer = KafkaConsumer('test-topic', bootstrap_servers='127.0.0.1:9094'
Hi Luke,
Thanks for the details... so from the explanation above, it seems that in both of
these scenarios I won't be able to avoid duplicate processing, which is the
main thing I was looking to achieve
scenario 1: consumer shuts down, and doesn't commit offsets of
already polled and processed batc
Hi
1. I was under the impression, from the documentation, that the close method waits
for 30 seconds to complete processing of any in-flight events and then commits
offsets of the last poll. Isn't that true? What does the timeout of 30 seconds mean?
-> 30 seconds timeout is to have a buffer for graceful closing, ex:
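For illustration, the usual shutdown pattern around that timeout looks roughly like this (a sketch, assuming the poll loop runs on its own thread): close(Duration) bounds how long the consumer waits to finish commits and leave the group; it does not wait for your own in-flight processing.

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.errors.WakeupException;

// In the shutdown hook (another thread): consumer.wakeup();
// On the consumer thread:
try {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        // process records, optionally commitSync() per batch ...
    }
} catch (WakeupException e) {
    // expected: wakeup() was called from the shutdown hook
} finally {
    consumer.commitSync();                   // commit offsets of what was actually processed
    consumer.close(Duration.ofSeconds(30));  // bounded wait for commits/leave-group, not for processing
}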
Thanks Luke..
1. I was under the impression, from the documentation, that the close method waits
for 30 seconds to complete processing of any in-flight events and then commits
offsets of the last poll. Isn't that true? What does the timeout of 30 seconds mean?
2. How does CooperativeStickyAssignor solve my problem wh
Hi Pushkar,
Here's the answer to your questions:
> 1. During scale-down operation, I am adding a shutdown hook to the Java
Runtime, and calling close on the consumer. As per kafka docs, close
provides 30 sec to commit current offsets if auto.commit is enabled: so, I
assume that it will process th
Hi All,
I am hosting Kafka consumers inside a microservice deployed as Kubernetes pods,
with 3 consumers in a consumer group.
There is a requirement to add auto-scaling, where there will be a single pod
which will be auto-scaled out or scaled in based on the load on the
microservice.
So, 1 pod can be scaled out
Hi,
I'm running some tests with Kafka (4 broker setup, version 3.2.0) using
kafka-consumer-perf-test.sh. After starting it multiple times in sequence with
e.g.
kafka-consumer-perf-test.sh --bootstrap-server :9092 --topic
test-topic --messages 5000 --show-detailed-stats --print-me
Hi there,
Just noticed that from time to time, though not so often, the Kafka consumer JMX
incoming byte rate drops to 0, while the consumer is consuming as expected.
SELECT average(newrelic.timeslice.value)
FROM Metric
WHERE metricTimesliceName =
'MessageBroker/Kafka/Internal/consumer-node-metrics/inc
Hi -
depending on the rules for how to filter/drop incoming messages (and
depending on the mechanics of the library you use to consume the messages),
it might be possible to filter out messages based on message headers,
maybe? That way you would not need to deserialize the message key/value
before
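A rough sketch of that idea, assuming a header named "event-type" (the header name and value here are hypothetical) and byte-array deserializers so the payload is never deserialized for dropped messages:

import java.nio.charset.StandardCharsets;
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.header.Header;

// 'consumer' is assumed to be a KafkaConsumer<byte[], byte[]> with ByteArrayDeserializer.
ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
for (ConsumerRecord<byte[], byte[]> record : records) {
    Header type = record.headers().lastHeader("event-type");   // hypothetical header
    if (type == null
            || !"order-created".equals(new String(type.value(), StandardCharsets.UTF_8))) {
        continue;                       // drop without touching key/value bytes
    }
    // deserialize record.value() here, only for the messages you care about
}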
Hi abdelali,
If you can’t get your producers to send the different types of events to
different topics (or you don’t want to) you could use Kafka streams to filter
the data in the topic to new topics that are subsets of the data.
I have also seen apache spark used to do similar.
Thanks,
Jamie
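A minimal sketch of Jamie's suggestion (topic names and the filter predicate are placeholders):

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-splitter");      // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder

StreamsBuilder builder = new StreamsBuilder();
builder.stream("all-events", Consumed.with(Serdes.String(), Serdes.String()))
       .filter((key, value) -> value != null && value.contains("\"type\":\"order\"")) // placeholder predicate
       .to("order-events", Produced.with(Serdes.String(), Serdes.String()));

KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();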
Hi All,
I started learning Kafka a couple of weeks ago, and my goal is to optimize an
existing architecture that uses Kafka in its components.
The problem is that there are many microservices that produce messages/events to
the Kafka topic, and on the other hand there are other microservices that
c
a is using SSL, and the truststore and keystore files are
> stored in buckets. I'm using Google Storage API to access the bucket, and
> store the file in the current working directory. The truststore and
> keystores are passed onto the Kafka Consumer/Producer. However - i'm
>
ile in the current working directory. The truststore and
keystores are passed onto the Kafka Consumer/Producer. However - i'm
getting an error
Failed to construct kafka consumer, Failed to load SSL keystore
dataproc-versa-sase-p12-1.jks of type JKS
Details in stackoverflow -
https://stackoverflo
research project ).
Thank you.
From: Edward Capriolo
Sent: Monday, January 31, 2022 11:28 PM
To: users@kafka.apache.org
Subject: Re: Kafka Consumer Fairness when fetching events from different
partitions.
On Monday, January 31, 2022, Chad Preisler wrote:
> He
On Monday, January 31, 2022, Chad Preisler wrote:
> Hello,
>
> I got this from the JavaDocs for KafkaConsumer.
>
> * If a consumer is assigned multiple partitions to fetch data from, it
> will try to consume from all of them at the same time,
> * effectively giving these partitions the same pri
Hello,
I got this from the JavaDocs for KafkaConsumer.
* If a consumer is assigned multiple partitions to fetch data from, it
will try to consume from all of them at the same time,
* effectively giving these partitions the same priority for consumption.
However in some cases consumers may want
Dear all,
Consider a kafka topic deployment with 3 partitions P1, P2, P3 with
events/records lagging in the partitions equal to 100, 50, 75 for P1, P2, P3
respectively. And let's suppose that max.poll.records (the maximum number of
records returned by a single poll) is equal to 100.
Hello Luke
I have built a new Kafka environment with Kafka 2.8.0.
A new consumer set up against this environment is throwing the
error below. The old consumers for the same applications on the same
2.8.0 environment are working fine.
Could you please advise?
2021-11-02 12:25:24 D
Hi,
Which version of kafka client are you using?
I can't find this error message in the source code.
When googling this error message, it showed the error is from Kafka v0.9.
Could you try v3.0.0 and see if that issue still exists?
Thank you.
Luke
On Thu, Oct 28, 2021 at 11:15 PM Kafka L
Dear Kafka Experts
We have set up a consumer group.id = YYY.
But when we tried to connect to the Kafka instance, I got this error message. I
am sure this consumer group id does not exist in Kafka. We use the plaintext
protocol to connect to Kafka 2.8.0. Please suggest how to resolve this
issue.
D
On Thu, Aug 26, 2021 at 4:44 PM Shantam Garg(Customer Service and Transact)
wrote:
> Hello all,
>
> I have come across this behavior in our production cluster where the *Kafka
> consumers and heartbeat threads are getting stuck/hung up* without
> any error.
>
> As the heartbeat threads are stuck -
Hello all,
I have come across this behavior in our production cluster where the *Kafka
consumers and heartbeat threads are getting stuck/hung up* without any
error.
As the heartbeat threads are stuck, the session timeout is breached and the
consumers are thrown out of the live consumer group. I have
Hi Folks,
We recently came across a weird scenario where we had a consumer group
consuming from multiple topics. When we ran the "kafka-consumer-groups"
command multiple times, we saw that the CURRENT-OFFSET was advancing;
however, we also saw a line printed:
*"Consumer group
atomic? Is that possible?
On Fri, Jul 16, 2021 at 6:18 PM Chris Larsen wrote:
> Pushkar, in kafka development for custom consumer/producer you handle it.
> However you can ensure the process stops (or sends the message to a dead
> letter) before manually committing the consumer offset. On the produce side
> you can turn on idempotence or transactions. But unless you are using
> Streams, you chain those together yourself. Would Kafka Streams work for the
> operation you're looking to do?
> Best,
> Chris
> On Fri, Jul 16, 2021 at 08:30 Pushkar Deole wrote:
> > Hi All,
> >
> > I am using a normal kafka consumer-producer in my microservice, with a
> > simple model of cons
Hi All,
I am using a normal kafka consumer-producer in my microservice, with a
simple model of consume from source topic -> process the record -> produce
on destination topic.
I am mainly looking for exactly-once guarantee wherein the offset commit
to consumed topic and produce on desti
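For what it's worth, the consume-process-produce pattern Chris alludes to can be wired by hand with a transactional producer; a hedged sketch below (topic name, group id and producer config are placeholders; the consumer must use enable.auto.commit=false and a fixed group.id, the producer must set transactional.id, and downstream readers need isolation.level=read_committed):

import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.TopicPartition;

// 'consumer' and 'producer' are assumed to be configured as described above.
producer.initTransactions();
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    if (records.isEmpty()) continue;
    producer.beginTransaction();
    try {
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        for (ConsumerRecord<String, String> r : records) {
            producer.send(new ProducerRecord<>("destination-topic", r.key(), r.value())); // placeholder topic
            offsets.put(new TopicPartition(r.topic(), r.partition()),
                        new OffsetAndMetadata(r.offset() + 1));
        }
        // Offsets are committed atomically with the produced records.
        producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
        producer.commitTransaction();
    } catch (Exception e) {
        producer.abortTransaction();   // sketch: the app must seek back or restart to reprocess
    }
}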
t;:"onSuccess"}
On Wed, Jul 14, 2021 at 12:28 AM Ran Lupovich wrote:
> I would suggest you check your bootstrap definition and
> server.properties; somehow it looks for http://ip:9092 . Kafka is not
> using
> the HTTP protocol, so it seems something is not configured correctly
>
gt;
> We are facing an issue in our application where Kafka Consumer Retries are
> failing whereas a restart of the application is making the Kafka Consumers
> work as expected again.
>
> Kafka Server version is 2.5.0 - confluent 5.5.0
> Kafka Client Vers
Hi,
We are facing an issue in our application where Kafka Consumer Retries are
failing whereas a restart of the application is making the Kafka Consumers
work as expected again.
Kafka Server version is 2.5.0 - confluent 5.5.0
Kafka Client Version is 2.4.1 -
{"comp
I am interested in learning/deducing the maximum consumption rate of a Kafka
consumer in my consumer group. The maximum consumption rate is the rate beyond which
the consumer cannot keep up with the message arrival rate, and hence the
consumer will fall farther and farther behind and the message lag
Dear all,
I am experimenting with an increasing (in terms of msgs/sec) Kafka workload,
where I have continuous access to the following two metrics: consumption rate
per sec CRSEC and arrival rate per sec ARSEC. From these two metrics,
which are continuously monitored, I want to deduce/
31057 - Silea (TV) - ITALY
phone: +39 0422 1836521
l.rov...@reply.it
www.reply.it
-----Original Message-----
From: mangat rai
Sent: 6 May, 2021 11:51 AM
To: users@kafka.apache.org
Subject: Re: kafka-consumer-groups option
Hey Lorenzo Rovere,
Consider the case where you want to reprocess all the
g the input.
Regards,
Mangat
On Thu, May 6, 2021 at 11:25 AM Rovere Lorenzo wrote:
> Hi,
>
> I’m playing with the kafka-consumer-groups.sh command.
>
> I wanted to ask the utility of the *--to-current* option used to *reset
> offsets of a consumer group to current off
Hi,
I'm playing with the kafka-consumer-groups.sh command.
I wanted to ask about the utility of the --to-current option, used to reset offsets of
a consumer group to the current offset. The thing I don't understand is in which
scenario I would want to use this option. If I'm already at the
Thanks. Just realised that it was in the API since 0.11.0. Thanks Steve.
On Sat, 23 Jan 2021 at 12:42, Steve Howard
wrote:
> Hi,
>
> Yes, you can use the offsetsForTimes() method. See below for a simple
> example that should get you started...
>
> import org.apache.kafka.clients.consumer.*;
> i
Hi,
Yes, you can use the offsetsForTimes() method. See below for a simple
example that should get you started...
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.config.ConfigException;
import org.apache.kafka.common.*;
import java.io.*;
import java.time.Duration;
impor
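Since the example above is cut off, here is a minimal hedged sketch of the same idea (topic name and timestamp are placeholders): offsetsForTimes() maps a timestamp to the earliest offset whose record timestamp is equal or later, and you can then seek to it.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

TopicPartition tp = new TopicPartition("my-topic", 0);    // placeholder topic/partition
consumer.assign(Collections.singletonList(tp));
Map<TopicPartition, Long> query = new HashMap<>();
query.put(tp, 1611378000000L);                            // replay-from timestamp in epoch millis (placeholder)
Map<TopicPartition, OffsetAndTimestamp> result = consumer.offsetsForTimes(query);
OffsetAndTimestamp ot = result.get(tp);
if (ot != null) {                                         // null if no record at/after that timestamp
    consumer.seek(tp, ot.offset());                       // next poll() starts from that offset
}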
Hello,
We know that using the KafkaConsumer API we can replay messages from certain
offsets. However, we are not sure if we could specify a timestamp from which
we could replay messages again.
Does anyone know if this is possible?
Regards,
Hi there,
We are switching our monitoring tool and, dealing with JMX metrics, noticed we
were using these for consumer and producer:
JMX|kafka.server|Fetch:queue-size,\
JMX|kafka.server|Fetch|*:byte-rate,\
JMX|kafka.server|Fetch|*:throttle-time,\
JMX|kafka.server|Produce:queue-size,\
JMX|kafka.server|P
newly assigned partitions : " i.e. empty...
I also verified with the kafka-consumer-groups script to check the members and state,
and it is showing 80 members and the state as Stable. But with the verbose option
it is showing the assignment as "-".
I am not seeing any errors in the broker logs or consumer logs.
Pushkar,
You are not wrong. Indeed, any deserialization error that happens
during the poll() method will cause your code to be interrupted without
much information about which offset failed. A workaround would be trying
to parse the message contained in the exception SerializationExceptio
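As a side note going beyond the workaround described above: clients from roughly Kafka 2.8 onward throw org.apache.kafka.common.errors.RecordDeserializationException, which carries the failing partition and offset, so the application can skip the poison record explicitly. A sketch, assuming such a client version ('consumer' and the MyEvent type are placeholders):

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.errors.RecordDeserializationException;

while (true) {
    try {
        ConsumerRecords<String, MyEvent> records = consumer.poll(Duration.ofMillis(500));
        records.forEach(r -> { /* process */ });
    } catch (RecordDeserializationException e) {
        // Skip the record that cannot be deserialized and keep consuming.
        consumer.seek(e.topicPartition(), e.offset() + 1);
    }
}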
Hi Ricardo,
Probably this is more complicated than that since the exception has
occurred during Consumer.poll itself, so there is no ConsumerRecord for the
application to process, and hence the application doesn't know the offset of
the record where the poll has failed.
On Thu, Jun 18, 2020 at 7:03 PM
Pushkar,
Kafka uses the concept of offsets to identify the order of each record
within the log. But this concept is more powerful than it looks like.
Committed offsets are also used to keep track of which records have been
successfully read and which ones have not. When you commit an offset in
t
Hi Gerbrand,
thanks for the update, however if I dig more into it, the issue is because
of a schema registry problem: the schema registry is not accessible. So the
error is coming during the poll operation itself:
So this is not a bad event really, but the event can't be deserialized
itself due to schema
Hello Pushkar,
I'd split records/events in categories based on the error:
- Events that can be parsed or otherwise handled correctly, e.g. good events
- Fatal error, like parsing error, empty or incorrect values, etc., e.g. bad
events
- Non-fatal, like database-connection failure, io-failure, ou
Hi All,
This is what I am observing: we have a consumer which polls data from the
topic, does the processing, and then polls data again, which keeps happening
continuously.
At one time, there was some bad data on the topic which could not be
consumed by consumer, probably because it couldn't deserialize the eve
Hi,
You want metadata.max.age.ms which, as you noticed, defaults to 5 minutes
:)
https://kafka.apache.org/documentation/#metadata.max.age.ms
Cheers,
Liam Clarke-Hutchinson
On Thu, May 21, 2020 at 1:06 PM Kafka Shil wrote:
> I was running a test where kafka consumer was reading data f
I was running a test where kafka consumer was reading data from multiple
partitions of a topic. While the process was running I added more
partitions. It took around 5 minutes for consumer thread to read data from
the new partition. I have found this configuration
It seems that we discovered a bug:
in case an unclean leader election happens, KafkaConsumer may hang
indefinitely.
Full version
According to the documentation, in case `auto.offset.reset` is set
to none or not set, an exception is thrown to the client code, allowing it
to be handled in a way that
Thanks Chris
But it won't work, I tried that also.
I found the solution:
@KafkaListener's default behavior itself is to take the data one by one
only.
On Thu, May 7, 2020, 11:28 Chris Toomey wrote:
> You can set the max.poll.records config. setting to 1 in order to pull down
> and process 1 r
You can set the max.poll.records config. setting to 1 in order to pull down
and process 1 record at a time.
See https://kafka.apache.org/documentation/#consumerconfigs .
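For completeness, the plain-client version of that setting looks like this (a sketch; with Spring Boot the equivalent is presumably spring.kafka.consumer.max-poll-records=1):

// The rest of the consumer configuration is assumed.
props.put("max.poll.records", "1");   // each poll() returns at most one record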
On Mon, May 4, 2020 at 1:04 AM vishnu murali
wrote:
> Hey Guys,
>
> I am having a topic and in that topic I am having 3000 me
Hey Guys,
I have a topic and in that topic there are 3000 messages.
In my Spring Boot application I want to consume the data using
@KafkaListener(), and also one by one, because I need to do some tedious
processing on that data and it may take some time.
So within this time I don't need to consume
-END-OFFSET
> Corresponding java method
>
>
> -- Original message --
> From: "Liam Clarke-Hutchinson" Sent: Wednesday, April 22, 2020, 3:35 PM
> To: "users"
> 主题: Re: thank you ! which java-client api can has same effect about
> kafka-consume
> but show "0" and "_", show two value difference???
> thank you !
>
>
> ------ Original message --
> From: "Liam Clarke-Hutchinson" Sent: Tuesday, April 21, 2020, 4:22 AM
> To: "users"
> 主题: Re: kafka-consumer-groups.sh CURRENT-
but show "0" and "_", show two value difference???
thank you !
-- Original message --
From: "Liam Clarke-Hutchinson"
i use :
private static void printConsumerGroupOffsets() throws
InterruptedException, ExecutionException {
Properties props = new Properties();
props.setProperty("bootstrap.servers",
"192.168.1.100:9081,192.168.1.100:9082,192.
oupOffsetsOptions options) instead?
>
> On Wed, Apr 22, 2020 at 6:40 PM 一直以来 <279377...@qq.com> wrote:
>
>> ./kafka-consumer-groups.sh --bootstrap-server localhost:9081 --describe
>> --group test
>>
>>
>> use describeConsumerGroups method ??
>>
&
Looking at the source code, try listConsumerGroupOffsets(String
groupId, ListConsumerGroupOffsetsOptions options) instead?
On Wed, Apr 22, 2020 at 6:40 PM 一直以来 <279377...@qq.com> wrote:
> ./kafka-consumer-groups.sh --bootstrap-server localhost:9081 --describe
> --group te
./kafka-consumer-groups.sh --bootstrap-server localhost:9081 --describe --group
test
use describeConsumerGroups method ??
private static void print() throws InterruptedException,
ExecutionException {
Properties props = new Properties
egards,
Liam Clarke-Hutchinson
On Tue, Apr 21, 2020 at 8:12 PM 一直以来 <279377...@qq.com> wrote:
> ghy@ghy-VirtualBox:~/T/k/bin$ ./kafka-consumer-groups.sh
> --bootstrap-server localhost:9081 --describe --group test
>
>
> GROUP TOPIC
> PART
It means no consumer has consumed anything from that partition. Likely
because there's no data in that partition yet.
On Tue, Apr 21, 2020 at 8:12 PM 一直以来 <279377...@qq.com> wrote:
> ghy@ghy-VirtualBox:~/T/k/bin$ ./kafka-consumer-groups.sh
> --bootstrap-server localhost:9081 -
ghy@ghy-VirtualBox:~/T/k/bin$ ./kafka-consumer-groups.sh --bootstrap-server
localhost:9081 --describe --group test
GROUP  TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
test
m/perspectives>
On Thu, Feb 20, 2020 at 3:39 PM Jp Silva wrote:
> Hi,
>
> I'm using kafka-consumer-perf-test but I'm getting an error if I add the
> --print-metrics option.
>
> Here's a snippet of my output including the error:
>
> consumer-fetch-manager-
Hi,
I'm using kafka-consumer-perf-test but I'm getting an error if I add the
--print-metrics option.
Here's a snippet of my output including the error:
consumer-fetch-manager-metrics:fetch-size-max:{client-id=consumer-perf-c
Hello Avshalom,
I think the first question to answer is where are the new consumers coming
from. From your description they seem to be not expected (i.e. you did not
intentionally start up new instances), so looking at those VMs that
suddenly start new consumers would be my first shot.
Guozhang
ssage-
From: Avshalom Manevich
To: users
Sent: Sun, 8 Dec 2019 10:28
Subject: Re: Kafka consumer group keeps moving to PreparingRebalance and stops
consuming
Hi Boyang,
Thanks for your reply.
We looked into this direction, but since we didn't change max.poll.interval
from its default value,
Hi Boyang,
Thanks for your reply.
We looked into this direction, but since we didn't change max.poll.interval
from its default value, we're not sure if it's the case.
On Fri, 6 Dec 2019 at 17:42, Boyang Chen wrote:
> Hey Avshalom,
>
> the consumer instance is initiated per stream thread. You w
Hey Avshalom,
the consumer instance is initiated per stream thread. You will not be
creating new consumers so the root cause is definitely member timeout.
Have you changed the max.poll.interval by any chance? That config controls
how long you tolerate the interval between poll calls to make sure p
We have a Kafka Streams consumer group that keeps moving to the
PreparingRebalance state and stops consuming. The pattern is as follows:
1. Consumer group is running and stable for around 20 minutes.
2. New consumers (members) start to appear in the group state without any
clear reason,
Hi everyone,
we want to get Kafka consumer group metrics (throttling and byte rate, for
example).
We have done this already, using:
1. JMX Mbean of the Kafka consumer Java application
2. CLI utility: *bin/kafka-consumer-groups.sh --describe --group
group_name --bootstrap-server localhost:port
Hi everyone,
we use kafka 2.3.0 from the confluent-kafka-2.11 Debian package on Debian 10.
When we want to set an offset of a consumer to a datetime, we get a timeout
error even if we use the timeout switch of the kafka-consumer-groups script:
> kafka-consumer-groups --bootstrap-ser
ty is also passed to each Kafka
consumer thread.
6. Each partition in the topic is assigned to a single consumer
thread using the Kafka assign method.
7. A "seekToBeginning" call is made inside the Kafka consumer thread
to consume from the first offset of the Kafka part
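Steps 6 and 7 above roughly correspond to the following hedged sketch (topic name and partition number are placeholders):

import java.util.Collections;
import org.apache.kafka.common.TopicPartition;

TopicPartition partition = new TopicPartition("my-topic", 3);     // the partition given to this thread
consumer.assign(Collections.singletonList(partition));            // manual assignment, no group rebalancing
consumer.seekToBeginning(Collections.singletonList(partition));   // next poll() starts from the first offset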
Hello,
My apologies for the late reply.
No the data isn't getting deleted faster than the consumer is reading it.
We also notice that when this happens the number of replica fetchers doubles on the
broker side and the total outgoing bytes drop significantly.
Regards,
Amine
On 2019/09/07 04:18:18,
Can you check whether it's happening because logs are getting purged very
fast?
On Sat, 7 Sep 2019 at 12:18 AM, Aminouvic wrote:
> Hello all,
>
> We're noticing several logs on our consumer apps similar to the following :
>
> 2019-09-06 17:56:36,933 DEBUG
> org.apache.kafka.clients.consumer.intern
Hello all,
We're noticing several logs on our consumer apps similar to the following :
2019-09-06 17:56:36,933 DEBUG
org.apache.kafka.clients.consumer.internals.Fetcher - Ignoring fetched
records for mytopic-7 at offset 45704161910 since the current position is
45704162370
Any idea on what
Hello. I was using Kafka 2.1.1 and facing a problem where our consumers
sometimes intermittently stop consuming from one or two of the partitions. My
config
Hi Garvit,
You can check here https://kafka.apache.org/documentation
Thanks,
Rahul
On Tue, Jun 25, 2019 at 4:11 PM Garvit Sharma wrote:
> Hi All,
>
> I am looking for Kafka consumer API documentation to understand how it
> works internally.
>
> I am facing a problem where my