Congratulations Sophie, all the best.
From: Mickael Maison
Sent: Tuesday, August 2, 2022 10:24 AM
To: Users ; dev
Subject: Re: [ANNOUNCE] New Kafka PMC Member: A. Sophie Blee-Goldman
Congratulations Sophie!
On Tue, Aug 2, 2022 at 8:52 AM Josep Prat wrote:
>
> …catching all the way up, then
> it starts reading the rest of the partitions evenly. I'm not sure why it is
> happening that way.
>
> Hope this helps.
>
>
>
>
>
> On Sun, Jan 23, 2022 at 1:58 AM Mazen Ezzeddine <
mazen.ezzedd...@etu.univ-cotedazur.fr> wrote:
In Kafka, the committed offsets for a certain consumer group can be requested
through the admin client API using code similar to the following:
Map<TopicPartition, OffsetAndMetadata> offsets =
    admin.listConsumerGroupOffsets(CONSUMER_GROUP)
         .partitionsToOffsetAndMetadata().get();
Suppose that log.messag…
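For completeness, a minimal self-contained sketch of the listConsumerGroupOffsets
call quoted above (the broker address and group id are placeholder values, not
from the original message):

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommittedOffsetsExample {
    public static void main(String[] args) throws Exception {
        final String CONSUMER_GROUP = "my-group"; // hypothetical group id
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Committed offset of every partition the group has committed for
            Map<TopicPartition, OffsetAndMetadata> offsets =
                admin.listConsumerGroupOffsets(CONSUMER_GROUP)
                     .partitionsToOffsetAndMetadata().get();
            offsets.forEach((tp, om) ->
                System.out.printf("%s -> committed offset %d%n", tp, om.offset()));
        }
    }
}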
Dear all,
Consider a Kafka topic deployment with 3 partitions P1, P2, P3, with
events/records lagging in the partitions equal to 100, 50, and 75 for P1, P2, P3
respectively. And let's suppose that max.poll.records (the maximum number of
records returned in a single call to poll()) is equal to 100.
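(Note that max.poll.records caps what poll() hands to the application; the
broker-side fetch size is governed separately by fetch.max.bytes and
max.partition.fetch.bytes. A one-line sketch of setting the cap, assuming an
existing consumer Properties object named props:)

// At most 100 records are handed to the application per poll() call
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);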
Dear all,
Kafka supports adding new partitions to a topic dynamically. So suppose that
initially I have a topic T with two partitions P0, P1 and a key space of three
keys K0, K1, K2. Suppose further that I am using some kind of hash partitioner
modulo 2 (the number of partitions) at the producer that…
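A toy sketch of why adding a partition remaps keys under hash-mod partitioning
(the hash below is a stand-in, not Kafka's: the default producer partitioner
uses murmur2 over the serialized key):

import java.util.List;

public class HashModDemo {
    // Toy stand-in for a partitioner's hash-mod rule; masking keeps it non-negative
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        for (String key : List.of("K0", "K1", "K2")) {
            // The same key can land on a different partition once the count changes
            System.out.printf("%s: 2 partitions -> P%d, 3 partitions -> P%d%n",
                    key, partitionFor(key, 2), partitionFor(key, 3));
        }
    }
}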
Dear all,
The code snippet below uses the Kafka admin client to retrieve the last committed
and produced offsets of all partitions for a certain consumer group, namely
CONSUMER_GROUP:
Map<TopicPartition, OffsetAndMetadata> offsets =
    admin.listConsumerGroupOffsets(CONSUMER_GROUP).partitionsToOffsetAndMetadata().get();
Map<TopicPartition, OffsetSpec> requestLatestOffsets…
Hi all,
Can you please share any open-source real workload traces (event arrivals per
unit time and their interarrival distribution) from different
sources/businesses to benchmark the performance of streaming platforms like
Kafka Streams? E.g., credit card payment transaction traces (say for fr…
Hi all,
I am using the Kafka admin client to query consumer group partition offsets
(committed and latest) using the code below:
Map<TopicPartition, OffsetAndMetadata> offsets =
    admin.listConsumerGroupOffsets(CONSUMER_GROUP)
         .partitionsToOffsetAndMetadata().get();
Map<TopicPartition, OffsetSpec> requestLatestOffsets = new HashMap<>();
for (TopicPartition tp :…
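The snippet is cut off here; the usual continuation of this pattern requests
OffsetSpec.latest() for each partition via Admin#listOffsets and diffs it
against the committed offset. A hedged sketch of that completion (method and
variable names are illustrative):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

class ConsumerLag {
    static void printLag(Admin admin, String consumerGroup) throws Exception {
        Map<TopicPartition, OffsetAndMetadata> committed =
            admin.listConsumerGroupOffsets(consumerGroup)
                 .partitionsToOffsetAndMetadata().get();

        // Ask the brokers for the latest (log-end) offset of each partition
        Map<TopicPartition, OffsetSpec> requestLatestOffsets = new HashMap<>();
        for (TopicPartition tp : committed.keySet()) {
            requestLatestOffsets.put(tp, OffsetSpec.latest());
        }
        Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
            admin.listOffsets(requestLatestOffsets).all().get();

        // Lag = log-end offset minus committed offset
        for (TopicPartition tp : committed.keySet()) {
            long lag = latest.get(tp).offset() - committed.get(tp).offset();
            System.out.printf("%s lag = %d%n", tp, lag);
        }
    }
}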
Dear all,
I am running a simple Kafka consumer group reactive autoscaling experiment on
Kubernetes, leveraging the range (stop-the-world) assignor in the first run;
in the second run I used the incremental cooperative assignor. My workload
is shown below, where the x-axis is the time in seco…
consumer group?
Thank you so much.
From: Mazen Ezzeddine
Sent: Thursday, July 22, 2021 10:10 AM
To: users@kafka.apache.org
Subject: soft exactly once rolling upgrade.
Dear all,
I am interested in achieving a zero-downtime upgrade of my Kafka consumer group…
Dear all,
I am interested in achieving a zero-downtime upgrade of my Kafka consumer group
running on Kubernetes with, say, soft exactly-once semantics (i.e., assume no
failure and/or error is going to happen to my cluster). I configured my
consumer group with rolling update and terminationGracePeriodSeconds…
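One common building block for this (a hedged sketch, not the poster's actual
setup: topic name, configs, and processing are placeholders) is turning the
SIGTERM that Kubernetes sends during a rolling update into a clean consumer
shutdown via wakeup(), so offsets are committed and the group is left
gracefully within the grace period:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class GracefulConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        final Thread mainThread = Thread.currentThread();

        // SIGTERM from the rolling update lands here; wakeup() makes a blocked
        // poll() throw WakeupException so the loop below can exit cleanly.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            consumer.wakeup();
            try { mainThread.join(); } catch (InterruptedException ignored) { }
        }));

        try {
            consumer.subscribe(List.of("my-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> { /* process */ });
                consumer.commitSync(); // processed records are committed before exit
            }
        } catch (WakeupException e) {
            // expected on shutdown
        } finally {
            consumer.close(); // leaves the group so partitions are reassigned promptly
        }
    }
}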
…led/restarted and what state is considered "error". Maybe that's the
place you can check.
Thank you.
Luke
On Sat, Jul 17, 2021 at 8:59 PM Mazen Ezzeddine <
mazen.ezzedd...@etu.univ-cotedazur.fr> wrote:
> Hello Luke,
>
> Please note that recently I was working on tasks
…ocking commit so cannot happen…)
So please give me a hint on why this is happening and how to correct this
weird behavior/exception.
Thank you.
________
From: Mazen Ezzeddine
Sent: Saturday, July 17, 2021 2:57 PM
To: Kafka Users
Subject: Re: Kafka incremental sticky rebalancing
Hi Mazen,
I think you can ignore this message; as the message said, you can try
completing the rebalance by calling poll() and then retrying the operation.
You should be able to complete the offset commit in the next poll round.
Thanks.
Luke
On Mon, Jun 21, 2021 at 10:19 PM Mazen Ezzeddine <
mazen.ezzedd...@etu.univ-cotedazur.fr> wrote:
incremental sticky rebalancing.
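A minimal sketch of the retry pattern Luke describes above (the helper name is
hypothetical; the exception is Kafka's RebalanceInProgressException, which
commitSync can throw during a cooperative rebalance):

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.RebalanceInProgressException;

class CommitRetry {
    // If the commit races with an in-progress cooperative rebalance,
    // poll() once to let the rebalance complete, then retry the commit.
    static void commitWithRetry(KafkaConsumer<String, String> consumer) {
        try {
            consumer.commitSync();
        } catch (RebalanceInProgressException e) {
            ConsumerRecords<String, String> inFlight = consumer.poll(Duration.ofMillis(100));
            // NOTE: in real code the records returned here must be processed, not dropped
            consumer.commitSync();
        }
    }
}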
Hi Mazen,
Did you commit offsets automatically or manually? When do you commit offsets?
Thanks
Luke
Mazen Ezzeddine wrote on Monday, June 21, 2021 at 7:42 PM:
> I am running Kafka on Kubernetes using the Kafka Strimzi operator. I am
> using the incremental sticky rebalance strategy by configuring…
I am running Kafka on Kubernetes using the Kafka Strimzi operator. I am using
the incremental sticky rebalance strategy by configuring my consumers with the
following:
ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
org.apache.kafka.clients.consumer.CooperativeStickyAssignor.class.getName()
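In context, that setting sits in the consumer's configuration; a sketch with
placeholder broker, group, and deserializer values:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");
// Incremental cooperative rebalancing instead of the eager default
props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
        CooperativeStickyAssignor.class.getName());
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);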
I am interested in learning/deducing the maximum consumption rate of a Kafka
consumer in my consumer group. The maximum consumption rate is the rate beyond
which the consumer cannot keep up with the message arrival rate, and hence the
consumer will fall farther and farther behind and the message lag would…
Dear all,
I am experimenting with an increasing (in terms of msgs/sec) Kafka workload,
where I have continuous access to the following two metrics: consumption rate
per second (CRSEC) and arrival rate per second (ARSEC). From these two
continuously monitored metrics, I want to deduce…
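The relation between those two metrics and lag is just the derivative:
d(lag)/dt ≈ ARSEC − CRSEC, so a sustained positive difference means the
consumer is falling behind. A trivial sketch of that check (names are
placeholders, not from the original message):

class LagTrend {
    // Positive return value: lag grows by that many messages per second;
    // sustained > 0 means the consumer cannot keep up at its current rate.
    static double lagGrowthPerSec(double arrivalRatePerSec, double consumptionRatePerSec) {
        return arrivalRatePerSec - consumptionRatePerSec;
    }
}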
Dear all,
I am trying to run a Kafka experiment on Kubernetes where I would like to
control the number of messages sent/produced to the broker per unit
time/interval, and the number of messages being consumed by the consumer per
unit time/interval.
Controlling the number of messages sent/produced…
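On the producing side, one simple way to impose a target rate (a hedged
sketch; broker address, topic, and the target rate are placeholders) is to
pace send() calls against a fixed nanosecond interval:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PacedProducer {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        int targetPerSec = 100;                            // desired send rate
        long intervalNanos = 1_000_000_000L / targetPerSec;

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            long next = System.nanoTime();
            for (int i = 0; i < 10_000; i++) {
                producer.send(new ProducerRecord<>("load-topic", Integer.toString(i), "payload"));
                // Sleep off whatever remains of this message's time slot
                next += intervalNanos;
                long sleepMs = (next - System.nanoTime()) / 1_000_000;
                if (sleepMs > 0) Thread.sleep(sleepMs);
            }
        }
    }
}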
…consumer to shut down
after this rebalance (and, presumably, don't assign any partitions to the
one shutting down)
On Fri, Apr 9, 2021 at 6:04 AM Mazen Ezzeddine <
mazen.ezzedd...@etu.univ-cotedazur.fr> wrote:
> Dear all,
>
>
>
> The Kafka admin client API enables the deletion…
Dear all,
The Kafka admin client API enables the deletion of a consumer group through
logic like the one shown below:
DeleteConsumerGroupsResult deleteConsumerGroupsResult =
    adminClient.deleteConsumerGroups(Arrays.asList(consumerGroupToBeDeleted));
However, is there any way/API through…
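To make the snippet above complete (a hedged sketch: the result future must be
awaited, and deletion fails for groups that still have members):

import java.util.Arrays;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.DeleteConsumerGroupsResult;

class DeleteGroup {
    static void delete(Admin adminClient, String consumerGroupToBeDeleted) throws Exception {
        DeleteConsumerGroupsResult deleteConsumerGroupsResult =
            adminClient.deleteConsumerGroups(Arrays.asList(consumerGroupToBeDeleted));
        // Blocks until the broker responds; throws ExecutionException wrapping,
        // e.g., GroupNotEmptyException if consumers are still in the group.
        deleteConsumerGroupsResult.all().get();
    }
}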
Hey,
check this,
https://www.confluent.io/resources
Among others, you should read IN DETAIL Kafka: The Definitive Guide v2 and v1,
and there are many technical online talks available at the link; each talk
describes a particular subject you might be interested in (delivery semantics,
rebalanc…
…issues; at least it can avoid stop-the-world rebalances.
https://www.confluent.io/blog/incremental-cooperative-rebalancing-in-kafka/
Cheers,
Liam Clarke
On Mon, 29 Mar. 2021, 8:04 pm Mazen Ezzeddine, <
mazen.ezzedd...@etu.univ-cotedazur.fr> wrote:
> Hi all,
> Given a replicaset/statefulset…
Hi all,
Given a replicaset/statefulset of Kafka consumers that constitutes a consumer
group running on Kubernetes: if the number of replicas is x, then sometimes x
rebalances might be triggered, since not all of the replicas/consumers send a
join-group request in a timely and synced manner to the…
Dear all,
I am using the Kafka admin client, where I perform some computations on the
offsets (committed, last, …) to derive per-partition and per-consumer
information (on a certain consumer group) such as arrival rate, lag,
consumption rate, etc. When certain conditions are met, I want to enforce…
…if they're available and only make a remote
call for *Consumer#committed* if
absolutely necessary. I'll try to follow up on that.
On Tue, Mar 9, 2021 at 9:26 AM Mazen Ezzeddine <
mazen.ezzedd...@etu.univ-cotedazur.fr> wrote:
> Hi all,
>
> I am implementing a custom partition assignor…
Hi all,
I have a Kafka consumer pod running on Kubernetes. I executed the command
kubectl scale consumerName --replicas=2, and as shown in the logs below two
separate rebalancing processes were triggered; so if the number of consumer
replicas scaled = 100, one hundred separate rebalances ar…
Hi all,
I am implementing a custom partition assignor slightly different from the
sticky assignor. As is known, in the sticky assignor each consumer sends the
set of owned partitions as part of its subscription message. This happens in
subscriptionUserData by calling serializeTopicPartitionAssignment…
Hi all,
I was reading the sticky assignor code as shown in the JIRA below.
=
Pseudo-code sketch of the algorithm:
C_f := (P/N)_floor, the floor capacity
C_c := (P/N)_ceil, the ceiling capacity
members := the sorted s…
Hi all,
I am implementing a custom consumers-to-topics/partitions assignor in Kafka.
To this end, I am extending AbstractPartitionAssignor, which in turn
implements the ConsumerPartitionAssignor interface.
As part of the custom assignor, I want to send a single (float) piece of
information about…
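A hedged sketch of that userdata round trip (class and metric names are
hypothetical; the overridden signatures are those of ConsumerPartitionAssignor
and AbstractPartitionAssignor in the 2.x/3.x clients): the float is written in
subscriptionUserData and read back from each member's subscription in assign.

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.kafka.clients.consumer.ConsumerPartitionAssignor.Subscription;
import org.apache.kafka.clients.consumer.internals.AbstractPartitionAssignor;
import org.apache.kafka.common.TopicPartition;

public class FloatUserDataAssignor extends AbstractPartitionAssignor {

    private final float myMetric = 0.75f; // hypothetical per-consumer value

    @Override
    public String name() {
        return "float-userdata";
    }

    // Piggy-back one float on the JoinGroup subscription
    @Override
    public ByteBuffer subscriptionUserData(Set<String> topics) {
        ByteBuffer buf = ByteBuffer.allocate(Float.BYTES);
        buf.putFloat(myMetric);
        buf.flip(); // rewind so the framework reads from position 0
        return buf;
    }

    @Override
    public Map<String, List<TopicPartition>> assign(
            Map<String, Integer> partitionsPerTopic,
            Map<String, Subscription> subscriptions) {
        Map<String, List<TopicPartition>> assignment = new HashMap<>();
        for (Map.Entry<String, Subscription> e : subscriptions.entrySet()) {
            ByteBuffer userData = e.getValue().userData();
            if (userData != null && userData.remaining() >= Float.BYTES) {
                float metric = userData.getFloat();
                // ...weight this member's share of partitions by 'metric'...
            }
            assignment.put(e.getKey(), new ArrayList<>());
        }
        // A real assignor must distribute every partition; omitted in this sketch.
        return assignment;
    }
}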
I am running a Kafka cluster on Kubernetes. I am implementing a custom
PartitionAssignor to personalize the way topic partitions are assigned to
existing consumers in the consumer group. To this end, I am overriding the
method Map<String, Assignment> assign(Cluster metadata, Map<String, Subscription> subscriptions).
If inside the assign…
…consumer lag exporter that could help you
expose Kafka consumer metrics (Prometheus style), for instance. But there
are several other ways to get them.
On Sun, Nov 15, 2020, 7:09 AM Mazen Ezzeddine
wrote:
> Thanks,
> My question is mostly about dynamic resource optimization,
>
> sa…
> …ed earlier, different designs may lead to different
> decisions. Try to understand first how partitions work, how they are
> assigned in your context (there are different partitioning schemes), and
> then you may have a better sense of it.
>
> I hope it helps
>
> Vinici…
On 2020/11/13 09:09:00, John Coleman wrote:
> Hi,
>
> We have very high latency, ~1 second, when we send a small number of small
> messages to Kafka. The whole group of messages is less than 1k. We are
> thinking it would be better to just send the messages together as 1 message
> containing a collectio…
Given a business application that relies on a message-queue solution like
Kafka, what is the best number of partitions to select for a given topic? What
influences such a decision?
On the other hand, say we want to achieve maximal throughput of message
consumption but at minimal resource…
When a new Kafka consumer joins/leaves a consumer group, the Kafka runtime
triggers a rebalancing process so that a new assignment/mapping of partitions
to the new set of consumers is performed. I kindly have three questions on the
rebalancing process:
(1) Is it possible to somehow plug in a custom…