Hi Guozhang,
We can run a shadow setup to test the workaround hotfix if that is helpful for
you. For the production cluster we'll wait for an official artifact to be
released.
It is not super urgent for us because records are still being
produced/consumed. The main issue is many unnecessary e
I have 100 Kafka topics with one partition each.
I have started 10 instances of the consumer, they all use the same consumer
group id, and I'm subscribing by pattern.
I noticed that all the topic partitions are assigned to the same consumer and
all other instances starve.
I expected that rebalance wou
I have 100 topics with one partition each.
I have 10 instances of the consumer with the same consumer group id, and I'm
subscribing using a pattern.
I observe that all the topic partitions are assigned to the same consumer and
others are starving. It is not distributed.
I was expecting that rebalance
One consumer reads from one or more partitions. If you have 1 partition per
topic, only one consumer in the consumer group will read from that
partition; the others will be on standby.
To process data in parallel in the consumer group, you need to increase the
number of partitions per topic. For examp
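The starvation reported above is what the default range assignment produces in this layout: partitions are assigned topic by topic, and with a single partition per topic, every topic's partition 0 lands on the first consumer in sorted member order. The following is a simplified re-implementation for illustration only, not Kafka's actual assignor code:

```python
# Minimal sketch of why range-style assignment piles everything onto one
# consumer when every topic has a single partition. Simplified for
# illustration; not Kafka's actual RangeAssignor implementation.

def range_assign(topics, partitions_per_topic, consumers):
    """Assign each topic's partitions to consumers topic by topic, the way
    the range strategy does: members sorted, each topic's partitions split
    into contiguous ranges across the members."""
    assignment = {c: [] for c in consumers}
    members = sorted(consumers)
    for topic in topics:
        parts = list(range(partitions_per_topic))
        base, extra = divmod(len(parts), len(members))
        start = 0
        for i, member in enumerate(members):
            count = base + (1 if i < extra else 0)
            assignment[member].extend((topic, p) for p in parts[start:start + count])
            start += count
    return assignment

topics = [f"topic-{i:03d}" for i in range(100)]
consumers = [f"consumer-{i}" for i in range(10)]
sizes = {c: len(ps) for c, ps in range_assign(topics, 1, consumers).items()}
print(sizes)  # consumer-0 gets all 100 partitions, the rest get 0
```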
Hi,
According to Kafka's replication mechanism, when an unclean leader election
happens (assuming unclean.leader.election.enable=true), replicas of the same
partition can become inconsistent, i.e. at a certain offset the replicas
store different contents.
IIUC, Kafka will override one ve
Set partition.assignment.strategy to the RoundRobinAssignor; that will
resolve your problem.
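Concretely, that means setting partition.assignment.strategy to org.apache.kafka.clients.consumer.RoundRobinAssignor, which deals out all (topic, partition) pairs across members in turn instead of per topic. A small simulation of the round-robin idea (illustrative only, not Kafka's code):

```python
# Sketch of how round-robin assignment spreads single-partition topics
# evenly across the group. Simplified illustration, not Kafka's actual
# RoundRobinAssignor implementation.
from itertools import cycle

def round_robin_assign(topic_partitions, consumers):
    """Deal out all (topic, partition) pairs to the sorted members in turn."""
    assignment = {c: [] for c in consumers}
    members = cycle(sorted(consumers))
    for tp in sorted(topic_partitions):
        assignment[next(members)].append(tp)
    return assignment

tps = [(f"topic-{i:03d}", 0) for i in range(100)]
consumers = [f"consumer-{i}" for i in range(10)]
sizes = [len(ps) for ps in round_robin_assign(tps, consumers).values()]
print(sizes)  # every consumer gets 10 partitions
```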
On Thu, Jun 13, 2019 at 11:10 AM Pavel Molchanov <
pavel.molcha...@infodesk.com> wrote:
> One consumer reads from one or more partitions. If you have 1 partition per
> topic, only one consumer in the consumer group wi
Hi,
I am wondering if there is something I am missing about my setup to facilitate
long-running jobs.
For my purposes it is ok to have `At most once` message delivery, which means
it is not required to think about committing offsets (or at least it is ok to
commit each message offset upon rece
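At-most-once is usually achieved exactly as described: commit the offset as soon as the record is received, before processing it, so a crash mid-processing loses the record rather than redelivering it. A broker-free toy illustration of that trade-off (the classes here are stand-ins, not a Kafka client API):

```python
# Toy illustration (no broker involved) of at-most-once delivery: the
# offset is committed immediately on receipt, so a crash during
# processing means the record is never redelivered.

class ToyPartition:
    def __init__(self, records):
        self.records = records
        self.committed = 0              # next offset handed out after a restart

    def poll(self):
        if self.committed < len(self.records):
            offset = self.committed
            return offset, self.records[offset]
        return None

    def commit(self, offset):
        self.committed = offset + 1

def consume_at_most_once(partition, process, crash_at=None):
    seen = []
    while (msg := partition.poll()) is not None:
        offset, record = msg
        partition.commit(offset)        # commit first: at-most-once
        if offset == crash_at:
            return seen                 # simulated crash before processing
        seen.append(process(record))
    return seen

p = ToyPartition(["a", "b", "c"])
first_run = consume_at_most_once(p, str.upper, crash_at=1)  # crashes on "b"
second_run = consume_at_most_once(p, str.upper)             # restart
print(first_run + second_run)  # ['A', 'C'] -- "b" was lost, not redelivered
```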
Hi,
We are using Kafka Streams 2.1.1 and leveraging the DSL APIs for doing stateful
processing (aggregation). As per our use case we ended up creating ~400
sub-topologies.
Our observation is that based on the number of partitions in the input/source
topic, corresponding stream tasks get created fo
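That observation matches how Kafka Streams sizes its workload: each sub-topology gets one task per partition of its input topics, so tasks multiply quickly. A back-of-envelope check, where the 400 sub-topologies come from the mail and the partition count of 10 is an assumed example:

```python
# Back-of-envelope task count: Kafka Streams creates one task per
# sub-topology per input partition. 400 sub-topologies is from the mail;
# 10 partitions per source topic is an assumption for illustration.

sub_topologies = 400
partitions_per_source_topic = 10   # assumed example value

tasks = sub_topologies * partitions_per_source_topic
print(tasks)  # 4000 stream tasks to spread over the stream threads
```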
Hi,
Is there any mapping between number of stream threads and underlying CPU cores?
Theoretically there shouldn't be any direct mapping, because every stream
thread is a Java thread and gets scheduled for CPU time in the standard way
(context switching). Wanted to know if there is any p
We have a different use case where we stop consuming due to connection to
an external system being down.
In this case we sleep for the same period as our poll timeout would be and
recommit the previous offset. This stops the consumer from going stale and
avoids having to increase the max poll interval.
Perhaps you
kafka-consumer-groups.sh fails for just one consumer group,
group-vendor-cust. It works for every other consumer group. The command errors
out complaining about a timeout, but adding --timeout does not help
either.
I don't know what's wrong or how to go about debugging this.
Any help or suggestions? Thank
Hi,
I have a requirement to dedup messages within a window and take a bunch of
actions on the filtered messages. I understand that we can get parallelism via
the number of KStream threads, with maximum parallelism equal to the number of
partitions. But the actions that I take on the filtered messa
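The windowed dedup itself can be sketched simply: keep the timestamp of the last emitted occurrence per key and drop any repeat seen within the window. In Kafka Streams that role is typically played by a state store inside a transformer; here a plain dict stands in for the store, purely for illustration:

```python
# Minimal sketch of windowed de-duplication: remember the timestamp of
# the last emitted occurrence per key, and drop repeats that arrive
# within the window. A dict stands in for the Streams state store.

WINDOW_MS = 10_000

def dedup(events, window_ms=WINDOW_MS):
    """events: iterable of (timestamp_ms, key, value); yields only the
    first occurrence of each key within any window_ms span."""
    last_seen = {}                      # key -> timestamp of last emit
    for ts, key, value in events:
        prev = last_seen.get(key)
        if prev is None or ts - prev >= window_ms:
            last_seen[key] = ts
            yield (ts, key, value)      # unique within the window

events = [
    (0,      "a", 1),
    (1_000,  "a", 2),   # duplicate of "a" within 10 s -> dropped
    (2_000,  "b", 3),
    (12_000, "a", 4),   # outside the window -> passes through
]
print(list(dedup(events)))  # [(0, 'a', 1), (2000, 'b', 3), (12000, 'a', 4)]
```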