(link: "Dynamically refresh partition count of __consumer_offsets")
Hi guys,
So, we have a Kafka cluster on v2.8 and, by mistake, we increased the
partition count of __consumer_offsets from 50 to 52.
And now we are having some coordinator inconsistencies when consumers try
to consume from the cluster.
Any advice on how to untangle this mess? Would a rolling restart of the
cluster help?
increasing linger.ms so that the producer batches a bit more.
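For reference, a minimal sketch of the producer settings that control batching,
using the Java client (broker address, topic name and the specific values here
are only placeholders, not a recommendation):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class BatchingProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");  // placeholder address
            props.put("acks", "all");
            props.put("linger.ms", "10");       // wait up to 10 ms so batches can fill
            props.put("batch.size", "65536");   // 64 KiB batch target
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("test-topic", "key", "value"));  // placeholder topic
            }
        }
    }

Raising linger.ms trades a little end-to-end latency for fewer, larger produce
requests, which usually shows up as a lower request rate on the brokers.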
>
> Hope that helps a bit.
>
> Andrew
>
> On Mon, Feb 27, 2023 at 3:39 PM David Ballano Fernandez
> wrote:
>
> > thank you!
> >
> > On Mon, Feb 27, 2023 at 12:37 PM David Ballano Fernandez <
>
thank you!
On Mon, Feb 27, 2023 at 12:37 PM David Ballano Fernandez <
dfernan...@demonware.net> wrote:
> Hi guys,
>
> I am loadtesting a couple clusters one with local ssd disks and another
> one with ceph.
>
> Both clusters have the same amount of cpu/ram and they are c
Hi guys,
I am load testing a couple of clusters, one with local SSD disks and another
one with Ceph.
Both clusters have the same amount of CPU/RAM and they are configured the
same way.
I'm sending the same amount of messages and producing with linger.ms=0 and
acks=all.
Besides seeing higher latencies o
give you the overall load and the other the topic partition per broker view
> (aka what are your users doing).
>
> what I don't understand is the batching part of your question.
>
> If you'd like to see messages in, I would suggest you use
>
> > kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec
Hi guys,
I am having some confusion around 2 Kafka metrics:
*Request rate.*
kafka.network:type=RequestMetrics,name=RequestsPerSec,request={Produce|FetchConsumer|FetchFollower}
*Produce request rate.*
kafka.server:type=BrokerTopicMetrics,name=TotalProduceRequestsPerSec
https://docs.confluent.io
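In case it helps to compare the two directly, here is a rough sketch of reading
both meters over JMX with the standard javax.management API. Assumptions: the
broker has JMX enabled on port 9999, and the meters expose the usual Yammer
attributes (Count, OneMinuteRate, ...); RequestsPerSec can carry extra tags
such as the request version, hence the wildcard pattern:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class RequestRates {
        public static void main(String[] args) throws Exception {
            // Assumes the broker was started with JMX on port 9999 (placeholder).
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();

                // Network-layer meter: one count per Produce request received.
                ObjectName producePattern = new ObjectName(
                        "kafka.network:type=RequestMetrics,name=RequestsPerSec,request=Produce,*");
                for (ObjectName name : mbsc.queryNames(producePattern, null)) {
                    System.out.println(name + " OneMinuteRate="
                            + mbsc.getAttribute(name, "OneMinuteRate"));
                }

                // Broker/topic-level meter for produce requests.
                ObjectName totalProduce = new ObjectName(
                        "kafka.server:type=BrokerTopicMetrics,name=TotalProduceRequestsPerSec");
                System.out.println("TotalProduceRequestsPerSec OneMinuteRate="
                        + mbsc.getAttribute(totalProduce, "OneMinuteRate"));
            }
        }
    }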
Hi guys,
I have a question, do any of you run kafka in Kubernetes? Are you using a
kafka operator?
If so, what are you running and what's your opinion?
I'm trying to evaluate a kafka operator so any tips would be greatly
appreciated.
thanks!!
Hi guys.
I have a question about upgrades. I am currently on Kafka 2.7 (Confluent
open source 6.1).
Since my last upgrade from Confluent open source 5.x (Kafka 2.0), my
server.properties
has this:
inter.broker.protocol.version=2.7
log.message.format.version=2.0
following confluent notes for upgrades me
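For reference, the usual two-step sequence from the Apache Kafka upgrade notes,
sketched in server.properties form (the exact versions depend on where you
start; treat this as an outline, not a copy-paste config):

    # Step 1: before swapping binaries, pin both settings to the version you
    # are coming from, then upgrade and restart brokers one at a time.
    inter.broker.protocol.version=2.0
    log.message.format.version=2.0

    # Step 2: once every broker runs the new binaries, bump the protocol
    # version and do another rolling restart.
    inter.broker.protocol.version=2.7

    # Step 3: once all clients are on 0.11.0 or newer, bump the message
    # format version as well and do a final rolling restart.
    log.message.format.version=2.7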
is
> to scale consumers out at peak load, and then scale them down when load's a
> lot lower.
> It helps meet any data timeliness requirements you might have during high
> load, and as you said, reduces costs during low load.
>
> On Sat, 5 Mar 2022 at 07:09, David Ballano Ferna
to scale applications in the same consumer
> group is very useful, but it needs to be tuned to minimise scaling that can
> cause pauses in consumption.
>
> Kind regards,
>
> Liam Clarke-Hutchinson
>
>
>
> On Wed, 2 Mar 2022 at 13:14, David Ballano Fernandez <
> dfern
>
> Good luck :)
>
>
>
> On Tue, 1 Mar 2022 at 08:49, David Ballano Fernandez <
> dfernan...@dem
Hello Guys,
I was wondering how you guys do autoscaling of your consumers in Kubernetes,
if you do any.
We have a mirrormaker-like app that mirrors data from cluster to cluster and
at the same time does some topic routing. I would like to add an HPA to the
app in order to scale up/down depending on avg c
> >),
> > and we'll make sure it upgrades to 2.15.0 or newer versions.
> >
> > Thank you.
> > Luke
> >
> > On Sun, Dec 12, 2021 at 12:00 PM Luke Chen w
Hi All,
I wonder if you guys have heard about this vulnerability
https://www.randori.com/blog/cve-2021-44228/ affecting log4j v1 and v2
As far as I can see, Kafka 2.7 and 2.8 are using log4j v1, which is only
affected if using the JMS appender.
Any thoughts?
Thanks!
that cached broker state being a factor.
>
> Admittedly I could be barking up an entirely wrong tree, and if anyone who
> understands the replica assignment algorithm better than I is reading,
> please do correct me!
>
> Cheers,
>
> Liam Clarke-Hutchinson
>
> On Thu, 1
:
> > HashMap(<brokerId> -> epoch, <brokerId> -> epoch...)"
> >
> > This is to check if the controller knows that all of your brokers are
> live
> > at the time of topic creation. If their id is in that hashmap, they're
> > alive.
> >
> > Cheers,
I should mention that our Kafka version is 2.7.
Also, I tried the kafka-topics.sh tool via the --bootstrap-server and
--zookeeper options.
Same result.
On Tue, Nov 9, 2021 at 4:13 PM David Ballano Fernandez <
dfernan...@demonware.net> wrote:
> We are using Kafka with zookeeper
>
> On T
We are using Kafka with zookeeper
On Tue, Nov 9, 2021 at 4:12 PM Liam Clarke-Hutchinson
wrote:
> Yeah, it's broker side, just wanted to eliminate the obscure edge case.
>
> Oh, and are you using Zookeeper or KRaft?
>
> Cheers,
>
> Liam
>
> On Wed, Nov 10, 2021 at
Liam
>
> On Wed, Nov 10, 2021 at 12:21 PM David Ballano Fernandez <
> dfernan...@demonware.net> wrote:
>
> > Hi Liam,
> >
> > I did a test creating topics with kafka-topics.sh and the Admin API from
> > confluent-kafka-python.
> > The same happened
topics.sh
> that ships with Apache Kafka?
>
> Cheers,
>
> Liam Clarke-Hutchinson
>
> On Wed, Nov 10, 2021 at 11:41 AM David Ballano Fernandez <
> dfernan...@demonware.net> wrote:
>
> > Hi All,
> > Trying to figure out why my brokers have some disk imbalance
Hi All,
Trying to figure out why my brokers have some disk imbalance, I have found
that Kafka (maybe this is the way it is supposed to work?) is not spreading
all replicas across all available brokers.
I have been trying to figure out how a topic with 5 partitions with
replication_factor=3 (15 replica
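In case it helps anyone debugging the same thing, replica placement can be
inspected per topic (broker address and topic name below are placeholders):

    kafka-topics.sh --bootstrap-server broker1:9092 --describe --topic my-topic

The output lists, for each partition, the leader, the replica list and the ISR,
which makes it easy to see which brokers ended up with more replicas than
others.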
The __consumer_offsets partition for a consumer group is determined like this:
`Utils.abs(groupId.hashCode) % groupMetadataTopicPartitionCount`
You can check the `partitionFor` method in GroupMetadataManager class.
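For illustration, a small standalone sketch of that computation. It assumes the
default of 50 __consumer_offsets partitions (offsets.topic.num.partitions) and
that Utils.abs simply clears the sign bit, which is how Kafka implements it;
the group id is a placeholder:

    public class OffsetsPartition {
        public static void main(String[] args) {
            String groupId = "my-consumer-group";          // placeholder group id
            int groupMetadataTopicPartitionCount = 50;      // offsets.topic.num.partitions
            // Kafka's Utils.abs(n) is n & 0x7fffffff, i.e. it clears the sign bit.
            int partition = (groupId.hashCode() & 0x7fffffff)
                    % groupMetadataTopicPartitionCount;
            System.out.println("Group " + groupId
                    + " maps to __consumer_offsets-" + partition);
        }
    }

With a single consumer group this always lands on the same partition, which
would explain why only one partition of __consumer_offsets is being written to.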
Hope that helps.
Thank you.
Luke
On Fri, Sep 10, 2021 at 3:29 PM David Ballano Fernandez <
dfernan...@demonware.net> wrote:
> Hi al
Hi all,
I am running Kafka 2.7 in the cluster, and it only has one consumer reading
multiple topics from it.
Looking at the __consumer_offsets topic, I just found out that only one
partition of that topic is being written to.
Is that normal?
thank you!
Hi Marcus,
For fetch requests, if the remote time is high, it could be that there is
not enough data to give in a fetch response. This can happen when the
consumer or replica is caught up and there is no new incoming data. If this
is the case, remote time will be close to the max wait time, which
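As a side note, the consumer-side knobs behind that wait are fetch.min.bytes
and fetch.max.wait.ms; a minimal sketch showing where they are set with the
Java consumer (broker address, group id and topic are placeholders, the values
shown are the defaults):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class FetchWaitDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");  // placeholder address
            props.put("group.id", "fetch-wait-demo");        // placeholder group id
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            // The broker parks a fetch until fetch.min.bytes of data is available
            // or fetch.max.wait.ms elapses; that broker-side wait is what shows up
            // as remote time when the consumer is fully caught up.
            props.put("fetch.min.bytes", "1");       // default: reply as soon as any data exists
            props.put("fetch.max.wait.ms", "500");   // default cap on the broker-side wait
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));  // placeholder topic
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                System.out.println("Fetched " + records.count() + " records");
            }
        }
    }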
Hi All,
I am planning to do a rolling upgrade of our Kafka cluster from 2.0 to
Kafka 2.7.
I wanted to make sure that my assumptions about client compatibility are
correct.
After reading some documentation, I understood that from Kafka broker 0.10.1
onward any Java client should be supported, and also al