Hi,
I'll be upgrading Kafka from version 2.1.0 to 2.1.1. Are there any special
steps to take? This will be my first time doing a Kafka upgrade.
1. Download and extract the latest Kafka version from
https://www.apache.org/dyn/closer.cgi?path=/kafka/2.1.1/kafka_2.12-2.1.1.tgz
2. Copy server.
Hi Peter,
Yes, I meant the data rate.
The only issue is that our application traffic fluctuates a lot, so if I
size the partition count for the high data rate, it doesn't perform very
well at the low data rate, as it brings unnecessary network latency. I have
found that sometimes the latency becomes hi
Thank you for your responses!
Guozhang, what you propose seems like a very good way to monitor the
health of consumers externally; with this combination of metrics (offset
advance + bytes-in/out) it can be deduced when a consumer is not working.
What we are trying to accomplish is to detect thi
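For the in-client side of that kind of check, here is a minimal sketch (not from the thread, just an illustration) that reads the consumer's own fetch metrics; a stalled consumer typically shows a growing records-lag-max while bytes-consumed-rate falls toward zero:

import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class ConsumerHealthProbe {

    // Print two of the consumer's built-in fetch metrics. The metric names are
    // the standard consumer fetch-manager metrics in the 2.x Java client.
    static void logFetchMetrics(KafkaConsumer<?, ?> consumer) {
        Map<MetricName, ? extends Metric> metrics = consumer.metrics();
        for (Map.Entry<MetricName, ? extends Metric> entry : metrics.entrySet()) {
            String name = entry.getKey().name();
            if (name.equals("records-lag-max") || name.equals("bytes-consumed-rate")) {
                System.out.println(name + " = " + entry.getValue().metricValue());
            }
        }
    }
}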
I’ll assume that when you say load, you mean the data rate flowing into
your Kafka topic(s).
One instance can consume from multiple partitions, so in a variable-load
workflow, it’s a good idea to have more partitions than your average workload
will require. When the data rate is low, fewer consumers wil
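As a minimal illustration of that point (topic name, group id, and broker address below are placeholders): every instance runs the same code with the same group.id, and Kafka assigns each one a subset of the topic's partitions, rebalancing whenever instances come or go.

import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ScalableWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-app");                  // placeholder group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // Each instance is assigned a subset of the topic's partitions; the
        // listener just makes the current assignment visible.
        consumer.subscribe(Collections.singletonList("my-topic"), new ConsumerRebalanceListener() {
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                System.out.println("Now owning " + partitions.size() + " partitions: " + partitions);
            }
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                System.out.println("Giving up " + partitions);
            }
        });
        while (true) {
            consumer.poll(Duration.ofMillis(500)).forEach(record ->
                System.out.println(record.partition() + ":" + record.offset()));
        }
    }
}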
You can read the `__consumer_offsets` topic directly to see whether the
offsets are there or not:
https://stackoverflow.com/questions/33925866/kafka-how-to-read-from-consumer-offsets-topic
Also, if your brokers are on version 2.1, offsets should not be deleted
as long as the consumer group is online. T
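If decoding `__consumer_offsets` by hand is more than you need, an alternative (not the approach above, just another way to get the same answer) is to ask the brokers for the group's committed offsets through the AdminClient; the group id and broker address here are placeholders:

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            // An empty map means no committed offsets are stored for this group.
            Map<TopicPartition, OffsetAndMetadata> offsets =
                admin.listConsumerGroupOffsets("my-group")  // placeholder group id
                     .partitionsToOffsetAndMetadata()
                     .get();
            offsets.forEach((tp, om) -> System.out.println(tp + " -> " + om.offset()));
        }
    }
}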
Hi All,
I was wondering how an application can be auto-scalable if only a single
instance can read from a given Kafka partition, and two instances in the
same consumer group cannot read from the same partition at the same
time.
Suppose there is an application that has 10 instances running o
I am unable to reproduce it.
I also noted that all the consumer offsets reset in this application,
not just the streams ones, so it appears that whatever happened is not
streams-specific. The only reason I can think of for all the consumers to
do this is that the committed offsets information was "
You can set the consumer client.id to be the same as the consumer group.id
for all the consumers in your consumer group to accomplish this.
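Something along these lines, assuming the plain Java consumer (the group id, topic, and broker address are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class QuotaSharingConsumer {
    public static void main(String[] args) {
        String groupId = "analytics-consumers"; // placeholder group id
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        // Reuse the group id as the client id, so a quota configured for this
        // client.id is shared by every member of the group.
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, groupId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
        // ... poll as usual
    }
}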
—
Peter
> On Feb 21, 2019, at 7:56 AM, 洪朝阳 <15316036...@163.com> wrote:
>
> It’s great that Apache Kafka has had a feature for setting quotas since 0.9.
> https:
It's _not_ part of the public contract that the results of a range()
query are returned ordered by key. Thus, you should not rely on it.
It depends on the store implementation, and it just happens that the
default RocksDB store does return the data ordered.
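If you do need a deterministic order, a small sketch like this (the String/Long key and value types are just for illustration) drains the iterator and sorts explicitly instead of depending on the store implementation:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class RangeSorted {

    // Drain a range() query into a list and sort it by key explicitly, rather
    // than relying on the iteration order of the underlying store.
    static List<KeyValue<String, Long>> rangeSorted(
            ReadOnlyKeyValueStore<String, Long> store, String from, String to) {
        List<KeyValue<String, Long>> result = new ArrayList<>();
        try (KeyValueIterator<String, Long> it = store.range(from, to)) {
            while (it.hasNext()) {
                result.add(it.next());
            }
        }
        result.sort(Comparator.comparing((KeyValue<String, Long> kv) -> kv.key));
        return result;
    }
}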
-Matthias
On 2/21/19 9:23 AM, Yurii Demchenko wrote:
> Dear Kafk
Your understanding sounds correct.
One follow-up: even the idempotent producer by itself gives you
"exactly-once", because for many use cases it's important not to write
duplicates into a topic.
Thus, I would not say that you need transactions to do exactly-once (but
I guess it depends on what your exact
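For reference, a minimal producer sketch with only idempotence enabled and no transactions (broker address and topic name are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Idempotence alone: broker-side deduplication means internal retries
        // cannot write duplicate records into the topic.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value")); // placeholder topic
            producer.flush();
        }
    }
}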
Hi,
We have been running Kafka for quite some time now and have come across
an issue where the consumers are not reporting to the consumer group. This
is happening for just one topic and is working fine for others. I see the
error below in the consumer. I am able to connect to the broker server f
Dear Kafka creators,
First of all, I would like to thank you for your great product.
I have a question about the Kafka stores, and I would really appreciate
it if you could find a minute to answer.
According to the javadoc of the ReadOnlyKeyValueStore#range(K from, K to) method:
> Get an iterator over a gi
Has anyone else encountered the same requirement?
-----Original Message-----
From: users-return-36748-15316036153=163@kafka.apache.org
[mailto:users-return-36748-15316036153=163@kafka.apache.org] On Behalf Of 洪朝阳
Sent: February 21, 2019 23:57
To: users@kafka.apache.org
Subject: Any way to set a quota for a consumer group?
I
It’s great that Apache Kafka has had a feature for setting quotas since 0.9.
https://kafka.apache.org/documentation/#design_quotas
However, it’s not ideal that this feature can only limit a specific
client identified by the "client.id" property rather than a consumer group.
Is there any way to set
Thanks, everyone!
On Sun, Feb 17, 2019 at 8:57 PM Becket Qin wrote:
> Congratulations, Randall!
>
> On Sat, Feb 16, 2019 at 2:44 AM Matthias J. Sax
> wrote:
>
> > Congrats Randall!
> >
> >
> > -Matthias
> >
> > On 2/14/19 6:16 PM, Guozhang Wang wrote:
> > > Hello all,
> > >
> > > The PMC of Apa
Hello Guozhang,
Thanks, that might help us too.
Just to confirm, this depends on KTable/GlobalKTable usage, right?
I did a test with:

streamsConfiguration.put(
    StreamsConfig.restoreConsumerPrefix(StreamsConfig.RECEIVE_BUFFER_CONFIG),
    65536);
streamsConfiguration.put(StreamsConfig.restoreConsumerPre
Thanks, Matthias, for the answers and the update to the FAQ. I understand
exactly-once semantics much better now.
In summary, producer-side idempotence can be used on its own via the
enable.idempotence parameter (which underneath uses a PID and sequence
number combo). However, if exactly-once semantics is n
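For comparison with the idempotence-only setup, a minimal transactional-producer sketch (the transactional id, topic, and broker address are placeholders; downstream consumers would use isolation.level=read_committed to see only committed data):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Setting a transactional.id implies idempotence and enables transactions.
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-transactional-id"); // placeholder id

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();
        try {
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("output-topic", "key", "value")); // placeholder topic
            producer.commitTransaction();
        } catch (KafkaException e) {
            // Abort on recoverable errors; fatal errors (e.g. a fenced producer)
            // would normally require closing the producer instead.
            producer.abortTransaction();
        } finally {
            producer.close();
        }
    }
}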