hudeqi created KAFKA-16710:
--
Summary: Repeated `makeFollower` calls may cause the replica fetcher
thread to encounter an offset mismatch exception in `processPartitionData`
Key: KAFKA-16710
URL: https
hudeqi created KAFKA-16543:
--
Summary: `cleanupGroupMetadata` may perform ambiguous deletions when
the generation of the group is less than or equal to 0
Key: KAFKA-16543
URL: https://issues.apache.org/jira
time", Doğuşcan.
best,
hudeqi
"Doğuşcan Namal" <namal.dogus...@gmail.com> wrote:
> Hello, do we have a metric showing the uptime? We could tag that metric
> with version information as well.
>
> I like the idea of adding the version as a tag as well. However, I am not
hudeqi created KAFKA-15607:
--
Summary: Possible NPE is thrown in MirrorCheckpointTask
Key: KAFKA-15607
URL: https://issues.apache.org/jira/browse/KAFKA-15607
Project: Kafka
Issue Type: Bug
Hi Chris,
+1 (non-binding)
Finally, there is no need to use external intrusive tools to change the log
level of Kafka Connect online! Thanks for the KIP!
best,
hudeqi
about group offset/acl/config replication.
best,
hudeqi
n, for the
LRO cache, you can add an expiry-time attribute for each partition. If this
expiry interval elapses before the next update, the LRO of this partition can
be removed from the cache to avoid possible leaks and OOM.
best,
hudeqi
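The per-partition expiry idea described above could be sketched as follows. This is a minimal illustration, and the class and method names are hypothetical rather than actual MirrorMaker code:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Hypothetical sketch of an LRO cache whose entries carry a last-update
// timestamp, so partitions that stop being updated are evicted instead of
// accumulating until OOM.
class ExpiringLroCache {
    private static class Entry {
        final long offset;
        final long lastUpdatedMs;
        Entry(long offset, long lastUpdatedMs) {
            this.offset = offset;
            this.lastUpdatedMs = lastUpdatedMs;
        }
    }

    private final Map<Integer, Entry> lroByPartition = new HashMap<>();
    private final long expiryMs;

    ExpiringLroCache(long expiryMs) {
        this.expiryMs = expiryMs;
    }

    // Record a new LRO for a partition, refreshing its timestamp.
    void update(int partition, long offset, long nowMs) {
        lroByPartition.put(partition, new Entry(offset, nowMs));
    }

    // Drop entries that were not updated within the expiry interval.
    void evictExpired(long nowMs) {
        Iterator<Map.Entry<Integer, Entry>> it = lroByPartition.entrySet().iterator();
        while (it.hasNext()) {
            if (nowMs - it.next().getValue().lastUpdatedMs > expiryMs) {
                it.remove();
            }
        }
    }

    Long get(int partition) {
        Entry e = lroByPartition.get(partition);
        return e == null ? null : e.offset;
    }
}
```

Calling `evictExpired` periodically (for example from the task's poll loop) would bound the cache to partitions that are still being replicated.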
[
https://issues.apache.org/jira/browse/KAFKA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
hudeqi reopened KAFKA-15397:
> Deserializing produce requests may cause memory leaks when exceptions oc
Hey, Viktor.
As far as my implementation is concerned, the default setting is 30s, but I
added it to `MirrorConnectorConfig`, which can be adjusted freely according to
the load of the source cluster and the number of tasks.
best,
hudeqi
"Viktor Somogyi-Vass"
overall offset lag of the topic,
then using the "kafka_consumer_consumer_fetch_manager_metrics_records_lag"
metric will be more real-time and accurate.
This is just my suggestion; I hope it can spark better ideas so that we can
come up with a better solution.
best,
hudeqi
source cluster
minus the offset of the last record to be polled?
best,
hudeqi
> -----Original Message-----
> From: "Elxan Eminov"
> Sent: 2023-09-04 14:52:08 (Monday)
> To: dev@kafka.apache.org
> Cc:
> Subject: Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2 metric
>
Hi, Eminov.
My question is: how do you get the LEO of the source cluster for the partition
in `poll`?
best,
hudeqi
> -----Original Message-----
> From: "Elxan Eminov"
> Sent: 2023-09-02 19:18:23 (Saturday)
> To: dev@kafka.apache.org
> Cc:
> Subject: Re: Re: [DISCUSS] KIP-971 E
Thank you for your answer, Mickael.
If we set the gauge to a constant value of 1 and add a tag whose key is
"version" and whose value is the obtained version string, does this solve the
problem? We can then get the version by tag in Prometheus.
best,
hudeqi
k it's a good idea to classify replication types, since that makes it more
flexible to use, but I'm a little confused about the scenarios in which one
would replicate only a few of them.
(But it doesn't matter; it depends on the user.)
I'm sorry, I don't understand your second and fourth questions.
Hi, Kamal, thanks for the reminder, but I have a question: it seems that I
can't get this metric through "jmx_prometheus", although I have observed the
metric through other tools.
best,
hudeqi
"Kamal Chandraprakash" <kamal.chandraprak...@gmail.com> wrote:
> Hi Hu
this may affect the
performance of the replication itself.
As for the `replication-latency-ms` metric, it is sometimes inaccurate. For
details, see: https://issues.apache.org/jira/projects/KAFKA/issues/KAFKA-15068
best,
hudeqi
"Viktor Somogyi-Vass" <viktor.somo...@cloudera.com.INV
[
https://issues.apache.org/jira/browse/KAFKA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
hudeqi resolved KAFKA-15397.
Resolution: Resolved
> Deserializing produce requests may cause memory leaks when exceptions oc
Hi all, this is a vote on KIP-965, thanks.
best,
hudeqi
> -----Original Message-----
> From: hudeqi <16120...@bjtu.edu.cn>
> Sent: 2023-08-17 18:03:49 (Thursday)
> To: dev@kafka.apache.org
> Cc:
> Subject: Re: [DISCUSSION] KIP-965: Support disaster recovery between clusters
by MirrorMaker
>
shutdown.
best,
hudeqi
hudeqi created KAFKA-15397:
--
Summary: Deserializing produce requests may cause memory leaks
when exceptions occur
Key: KAFKA-15397
URL: https://issues.apache.org/jira/browse/KAFKA-15397
Project: Kafka
hudeqi created KAFKA-15396:
--
Summary: Add a metric indicating the version of the current
running kafka server
Key: KAFKA-15396
URL: https://issues.apache.org/jira/browse/KAFKA-15396
Project: Kafka
Thanks for your feedback, Fomenko! If there are no further points of
discussion on this KIP, I'm going to initiate the voting process next week.
Grateful.
best,
hudeqi
> -----Original Message-----
> From: "Igor Fomenko"
> Sent: 2023-08-14 21:30:59 (Monday)
> To: dev@kafka.apache.org
> Cc:
Bumping this discussion thread.
best,
hudeqi
"hudeqi" <16120...@bjtu.edu.cn> wrote:
> Thanks for your suggestion, Ryanne. I have updated the configuration name in
> cwiki.
>
> best,
> hudeqi
Bumping this discussion thread.
best,
hudeqi
"hudeqi" <16120...@bjtu.edu.cn> wrote:
> Sorry for missing the email reminders and your replies, and for getting
> back so late, Yash Mayya, Greg Harris, Sagar.
>
> Thank you for your thoughts and suggestions, I learned a lot, I
In fact, I have implemented a bytesIn/bytesOut limit at the topic dimension.
I don't know the community's attitude towards this feature, so I'm not sure
whether I should propose a KIP to contribute it.
best,
hudeqi
> -----Original Message-----
> From: hudeqi <16120...@bjtu.edu.cn>
Thanks for your suggestion, Ryanne. I have updated the configuration name in
cwiki.
best,
hudeqi
Hi all. Let me ask a question first: do we plan to support quotas at the
topic dimension?
ster.
group ACL: the group ACL information is obtained by filtering on the users
obtained above.
Looking forward to your reply.
Best, hudeqi
es' of the internal topics and warn when the user configures a larger value.
What do you think? If there's a better way, I'm all ears.
best,
hudeqi
[
https://issues.apache.org/jira/browse/KAFKA-15139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
hudeqi resolved KAFKA-15139.
Resolution: Fixed
> Optimize the performance of `Set.removeAll(List)` in
> `MirrorCheckpointCon
Is anyone following this KIP? Bumping this thread.
hudeqi created KAFKA-15139:
--
Summary: Optimize the performance of `Set.removeAll(List)` in
`MirrorCheckpointConnector`
Key: KAFKA-15139
URL: https://issues.apache.org/jira/browse/KAFKA-15139
Project: Kafka
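The ticket body is truncated in this view, but the usual pitfall with `Set.removeAll(List)` in Java is that `AbstractSet.removeAll` iterates the set and calls `List.contains` for every element whenever the set is not larger than the argument, which is O(n*m) for an `ArrayList`. A sketch of the slow call and the common fix, assuming this is the behavior the ticket optimizes:

```java
import java.util.*;

// Demonstrates the Set.removeAll(List) performance trap and the usual fix.
class RemoveAllDemo {
    // Potentially slow: when set.size() <= toRemove.size(),
    // AbstractSet.removeAll calls toRemove.contains() per set element,
    // which is a linear scan for an ArrayList.
    static Set<String> slowRemove(Set<String> set, List<String> toRemove) {
        Set<String> copy = new HashSet<>(set);
        copy.removeAll(toRemove);
        return copy;
    }

    // Fast: iterate the list and remove from the hash set,
    // O(toRemove.size()) overall.
    static Set<String> fastRemove(Set<String> set, List<String> toRemove) {
        Set<String> copy = new HashSet<>(set);
        toRemove.forEach(copy::remove);
        return copy;
    }
}
```

Both forms produce the same result; only the time complexity differs.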
hudeqi created KAFKA-15134:
--
Summary: Enrich the prompt reason in CommitFailedException
Key: KAFKA-15134
URL: https://issues.apache.org/jira/browse/KAFKA-15134
Project: Kafka
Issue Type
hudeqi created KAFKA-15129:
--
Summary: Clean up all metrics that were forgotten to be closed
Key: KAFKA-15129
URL: https://issues.apache.org/jira/browse/KAFKA-15129
Project: Kafka
Issue Type
nt. Otherwise, the default size (50 MB) is used when adding or updating the
topic config via the admin client.
Best,
hudeqi
hudeqi created KAFKA-15119:
--
Summary: Support incremental synchronization of topicAcl in
MirrorSourceConnector
Key: KAFKA-15119
URL: https://issues.apache.org/jira/browse/KAFKA-15119
Project: Kafka
hudeqi created KAFKA-15110:
--
Summary: The wrong version may be run, causing startup failure when
multiple version jars exist under core/build/libs
Key: KAFKA-15110
URL: https://issues.apache.org/jira/browse
hudeqi created KAFKA-15086:
--
Summary: The unreasonable segment size setting of the internal
topics in MM2 may cause the worker startup time to be too long
Key: KAFKA-15086
URL: https://issues.apache.org/jira/browse
Congratulations! Divij!
> -----Original Message-----
> From: "Joobi S.B"
> Sent: 2023-06-14 00:49:57 (Wednesday)
> To: dev@kafka.apache.org
> Cc:
> Subject: Re: [ANNOUNCE] New committer: Divij Vaidya
>
Is there any more attention on this KIP? :)
Bumping this thread.
Best,
hudeqi
> -----Original Message-----
> From: hudeqi <16120...@bjtu.edu.cn>
> Sent: 2023-03-26 17:42:31 (Sunday)
> To: dev@kafka.apache.org
> Cc:
> Subject: Re: Re: Re: [DISCUSS] KIP-842: Add richer group offset reset mechanisms
>
hudeqi created KAFKA-15068:
--
Summary: An incorrect replication latency may be calculated when the
timestamp of the record is of type CREATE_TIME
Key: KAFKA-15068
URL: https://issues.apache.org/jira/browse/KAFKA-15068
Hi, I am also very excited to see this discussion, because I also implemented
a "federation model" based on Kafka 0.10.2.1 at my company before, and we
benefited from it in production. It solves the problem of seamlessly
migrating the bytesIn/bytesOut of a topic to another Kafka cluster wi
The current request queue is monolithic. In fact, many performance problems
arise when the business scenarios of a single cluster become complicated. We
could divide it not only by user but also isolate by request category; this
is just my idea.
best,
hudeqi
>
hudeqi created KAFKA-14979:
--
Summary: Incorrect lag is calculated in
markPartitionsForTruncation in ReplicaAlterLogDirsThread
Key: KAFKA-14979
URL: https://issues.apache.org/jira/browse/KAFKA-14979
than the previous partitions, but
the traffic of the newly expanded partitions may be larger.
best,
hudeqi
"Edoardo Comar" <edoardli...@gmail.com> wrote:
> Hi hudeqi,
>
> thanks for the KIP.
>
> For the purpose of monitoring if partitions of a topic are used "
of the partition
dimension, especially the issue of traffic skew.
Please take a look here for details: https://cwiki.apache.org/confluence/x/LQs0Dw
best,
hudeqi
understand correctly.
best,
hudeqi
> -----Original Message-----
> From: "Dániel Urbán"
> Sent: 2023-04-19 15:50:01 (Wednesday)
> To: dev@kafka.apache.org
> Cc:
> Subject: Re: Re: [DISCUSS] KIP-918: MM2 Topic And Group Listener
>
replicated (even if the topic has no data), and
MirrorCheckpointMetrics.CHECKPOINT_LATENCY can be used to monitor the
currently replicated group list (if this is wrong, please correct me).
best,
hudeqi
"Dániel Urbán" <urb.dani...@gmail.com> wrote:
> Hello everyone,
>
> I would like
[
https://issues.apache.org/jira/browse/KAFKA-14906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
hudeqi reopened KAFKA-14906:
> Extract the coordinator service log from server
[
https://issues.apache.org/jira/browse/KAFKA-14906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
hudeqi resolved KAFKA-14906.
Fix Version/s: 4.0.0
Resolution: Later
This change will be reintroduced in version 4.x.
> Extr
hudeqi created KAFKA-14907:
--
Summary: Add the traffic metric of the partition dimension in
BrokerTopicStats
Key: KAFKA-14907
URL: https://issues.apache.org/jira/browse/KAFKA-14907
Project: Kafka
hudeqi created KAFKA-14906:
--
Summary: Extract the coordinator service log from server log
Key: KAFKA-14906
URL: https://issues.apache.org/jira/browse/KAFKA-14906
Project: Kafka
Issue Type
Another question from using Kafka recently:
currently, Kafka's write throttle only supports the user dimension and
clientId dimension of a request. In practice, this situation is often
encountered: a topic's inbound traffic suddenly increases, and the resource
bottleneck is about to
issue?
best,
hudeqi
hudeqi created KAFKA-14868:
--
Summary: Add metric counting capability to KafkaMetricsGroup to
facilitate checking whether any metric was missed when closing
Key: KAFKA-14868
URL: https://issues.apache.org/jira/browse
hudeqi created KAFKA-14866:
--
Summary: When the controller changes, the old controller needs to
clean up some related resources when resigning
Key: KAFKA-14866
URL: https://issues.apache.org/jira/browse/KAFKA-14866
Is there any more attention on this KIP?
Bumping this thread.
Best,
hudeqi
"hudeqi" <16120...@bjtu.edu.cn> wrote:
> Hello, have any of the folks who discussed this before seen it? New
> participants are also welcome to join the discussion.
>
> "hudeqi" <16120...@bjtu.edu.cn
hudeqi created KAFKA-14842:
--
Summary: MirrorCheckpointTask can reduce the RPC calls to
"listConsumerGroupOffsets(group)" for irrelevant groups at each poll
Key: KAFKA-14842
URL: https://issues.apache.org/j
hudeqi created KAFKA-14837:
--
Summary: The MirrorCheckpointConnector of MM2 rebalances frequently
when the source cluster has many groups that change frequently (even though
the configured list of groups to sync does not change)
Key
hudeqi created KAFKA-14824:
--
Summary: ReplicaAlterLogDirsThread may cause serious disk usage
growth in case of an unknown exception
Key: KAFKA-14824
URL: https://issues.apache.org/jira/browse/KAFKA-14824
Project
hudeqi created KAFKA-14812:
--
Summary: ProducerPerformance still counts a send as successful in
the console output when sending failed
Key: KAFKA-14812
URL: https://issues.apache.org/jira/browse/KAFKA-14812
Project
I'm reposting the newly updated KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-842%3A+Add+richer+group+offset+reset+mechanisms
"hudeqi" <16120...@bjtu.edu.cn> wrote:
> Hello, have any of the folks who discussed this before seen it? New
> participants are also welcome to discuss
Hello, have any of the folks who discussed this before seen it? New
participants are also welcome to join the discussion.
"hudeqi" <16120...@bjtu.edu.cn> wrote:
> Long time no see. This issue has been discussed for a long time; now please
> allow me to summarize it, and then everyone
fset of the newly expanded
partition to "earliest" even when "auto.offset.reset"="latest". In this way,
Kafka users do not need to be aware of this subtle but useful change, and the
handling of other situations remains unchanged (without introducing too many
rich offset-handling mechanisms).
I hope you can help me with the direction of a solution to this issue, thank
you.
Best,
hudeqi
k())?
>
>
> For the original use-case you mentioned, that you want to start from
> "latest" when the app starts, but if a new partition is added you want
> to start from "earliest" it seem that the right approach would be to
> actually configure "earlie
, and then let these groups commit an initial offset of 0 for
these newly expanded partitions (also using the adminClient). Finally, the
real process of adding partitions is carried out. In this way, the problem
can also be completely solved.
Best,
hudeqi
"Matthew Howlett" <m...@confluent.i
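The pre-commit idea above can be sketched as a helper that derives the initial offsets for the partitions about to be added. The class and method names here are illustrative, and the AdminClient call mentioned in the comment is only indicative of where it would fit:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative helper: compute an initial committed offset of 0 for each
// newly expanded partition, to be committed for the relevant groups before
// the partitions are actually added to the topic.
class InitialOffsetHelper {
    static Map<Integer, Long> initialOffsetsForNewPartitions(int oldCount, int newCount) {
        Map<Integer, Long> offsets = new HashMap<>();
        for (int p = oldCount; p < newCount; p++) {
            offsets.put(p, 0L);
        }
        // With kafka-clients, this map would be turned into a
        // Map<TopicPartition, OffsetAndMetadata> and committed via
        // admin.alterConsumerGroupOffsets(groupId, toCommit).all().get();
        return offsets;
    }
}
```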
Thanks for your attention and reply.
Regarding the problem raised by this KIP, if you have other ideas or
solutions, you are welcome to put them forward, thank you.
Best,
hudeqi
"David Jacot" <da...@apache.org> wrote:
> Thanks for the KIP.
>
> I read it and I am also wor
more useful, so I put them together into this KIP.
Best,
hudeqi
"Matthias J. Sax" <mj...@apache.org> wrote:
> Thanks for the KIP.
>
> I don't think I fully digested the proposal yet, but my first reaction
> is: this is quite complicated. Frankly, I am wo
Bump this thread.
So far, I've got:
1 +1 (non-binding) from Ziming. Thanks to the people who voted for this KIP.
Hoping to receive some binding votes/comments and any other non-binding
votes/comments. Details can be found here:
https://cwiki.apache.org/confluence/x/xhyhD
-----Original Message-----
From: h
Bump this thread.
So far, I've got:
1 +1 (non-binding) from Ziming
Thanks to the people who voted for this KIP.
Hoping to receive some binding votes/comments and any other non-binding
votes/comments.
Details can be found here: https://cwiki.apache.org/confluence/x/xhyhD
-----Original Message-----
From: h
--
> From: hudeqi <16120...@bjtu.edu.cn>
> Sent: 2022-06-17 16:03:00 (Friday)
> To: dev@kafka.apache.org
> Cc:
> Subject: Re: Re: [VOTE] KIP-842: Add richer group offset reset mechanisms
>
Bump this thread.
So far, I've got:
1 +1 (non-binding) from Ziming
Thanks to the people who voted for this KIP.
Hoping to receive some binding votes/comments and any other non-binding
votes/comments.
Details can be found here: https://cwiki.apache.org/confluence/x/xhyhD
> -----Original Message-----
> From: "deng
Hi all,
I'd like to start a vote on KIP-842 to add some group offset reset mechanisms.
Details can be found here: https://cwiki.apache.org/confluence/x/xhyhD
Any feedback is appreciated.
Thank you.
hudeqi
I think so too. What about Guozhang Wang and Luke Chen? Can I initiate the
voting process?
Best,
hudeqi
> -----Original Message-----
> From: "邓子明"
> Sent: 2022-06-07 10:23:37 (Tuesday)
> To: dev@kafka.apache.org
> Cc:
> Subject: Re: [DISCUSS] KIP-842: Add richer group offset reset mechanisms
>
Hi, Ziming.
I thought about it again and think it might be better to add an additional
auxiliaryStrategy, so that we can implement more auxiliary strategies in this
way, not just nearest. What do you think?
Best,
hudeqi
> -----Original Message-----
> From: hudeqi <16120...@bjtu.edu.cn>
> 发
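The auxiliaryStrategy idea could look roughly like the sketch below, with "nearest" resolved only when an offset is out of range. All names are hypothetical and not taken from the KIP text:

```java
// Illustrative sketch: an auxiliary reset strategy consulted only when the
// committed/requested offset is out of range. "Nearest" picks the earliest
// offset if the requested offset fell below the log start, and the latest
// offset if it fell beyond the log end.
class OffsetReset {
    enum AuxiliaryStrategy { NEAREST }

    static long resolveOutOfRange(long requested, long logStart, long logEnd, AuxiliaryStrategy s) {
        switch (s) {
            case NEAREST:
                // Below the log start -> earliest; beyond the log end -> latest.
                return requested < logStart ? logStart : logEnd;
            default:
                throw new IllegalArgumentException("unknown strategy: " + s);
        }
    }
}
```

Modeling this as a separate enum keeps the primary `auto.offset.reset` semantics untouched while leaving room for more auxiliary strategies later.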
OutOfRange, I
think we can directly remove this enum value, WDYT?
>
> --
> Best,
> Ziming
>
> > On May 27, 2022, at 5:19 PM, hudeqi <16120...@bjtu.edu.cn>
wrote:
> >
> > Thank you for your attention and reply. Here are my reply to your
questions:
>
ange "reset
> behavior” to “proposed reset behavior”, then we can be clear that this has no
> effect on current behavior.
>
> 4. You added a new config “nearest.offset.reset” and only explained what will
> happen when we set it to true; you'd better explain what will happen if it is
> fa
hudeqi created KAFKA-12478:
--
Summary: Consumer groups may lose data for newly expanded
partitions when adding partitions to a topic, if the group is set to consume
from the latest
Key: KAFKA-12478
URL: https