Re: Plz help on kafka consumer API performance(taking time on the first consumer.poll())

2024-11-24 Thread giri mungi
Hi Ping, 1) Yes, the records already exist. 2) Yes, the Kafka broker was also upgraded to version 3.6; that is why we are upgrading the client code as well. 3) Do I need to compare any settings between the old broker and the new broker? Please help. Thanks Giri On Mon, 25 Nov 2024 at 6:45 AM, Chia-Ping

Re: Plz help on kafka consumer API performance(taking time on the first consumer.poll())

2024-11-24 Thread Chia-Ping Tsai
hi 1) The records you want to read already exist, right? 2) Are all your code changes solely focused on upgrading the client code from version 0.8 to 3.6? Or does the broker get upgraded as well? Thanks, Chia-Ping > giri mungi wrote on 2024-11-25 at 1:27 AM: > > Hi Ping, > > My bad,commi

Re: Plz help on kafka consumer API performance(taking time on the first consumer.poll())

2024-11-24 Thread giri mungi
Hi Ping, My bad, commitSync is not required; we can ignore that. I am calculating the diff as below: long stime = Calendar.getInstance().getTimeInMillis(); ConsumerRecords records = consumer.poll(Duration.ofMillis(2000)); long etime = Calendar.getInstance().getTimeInMillis(); log.info("Poll Records C
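A minimal sketch of the timing pattern quoted above, assuming an already-configured consumer (the class and method names here are illustrative, not Giri's actual code); System.currentTimeMillis() stands in for the Calendar-based timestamps in the snippet.

```java
import java.time.Duration;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;

public class PollTiming {
    // Measure how long a single poll takes and how many records it returns.
    static void timedPoll(Consumer<String, String> consumer) {
        long stime = System.currentTimeMillis();
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(2000));
        long etime = System.currentTimeMillis();
        System.out.println("Poll Records Count :" + records.count() + " diff :" + (etime - stime));
    }
}
```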

Re: Plz help on kafka consumer API performance(taking time on the first consumer.poll())

2024-11-24 Thread Chia-Ping Tsai
hi Giri 1. Why do you call `commitSync`? It seems your application does not use the consumer group, since your use case is random reads, right? 2. How do you calculate the "diff"? giri mungi wrote on Mon, Nov 25, 2024 at 12:50 AM: > Hi Ping, > > Please find the details below: > > 1) Kafka broker version is 3.6.1

Re: Plz help on kafka consumer API performance(taking time on the first consumer.poll())

2024-11-24 Thread giri mungi
Hi Ping, Please find the details below: 1) Kafka broker version is 3.6.1 2) Logic explanation: the loop polls messages from Kafka using consumer.poll(Duration.ofMillis(2000)). *Exit conditions:* the loop exits when the message limit (> 1000) is reached; the end flag is then set to true and the loop exits. boo
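A sketch of the loop as described above (variable names are assumed, since the preview is truncated): polling continues until more than 1000 records have been accumulated, at which point the end flag is set and the loop exits.

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

public class PollLoop {
    static List<ConsumerRecord<String, String>> readBatch(Consumer<String, String> consumer) {
        List<ConsumerRecord<String, String>> collected = new ArrayList<>();
        boolean end = false;
        do {
            // Each poll may return anywhere from 0 up to max.poll.records records.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(2000));
            records.forEach(collected::add);
            if (collected.size() > 1000) {
                end = true;   // message limit reached -> exit the loop
            }
        } while (!end);
        return collected;
    }
}
```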

Re: Plz help on kafka consumer API performance(taking time on the first consumer.poll())

2024-11-24 Thread Chia-Ping Tsai
hi 1) Could you share the broker version with us? 2) Could you explain how the sample code works? What is the "end"? ``` do { ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1500)); } while (!end) ``` thanks, Chia-Ping giri mungi 於 2024年1

Re: Plz help on kafka consumer API performance(taking time on the first consumer.poll())

2024-11-24 Thread giri mungi
Hi, All of them are from the same consumer in each poll. Before each poll we set the offset to the user-input offset and try to consume the next 1000 messages. Thanks Giridhar. On Sun, 24 Nov 2024 at 8:40 PM, Chia-Ping Tsai wrote: > hi > > > Poll Records Count :0 diff :2004 > Poll Records Count :500 diff :943
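Since the message describes seeking to a user-supplied offset before reading, here is a rough sketch of that access pattern with the modern consumer; the method and variable names are illustrative, only the 1000-record limit comes from the thread.

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.TopicPartition;

public class RandomRead {
    // Assign the partition directly (no consumer group needed), seek to the
    // user-input offset, then poll until roughly 1000 records have been read.
    static List<ConsumerRecord<String, String>> readFrom(
            Consumer<String, String> consumer, TopicPartition tp, long userOffset) {
        consumer.assign(Collections.singletonList(tp));
        consumer.seek(tp, userOffset);
        List<ConsumerRecord<String, String>> out = new ArrayList<>();
        while (out.size() < 1000) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(2000));
            if (records.isEmpty()) {
                break;   // nothing more available right now
            }
            records.forEach(out::add);
        }
        return out;
    }
}
```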

[jira] [Resolved] (KAFKA-17988) Fix flaky ReconfigurableQuorumIntegrationTest.testRemoveAndAddSameController

2024-11-24 Thread Chia-Ping Tsai (Jira)
[ https://issues.apache.org/jira/browse/KAFKA-17988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai resolved KAFKA-17988. Fix Version/s: 4.0.0 Resolution: Fixed > Fix flaky ReconfigurableQuorumIntegrationT

Re: Plz help on kafka consumer API performance(taking time on the first consumer.poll())

2024-11-24 Thread Chia-Ping Tsai
hi > Poll Records Count :0 diff :2004 Poll Records Count :500 diff :943 Are they from the same consumer in each poll? Or are they based on a different "offset" and a separate consumer's poll? thanks, Chia-Ping giri mungi wrote on Sun, Nov 24, 2024 at 8:51 PM: > Do I need to check any settings in the kaf

Re: [DISCUSS] KIP-1099: Extend kafka-consumer-groups command line tool to support new consumer group

2024-11-24 Thread PoAn Yang
Hi Chia-Ping, Thanks for the suggestion. The "isClassic" field in ConsumerGroupDescribeResponse is a boolean value, so we can't use it as the protocol string. I changed the "PROTOCOL" column to "IS-CLASSIC" in kafka-consumer-groups.sh. It returns "true" if a member is in a classic group or it's a

[jira] [Resolved] (KAFKA-17835) Move ProducerIdManager and RPCProducerIdManager to transaction-coordinator module

2024-11-24 Thread Jira
[ https://issues.apache.org/jira/browse/KAFKA-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] 黃竣陽 resolved KAFKA-17835. - Resolution: Fixed > Move ProducerIdManager and RPCProducerIdManager to transaction-coordinator > module > -

Re: Plz help on kafka consumer API performance(taking time on the first consumer.poll())

2024-11-24 Thread giri mungi
Do I need to check any settings at the Kafka server level? On Sun, Nov 24, 2024 at 6:19 PM giri mungi wrote: > Hi, I have set the below properties: > > props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG,"175"); > props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500"); > props.put(Cons

Re: Plz help on kafka consumer API performance(taking time on the first consumer.poll())

2024-11-24 Thread giri mungi
Hi, I have set the below properties: props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG,"175"); props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500"); props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "1500"); props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "175"); Pol

Re: Plz help on kafka consumer API performance(taking time on the first consumer.poll())

2024-11-24 Thread Chia-Ping Tsai
hi Giridar It seems your use case involves random reads, and you expect the consumer to return 1000 records from the server at once. Therefore, you could increase the wait time (fetch.max.wait.ms) and fetch size (fetch.min.bytes) to receive a larger response with as many records as possible. A sui
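A sketch of the tuning Chia-Ping suggests; the concrete numbers below are placeholder assumptions, not values from the thread, and would need to be sized against the actual record size and latency budget.

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class FetchTuning {
    static Properties withLargerFetches(Properties props) {
        // Ask the broker to accumulate more data per fetch response...
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "1048576");   // example: 1 MiB
        // ...but cap how long it may wait before answering anyway.
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "500");     // example: 500 ms
        // Allow a single poll() to hand back up to 1000 records.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1000");
        return props;
    }
}
```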

Re: Plz help on kafka consumer API performance(taking time on the first consumer.poll())

2024-11-24 Thread giri mungi
Hi Yang, *This is the old code, which is performing perfectly fine and returns all 1000 records in less than 3 seconds.* do { FetchRequest req = new FetchRequestBuilder().clientId(clientName) .addFetch(a_topic, a_partition, readOffset, fetchSize) .build(); FetchResponse fetchResponse = consumer.fet

Re: Plz help on kafka consumer API performance(taking time on the first consumer.poll())

2024-11-24 Thread giri mungi
Hi Yang, Can I get the records from Kafka as bytes or in compressed form so that it takes less time from Kafka? I can build the messages from those bytes. Is that possible? Can you please give suggestions on this? Thanks, Giridar On Sun, Nov 24, 2024 at 3:50 PM giri mungi wrote: > Hi Yang, > > T
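What Giri asks about can be approximated with ByteArrayDeserializer, which hands record payloads back as byte[] for the application to decode later; a minimal sketch (the bootstrap address parameter is an assumption, not a detail from the thread). Note that wire compression is already decompressed by the client library itself, so the deserializer choice mainly avoids the String conversion cost rather than the decompression.

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class RawBytesConsumer {
    static KafkaConsumer<byte[], byte[]> create(String bootstrapServers) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        // Surface keys and values as raw byte[]; the application decodes them later.
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        return new KafkaConsumer<>(props);
    }
}
```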

Re: Plz help on kafka consumer API performance(taking time on the first consumer.poll())

2024-11-24 Thread giri mungi
Hi Yang, Thanks for your reply. Now what should I do to improve my performance? Because the old Kafka code performed well. These are the properties: props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, ""); props.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); props.setProp
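For reference, a sketch of the consumer set-up this thread seems to describe (the bootstrap address and String deserializers are assumptions; the preview is truncated). With assign()/seek() no consumer group is used, so the empty GROUP_ID_CONFIG can simply be omitted.

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReaderSetup {
    static KafkaConsumer<String, String> create(String bootstrapServers) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // No group.id and no auto-commit: offsets are chosen explicitly via seek(),
        // so group management and committed offsets are not used at all.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        return new KafkaConsumer<>(props);
    }
}
```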

Re: Plz help on kafka consumer API performance(taking time on the first consumer.poll())

2024-11-24 Thread PoAn Yang
Hi Giridar, > *Code explanation: Fetching records is taking time for the first poll.* > Poll Records Count: 500 diff: 1284 > Poll Records Count: 500 diff: 3 > > For the first 500 records it took 1284 ms and for the next 500 records it took 4 ms > > *Why this much difference? I would like to improve the p

[jira] [Created] (KAFKA-18083) ClusterInstance custom controllerListener not work

2024-11-24 Thread Kuan Po Tseng (Jira)
Kuan Po Tseng created KAFKA-18083: - Summary: ClusterInstance custom controllerListener not work Key: KAFKA-18083 URL: https://issues.apache.org/jira/browse/KAFKA-18083 Project: Kafka Issue T