Hi all,
Please give suggestions on this. Please help, I am stuck with this.
Thanks
Giri
On Mon, 25 Nov 2024 at 9:22 AM, giri mungi wrote:
> Hi Ping,
>
> 1) Yes, the records already exist.
> 2) Yes, the Kafka broker was also upgraded to version 3.6.
> That is why we are upgrading the client code also.
Hi Ping,
1) Yes, the records already exist.
2) Yes, the Kafka broker was also upgraded to version 3.6.
That is why we are upgrading the client code also.
3) Do I need to compare any settings between the old broker and the new broker?
Please help.
Thanks
Giri
On Mon, 25 Nov 2024 at 6:45 AM, Chia-Ping Tsai wrote:
hi
1) The records you want to read already exist, right?
2) Are all your code changes solely focused on upgrading the client code from
version 0.8 to 3.6? Or does the broker get upgraded as well?
Thanks,
Chia-Ping
> giri mungi wrote on Mon, 25 Nov 2024 at 1:27 AM:
>
> Hi Ping,
>
> My bad, commitSync is not required. We can ignore that.
Hi Ping,
My bad, commitSync is not required. We can ignore that.
I am calculating the diff as below:
long stime = Calendar.getInstance().getTimeInMillis();
ConsumerRecords<String, String> records =
    consumer.poll(Duration.ofMillis(2000));
long etime = Calendar.getInstance().getTimeInMillis();
log.info("Poll Records Count: " + records.count() + " diff: " + (etime - stime));
hi Giri
1. Why do you call `commitSync`? It seems your application does not use a
consumer group, as your use case is random reads, right?
2. How do you calculate "diff"?
giri mungi wrote on Mon, 25 Nov 2024 at 12:50 AM:
> Hi Ping,
>
> Please find the details below:
>
> 1) Kafka broker version is 3.6.1
Hi Ping,
Please find the details below:
1) Kafka broker version is 3.6.1
2) Logic explanation:
The loop polls messages from Kafka using consumer.poll(Duration.ofMillis(2000)).
*Exit condition:* when the message limit of 1000 is reached, the end flag is
set to true and the loop exits (see the sketch below).
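A minimal sketch of the loop as described, assuming the single-partition topic
and the seek-to-user-offset step mentioned elsewhere in this thread; the names
topic, userOffset, consumer, and MAX_MESSAGES are illustrative assumptions,
not taken from the original code:

```
import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.TopicPartition;

// Assumed setup: single-partition topic, start at the user-supplied offset.
TopicPartition tp = new TopicPartition(topic, 0);
consumer.assign(Collections.singletonList(tp));
consumer.seek(tp, userOffset);

final int MAX_MESSAGES = 1000;  // the 1000-message limit described above
int messageCount = 0;
boolean end = false;
do {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(2000));
    for (ConsumerRecord<String, String> record : records) {
        messageCount++;
        // ... process record ...
        if (messageCount >= MAX_MESSAGES) {
            end = true;  // limit reached: flip the flag so the loop exits
            break;
        }
    }
} while (!end);
```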
hi
1) Could you share the broker version with us?
2) Could you explain how the sample code works? What is the "end"?
```
do {
    ConsumerRecords<String, String> records =
        consumer.poll(Duration.ofMillis(1500));
} while (!end);
```
thanks,
Chia-Ping
giri mungi wrote on 24 Nov 2024:
Hi,
All of them are from the same consumer in each poll.
Before polling, we set the offset to the user-input offset and try to consume
the next 1000 messages.
Thanks
Giridhar.
On Sun, 24 Nov 2024 at 8:40 PM, Chia-Ping Tsai wrote:
> hi
>
> > Poll Records Count: 0 diff: 2004
> > Poll Records Count: 500 diff: 943
hi
> Poll Records Count: 0 diff: 2004
> Poll Records Count: 500 diff: 943
Are they from the same consumer in each poll? Or are they based on
different "offsets" and separate consumers' polls?
thanks,
Chia-Ping
giri mungi wrote on Sun, 24 Nov 2024 at 8:51 PM:
> Do I need to check any settings at the Kafka server level?
Do I need to check any settings at the Kafka server level?
On Sun, Nov 24, 2024 at 6:19 PM giri mungi wrote:
> Hi, I have set the properties as below:
>
> props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "175");
> props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500");
> props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "1500");
Hi, I have set the properties as below:
props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "175");
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500");
props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "1500");
props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "175");
Poll Records Count: 0 diff: 2004
Poll Records Count: 500 diff: 943
hi Giridar
It seems your use case involves random reads, and you expect the consumer to
return 1000 records from the server at once. Therefore, you could increase the
wait time (fetch.max.wait.ms) and fetch size (fetch.min.bytes) to receive a
larger response with as many records as possible.
A sui
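For illustration, the tuning Chia-Ping suggests might look like the following;
the values are assumed placeholders for a ~1000-record batch, not
recommendations from the thread:

```
// Let the broker wait for a larger batch instead of replying as soon as
// fetch.min.bytes (currently 175 bytes) is satisfied. Values are illustrative.
props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "1048576");            // ~1 MB
props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "500");              // wait up to 500 ms
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1000");              // up to 1000 records per poll
props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "1048576");  // per-partition cap
```

If the records are larger than 175 bytes, the max.partition.fetch.bytes
setting may also be worth revisiting, since it caps how much data each
partition can return per fetch.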
Hi Yang,
*This is the old code, which works perfectly fine and returns all 1000
records in less than 3 seconds.*
do {
    FetchRequest req = new FetchRequestBuilder().clientId(clientName)
        .addFetch(a_topic, a_partition, readOffset, fetchSize)
        .build();
    FetchResponse fetchResponse = consumer.fetch(req);
Hi Yang,
Can I get the records from Kafka as bytes or in compressed form, so that it
takes less time to fetch from Kafka? I can build the messages from those
bytes. Is that possible? (See the sketch below.)
Can you please give suggestions on this?
Thanks,
Giridar
On Sun, Nov 24, 2024 at 3:50 PM giri mungi wrote:
> Hi Yang,
>
> Thanks for your reply.
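On the raw-bytes question above: the stock deserializers can hand record
values back as byte arrays. A minimal sketch, assuming props is the existing
consumer configuration:

```
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Receive record values as raw byte[]; the application builds messages later.
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props);
```

Compressed batches, however, are decompressed by the Java consumer
automatically, so there is no supported way to receive still-compressed
payloads from KafkaConsumer.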
Hi Yang,
Thanks for your reply.
Now what should I do to improve performance? The old Kafka code
performed well.
These are the properties:
props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "");
props.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
props.setProp
Hi Giridar,
> *Code explanation: Fetching records is taking time for the first poll.*
> Poll Records Count: 500 diff: 1284
> Poll Records Count: 500 diff: 3
>
> For the first 500 records it took 1284 ms, and the next 500 records took 3 ms.
>
> *Why this much difference? I would like to improve the performance.*
Hi Team,
Good day to you.
I am Giridhar. I need your suggestions on Kafka
performance improvement, please.
*Scenario:* The user will give an offset as input, and based on that offset we
need to return the next 1000 messages from the Kafka topic, plus the next
offset. The Kafka topic contains only one partition.
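For the "next offset" part of the scenario, the consumer's position after the
poll loop (sketched earlier in the thread) can serve as the offset to hand
back; tp and consumer are the assumed variables from that sketch:

```
// position() is the offset of the next record that would be fetched;
// return it to the user as the "next offset" for the follow-up request.
long nextOffset = consumer.position(tp);
```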