We are preparing to deploy our *Java API-based Kafka Producer* to
production and want to validate that the configuration is tuned for
*high reliability and optimal performance*.
*Message Load Details*
- *Message Rate:* 3,000 to 5,000
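For reference, a reliability-oriented producer configuration at this kind of rate (assuming the rate is messages per second) might look like the sketch below. The broker addresses and exact values are illustrative assumptions, not the poster's actual settings:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ReliableProducer {
    public static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        // Placeholder broker list -- replace with the real cluster.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Reliability: wait for all in-sync replicas to acknowledge each write.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Idempotence avoids duplicate records when retries happen (requires acks=all).
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        // Retry transient broker failures instead of failing fast.
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE));
        // Modest batching still helps at a few thousand messages per second.
        props.put(ProducerConfig.LINGER_MS_CONFIG, "5");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "65536"); // 64 KiB, illustrative
        return new KafkaProducer<>(props);
    }
}
```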
workload among different numbers of
> producers, partitions, or brokers to see how the throughput behaves.
>
> Without knowing the details of your setup and debugging it directly, it's
> hard to give specific tuning advice. You can try looking online for others'
>
Hi all, I am encountering a TimeoutException while publishing messages to
Kafka during load testing of our Kafka producer. The error message
indicates that records are expiring before they can be sent successfully:
org.apache.kafka.common.errors.TimeoutException: Expiring 115
record(s) for ORG_LT
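This exception is raised when records sit in the producer's buffer longer than `delivery.timeout.ms` (which must be at least `linger.ms` + `request.timeout.ms`); during load tests it usually means the producer is generating data faster than the brokers can absorb it. A minimal sketch of the settings involved, with illustrative values rather than recommendations:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ExpiryTuning {
    static void applyExpirySettings(Properties props) {
        // Total time a record may spend waiting (batching + retries) before
        // the producer gives up with "Expiring N record(s)". Default 120000.
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, "180000");
        // Per-request broker timeout; delivery.timeout.ms must be >=
        // linger.ms + request.timeout.ms.
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "30000");
        // If the send buffer fills, max.block.ms bounds how long send() blocks.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "60000");
    }
}
```

Raising these only hides the symptom if the brokers genuinely cannot keep up; checking broker-side throughput and partition count is usually the first step.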
The old implementation takes up to 2.5 seconds; the new implementation
takes 10 seconds (10 polls, with each poll returning 500 records).
Does authentication happen internally every time? Is this delay expected?
How can I explain this to management, please?
On Thu, Jan 16, 2025 at 10:42 PM giri mungi wrote:
Hi all,
Below are the logs from the two implementations:
1) old Kafka broker with the old Kafka client version (0.8 etc.)
2) new Kafka brokers (3.6.1) with the new Kafka client version (3.6.1)
The old setup does not use authentication, but on the new Kafka we are
using authentication; we moved to the new one for security an
Hi all,
Please give suggestions on this. Please help, I am stuck with this.
Thanks,
Giri
On Mon, 25 Nov 2024 at 9:22 AM, giri mungi wrote:
> Hi Ping,
>
> 1) Yes, the records already exist.
> 2) Yes, the Kafka broker was also upgraded to version 3.6.
> That’s why we are upgrading
Hi Team,
I am currently pushing 50,000 records to a Kafka producer, but the process
takes approximately 5 minutes to complete.
Could you please assist me in optimizing the Kafka producer configuration
to reduce the processing time? Any recommendations or best practices would
be greatly appreciated.
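A common cause of minutes-long bulk loads is calling `producer.send(...).get()` once per record, which serializes every send onto a network round trip. Below is a hedged sketch of an asynchronous bulk send with batching and compression enabled; the broker address, topic name, and sizes are assumptions:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class BulkSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Throughput knobs: give the producer room to batch and compress.
        props.put(ProducerConfig.LINGER_MS_CONFIG, "20");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "131072"); // 128 KiB
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 50_000; i++) {
                // Asynchronous send: do NOT call .get() per record --
                // a synchronous send per message is the most common reason
                // 50,000 records take minutes instead of seconds.
                producer.send(
                        new ProducerRecord<>("my-topic", Integer.toString(i), "payload-" + i),
                        (metadata, exception) -> {
                            if (exception != null) exception.printStackTrace();
                        });
            }
            producer.flush(); // wait once, at the end, for everything to be delivered
        }
    }
}
```

With batching like this, 50,000 small records would typically publish in seconds, though the actual time depends on record size, acks, and the network.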
Chia-Ping Tsai wrote:
> hi
>
> 1) The records you want to read already exist, right?
>
> 2) Are all your code changes solely focused on upgrading the client code
> from version 0.8 to 3.6? Or does the broker get upgraded as well?
>
>
> Thanks,
> Chia-Ping
>
> 2. how do you calculate "diff"?
>
> giri mungi wrote on Monday, November 25, 2024, at 12:50 AM:
>
> > Hi Ping,
> >
> > Please find the details below:
> >
> > 1) Kafka broker version is 3.6.1
> >
> > 2) Logic Explanation:
> >
> > Polls messages
1) could you share the broker version to us?
> 2) could you explain how the sample code works? what is the "end"?
>
> ```
> do {
>     ConsumerRecords<String, String> records =
>         consumer.poll(Duration.ofMillis(1500));
> } while (!end);
> ```
Poll Records Count :500 diff :943
>
> Are they from the same consumer in each poll? Or are they based on
> different "offsets" from separate consumers' polls?
>
> thanks,
> Chia-Ping
>
>
> giri mungi wrote on Sunday, November 24, 2024, at 8:51 PM:
>
Do I need to check any settings at the Kafka server level?
On Sun, Nov 24, 2024 at 6:19 PM giri mungi wrote:
> Hi, I have set the properties below:
>
> props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "175");
> props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500");
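Two notes on these values: `fetch.min.bytes` only makes the broker wait when that much data is not yet available, so for reads of already-existing records it should not add latency; and `max.poll.records` caps how many records one `poll()` returns, not how much one network fetch carries. A sketch of a consumer sized so a single fetch can cover a 1000-record page (the sizes are assumptions to adjust to your record size):

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TunedConsumer {
    static KafkaConsumer<String, String> create() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // When the data already exists, fetch.min.bytes is satisfied
        // immediately and the broker does not wait for fetch.max.wait.ms.
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "1");
        // Let one network fetch carry a whole 1000-record page instead of
        // several round trips.
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "4194304"); // 4 MiB
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1000");
        return new KafkaConsumer<>(props);
    }
}
```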
ms, a value of 1500 ms might match your expectations.
>
> Best,
> Chia-Ping
>
>
> On 2024/11/24 10:55:23 giri mungi wrote:
> > Hi Yang,
> >
> > *This is the old code, which performs perfectly fine and returns in less
> > than 3 seconds for all 1000 records.*
);
props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "500");
return new KafkaConsumer<>(props);
}
How can I improve the performance of this?
Hi Yang,
Can I get the records from Kafka as bytes or in compressed form, so that
fetching from Kafka takes less time? I can build the messages from those
bytes. Is that possible?
Can you please give suggestions on this.
Thanks,
Giridar
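The broker already sends batches to the consumer in their compressed form, and the client decompresses them; what can be skipped is the String deserialization step, for example by reading values as raw bytes. A minimal sketch using `ByteArrayDeserializer` (broker address, group id, and topic are placeholders):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class RawBytesConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "raw-bytes-demo");          // placeholder
        // Hand records to the application as raw byte[] -- no String decoding.
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(1500));
            for (ConsumerRecord<byte[], byte[]> r : records) {
                byte[] value = r.value(); // build the message from these bytes lazily
                System.out.println("got " + value.length + " bytes at offset " + r.offset());
            }
        }
    }
}
```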
On Sun, Nov 24, 2024 at 3:50 PM giri mungi wrote:
> Hi Yang,
Please explain this difference.*
> >
> > Poll Records Count :500 Time taken:1284 ms
> > Poll Records Count :500 Time taken:3 ms
>
> IIUC, a FetchRequest has a limitation from `fetch.max.bytes`.
> If the record size from offset “0” is bigger than from offset “110999”,
> then a
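That pattern (one slow poll followed by near-instant ones) is consistent with the first `poll()` paying for a real broker fetch, bounded by `fetch.max.bytes` / `max.partition.fetch.bytes`, while the following polls drain records already buffered in the client. A small sketch to confirm this by timing each poll; the consumer setup is assumed to exist elsewhere:

```java
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollTimer {
    // Prints per-poll latency. Expect the first poll (a real network fetch)
    // to be slow and the following polls (served from the fetch buffer) to
    // be near-instant until the buffer drains and another fetch is issued.
    static void timePolls(KafkaConsumer<String, String> consumer, int polls) {
        for (int i = 0; i < polls; i++) {
            long t0 = System.nanoTime();
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1500));
            long ms = (System.nanoTime() - t0) / 1_000_000;
            System.out.println("poll " + i + ": " + records.count() + " records in " + ms + " ms");
        }
    }
}
```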
Hi Team,
Good day to you.
I am Giridhar. I need your suggestions on Kafka
performance improvement, please.
*Scenario: the user gives an offset as input; based on that offset we need
to return the next 1000 messages from the Kafka topic, plus the next
offset. The Kafka topic contains only one partition.*
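Under those requirements (single partition, caller supplies the start offset, return the next 1000 messages plus the next offset), one way to structure the read is to `assign` and `seek` rather than subscribe, stopping on a count or at the log end; this also supplies the missing `end` condition from the `do { ... } while (!end)` snippet quoted earlier in the thread. A sketch, with the topic name and poll timeout as assumptions:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class OffsetPager {
    // Reads up to `limit` records starting at `startOffset` from a
    // single-partition topic and returns them as one page.
    static List<ConsumerRecord<String, String>> readPage(
            KafkaConsumer<String, String> consumer,
            String topic, long startOffset, int limit) {
        TopicPartition tp = new TopicPartition(topic, 0);
        consumer.assign(Collections.singletonList(tp));
        consumer.seek(tp, startOffset);

        List<ConsumerRecord<String, String>> page = new ArrayList<>(limit);
        long endOffset = consumer.endOffsets(Collections.singletonList(tp)).get(tp);

        // Stop once we have `limit` records or reach the log end -- this is
        // the "end" condition the bare `while (!end)` loop was missing.
        while (page.size() < limit && consumer.position(tp) < endOffset) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1500));
            for (ConsumerRecord<String, String> r : records) {
                page.add(r);
                if (page.size() == limit) break;
            }
        }
        return page;
    }
}
```

The next offset to hand back to the user would be the offset of the last record in the returned page plus one.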