Having taken a brief look at your repo, I noticed at least two major API
usage issues:

1. When you call Producer#send you get back a future object; the actual
send is deferred, and it is up to the producer to choose the right time to
send the batch. If you really need to send the messages one by one, you
should call Producer#flush and then wait for the future to complete (a
minimal sketch follows the list).

2. In the consumer loop the additional wait (Thread.onSpinWait) is not
needed; you should just keep polling. The timeout you pass to poll is the
maximum time to wait for messages, and poll does quite a lot of work under
the hood (see the loop sketch further below).
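To illustrate point 1, here is a minimal sketch, assuming a plain Java
client with placeholder broker and topic names (not the ones from your
repo):

import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class SyncSendSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() only enqueues the record; the producer decides when the batch goes out
            Future<RecordMetadata> future =
                    producer.send(new ProducerRecord<>("bench-topic", "key", "value")); // placeholder topic

            // only if you truly need one-by-one delivery: push the batch out now and block
            producer.flush();
            RecordMetadata meta = future.get();
            System.out.printf("partition=%d offset=%d%n", meta.partition(), meta.offset());
        }
    }
}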

On the consumer side, committing after each record is also unnecessary,
especially if you are aiming for high throughput; committing once per poll
batch is enough.
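For point 2, a sketch of the kind of loop I mean, again with placeholder
names and assuming manual commits:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PollLoopSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "bench-consumer");          // placeholder group id
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("bench-topic"));                       // placeholder topic
            while (true) {
                // poll() already blocks for up to the given timeout when no data is
                // available, so no Thread.onSpinWait() or sleep around it is needed
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    // process the record here
                }
                if (!records.isEmpty()) {
                    consumer.commitSync(); // commit once per batch, not once per record
                }
            }
        }
    }
}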

On the consumer side you also set fetch.min.bytes quite high. For low
latency you should set it to 1, otherwise you force the consumer to wait
for data to accumulate. Similarly you may set max.poll.records to 1, but
that one is a purely local client setting.
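Expressed against the same props object as in the loop sketch above, that
would be roughly:

props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "1");   // respond as soon as any data is available
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");  // optional; only limits how many records one poll() returns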

For tuning, have a look at this whitepaper:
https://www.confluent.io/white-paper/optimizing-your-apache-kafka-deployment/

HTH,
Piotr



On Sun, Aug 8, 2021 at 11:30 PM Виталий Ромашкин <rvita...@list.ru.invalid>
wrote:

>
> Hi Devs,
>
> Currently, I am benchmarking different transports.
> The first one is Kafka.
> I created a repo on my GitHub:
> https://github.com/rvit34/transport-benchmark
> The results for Kafka are not so good: at 25K RPS and above, latency is
> about 1 second or higher.
> Maybe I'm doing something completely wrong, but if I change the transport
> from Kafka to Aeron my max latency is always under 100ms for any workload
> (100K RPS and higher).
> So, could somebody check it out?
>
>
> Best Regards, Vitaly.



-- 

Mit freundlichen Grüßen / Kind regards


Piotr Smolinski

Consulting Engineer, Professional Services EMEA

piotr.smolin...@confluent.io | +49 (151) 267-114-23

