It's been a month with no reply, so I guess the answer is no. I have been
running into the same issue; building a Kafka client directly seems to be the
only option.

Rommel
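(For anyone who ends up hand-rolling the client as discussed in the thread
below, the throttling itself is the simple part. A minimal token-bucket sketch
in plain Java, with no Spark or Kafka dependencies; the class and method names
here are hypothetical, not from any library. Each executor task could call
`acquire()` before every `producer.send()`:)

```java
import java.util.concurrent.TimeUnit;

/** Token-bucket throttle: allows at most ratePerSec sends per second. */
class SendThrottle {
    private final long intervalNanos; // minimum spacing between sends
    private long nextFreeSlot;        // earliest nanoTime the next send may go out

    SendThrottle(int ratePerSec) {
        this.intervalNanos = TimeUnit.SECONDS.toNanos(1) / ratePerSec;
        this.nextFreeSlot = System.nanoTime();
    }

    /** Blocks until the next send slot is free. */
    synchronized void acquire() throws InterruptedException {
        long now = System.nanoTime();
        if (nextFreeSlot > now) {
            TimeUnit.NANOSECONDS.sleep(nextFreeSlot - now);
        }
        nextFreeSlot = Math.max(nextFreeSlot, now) + intervalNanos;
    }
}
```

(Create one instance per partition task so each executor throttles
independently; the cluster-wide rate is then roughly ratePerSec times the
number of concurrent tasks.)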

On Mon, Feb 24, 2025, 09:20 Abhishek Singla <abhisheksingla...@gmail.com>
wrote:

> Isn't there a way to do it with the Kafka connector instead of a Kafka
> client? Is there any way to throttle the Kafka connector? It seems like a
> common problem.
>
> Regards,
> Abhishek Singla
>
> On Mon, Feb 24, 2025 at 7:24 PM daniel williams <daniel.willi...@gmail.com>
> wrote:
>
>> I think you should be using foreachPartition and a broadcast to build your
>> producer. From there you will have full control, via direct access to the
>> KafkaProducer, of all the options and serialization you need, as well as
>> everything associated with them (e.g. callbacks, interceptors, etc.).
>>
>> -dan
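(A sketch of the foreachPartition-plus-broadcast approach suggested above.
This assumes `spark`, `dataset`, `producerProps`, and `topic` already exist in
scope, and that `producerProps` contains the key/value serializer settings;
all of those names are illustrative, not from the original thread. It is not a
drop-in replacement for the connector, just the shape of the idea:)

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.broadcast.Broadcast;

// Broadcast the (serializable) producer config, not the producer itself.
Broadcast<Map<String, Object>> bcProps =
    JavaSparkContext.fromSparkContext(spark.sparkContext())
        .broadcast(producerProps);

dataset.toJavaRDD().foreachPartition(rows -> {
    // Build one producer per partition task, on the executor.
    try (KafkaProducer<String, String> producer =
             new KafkaProducer<>(bcProps.value())) {
        while (rows.hasNext()) {
            // A rate limiter could be invoked here before each send.
            producer.send(new ProducerRecord<>(topic, rows.next().mkString(",")));
        }
        producer.flush(); // drain in-flight records before the task completes
    }
});
```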
>>
>>
>> On Mon, Feb 24, 2025 at 6:26 AM Abhishek Singla <
>> abhisheksingla...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> I am using spark to read from S3 and write to Kafka.
>>>
>>> Spark Version: 3.1.2
>>> Scala Version: 2.12
>>> Spark Kafka connector: spark-sql-kafka-0-10_2.12
>>>
>>> I want to throttle the Kafka producer. I tried using the linger.ms and
>>> batch.size configs, but I can see in the ProducerConfig values logged at
>>> runtime that they are not being set. Is there something I am missing? Is
>>> there any other way to throttle Kafka writes?
>>>
>>> dataset.write().format("kafka").options(options).save();
>>>
>>> Regards,
>>> Abhishek Singla
>>>
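(On the linger.ms / batch.size question above: the spark-sql-kafka-0-10 sink
only forwards producer properties to the underlying KafkaProducer when they
are prefixed with `kafka.`, which may explain why they never appear in the
ProducerConfig values log. A config fragment, assuming the same write path as
in the quoted message; the broker address is a placeholder:)

```java
options.put("kafka.bootstrap.servers", "broker:9092");
options.put("kafka.linger.ms", "500");     // forwarded to the producer as linger.ms
options.put("kafka.batch.size", "262144"); // forwarded as batch.size
dataset.write().format("kafka").options(options).save();
```

(Note that linger.ms and batch.size shape batching rather than enforce a hard
rate limit, so they may still not throttle aggressively enough on their own.)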
