Hi Daniel and Jungtaek,

I am using Spark in batch mode. I tried kafka.<option>, and I can now see
the options being set in ProducerConfig on Spark startup, but they are
still not being honored. I have set "linger.ms": "1000" and "batch.size":
"100000". When I publish 10 records, they are flushed to the Kafka server
immediately; however, the producer behaves as expected when publishing via
kafka-clients using foreachPartition. Am I missing something here, or is
throttling not supported by the Kafka connector?
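
For reference, this is roughly what the batch write looks like (broker
address, topic name, and variable names are illustrative):

    import java.util.HashMap;
    import java.util.Map;

    Map<String, String> options = new HashMap<>();
    options.put("kafka.bootstrap.servers", "localhost:9092"); // illustrative
    options.put("topic", "my-topic");                         // illustrative
    // these show up in ProducerConfig at startup,
    // but the producer still flushes immediately
    options.put("kafka.linger.ms", "1000");
    options.put("kafka.batch.size", "100000");

    dataset.write().format("kafka").options(options).save();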

Regards,
Abhishek Singla

On Thu, Mar 27, 2025 at 4:56 AM daniel williams <daniel.willi...@gmail.com>
wrote:

> If you're using structured streaming, you can pass options in as
> kafka.<option>, as documented. If you're using Spark in batch form,
> you'll want to do a foreach on a KafkaProducer via a Broadcast (see the
> sketch below).
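>
> A minimal sketch of the batch approach (assuming the producer config is
> broadcast and one producer is created per partition, since KafkaProducer
> itself is not serializable; broker, topic, and the JavaSparkContext jsc
> are illustrative):
>
>     import java.util.HashMap;
>     import java.util.Map;
>     import org.apache.kafka.clients.producer.KafkaProducer;
>     import org.apache.kafka.clients.producer.ProducerRecord;
>     import org.apache.spark.broadcast.Broadcast;
>
>     // Broadcast the config, not the producer itself.
>     Map<String, Object> conf = new HashMap<>();
>     conf.put("bootstrap.servers", "localhost:9092"); // illustrative
>     conf.put("key.serializer",
>         "org.apache.kafka.common.serialization.StringSerializer");
>     conf.put("value.serializer",
>         "org.apache.kafka.common.serialization.StringSerializer");
>     conf.put("linger.ms", "1000");    // plain names here; the kafka.
>     conf.put("batch.size", "100000"); // prefix is only for the connector
>     Broadcast<Map<String, Object>> confB = jsc.broadcast(conf);
>
>     dataset.toJavaRDD().foreachPartition(rows -> {
>         // one producer per partition; try-with-resources flushes on close
>         try (KafkaProducer<String, String> producer =
>                 new KafkaProducer<>(confB.value())) {
>             rows.forEachRemaining(row ->
>                 producer.send(new ProducerRecord<>("my-topic", row.mkString())));
>         }
>     });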
>
> All KafkaProducer-specific options
> <https://docs.confluent.io/platform/current/installation/configuration/producer-configs.html>
> will need to be prefixed with *kafka.*
>
>
> https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html
>
>
> On Wed, Mar 26, 2025 at 4:11 PM Jungtaek Lim <kabhwan.opensou...@gmail.com>
> wrote:
>
>> Sorry I missed this. Did you make sure to add the "kafka." prefix to the
>> Kafka-side configs when specifying Kafka source/sink options?
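>>
>> For example (illustrative values):
>>
>>     dataset.write().format("kafka")
>>            .option("kafka.bootstrap.servers", "localhost:9092") // reaches the producer
>>            .option("kafka.linger.ms", "1000")                   // reaches the producer
>>            .option("topic", "my-topic")   // connector option, no prefix
>>            .save();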
>>
>> On Mon, Feb 24, 2025 at 10:31 PM Abhishek Singla <
>> abhisheksingla...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> I am using Spark to read from S3 and write to Kafka.
>>>
>>> Spark Version: 3.1.2
>>> Scala Version: 2.12
>>> Spark Kafka connector: spark-sql-kafka-0-10_2.12
>>>
>>> I want to throttle the Kafka producer. I tried the *linger.ms* and
>>> *batch.size* configs, but I can see in the *ProducerConfig values* logged
>>> at runtime that they are not being set. Is there something I am missing?
>>> Is there any other way to throttle Kafka writes?
>>>
>>> *dataset.write().format("kafka").options(options).save();*
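>>>
>>> The options map currently looks roughly like this (broker and topic are
>>> illustrative):
>>>
>>>     import java.util.HashMap;
>>>     import java.util.Map;
>>>
>>>     Map<String, String> options = new HashMap<>();
>>>     options.put("kafka.bootstrap.servers", "localhost:9092");
>>>     options.put("topic", "my-topic");
>>>     options.put("linger.ms", "1000");     // not reflected in ProducerConfig
>>>     options.put("batch.size", "100000");  // not reflected in ProducerConfig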
>>>
>>> Regards,
>>> Abhishek Singla
>>>
>
> --
> -dan
>
