Hello Spark Devs!

After doing some detective work, I’d like to revisit this idea in earnest. My 
understanding now is that setting `client.rack` dynamically on the executor 
will do nothing, because it is the driver, not the executor, that assigns 
Kafka partitions to executors. I’ve summarized a design to enable rack 
awareness, and other location assignment patterns more generally, in 
SPARK-46798 (https://issues.apache.org/jira/browse/SPARK-46798).
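
To make that concrete, here is a minimal sketch (my own illustration, not the 
actual connector code) of the driver-side behavior: the driver picks a 
preferred executor for each TopicPartition before any executor-side consumer 
exists, so a rack value an executor computes at start-up can never influence 
which partitions it is assigned.

import org.apache.kafka.common.TopicPartition

// Illustrative only: the driver maps each TopicPartition to a preferred
// executor, e.g. by hashing. Nothing an executor sets locally feeds into this.
def preferredExecutor(tp: TopicPartition, executors: Seq[String]): Option[String] =
  if (executors.isEmpty) None
  else Some(executors(Math.floorMod(tp.hashCode, executors.size)))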

Since this is my first go at contributing to Spark, could I ask for a committer 
to help shepherd this JIRA issue along?

Sincerely,

Randall

From: "Schwager, Randall" <randall.schwa...@charter.com>
Date: Wednesday, January 10, 2024 at 19:39
To: "dev@spark.apache.org" <dev@spark.apache.org>
Subject: Spark Kafka Rack Aware Consumer

Hello Spark Devs!

Has there been discussion around adding the ability to dynamically set the 
‘client.rack’ Kafka parameter at the executor?
The Kafka SQL connector code on master doesn’t appear to support this. One 
can easily set the ‘client.rack’ parameter at the driver, but that gives 
every executor the same rack value. It seems that if we want each executor to 
report its correct rack, each executor will have to produce the setting 
dynamically on start-up.
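
For illustration, a minimal sketch of today’s behavior (the broker address 
and topic name are made up): options prefixed with ‘kafka.’ are resolved once 
in the driver-side reader options and handed to every consumer, so each 
executor inherits the same ‘client.rack’.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("rack-demo").getOrCreate()

// "kafka."-prefixed options are passed through to the Kafka consumer,
// so client.rack is fixed here, on the driver, for all executors alike.
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker-1:9092") // hypothetical brokers
  .option("kafka.client.rack", "rack-a")              // one value for the whole job
  .option("subscribe", "events")                      // hypothetical topic
  .load()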

Would this be a good area to consider contributing new functionality?

Sincerely,

Randall
