hvanhovell commented on code in PR #52271:
URL: https://github.com/apache/spark/pull/52271#discussion_r2330846917


##########
sql/connect/server/src/main/scala/org/apache/spark/sql/connect/config/Connect.scala:
##########
@@ -392,4 +392,20 @@ object Connect {
       .internal()
       .bytesConf(ByteUnit.BYTE)
       .createWithDefaultString("10g")
+
+  val CONNECT_SESSION_RESULT_CHUNKING_MAX_CHUNK_SIZE =
+    buildConf("spark.connect.session.resultChunking.maxChunkSize")
+      .doc("The max size of a chunk in responses for a result batch. Result chunking is enabled" +
+        " if this config is set to a value greater than 0 and if the client allows it in" +
+        " ResultChunkingOptions. Otherwise, for example if set to -1, this feature is disabled." +
+        " While spark.connect.grpc.arrow.maxBatchSize determines the max size of a result batch," +
+        " maxChunkSize defines the max size of each individual chunk that is part of the batch" +
+        " that will be sent in a response. This allows the server to send large rows to clients." +
+        " However, excessively large plans remain unsupported due to Spark internals and JVM" +

Review Comment:
   Remove these two lines. They are not related to the conf.
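   For context, a minimal usage sketch of the new conf. This is not part of the PR; the
   SparkSession builder pattern and the "8m" value are illustrative assumptions only.

       // Sketch: setting the proposed conf on the session that backs a Connect server.
       // Per the doc string above, a value > 0 enables result chunking (the client must
       // also allow it in ResultChunkingOptions); any value <= 0, e.g. -1, disables it.
       import org.apache.spark.sql.SparkSession

       val spark = SparkSession
         .builder()
         .appName("connect-result-chunking-sketch")
         // Cap each individual response chunk at 8 MiB (arbitrary example value).
         .config("spark.connect.session.resultChunking.maxChunkSize", "8m")
         .getOrCreate()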
########## sql/connect/server/src/main/scala/org/apache/spark/sql/connect/config/Connect.scala: ########## @@ -392,4 +392,20 @@ object Connect { .internal() .bytesConf(ByteUnit.BYTE) .createWithDefaultString("10g") + + val CONNECT_SESSION_RESULT_CHUNKING_MAX_CHUNK_SIZE = + buildConf("spark.connect.session.resultChunking.maxChunkSize") + .doc("The max size of a chunk in responses for a result batch. Result chunking is enabled" + + " if this config is set to a value greater than 0 and if the client allows it in" + + " ResultChunkingOptions. Otherwise, for example if set to -1, this feature is disabled." + + " While spark.connect.grpc.arrow.maxBatchSize determines the max size of a result batch," + + " maxChunkSize defines the max size of each individual chunk that is part of the batch" + + " that will be sent in a response. This allows the server to send large rows to clients." + + " However, excessively large plans remain unsupported due to Spark internals and JVM" + Review Comment: Remove these two lines. They are not related to the conf. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org --------------------------------------------------------------------- To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For additional commands, e-mail: reviews-h...@spark.apache.org