[ 
https://issues.apache.org/jira/browse/SPARK-51769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

L. C. Hsieh resolved SPARK-51769.
---------------------------------
    Fix Version/s: 4.1.0
       Resolution: Fixed

Issue resolved by pull request 50301
[https://github.com/apache/spark/pull/50301]

> Add maxRecordsPerOutputBatch to limit the number of record of Arrow output 
> batch
> --------------------------------------------------------------------------------
>
>                 Key: SPARK-51769
>                 URL: https://issues.apache.org/jira/browse/SPARK-51769
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 4.1.0
>            Reporter: L. C. Hsieh
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.1.0
>
>
> While implementing a columnar-based operator for Spark, if the operator 
> takes input from an Arrow-based evaluation operator, the number of records 
> per output batch is currently unlimited. Such columnar-based operators 
> sometimes need to cap the number of records in each input batch, but there 
> is no existing way to limit the batch size in rows.
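The re-chunking behavior described above can be sketched in plain Python. This is only an illustration of the idea (the actual change lives in Spark's Arrow conversion path, and the function name here is hypothetical): an upstream operator may emit arbitrarily large batches, and the limit splits each one so that no emitted batch exceeds the configured number of rows.

```python
# Hypothetical sketch of limiting the row count of Arrow output batches.
# Plain lists stand in for Arrow record batches; slicing stands in for
# Arrow's zero-copy batch slicing.

def limit_batch_size(batches, max_records_per_batch):
    """Re-chunk an iterator of record batches so that no emitted batch
    exceeds max_records_per_batch rows. Batches already under the limit
    pass through as a single chunk."""
    for batch in batches:
        for start in range(0, len(batch), max_records_per_batch):
            yield batch[start:start + max_records_per_batch]

# An upstream Arrow-based operator may produce batches of any size...
incoming = [list(range(10)), list(range(10, 13))]
# ...but with a limit of 4, downstream columnar operators never see
# a batch with more than 4 records.
limited = list(limit_batch_size(incoming, max_records_per_batch=4))
```

In Spark itself this would be driven by a SQL configuration rather than a function argument, so the cap applies transparently between the Arrow evaluation operator and its columnar consumer.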



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
