zhengruifeng commented on code in PR #49790:
URL: https://github.com/apache/spark/pull/49790#discussion_r1942159626


##########
python/pyspark/sql/pandas/conversion.py:
##########
@@ -785,8 +798,13 @@ def _create_from_arrow_table(
         if not isinstance(schema, StructType):
             schema = from_arrow_schema(table.schema, prefer_timestamp_ntz=prefer_timestamp_ntz)
 
+        prefers_large_var_types = self._jconf.arrowUseLargeVarTypes()
         table = _check_arrow_table_timestamps_localize(table, schema, True, timezone).cast(
-            to_arrow_schema(schema, error_on_duplicated_field_names_in_struct=True)
+            to_arrow_schema(
+                schema,
+                error_on_duplicated_field_names_in_struct=True,
+                prefers_large_types=prefers_large_var_types,

Review Comment:
   I feel we are using more and more configs; probably we can combine them into a single `runner_conf: Dict[str, str]` later.
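   A minimal sketch of what that consolidation might look like. `build_runner_conf` and the fake conf class are hypothetical, not existing PySpark API; the config key names mirror real Spark SQL settings but the mapping shown here is illustrative only:

   ```python
   from typing import Dict

   class _FakeJConf:
       """Stand-in for the JVM-backed SQLConf object (illustrative only)."""

       def arrowUseLargeVarTypes(self) -> bool:
           return True

       def sessionLocalTimeZone(self) -> str:
           return "UTC"

   def build_runner_conf(jconf) -> Dict[str, str]:
       # Gather the individual py4j getters into one string-valued dict so
       # downstream code receives a single runner_conf instead of a growing
       # set of separate flags and parameters.
       return {
           "spark.sql.execution.arrow.useLargeVarTypes": str(
               jconf.arrowUseLargeVarTypes()
           ).lower(),
           "spark.sql.session.timeZone": jconf.sessionLocalTimeZone(),
       }

   conf = build_runner_conf(_FakeJConf())
   ```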



##########
python/pyspark/sql/pandas/conversion.py:
##########
@@ -715,9 +721,16 @@ def _create_from_pandas_with_arrow(
         pdf_slices = (pdf.iloc[start : start + step] for start in range(0, len(pdf), step))
 
         # Create list of Arrow (columns, arrow_type, spark_type) for serializer dump_stream
+        prefers_large_var_types = self._jconf.arrowUseLargeVarTypes()

Review Comment:
   nit: probably we can also fetch all of these configs via py4j in a single batch in the future.
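   A rough sketch of the batching idea. The `getConfs` method on the fake conf object is hypothetical (no such batched getter is assumed to exist today); the point is that one py4j round trip returns every requested key instead of one round trip per key:

   ```python
   from typing import Dict, List

   class _FakeJConf:
       """Stand-in for the py4j-backed conf; getConfs is a hypothetical batched getter."""

       _values = {
           "spark.sql.execution.arrow.useLargeVarTypes": "false",
           "spark.sql.execution.arrow.maxRecordsPerBatch": "10000",
       }

       def getConfs(self, keys: List[str]) -> Dict[str, str]:
           # In a real implementation this single call would cross the py4j
           # bridge once, rather than once per config key.
           return {k: self._values[k] for k in keys}

   def fetch_configs(jconf, keys: List[str]) -> Dict[str, str]:
       # One batched JVM call for all keys of interest.
       return jconf.getConfs(keys)

   confs = fetch_configs(
       _FakeJConf(), ["spark.sql.execution.arrow.useLargeVarTypes"]
   )
   ```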



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

