allisonwang-db commented on code in PR #50099:
URL: https://github.com/apache/spark/pull/50099#discussion_r1978456503


##########
python/pyspark/sql/tests/test_udtf.py:
##########
@@ -2948,21 +2948,21 @@ def eval(self):
         err = "UDTF_ARROW_TYPE_CAST_ERROR"
 
         for ret_type, expected in [
-            ("x: boolean", [Row(x=True)]),
-            ("x: tinyint", [Row(x=1)]),
-            ("x: smallint", [Row(x=1)]),
-            ("x: int", [Row(x=1)]),
-            ("x: bigint", [Row(x=1)]),
+            ("x: boolean", err),
+            ("x: tinyint", err),
+            ("x: smallint", err),
+            ("x: int", err),
+            ("x: bigint", err),

Review Comment:
   Is this a behavior change? I wonder if we should have a separate flag for enabling this optimization.



##########
python/pyspark/sql/pandas/serializers.py:
##########
@@ -161,9 +161,12 @@ def wrap_and_init_stream():
                 assert isinstance(batch, pa.RecordBatch)
 
                 # Wrap the root struct
-                struct = pa.StructArray.from_arrays(
-                    batch.columns, fields=pa.struct(list(batch.schema))
-                )
+                if len(batch.columns) == 0:
+                    struct = pa.array([{}] * batch.num_rows)

Review Comment:
   Nit: can we add a comment on why we need this special handling?



##########
python/pyspark/sql/tests/test_udtf.py:
##########
@@ -350,10 +351,7 @@ def terminate(self, a: int):
             TestUDTF(lit(1)).show()
 
     def test_udtf_with_wrong_num_output(self):
-        err_msg = (
-            r"\[UDTF_RETURN_SCHEMA_MISMATCH\] The number of columns in the "
-            "result does not match the specified schema."
-        )
+        err_msg = "(UDTF_ARROW_TYPE_CAST_ERROR|UDTF_RETURN_SCHEMA_MISMATCH)"

Review Comment:
   Why do we need to check both error classes?
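A quick illustration of how the alternation pattern behaves (assuming `assertRaisesRegex`-style matching, which uses `re.search`): either error class satisfies the check, so the test passes regardless of which path raises:

```python
import re

# The alternation matches whichever of the two error classes is raised.
pattern = "(UDTF_ARROW_TYPE_CAST_ERROR|UDTF_RETURN_SCHEMA_MISMATCH)"
assert re.search(pattern, "[UDTF_ARROW_TYPE_CAST_ERROR] cast failed")
assert re.search(pattern, "[UDTF_RETURN_SCHEMA_MISMATCH] The number of columns ...")
```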



##########
python/pyspark/sql/pandas/serializers.py:
##########
@@ -175,6 +178,16 @@ def wrap_and_init_stream():
         return super(ArrowStreamUDFSerializer, self).dump_stream(wrap_and_init_stream(), stream)
 
 
+class ArrowStreamUDTFSerializer(ArrowStreamUDFSerializer):
+    """
+    Same as :class:`ArrowStreamSerializer` but it flattens the struct to Arrow record batch
+    for applying each function with the raw record arrow batch. See also `DataFrame.mapInArrow`.
+    """
+
+    def load_stream(self, stream):
+        return ArrowStreamSerializer.load_stream(self, stream)

Review Comment:
   Shall we update this doc? It looks like we don't flatten the struct anymore.



##########
python/pyspark/worker.py:
##########
@@ -976,6 +977,8 @@ def use_large_var_types(runner_conf):
 # ensure the UDTF is valid. This function also prepares a mapper function for applying
 # the UDTF logic to input rows.
 def read_udtf(pickleSer, infile, eval_type):
+    prefers_large_var_types = False

Review Comment:
   What is this `prefers_large_var_types` for?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

