eejbyfeldt commented on PR #789:
URL: https://github.com/apache/datafusion-comet/pull/789#issuecomment-2276498187
I think Spark will be "smart" enough to push that filter through the project
and perform it before the struct is constructed, so this change might not
really affect performance. Consider this slightly modified query (since I
could not get yours to work as-is):
```
scala> spark.sql("SELECT s FROM (SELECT struct(_1, _2) as s FROM tbl) WHERE s._1 RLIKE '^[A-Z]{1}'").explain()
== Physical Plan ==
*(1) Project [struct(_1, _1#10, _2, _2#11) AS s#32]
+- *(1) Filter (isnotnull(_1#10) AND RLIKE(_1#10, ^[A-Z]{1}))
   +- *(1) ColumnarToRow
      +- FileScan parquet spark_catalog.default.tbl[_1#10,_2#11] Batched: true, DataFilters: [isnotnull(_1#10), RLIKE(_1#10, ^[A-Z]{1})], Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/home/eejbyfeldt/dev/apache/datafusion-comet/spark-warehouse/tbl], PartitionFilters: [], PushedFilters: [IsNotNull(_1)], ReadSchema: struct<_1:string,_2:int>
```
Notice that the RLIKE sits in the Filter node, which runs before the struct is constructed.
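For intuition, here is a minimal sketch (not Spark's actual optimizer code) of the rewrite Catalyst's predicate-pushdown rule performs here: when a filter's predicate only references columns produced by a projection, the filter can be re-expressed over the projection's inputs and evaluated first. The toy `Plan` ADT, the column-set model of expressions, and the single-field simplification (the real rule would narrow `s._1` to just `_1`) are all my own illustration, not Spark internals.

```scala
// Toy logical-plan ADT: a Project maps each output name to the set of input
// columns its expression reads; a Filter records the columns its predicate reads.
sealed trait Plan
case class Scan(columns: Set[String]) extends Plan
case class Project(exprs: Map[String, Set[String]], child: Plan) extends Plan
case class Filter(predicateCols: Set[String], child: Plan) extends Plan

def pushDown(plan: Plan): Plan = plan match {
  // Filter over Project: every column the predicate reads is produced by the
  // projection, so rewrite the predicate over the projection's inputs and
  // evaluate it before the projection runs.
  case Filter(cols, Project(exprs, child)) if cols.subsetOf(exprs.keySet) =>
    val rewritten = cols.flatMap(exprs) // input columns the predicate now needs
    Project(exprs, pushDown(Filter(rewritten, child)))
  case Filter(cols, child)   => Filter(cols, pushDown(child))
  case Project(exprs, child) => Project(exprs, pushDown(child))
  case other                 => other
}

// Mirrors the query above: filter on s (built from _1 and _2) over a projection.
val plan = Filter(Set("s"),
  Project(Map("s" -> Set("_1", "_2")), Scan(Set("_1", "_2"))))
val optimized = pushDown(plan)
// The filter ends up below the projection, matching the physical plan shown.
println(optimized)
```

In the real plan the pushed-down predicate is additionally split: the `IsNotNull(_1)` part is forwarded to the Parquet reader as a `PushedFilter`, while the RLIKE stays in the post-scan Filter.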
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]