mbutrovich opened a new pull request, #1578: URL: https://github.com/apache/datafusion-comet/pull/1578
## Which issue does this PR close?

Addresses another failure in #1441.

## Rationale for this change

`CometExecSuite.explain native plan` fails with the `native_datafusion` experimental scan. It's an interesting query that does a self-join of two columns from the same table. The root cause is that when AQE is enabled, it reuses the shuffle output of one scan as the output of the other scan:

```
+- == Initial Plan ==
   CometProject [_1#6], [_1#6]
   +- CometSortMergeJoin [_1#6], [_2#11], Inner
      :- CometSort [_1#6], [_1#6 ASC NULLS FIRST]
      :  +- CometExchange hashpartitioning(_1#6, 10), ENSURE_REQUIREMENTS, CometNativeShuffle, [plan_id=304]
      :     +- CometFilter [_1#6], isnotnull(_1#6)
      :        +- CometNativeScan: [_1#6]
      +- CometSort [_2#11], [_2#11 ASC NULLS FIRST]
         +- CometExchange hashpartitioning(_2#11, 10), ENSURE_REQUIREMENTS, CometNativeShuffle, [plan_id=308]
            +- CometFilter [_2#11], isnotnull(_2#11)
               +- CometNativeScan: [_2#11]
```

AQE incorrectly adds a `ReusedExchange` that makes one side of the join reuse the other side's exchange (both ends up referencing `plan_id=304`):
```
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=true
+- == Final Plan ==
   *(1) CometColumnarToRow
   +- CometProject [_1#6], [_1#6]
      +- CometBroadcastHashJoin [_1#6], [_2#11], Inner, BuildRight
         :- AQEShuffleRead coalesced
         :  +- ShuffleQueryStage 0
         :     +- CometExchange hashpartitioning(_1#6, 10), ENSURE_REQUIREMENTS, CometNativeShuffle, [plan_id=304]
         :        +- CometFilter [_1#6], isnotnull(_1#6)
         :           +- CometNativeScan: [_1#6]
         +- BroadcastQueryStage 2
            +- CometBroadcastExchange [_2#11]
               +- AQEShuffleRead local
                  +- ShuffleQueryStage 1
                     +- ReusedExchange [_2#11], CometExchange hashpartitioning(_1#6, 10), ENSURE_REQUIREMENTS, CometNativeShuffle, [plan_id=304]
```

The reason is that `hashCode()` for `CometNativeScan` was defined only over the node's output, so the `TrieMap` used in AQE (which hashes the `SparkPlan`) gave both stages the same hash value, making AQE think that one stage could be reused for the other.

## What changes are included in this PR?

- Expand `hashCode` to include the original `FileSourceScanExec` and `serializedPlanOpt`, which carry better information about the node. I'd like to understand whether this hashes too much information, which might make stages that could be reused appear to be distinct, but I need to dig into AQE behavior more.
- Expand `equals` to check more than just the plan output.
- Expand `doCanonicalize` based on behavior seen in the `CometScan` node. Similar to above: I'd like to understand whether this canonicalizes the right information, but I need to dig into AQE behavior more.
- `CometNativeScan` now uses the `DataSourceScanExec` trait. The benefit here is that we get more detailed information in the Spark plan.
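The collision can be sketched with a toy JVM example (the `Scan` class and its field names are hypothetical, not Comet's actual code): when `hashCode`/`equals` cover only the canonicalized output, two scans that read different columns become indistinguishable to a map keyed on the plan node, which is essentially how AQE decides an exchange can be reused.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for a plan node; not Comet's actual class.
// After canonicalization, expression IDs are normalized away, so the
// outputs of the two scans look identical even though one reads _1 and
// the other reads _2.
final class Scan {
    final List<String> canonicalizedOutput;
    final List<String> dataFilters;

    Scan(List<String> output, List<String> filters) {
        this.canonicalizedOutput = output;
        this.dataFilters = filters;
    }

    // Buggy identity: only the output participates.
    @Override public int hashCode() {
        return canonicalizedOutput.hashCode();
    }

    @Override public boolean equals(Object o) {
        return o instanceof Scan
            && ((Scan) o).canonicalizedOutput.equals(canonicalizedOutput);
    }
}

public class ReuseDemo {
    public static void main(String[] args) {
        Scan left  = new Scan(List.of("none#0"), List.of("isnotnull(_1)"));
        Scan right = new Scan(List.of("none#0"), List.of("isnotnull(_2)"));

        // Toy version of the reuse map: canonicalized plan -> built stage.
        Map<Scan, String> stages = new HashMap<>();
        stages.putIfAbsent(left, "ShuffleQueryStage 0");

        // Lookup for the right scan wrongly hits the left scan's stage,
        // which is the ReusedExchange from the plan above.
        System.out.println(stages.get(right)); // prints ShuffleQueryStage 0
    }
}
```

Widening `hashCode`/`equals` to also compare fields that actually differ between the scans, such as the data filters here (or, in this PR, the original `FileSourceScanExec` and `serializedPlanOpt`), makes the lookup miss, so each scan gets its own stage.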
For example, explain before (note the `CometNativeScan`):

```
CometProject [_1#6], [_1#6]
+- CometSortMergeJoin [_1#6], [_2#11], Inner
   :- CometSort [_1#6], [_1#6 ASC NULLS FIRST]
   :  +- CometExchange hashpartitioning(_1#6, 10), ENSURE_REQUIREMENTS, CometNativeShuffle, [plan_id=304]
   :     +- CometFilter [_1#6], isnotnull(_1#6)
   :        +- CometNativeScan: [_1#6]
   +- CometSort [_2#11], [_2#11 ASC NULLS FIRST]
      +- CometExchange hashpartitioning(_2#11, 10), ENSURE_REQUIREMENTS, CometNativeShuffle, [plan_id=308]
         +- CometFilter [_2#11], isnotnull(_2#11)
            +- CometNativeScan: [_2#11]
```

and explain now (note the `CometNativeScan`):

```
CometProject [_1#6], [_1#6]
+- CometSortMergeJoin [_1#6], [_2#11], Inner
   :- CometSort [_1#6], [_1#6 ASC NULLS FIRST]
   :  +- CometExchange hashpartitioning(_1#6, 10), ENSURE_REQUIREMENTS, CometNativeShuffle, [plan_id=91]
   :     +- CometFilter [_1#6], isnotnull(_1#6)
   :        +- CometNativeScan parquet [_1#6] Batched: true, DataFilters: [isnotnull(_1#6)], Format: CometParquet, Location: InMemoryFileIndex(1 paths)[file:/private/var/folders/12/4pf3d5zn72n7q2_0ks3bkh7c0000gn/T/spark-8f..., PartitionFilters: [], PushedFilters: [IsNotNull(_1)], ReadSchema: struct<_1:int>
   +- CometSort [_2#11], [_2#11 ASC NULLS FIRST]
      +- CometExchange hashpartitioning(_2#11, 10), ENSURE_REQUIREMENTS, CometNativeShuffle, [plan_id=95]
         +- CometFilter [_2#11], isnotnull(_2#11)
            +- CometNativeScan parquet [_2#11] Batched: true, DataFilters: [isnotnull(_2#11)], Format: CometParquet, Location: InMemoryFileIndex(1 paths)[file:/private/var/folders/12/4pf3d5zn72n7q2_0ks3bkh7c0000gn/T/spark-8f..., PartitionFilters: [], PushedFilters: [IsNotNull(_2)], ReadSchema: struct<_2:int>
```

This better represents the corresponding Spark plan with its `FileScan` node:

```
Project [_1#6]
+- SortMergeJoin [_1#6], [_2#11], Inner
   :- Sort [_1#6 ASC NULLS FIRST], false, 0
   :  +- Exchange hashpartitioning(_1#6, 10), ENSURE_REQUIREMENTS, [plan_id=126]
   :     +- Filter isnotnull(_1#6)
   :        +- FileScan parquet [_1#6] Batched: true, DataFilters: [isnotnull(_1#6)], Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/private/var/folders/12/4pf3d5zn72n7q2_0ks3bkh7c0000gn/T/spark-8f..., PartitionFilters: [], PushedFilters: [IsNotNull(_1)], ReadSchema: struct<_1:int>
   +- Sort [_2#11 ASC NULLS FIRST], false, 0
      +- Exchange hashpartitioning(_2#11, 10), ENSURE_REQUIREMENTS, [plan_id=127]
         +- Filter isnotnull(_2#11)
            +- FileScan parquet [_2#11] Batched: true, DataFilters: [isnotnull(_2#11)], Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/private/var/folders/12/4pf3d5zn72n7q2_0ks3bkh7c0000gn/T/spark-8f..., PartitionFilters: [], PushedFilters: [IsNotNull(_2)], ReadSchema: struct<_2:int>
```

## How are these changes tested?

Existing tests. Enabled one previously skipped test for `native_datafusion`.