academy-codex commented on PR #54478:
URL: https://github.com/apache/spark/pull/54478#issuecomment-3983124845

   > Hi, curious, whats the pr use case? MergeInto works for 
DataSourceV2Relation (that support SupportsRowLevelOperation), right?
   
   Yes, MergeInto ultimately requires a DataSourceV2Relation whose table 
supports SupportsRowLevelOperation.
   
   The gap this PR addresses is earlier than that: `DataFrame.mergeInto(table:
String, ...)` currently parses `table` as a multipart identifier only. So raw
path-like inputs (for example `/...`, `abfss://...`) fail at parse time in this
API, even though the underlying provider could support row-level merge.
   
   So the use case is:
   
   - users operating on path-addressed tables (common in lakehouse/CI flows)
without catalog registration,
   - who expect parity with other Spark APIs and SQL-on-file usage.
   
   What this PR changes:
   
   - keeps existing behavior for catalog identifiers and explicit SQL-on-file
targets (like ``delta.`path` ``),
   - adds shorthand support for path-like strings by mapping them to
`defaultDataSourceName` + path,
   - still relies on the same downstream requirement: if the resolved table
does not support row-level operations, the merge fails as before.
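   To make the mapping concrete, here is a minimal, hypothetical sketch (not the PR's actual code, and with made-up names like `resolve_merge_target` and a hard-coded default source) of the target-resolution rule described above: explicit SQL-on-file targets and catalog identifiers keep their existing meaning, while bare path-like strings are rewritten as `defaultDataSourceName` + path.

   ```python
   import re

   # Stand-in for the session's spark.sql.sources.default value.
   DEFAULT_DATA_SOURCE_NAME = "parquet"

   def resolve_merge_target(table: str) -> tuple[str, str]:
       """Classify a mergeInto target string and return (kind, resolved)."""
       # Explicit SQL-on-file target, e.g. delta.`/data/events`: unchanged.
       m = re.fullmatch(r"(\w+)\.`([^`]+)`", table)
       if m:
           return ("sql_on_file", f"{m.group(1)}.`{m.group(2)}`")
       # Path-like shorthand, e.g. /data/events or abfss://...: map to the
       # default data source name plus the path.
       if table.startswith("/") or re.match(r"^[a-z][a-z0-9+.-]*://", table):
           return ("sql_on_file", f"{DEFAULT_DATA_SOURCE_NAME}.`{table}`")
       # Otherwise treat it as a multipart catalog identifier: unchanged.
       return ("identifier", table)
   ```

   For example, `resolve_merge_target("db.events")` stays an identifier, while `resolve_merge_target("/data/events")` becomes the default source plus path; downstream, the resolved table must still support row-level operations or the merge fails as before.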


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

Reply via email to