Hi,

I'm trying to write a catalog plugin based on spark-3.0-preview, and I found
that even when I use 'USE catalog.namespace' to set the current catalog and
namespace, I still need to use qualified names in my queries.

For example, I added a catalog named 'example_catalog', which contains a
database named 'test', with a table 't' in 'example_catalog.test'. I can
query the table using 'select * from example_catalog.test.t' under the
default catalog (which is spark_catalog). After I run 'use
example_catalog.test' to change the current catalog to 'example_catalog' and
the current namespace to 'test', I can query the table using 'select * from
test.t', but 'select * from t' fails with a table-not-found exception.
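
For reference, here is a minimal sketch of the steps I run (the catalog
implementation class 'com.example.ExampleCatalog' is a placeholder for my
plugin, and I assume the table 't' already exists in that catalog):

  import org.apache.spark.sql.SparkSession

  // Register the custom catalog plugin under the name 'example_catalog'.
  val spark = SparkSession.builder()
    .master("local[*]")
    .config("spark.sql.catalog.example_catalog", "com.example.ExampleCatalog")
    .getOrCreate()

  // Fully qualified name works from the default catalog (spark_catalog).
  spark.sql("SELECT * FROM example_catalog.test.t").show()

  // Switch the current catalog to 'example_catalog' and the current
  // namespace to 'test'.
  spark.sql("USE example_catalog.test")

  // The partially qualified name still works...
  spark.sql("SELECT * FROM test.t").show()

  // ...but the unqualified name fails with a table-not-found exception.
  spark.sql("SELECT * FROM t").show()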

I want to know whether this is expected behavior. If so, it seems a little
strange, since I would expect that after 'use example_catalog.test', all
unqualified identifiers would be resolved as 'example_catalog.test.identifier'.

The attachment is a test file that you can use to reproduce the problem I
encountered.

Thanks.

Attachment: DataSourceV2ExplainSuite.scala
