… in SPARK SQL to capitalize those 'tokens' before invoking the parser?

If so, why not just modify the SqlBase.g4 to accept lower-case keywords?

Thanks
Boying
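The usual way to handle this is not to rewrite the SQL text itself but to upper-case the character stream the lexer reads, so the keyword tokens in SqlBase.g4 can stay upper-case. Below is a rough Scala sketch of that idea against the ANTLR 4 runtime; Spark's own ParseDriver has a similar UpperCaseCharStream, but treat this as illustrative rather than Spark's actual source.

```scala
import org.antlr.v4.runtime.{CharStream, CharStreams, IntStream}
import org.antlr.v4.runtime.misc.Interval

// Wraps an ANTLR CharStream so the lexer only ever sees upper-case characters,
// while the underlying text (and therefore literals and error messages) keeps
// its original case.
class UpperCaseCharStream(wrapped: CharStream) extends CharStream {
  override def consume(): Unit = wrapped.consume()
  override def getSourceName(): String = wrapped.getSourceName()
  override def index(): Int = wrapped.index()
  override def mark(): Int = wrapped.mark()
  override def release(marker: Int): Unit = wrapped.release(marker)
  override def seek(where: Int): Unit = wrapped.seek(where)
  override def size(): Int = wrapped.size()

  // Return the original text unchanged for literals and diagnostics.
  override def getText(interval: Interval): String = wrapped.getText(interval)

  // Upper-case only what the lexer looks ahead at, so keyword tokens in the
  // grammar can remain upper-case while user input may be any case.
  override def LA(i: Int): Int = {
    val la = wrapped.LA(i)
    if (la == 0 || la == IntStream.EOF) la else Character.toUpperCase(la)
  }
}

// Usage: wrap the input before handing it to the generated lexer, e.g.
//   val lexer = new SqlBaseLexer(new UpperCaseCharStream(CharStreams.fromString(sql)))
```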
Thanks all for your help.
I'll try your suggestions.
Thanks again :)
From: "Shahab Yunus"
To: "Ramandeep Singh Nanda"
Cc: "Tomas Bartalos", l...@china-inv.cn, user@spark
Date: 2019/01/24
Could be a tangential idea, but it might help: why not use the queryExecution and logicalPlan objects that are available when you execute a query using SparkSession and get a DataFrame back? The JSON representation contains almost all the info you need, and you don't need to go to Hive to get it.
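A minimal sketch of that idea in Scala, assuming Spark 2.x APIs (where UnresolvedRelation carries a TableIdentifier); the database and table names are made-up examples and must already exist in the catalog:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.catalyst.analysis.UnresolvedRelation

val spark = SparkSession.builder().appName("list-input-tables").getOrCreate()

val df = spark.sql(
  "SELECT o.id, c.name FROM db1.orders o JOIN db1.customers c ON o.cid = c.id")

// queryExecution.logical is the parsed plan, so the source tables still show
// up as UnresolvedRelation nodes we can pattern-match on.
val inputTables = df.queryExecution.logical.collect {
  case r: UnresolvedRelation => r.tableIdentifier.unquotedString
}.distinct

println(inputTables.mkString(", "))   // e.g. db1.orders, db1.customers

// The whole plan is also available as JSON if you want to post-process it:
// println(df.queryExecution.logical.toJSON)
```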
EXPLAIN EXTENDED or EXPLAIN would list the plan along with the tables. I'm not aware of any statement that explicitly lists dependencies or tables directly.
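For example (a sketch; the query and table names are made up):

```scala
import org.apache.spark.sql.SparkSession

// EXPLAIN is itself a SQL statement in Spark, so it can be run through
// spark.sql; the result is a one-column DataFrame holding the plan text,
// in which the scanned tables appear.
val spark = SparkSession.builder().getOrCreate()

spark.sql(
  "EXPLAIN EXTENDED SELECT o.id, c.name FROM db1.orders o JOIN db1.customers c ON o.cid = c.id")
  .show(truncate = false)
```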
Regards,
Ramandeep Singh
On Wed, Jan 23, 2019, 11:05 Tomas Bartalos wrote:
This might help:
show tables;
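If this needs to be scripted, the same can be done through spark.sql or the catalog API (a sketch; the database name is an example), keeping in mind that it lists what is in the catalog rather than the inputs of a particular query:

```scala
import org.apache.spark.sql.SparkSession

// SHOW TABLES enumerates the tables registered in a database; it does not
// tell you which tables a given query actually reads.
val spark = SparkSession.builder().getOrCreate()

spark.sql("SHOW TABLES IN db1").show(truncate = false)

// The catalog API returns the same listing programmatically:
spark.catalog.listTables("db1").show(truncate = false)
```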
On Wed, 23 Jan 2019 at 10:43, … wrote:
> Hi, All,
>
> We need to get all input tables of several SPARK SQL 'select' statements.
>
> We can get that information for Hive SQL statements by using 'explain
> dependency select'.
> But I can't find the equivalent command in SPARK SQL.