Hi,
You don't have to run the SQL statement. You can just parse it; that gives you
the logical plan.
val logicalPlan = ss.sessionState.sqlParser.parsePlan(sqlText = query)
println(logicalPlan.prettyJson)
[ {
"class" : "org.apache.spark.sql.catalyst.plans.logical.Project",
"num-children" : 1,
...
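To make the snippet above self-contained: here is a minimal sketch of parsing a query into an unresolved logical plan without executing it. It assumes a local SparkSession (no cluster connection is needed just to parse); the query string and object name are illustrative.

```scala
import org.apache.spark.sql.SparkSession

object ParseOnlyExample {
  def main(args: Array[String]): Unit = {
    // A local session is enough; parsing never touches any data source.
    val ss = SparkSession.builder()
      .master("local[1]")
      .appName("parse-only")
      .getOrCreate()

    val query = "SELECT name, age FROM people WHERE age > 21"

    // parsePlan runs only the parser: it returns an unresolved
    // logical plan and does not analyze, optimize, or execute anything.
    val logicalPlan = ss.sessionState.sqlParser.parsePlan(query)
    println(logicalPlan.prettyJson)

    ss.stop()
  }
}
```

Because the plan is unresolved, table and column names are not checked against any catalog at this stage, which is exactly why no production connection is required.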
Hi,
I am a software engineer who works with historical FAA and pilot data, and I
think Cypher would be a good addition to the Spark ecosystem.
I support this proposal and am looking forward to giving it a try.
Regards,
HJ
If we can make the annotations compatible with Python 2, why don't we add
type annotations to make life easier for users of Python 3 (with type hints)?
On Fri, Jan 25, 2019 at 7:53 AM Maciej Szymkiewicz
wrote:
>
> Hello everyone,
>
> I'd like to revisit the topic of adding PySpark type annotations in 3.0.
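For readers unfamiliar with the Python-2-compatibility point raised above: PEP 484 allows annotations to be written either inline (Python 3 only) or as type comments (valid in both Python 2 and 3). A minimal sketch, with a hypothetical `mean` function for illustration:

```python
from typing import List

# Python 3 style: inline annotations.
def mean(xs: List[float]) -> float:
    return sum(xs) / len(xs)

# Python 2 compatible style: a PEP 484 type comment. Type checkers
# such as mypy read it, but it is ignored at runtime, so the same
# source runs unchanged on Python 2 and Python 3.
def mean_py2(xs):
    # type: (List[float]) -> float
    return sum(xs) / len(xs)
```

Stub files (`.pyi`) are a third option that keeps annotations entirely out of the runtime sources, which is one way projects have shipped annotations without touching Python 2 support.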
Hello everyone,
I'd like to revisit the topic of adding PySpark type annotations in 3.0. It
has been discussed before (
http://apache-spark-developers-list.1001551.n3.nabble.com/Python-friendly-API-for-Spark-3-0-td25016.html
and
http://apache-spark-developers-list.1001551.n3.nabble.com/PYTHON-PySp
Hi, All,
I tried the suggested approach and it works, but it requires actually running
the SQL statement first.
I just want to parse the SQL statement without running it, so I can do
this on my laptop without connecting to our production environment.
I tried to write a tool which uses the SqlBase.g4 b
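Rather than working directly from the SqlBase.g4 grammar, one option is to call the Catalyst parser that Spark generates from that grammar. A minimal sketch, assuming only the spark-catalyst jar is on the classpath; `CatalystSqlParser` is a singleton parser object in Catalyst, so no SparkSession and no cluster connection are needed at all:

```scala
// Parses SQL offline on a laptop, with no SparkSession.
// CatalystSqlParser wraps the ANTLR parser generated from SqlBase.g4.
import org.apache.spark.sql.catalyst.parser.CatalystSqlParser

object OfflineParseExample {
  def main(args: Array[String]): Unit = {
    val query = "SELECT a, b FROM t WHERE a > 1"

    // parsePlan returns an unresolved logical plan; a malformed
    // statement throws a ParseException instead.
    val plan = CatalystSqlParser.parsePlan(query)
    println(plan.treeString)
  }
}
```

This sidesteps regenerating the ANTLR parser yourself, and because the plan is unresolved, no catalog or production metadata is ever consulted.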