Hi, this is more of a conceptual question, and I'm asking it here because I
haven't been able to find documentation on this anywhere.

I'm currently working with Spark SQL and am considering using JSON datasets
as input. I'm aware of the .jsonFile() method on SQLContext.
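For context, this is roughly how I'm planning to call it (a minimal sketch;
people.json is a hypothetical path, and sc is an existing SparkContext; I
gather newer releases spell this sqlContext.read.json(...), but the question
applies either way):

    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)
    // Each line of the input is expected to be a self-contained JSON object.
    val people = sqlContext.jsonFile("people.json")  // hypothetical path
    people.printSchema()  // shows whatever schema Spark derived from the data
    people.registerTempTable("people")
    sqlContext.sql("SELECT name FROM people").show()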

What general strategy does Spark SQL's .jsonFile() use to parse/decode a
JSON dataset?

(For example, the kind of answer I'm looking for is: the JSON file is read
into an ETL pipeline and transformed into a predefined data structure.)
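To make that concrete, here is the kind of behavior I'm trying to understand
(the file contents below are made up):

    // Suppose people.json contains lines such as:
    //   {"name": "Alice", "age": 30}
    //   {"name": "Bob", "city": "Austin"}
    val people = sqlContext.jsonFile("people.json")
    people.printSchema()
    // Does .jsonFile() first scan the records to derive a schema along the
    // lines of
    //   root
    //    |-- age: long (nullable = true)
    //    |-- city: string (nullable = true)
    //    |-- name: string (nullable = true)
    // and then decode each record against it, or does it work differently?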

Any help would be deeply appreciated.
