Silly question?
When you talk about a ‘user specified schema’, do you mean that the user supplies 
an additional schema, or that you’re using the schema implied by the JSON string 
itself? (Or both / either?)
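
E.g., is the idea something like the sketch below, where the caller passes an 
explicit StructType alongside the string column? (The DataFrame, field names, and 
exact signature here are just my guesses, not taken from the PR.)

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

val spark = SparkSession.builder().master("local[*]").appName("from_json-sketch").getOrCreate()
import spark.implicits._

// e.g. a Kafka-style DataFrame where the JSON payload is just one column among others
val df = Seq(
  (1, """{"name": "alice", "age": 30}"""),
  (2, """{"name": "bob", "age": 25}""")
).toDF("key", "value")

// the caller-supplied schema describing the JSON string
val schema = StructType(Seq(
  StructField("name", StringType),
  StructField("age", IntegerType)
))

// parse the string column into a nested struct using that schema
val parsed = df.withColumn("json", from_json(col("value"), schema))
parsed.select(col("key"), col("json.name"), col("json.age")).show()

If you instead mean the other interpretation (inferring the schema from the JSON 
string itself), ignore the above.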

Thx

On Sep 28, 2016, at 12:52 PM, Michael Armbrust <mich...@databricks.com> wrote:

Spark SQL has great support for reading text files that contain JSON data. 
However, in many cases the JSON data is just one column amongst others. This is 
particularly true when reading from sources such as Kafka. This 
PR (https://github.com/apache/spark/pull/15274) adds a new function, from_json, 
that converts a string column into a nested StructType with a user-specified 
schema, using the same internal logic as the JSON data source.

Would love to hear any comments / suggestions.

Michael
