Thank you so much for the information. I have now merged the fix from #1411, and HiveQL seems to work with:
SELECT name FROM people WHERE schools[0].time > 2
But one more question: is it possible, or planned, to support the "schools.time" syntax to filter records where some element inside the array matches?
Yes, just as in my last post, combining [] to access array data with "." to access nested fields does not seem to work.
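For reference, the usual way to express this kind of "any array element matches" filter in HiveQL is LATERAL VIEW explode, which flattens the array so each element can be filtered individually. A minimal sketch, assuming a HiveContext is available (its hql method in the 1.0.x API) and that the table and field names are the ones from the thread's sample data:

// Sketch: select people who have ANY school with time > 1995,
// by exploding the schools array into one row per element.
// Assumes the "people" table is visible to the HiveContext.
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)
val result = hiveContext.hql(
  """SELECT DISTINCT p.name
    |FROM people p
    |LATERAL VIEW explode(p.schools) s AS school
    |WHERE school.time > 1995""".stripMargin)
result.collect().foreach(println)

DISTINCT is needed because exploding produces one row per array element, so a person with several matching schools would otherwise appear multiple times.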
BTW, I have dug into the code of the current master branch:
spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
from l
Thank you so much for the reply, here is my code:

val conf = new SparkConf().setAppName("Simple Application")
conf.setMaster("local")
val sc = new SparkContext(conf)
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.createSchemaRDD
val path1 =
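The snippet above is cut off at "val path1 =". For anyone following along, a sketch of how such a setup typically continues in the 1.0.x API; the file path "people.json" here is a hypothetical placeholder, not from the original post:

// Hypothetical continuation of the truncated snippet above.
val path1 = "people.json"                  // placeholder path, not from the thread
val people = sqlContext.jsonFile(path1)    // infers a schema from the JSON records
people.registerAsTable("people")           // makes the data queryable as "people"
val names = sqlContext.sql("SELECT name FROM people")
names.map(t => "Name: " + t(0)).collect().foreach(println)

jsonFile and registerAsTable are the SQLContext/SchemaRDD entry points that existed in Spark SQL 1.0.x for this workflow.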
I mean querying nested data such as JSON, not a nested query; sorry for the misunderstanding.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Query-the-nested-JSON-data-With-Spark-SQL-1-0-1-tp9544p9726.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Or is it supported? I know I could do it myself with a filter, but it would be much better if SQL could support it. Thanks!
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Nested-Query-With-Spark-SQL-1-0-1-tp9544p9547.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hi All:
I am using Spark SQL 1.0.1 for a simple test. The loaded data (JSON format), registered as the table "people", is:
{"name":"Michael",
"schools":[{"name":"ABC","time":1994},{"name":"EFG","time":2000}]}
{"name":"Andy", "age":30,"scores":{"eng":98,"phy":89}}
{"name":"Justin", "age":19}
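As a point of comparison with the sample data above: dot access into a plain nested struct (no array involved), such as scores.eng on Andy's record, is a separate case from the array-plus-dot combination the thread reports as broken. A minimal sketch to try, assuming the data has been registered as "people"; whether this works through the basic SQLContext parser or requires the HiveQL path in 1.0.1 is something to verify:

// Sketch: filter on a nested struct field with dot syntax.
// This is the struct-only case; schools.time (array + dot) is the
// combination reported as not working in this thread.
val highEng = sqlContext.sql(
  "SELECT name, scores.eng FROM people WHERE scores.eng > 90")
highEng.collect().foreach(println)

Records that lack the scores field entirely (Michael, Justin) would simply not match the predicate.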