Michael,
Thanks for the explanation. I was able to get this running.
On Wed, Oct 29, 2014 at 3:07 PM, Michael Armbrust wrote:
We are working on more helpful error messages, but in the meantime let me
explain how to read this output.
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Unresolved
attributes: 'p.name,'p.age, tree:
Project ['p.name,'p.age]
Filter ('location.number = 2300)
Join Inner, Some((lo
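For intuition on reading that message: "Unresolved attributes" means the analyzer could not match the quoted column names ('p.name, 'p.age) against the columns actually produced by the plan below them. A toy Python sketch of that resolution check (hypothetical names, nothing like Catalyst's real implementation):

```python
# Toy model of the analyzer's resolution step: every projected attribute
# must match a column the child plan actually produces.
def resolve(projected, child_output):
    unresolved = [a for a in projected if a not in child_output]
    if unresolved:
        # Same shape as the TreeNodeException text in the thread
        raise ValueError("Unresolved attributes: " + ", ".join(unresolved))
    return projected

child = ["name", "age", "locationName", "locationNumber"]  # made-up schema
resolve(["name", "age"], child)       # resolves fine
# resolve(["p.name", "p.age"], child) # would raise: Unresolved attributes
```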
scala> locations.queryExecution
warning: there were 1 feature warning(s); re-run with -feature for details
res28: _4.sqlContext.QueryExecution forSome { val _4:
org.apache.spark.sql.SchemaRDD } =
== Parsed Logical Plan ==
SparkLogicalPlan (ExistingRdd [locationName#80,locationNumber#81],
Mapped
Can you println the .queryExecution of the SchemaRDD?
On Tue, Oct 28, 2014 at 7:43 PM, Corey Nolet wrote:
> So this appears to work just fine:
>
> hctx.sql("SELECT p.name, p.age FROM people p LATERAL VIEW
> explode(locations) l AS location JOIN location5 lo ON l.number =
> lo.streetNumber WHERE
On Tue, Oct 28, 2014 at 6:56 PM, Corey Nolet wrote:
Am I able to do a join on an exploded field?
Like if I have another object:
{ "streetNumber":"2300", "locationName":"The Big Building"} and I want to
join with the previous JSON by the locations[].number field, is that
possible?
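For what it's worth, joining on an exploded field is just two steps: flatten each record into one row per array element, then join those rows on the element's field. A plain-Python sketch of those semantics (hypothetical sample data modeled on the thread's JSON, numbers kept as ints for the equality check; not the Spark API):

```python
people = [{"name": "John", "age": 53,
           "locations": [{"street": "Rodeo Dr", "number": 2300}]}]
locations5 = [{"streetNumber": 2300, "locationName": "The Big Building"}]

# Step 1: what LATERAL VIEW explode(locations) does -- one row per element
exploded = [dict(p, location=loc) for p in people for loc in p["locations"]]

# Step 2: inner join the exploded rows on location.number = streetNumber
joined = [dict(row, **loc) for row in exploded for loc in locations5
          if row["location"]["number"] == loc["streetNumber"]]

for row in joined:
    print(row["name"], row["locationName"])  # John The Big Building
```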
On Tue, Oct 28, 2014 at 9:31 PM, Corey Nolet wrote:
Michael,
Awesome, this is what I was looking for. So it's possible to use the Hive
dialect in a regular SQL context? This is what was confusing to me: the
docs kind of allude to it but don't directly point it out.
On Tue, Oct 28, 2014 at 9:30 PM, Michael Armbrust wrote:
You can do this:
$ sbt/sbt hive/console
scala> jsonRDD(sparkContext.parallelize("""{ "name":"John", "age":53,
"locations": [{ "street":"Rodeo Dr", "number":2300 }]}""" ::
Nil)).registerTempTable("people")
scala> sql("SELECT name FROM people LATERAL VIEW explode(locations) l AS
location WHERE loc
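Conceptually, the LATERAL VIEW above produces one (person, location) row per array element, and the WHERE clause then filters those rows. A plain-Python sketch of the same semantics (same sample record, not Spark itself):

```python
# The sample record from the query above (plain Python, not Spark)
people = [{"name": "John", "age": 53,
           "locations": [{"street": "Rodeo Dr", "number": 2300}]}]

# explode(locations) gives one (person, location) pair per element;
# the WHERE clause keeps pairs whose number matches, projecting the name
names = [p["name"] for p in people for loc in p["locations"]
         if loc["number"] == 2300]
print(names)  # ['John']
```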
So it wouldn't be possible to have a json string like this:
{ "name":"John", "age":53, "locations": [{ "street":"Rodeo Dr",
"number":2300 }]}
And query all people who have a location with number = 2300?
On Tue, Oct 28, 2014 at 5:30 PM, Michael Armbrust wrote:
On Tue, Oct 28, 2014 at 2:19 PM, Corey Nolet wrote:
> Is it possible to select if, say, there was an addresses field that had a
> json array?
>
You can get the Nth item with "address".getItem(0). If you want to walk
through the whole array, look at LATERAL VIEW EXPLODE in HiveQL.
Try: "address.city".attr
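As a list analogy: getItem(0) picks a single element, while LATERAL VIEW EXPLODE iterates all of them. In plain Python (toy data, not the Spark DSL):

```python
# Toy array standing in for an array-valued "addresses" field
addresses = [{"city": "London"}, {"city": "Paris"}]

# "address".getItem(0) analogue: pick the Nth element
first_city = addresses[0]["city"]

# LATERAL VIEW EXPLODE analogue: one row per element
cities = [a["city"] for a in addresses]
print(first_city, cities)  # London ['London', 'Paris']
```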
On Tue, Oct 28, 2014 at 8:30 AM, Brett Antonides wrote:
Hello,
Given the following example customers.json file:
{
  "name": "Sherlock Holmes",
  "customerNumber": 12345,
  "address": {
    "street": "221b Baker Street",
    "city": "London",
    "zipcode": "NW1 6XE",
    "country": "United Kingdom"
  }
},
{
  "name": "Big Bird",
  "customerNumber": 10001,
  "address": {
    "street": "