> Original message
> From: Michael Armbrust
> Date: 07/25/2014 3:24 PM (GMT-08:00)
> To: user@spark.apache.org
> Subject: Re: Spark SQL and Hive tables
>
> [S]ince Hive has a large number of dependencies, it is not included in the
> default Spark assembly.

I guess this only applies to standalone mode?

Andrew L
Date: Fri, 25 Jul 2014 15:25:42 -0700
Subject: RE: Spark SQL and Hive tables
From: ssti...@live.com
To: user@spark.apache.org

Thanks! Will do.

Sent via the Samsung GALAXY S®4, an AT&T 4G LTE smartphone
> Original message
> From: Michael Armbrust
> Date: 07/25/2014 3:24 PM (GMT-08:00)
> To: user@spark.apache.org
> Subject: Re: Spark SQL and Hive tables
>
> [S]ince Hive has a large number of dependencies, it is not included in the
> default Spark assembly.
Thanks, Michael.
From: mich...@databricks.com
Date: Fri, 25 Jul 2014 14:49:00 -0700
Subject: Re: Spark SQL and Hive tables
To: user@spark.apache.org

From the programming guide:

When working with Hive one must construct a HiveContext, which inherits from
SQLContext, and adds support for finding tables in the MetaStore and writing
queries using HiveQL.
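
A minimal spark-shell sketch of what the guide describes, assuming a Spark
1.0-era build with Hive support enabled (the table name src is only an
illustration):

val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
import hiveContext._

// hql() parses HiveQL and resolves table names against the Hive metastore.
hql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
hql("SELECT key, value FROM src").collect().foreach(println)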
Thanks, Jerry.
Date: Fri, 25 Jul 2014 17:48:27 -0400
Subject: Re: Spark SQL and Hive tables
From: chiling...@gmail.com
To: user@spark.apache.org

Hi Sameer,

The blog post you referred to is about Spark SQL. I don't think the intent of
the article is to guide you through reading data from Hive tables.
Hi Michael,

Thanks. I am not creating a HiveContext; I am creating a SQLContext. I am
using CDH 5.1. Can you please let me know which conf/ directory you are
talking about?
From: mich...@databricks.com
Date: Fri, 25 Jul 2014 14:34:53 -0700
Subject: Re: Spark SQL and Hive tables
To: user@spark.apache.org

In particular, have you put your hive-site.xml in the conf/ directory?
Also, are you creating a HiveContext instead of a SQLContext?

On Fri, Jul 25, 2014 at 2:27 PM, Jerry Lam wrote:
> Hi Sameer,
>
> Maybe this page will help you:
> https://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables
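
A short sketch of the distinction Michael is drawing, assuming a Hive-enabled
build; prod is the table name from Sameer's query further down the thread:

// A plain SQLContext only sees tables registered in the current session
// (for example via registerAsTable on a SchemaRDD); it does not read
// conf/hive-site.xml or consult the Hive metastore, so an existing Hive
// table cannot be resolved.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

// A HiveContext picks up conf/hive-site.xml and resolves table names against
// the Hive metastore, so existing Hive tables become queryable.
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
hiveContext.hql("SELECT * FROM prod LIMIT 10").collect()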
// Data can be extracted from existing sources,
// such as Apache Hive.
val trainingDataTable = sql("""
  SELECT e.action,
         u.age,
         u.latitude,
         u.longitude
  FROM Users u
  JOIN Events e
  ON u.userId = e.userId""")
Date: Fri, 25 Jul 2014 17:27:17 -0400
Subject: Re: Spark SQL and Hive tables
From: chiling...@gmail.com

Hi Sameer,
Maybe this page will help you:
https://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables
Best Regards,
Jerry
On Fri, Jul 25, 2014 at 5:25 PM, Sameer Tilak wrote:
> Hi All,
> I am trying to load data from Hive tables using Spark SQL. I am using
> spark-shell. Here is what I see:
Hi All,

I am trying to load data from Hive tables using Spark SQL. I am using
spark-shell. Here is what I see:

val trainingDataTable = sql("""SELECT prod.prod_num, demographics.gender,
demographics.birth_year, demographics.income_group FROM prod p JOIN
demographics d ON d.user_id = p.user_id""")
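
For reference, a sketch of the same query issued the way the replies above
suggest: through a HiveContext (with hive-site.xml in Spark's conf/
directory) and with the table aliases used consistently. This assumes the
prod and demographics tables already exist in the Hive metastore:

val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)

// hql() resolves prod and demographics against the Hive metastore
// configured in conf/hive-site.xml.
val trainingDataTable = hiveContext.hql("""
  SELECT p.prod_num, d.gender, d.birth_year, d.income_group
  FROM prod p
  JOIN demographics d ON d.user_id = p.user_id""")

trainingDataTable.take(5).foreach(println)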