I am currently using Spark 1.1.0 that has been compiled against Hadoop 2.3. Our
cluster is CDH5.1.2, which runs Hive 0.12. I have two external Hive tables
that point to Parquet (compressed with Snappy), which were converted over from
Avro if that matters.
I am trying to perform a join with th
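The setup being described can be sketched roughly as follows in a Spark 1.1 spark-shell, where `sc` is the shell's pre-built SparkContext; the table and column names (`dim_customer`, `fact_orders`, `customer_id`) are hypothetical stand-ins for the two external Parquet-backed tables:

```scala
// Sketch only: assumes a spark-shell session (sc already defined) against a
// Hive 0.12 metastore. Table and column names below are hypothetical.
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)

// Both tables are external Hive tables backed by Snappy-compressed Parquet.
val joined = hiveContext.sql("""
  SELECT c.customer_id, c.name, o.amount
  FROM dim_customer c
  JOIN fact_orders o ON c.customer_id = o.customer_id
""")
joined.take(5).foreach(println)
```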
Hi Michael,
That worked for me. At least I’m now further than I was. Thanks for the tip!
-Terry
From: Michael Armbrust <mich...@databricks.com>
Date: Monday, October 13, 2014 at 5:05 PM
To: Terry Siu <terry@smartfocus.com>
Cc: "user@spark.apach
. Let me know if you need more
information.
Thanks
-Terry
From: Yin Huai <huaiyin@gmail.com>
Date: Tuesday, October 14, 2014 at 6:29 PM
To: Terry Siu <terry@smartfocus.com>
Cc: Michael Armbrust <mich...@databricks.com>,
"user@spark
Huai <huaiyin@gmail.com>
Date: Thursday, October 16, 2014 at 7:08 AM
To: Terry Siu <terry@smartfocus.com>
Cc: Michael Armbrust <mich...@databricks.com>,
"user@spark.apache.org"
Hi all,
I’m getting a TreeNodeException for unresolved attributes when I do a simple
select from a SchemaRDD generated by a join in Spark 1.1.0. A little background
first. I am using a HiveContext (against Hive 0.12) to grab two tables, join
them, and then perform multiple INSERT-SELECT with GR
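The flow that triggers the exception looks roughly like this (a sketch with hypothetical table and column names, not the poster's actual query): the join result is registered as a temporary table and then selected from, which is where the unresolved-attribute error surfaces:

```scala
// Sketch, Spark 1.1 API; table/column names are hypothetical.
val joined = hiveContext.sql(
  "SELECT a.id, a.segment, b.amount FROM table_a a JOIN table_b b ON a.id = b.id")
joined.registerTempTable("joined")

// Selecting (and aggregating) from the registered join result is the step
// that raises the TreeNodeException for unresolved attributes.
hiveContext.sql("SELECT segment, SUM(amount) FROM joined GROUP BY segment")
```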
Hi Michael,
Thanks again for the reply. Was hoping it was something I was doing wrong in
1.1.0, but I’ll try master.
Thanks,
-Terry
From: Michael Armbrust <mich...@databricks.com>
Date: Monday, October 20, 2014 at 12:11 PM
To: Terry Siu <terry@smartfocus.com>
Just to follow up, the queries worked against master and I got my whole flow
rolling. Thanks for the suggestion! Now if only Spark 1.2 will come out with
the next release of CDH5 :P
-Terry
From: Terry Siu <terry@smartfocus.com>
Date: Monday, October 20, 2014 at 12:22 PM
To: M
Found this as I am having the same issue. I have exactly the same usage as
shown in Michael's join example. I tried executing a SQL statement against
the join data set with two columns that have the same name and tried to
disambiguate the column name with the table alias, but I would still get an
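One workaround for the ambiguous-column problem described above, sketched here with hypothetical names, is to alias the colliding columns inside each side before joining, so the join output never contains duplicate column names in the first place:

```scala
// Sketch: rename colliding columns via projection aliases before the join.
// Table and column names are hypothetical.
val left  = hiveContext.sql("SELECT id AS l_id, name AS l_name FROM table_a")
val right = hiveContext.sql("SELECT id AS r_id, name AS r_name FROM table_b")
left.registerTempTable("l")
right.registerTempTable("r")

// No two output columns share a name, so nothing is ambiguous.
val joined = hiveContext.sql(
  "SELECT l_id, l_name, r_name FROM l JOIN r ON l_id = r_id")
```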
I am synced up to the Spark master branch as of commit 23468e7e96. I have Maven
3.0.5, Scala 2.10.3, and SBT 0.13.1. I’ve built the master branch successfully
previously and am trying to rebuild again to take advantage of the new Hive
0.13.1 profile. I execute the following command:
$ mvn -Dski
Thanks for the update, Shivaram.
-Terry
On 10/31/14, 12:37 PM, "Shivaram Venkataraman" wrote:
>Yeah looks like https://github.com/apache/spark/pull/2744 broke the
>build. We will fix it soon
>
>On Fri, Oct 31, 2014 at 12:21 PM, Terry Siu wrote:
>> I am synced u
I just built the 1.2 snapshot current as of commit 76386e1a23c using:
$ ./make-distribution.sh --tgz --name my-spark --skip-java-test -DskipTests
-Phadoop-2.4 -Phive -Phive-0.13.1 -Pyarn
I drop in my Hive configuration files into the conf directory, launch
spark-shell, and then create my HiveConte
Is there any reason why StringType is not a supported type for the GT, GTE, LT, LTE
operations? I was able to previously have a predicate where my column type was
a string and execute a filter with one of the above operators in SparkSQL w/o
any problems. However, I synced up to the latest code this
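A predicate of the kind being described, assuming a hypothetical table `events` whose `event_date` column is a string; lexicographic comparison over ISO-formatted date strings is a common reason to want range operators on StringType columns:

```scala
// Sketch: a range predicate over a StringType column (hypothetical names).
// ISO-8601 date strings compare correctly as plain strings.
val recent = hiveContext.sql(
  "SELECT * FROM events WHERE event_date >= '2014-01-01'")
```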
Thanks, Kousuke. I’ll wait till this pull request makes it into the master
branch.
-Terry
From: Kousuke Saruta <saru...@oss.nttdata.co.jp>
Date: Monday, November 3, 2014 at 11:11 AM
To: Terry Siu <terry@smartfocus.com>,
"user@spark.apache.org"
Done.
https://issues.apache.org/jira/browse/SPARK-4213
Thanks,
-Terry
From: Michael Armbrust <mich...@databricks.com>
Date: Monday, November 3, 2014 at 1:37 PM
To: Terry Siu <terry@smartfocus.com>
Cc: "user@spark.apache.org"
I’m trying to execute a subquery inside an IN clause and am encountering an
unsupported language feature in the parser.
java.lang.RuntimeException: Unsupported language features in query: select
customerid from sparkbug where customerid in (select customerid from sparkbug
where customerid in (
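The HiveQL dialect of that era did not accept IN with a subquery; the usual Hive workaround is to express the membership test as a LEFT SEMI JOIN, which the parser does accept. A sketch, where the inner filter is a hypothetical stand-in since the original subquery is truncated above:

```scala
// Sketch: rewrite `WHERE customerid IN (subquery)` as a LEFT SEMI JOIN.
// The inner WHERE clause below is hypothetical.
val result = hiveContext.sql("""
  SELECT s.customerid
  FROM sparkbug s
  LEFT SEMI JOIN (
    SELECT customerid FROM sparkbug WHERE customerid > 100
  ) t ON s.customerid = t.customerid
""")
```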
Done.
https://issues.apache.org/jira/browse/SPARK-4226
Hoping this will make it into 1.3? :)
-Terry
From: Michael Armbrust <mich...@databricks.com>
Date: Tuesday, November 4, 2014 at 11:31 AM
To: Terry Siu <terry@smartfocus.com>
Cc: "user@spark.apach
What version of Spark are you using? Did you compile your Spark version
and if so, what compile options did you use?
On 11/6/14, 9:22 AM, "tridib" wrote:
>Help please!
?
From: Tridib Samanta <tridib.sama...@live.com>
Date: Thursday, November 6, 2014 at 9:49 AM
To: Terry Siu <terry@smartfocus.com>,
"u...@spark.incubator.apache.org"
Subj