Hi,
I am trying to upgrade from spark-1.3.1 to spark-1.5.1 with hadoop 2.7.1 and
hive 1.2.1.
But with the above spark-assembly, the "USE DEFAULT" functionality seems to be
broken, with the message:
"Cannot recognize input near 'default' '' '' in switch database
statement; line 1 pos 4"
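For reference, a minimal repro sketch (my own, not the original poster's code; it assumes an existing SparkContext named sc, e.g. in spark-shell):
  import org.apache.spark.sql.hive.HiveContext
  val hiveContext = new HiveContext(sc)  // sc: an existing SparkContext (assumed)
  hiveContext.sql("USE DEFAULT")         // the statement that reportedly fails to parse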
Yes, I just saw the same thing myself last night. Thanks
Thanks, that's all I needed.
On Tue, Jan 26, 2016 at 6:19 PM, Sean Owen wrote:
> I think it will come significantly later -- or else we'd be at code
> freeze for 2.x in a few days. I haven't heard anyone discuss this
> officially, but May or so had been batted around informally in
> conversation.
Thanks guys! That works well.
On 27 Jan 2016 12:14, "Mao, Wei" wrote:
> I used to hit the same compile error in IntelliJ, and resolved it by clicking:
>
> View --> Tool Windows --> Maven Projects --> Spark Project Catalyst --> Plugins
> --> antlr3, then remake the project.
>
> Thanks,
>
> William Mao
I used to hit the same compile error in IntelliJ, and resolved it by clicking:
View --> Tool Windows --> Maven Projects --> Spark Project Catalyst --> Plugins
--> antlr3, then remake the project.
Thanks,
William Mao
From: Iulian Dragoș [mailto:iulian.dra...@typesafe.com]
Sent: Wednesday, January 27, 2016
Hi,
This is more a question for the user list, not the dev list, so I'll CC
user.
If you're using mllib.clustering.LDAModel (RDD API), then can you make sure
you're using a LocalLDAModel (or convert to it from DistributedLDAModel)?
You can then call topicDistributions() on the new data.
If you'r
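For what it's worth, a minimal sketch of that suggestion (my own, assuming the model was trained with the EM optimizer, so it is a DistributedLDAModel, and that newDocs is an RDD[(Long, Vector)] of (docId, termCounts)):
  import org.apache.spark.mllib.clustering.{DistributedLDAModel, LocalLDAModel}
  import org.apache.spark.mllib.linalg.Vector
  import org.apache.spark.rdd.RDD

  def scoreNewDocs(model: DistributedLDAModel,
                   newDocs: RDD[(Long, Vector)]): RDD[(Long, Vector)] = {
    // Convert to a LocalLDAModel so inference can run on unseen documents.
    val localModel: LocalLDAModel = model.toLocal
    // topicDistributions gives the inferred topic mixture per document id.
    localModel.topicDistributions(newDocs)
  }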
I think it will come significantly later -- or else we'd be at code
freeze for 2.x in a few days. I haven't heard anyone discuss this
officially, but May or so had been batted around informally in
conversation. Does anyone have a particularly strong opinion on that?
That's basically an extra 3 mo
Hi folks,
On Spark 1.6.0, I submitted 2 lines of code via spark-shell in Yarn-client mode:
1) sc.parallelize(Array(1,2,3,3,3,3,4)).collect()
2) sc.parallelize(Array(1,2,3,3,3,3,4)).map(x => (x, 1)).collect()
1) works well whereas 2) raises the following exception:
Driver stacktrace:
Can you post more logs, especially the lines around "Initializing execution hive
..." (this is for an internally used fake metastore, and it is Derby) and
"Initializing HiveMetastoreConnection version ..." (this is for the real
metastore; it should be your remote one)? Also, those temp tables are
stored in
On Tue, Jan 19, 2016 at 6:06 AM, Hyukjin Kwon wrote:
> Hi all,
>
> I have usually been working on Spark in IntelliJ.
>
> Before this PR
> (https://github.com/apache/spark/commit/7cd7f2202547224593517b392f56e49e4c94cabc)
> for `[SPARK-12575][SQL] Grammar parity with existing SQL parser`, I was
Hi,
I posted this on the user list yesterday; I am posting it here now because, on
further investigation, I am pretty sure this is a bug:
On upgrading from 1.5.0 to 1.6.0 I have a problem with HiveThriftServer2. I
have this code:
val hiveContext = new HiveContext(SparkContext.getOrCreate(conf
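For context, a minimal sketch of the usual Spark 1.x pattern for embedding the thrift server in an application (my own sketch, not the poster's full code; the SparkConf settings are assumptions):
  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.hive.HiveContext
  import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

  val conf = new SparkConf().setAppName("thriftserver-example")  // assumed app name
  val hiveContext = new HiveContext(SparkContext.getOrCreate(conf))
  // Expose the HiveContext (and its temp tables) over JDBC/ODBC.
  HiveThriftServer2.startWithContext(hiveContext)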