Thanks for your explanation.
From: Cheng Lian <lian.cs@gmail.com>
Date: Thursday, October 2, 2014 at 8:01 PM
To: Du Li <l...@yahoo-inc.com.INVALID>, "dev@spark.apache.org" <dev@spark.apache.org>
Hi,
In Spark 1.1 HiveContext, I ran a create partitioned table command followed by
a cache table command and got a java.sql.SQLSyntaxErrorException: Table/View
'PARTITIONS' does not exist. But cache table worked fine if the table was not
partitioned.
Can anybody confirm that caching of partitioned tables is supported?
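For reference, a minimal sketch of the sequence being described (the table
name, columns, and partition column are invented; only the CREATE/CACHE
pattern comes from the message, and `sc` is assumed to be an existing
SparkContext):

import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)

// Partitioned table: CACHE TABLE on this is what reportedly fails in 1.1.
hiveContext.sql("CREATE TABLE events (id INT, payload STRING) PARTITIONED BY (dt STRING)")
hiveContext.sql("CACHE TABLE events")

// Unpartitioned table: the same CACHE TABLE reportedly works.
hiveContext.sql("CREATE TABLE events_flat (id INT, payload STRING)")
hiveContext.sql("CACHE TABLE events_flat")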
Date: Sunday, September 28, 2014 at 12:13 PM
To: Du Li <l...@yahoo-inc.com.invalid>
Cc: "dev@spark.apache.org" <dev@spark.apache.org>, "u...@spark.apache.org" <u...@spark.apache.org>
Can anybody confirm whether or not views are currently supported in Spark? I
found "create view translate" in the blacklist of HiveCompatibilitySuite.scala,
and the following scenario threw a NullPointerException on beeline/thriftserver
(1.1.0). Any plan to support it soon?
> create table src
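The quoted DDL is cut off above, but a minimal sketch of the reported scenario
might look like this (the table and view definitions are assumptions, not the
original DDL; `sc` is an existing SparkContext):

import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)

hiveContext.sql("CREATE TABLE src (key INT, value STRING)")
// The CREATE VIEW step is what reportedly throws a NullPointerException
// on beeline/thriftserver in 1.1.0.
hiveContext.sql("CREATE VIEW src_view AS SELECT key, value FROM src")
hiveContext.sql("SELECT * FROM src_view").collect()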
>Could you provide the DDL of this partitioned table together with the
>query you tried? The stacktrace suggests that the query was trying to
>cast a map into something else, which is not supported in Spark SQL.
>And I doubt whether Hive supports casting a complex type to some other
>type.
>
>
It might be a problem when inserting into a partitioned table. It worked
fine when the target table was unpartitioned.
Can you confirm this?
Thanks,
Du
On 9/26/14, 4:48 PM, "Du Li" wrote:
>Hi,
>
>I was loading data into a partitioned table on Spark 1.1.0
>beeline-thriftserver.
Hi,
I was loading data into a partitioned table on Spark 1.1.0
beeline-thriftserver. The table has complex data types such as map and array.
The query is like "insert overwrite table a partition (…) select …" and the
select clause worked if run separately. However, when running the insert query,
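A hedged sketch of the pattern under discussion (the actual DDL was never
posted in the thread, so the schema and the `staging` source table below are
purely illustrative; `sc` is an existing SparkContext):

import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)

hiveContext.sql(
  "CREATE TABLE a (id INT, props MAP<STRING,STRING>, tags ARRAY<STRING>) " +
  "PARTITIONED BY (dt STRING)")

// The SELECT reportedly works when run on its own...
hiveContext.sql("SELECT id, props, tags FROM staging").collect()

// ...but the same SELECT inside an INSERT OVERWRITE into a partitioned
// target fails, while an unpartitioned target works.
hiveContext.sql(
  "INSERT OVERWRITE TABLE a PARTITION (dt = '2014-09-26') " +
  "SELECT id, props, tags FROM staging")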
Thanks, Yanbo and Nicholas. Now it makes more sense: query optimization is the
answer. /Du
From: Nicholas Chammas <nicholas.cham...@gmail.com>
Date: Thursday, September 25, 2014 at 6:43 AM
To: Yanbo Liang <yanboha...@gmail.com>
Cc: Du Li <l...@yahoo-inc.com.invalid>
Hi,
The following query does not work in Shark, nor in the new Spark SQLContext or
HiveContext.
SELECT key, value, concat(key, value) as combined from src where combined like
'11%';
The following tweak of the syntax works fine, although it is a bit ugly.
SELECT key, value, concat(key, value) as combined fr
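The working query is cut off above, but the usual workaround for this
limitation (a column alias defined in the SELECT list is not visible in the
WHERE clause of the same query) is to repeat the expression or to filter on
the alias from a subquery. Both variants below are reconstructions, not the
original text, and assume the same HiveContext setup as in the sketches above:

// Variant 1: repeat the expression instead of referencing the alias.
hiveContext.sql(
  "SELECT key, value, concat(key, value) as combined FROM src " +
  "WHERE concat(key, value) LIKE '11%'")

// Variant 2: compute the alias in a subquery, then filter on it.
hiveContext.sql(
  "SELECT key, value, combined FROM " +
  "(SELECT key, value, concat(key, value) as combined FROM src) t " +
  "WHERE combined LIKE '11%'")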
Hi,
I wonder whether anybody has had a similar experience or has any suggestions
here.
I have an Akka actor that processes database requests arriving as high-level
messages. Inside this actor, it creates a HiveContext object that does the
actual DB work. The main thread creates the needed SparkContext and passes it
in to the
// (The head of this snippet was truncated by the archive; the import and the
// first two vals below are a plausible reconstruction, assuming a SparkContext `sc`.)
import org.apache.hadoop.io.{NullWritable, Text}

val rdd = sc.parallelize(Seq("a", "b", "c"))
val res = rdd.map(x => (NullWritable.get(), new Text(x)))
res.saveAsSequenceFile("./test_data")
val rdd2 = sc.sequenceFile("./test_data", classOf[NullWritable], classOf[Text])
assert(rdd.first == rdd2.first._2.toString)
From: Matei Zaharia <matei.zaha...@gmail.com>
Date: Monday, Se
Hi,
I was trying the following on spark-shell (built from the Apache master branch
with Hadoop 2.4.0). Both rdd2.collect and rdd3.collect threw
java.io.NotSerializableException: org.apache.hadoop.io.NullWritable.
I got the same problem in similar code in my app, which uses the newly released
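For context, a common way around this exception (a reconstruction, not
necessarily the fix proposed in the truncated reply) is to convert the Hadoop
Writables into plain serializable types before collecting, since Writable
instances are not java.io.Serializable:

import org.apache.hadoop.io.{NullWritable, Text}

// Assumes an existing SparkContext `sc` and the sequence file written above.
val rdd2 = sc.sequenceFile("./test_data", classOf[NullWritable], classOf[Text])

// Map each record to a plain String on the executors; the resulting RDD
// holds serializable values and can be collected safely.
val strings = rdd2.map { case (_, text) => text.toString }
strings.collect().foreach(println)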