Hi,
I am not sure whether this has been reported already, but I ran into this error
under the spark-sql shell as built from the latest Spark git trunk:
spark-sql> describe qiuzhuang_hcatlog_import;
15/02/17 14:38:36 ERROR SparkSQLDriver: Failed in [describe
qiuzhuang_hcatlog_import]
org.apache.spark.sql.so
Hi All,
While doing some ETL, I ran into a 'Too many open files' error, as shown in
the logs below.
Thanks,
Qiuzhuang
14/11/20 20:12:02 INFO collection.ExternalAppendOnlyMap: Thread 63 spilling
in-memory map of 100.8 KB to disk (953 times so far)
14/11/20 20:12:02 ERROR storage.DiskBlockObjectWriter: Unc
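Shuffle-heavy jobs that spill this often can exhaust the OS open-file limit. Besides raising the ulimit for the user running the executors, the number of shuffle files can be reduced through standard Spark 1.x settings; a minimal sketch, with an illustrative app name and settings not taken from this thread:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: fewer simultaneously open shuffle files.
val conf = new SparkConf()
  .setAppName("etl-job")                          // illustrative name
  .set("spark.shuffle.manager", "sort")           // sort-based shuffle opens far fewer files
  .set("spark.shuffle.consolidateFiles", "true")  // only affects the hash-based shuffle
val sc = new SparkContext(conf)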
Hi,
MapReduce has a feature for skipping bad records. Is there an equivalent
in Spark? Should I use the filter API to do this?
Thanks,
Qiuzhuang
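One way to approximate MapReduce's skip-bad-records behaviour with the RDD API is to parse defensively and drop whatever fails; a minimal sketch, assuming sc is the shell's SparkContext and that the path and parse logic are purely illustrative:

import scala.util.Try

// Sketch: records that fail to parse become None and are silently skipped.
val good = sc.textFile("hdfs:///data/input")
  .flatMap(line => Try(line.split(",")(1).toInt).toOption)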
When running the HiveFromSpark example via the run-example script, I got this error:
FAILED: SemanticException Line 1:23 Invalid path
''src/main/resources/kv1.txt'': No files matching path
file:/home/kand/javaprojects/spark/src/main/resources/kv1.txt
==
END HIVE FAILURE OUTPUT
=
rent API-compatible version of Spark, but the runtime versions must
> match across all components.
>
> To fix this issue, I’d check that you’ve run the “package” and “assembly”
> phases and that your Spark cluster is using this updated version.
>
> - Josh
>
> On October 24, 2014 at 6:17:26 PM, Qiuzhuang Lian (
> qiuzhuang.l...@gmail.com) wrote:
>
> Hi,
Hi,
I updated from git today and, when connecting to the Spark cluster, I got
a serialVersionUID incompatibility error in class BlockManagerId.
Here is the log.
Shouldn't we give BlockManagerId a constant serialVersionUID to avoid
this?
Thanks,
Qiuzhuang
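For reference, pinning a serialVersionUID in Scala is a one-line annotation; a minimal sketch of the idea on an illustrative class, not the actual BlockManagerId source:

// Sketch: a fixed serialVersionUID keeps serialized instances compatible
// across recompiles, avoiding InvalidClassException between mixed builds.
@SerialVersionUID(1L)
class BlockManagerIdLike(val executorId: String, val host: String, val port: Int)
  extends Serializable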
scala> val rdd = sc.parallelize(1 to 10001
cala.remote.Main$.make(Main.scala:64)
> at org.jetbrains.jps.incremental.scala.remote.Main$.nailMain(Main.scala:22)
> at org.jetbrains.jps.incremental.scala.remote.Main.nailMain(Main.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethod
I also ran into this problem when running the examples in IDEA. The issue
seems to be that they depend on too many jars and the classpath has a
length limit. So I imported the assembly jar and put it at the head of the
dependency list, and it works.
Thanks,
Qiuzhuang
On Wed, Jun 11, 2014 at 10
Hi,
I customized the localRepository tag in MVN_HOME/conf/settings.xml to
manage my local Maven jars:
F:/Java/maven-build/.m2/repository
However, when I build Spark with SBT, it seems to still use the default
.m2 repository under
Path.userHome + "/.m2/repository"
How should I let SBT pick up my c
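A generic way to make an sbt build see a non-default local Maven repository is to add it as a file resolver; a minimal sketch of plain build.sbt settings (generic sbt usage, not a Spark-build-specific switch):

// Sketch: resolve artifacts from the custom local Maven repository above.
resolvers += "custom-local-m2" at "file:///F:/Java/maven-build/.m2/repository"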
Sorry, I should send to the new dev spark address instead.
Hi,
I have a question on the removeRdd method in BlockManagerMasterActor.scala
about asking the slave actors to remove an RDD. In this piece of code,
Future.sequence(blockManagerInfo.values.map { bm =>
  bm.slaveActor.ask(removeMsg)(akkaTimeout).mapTo[Int]
}.toSeq)
it asks all blockManagerInf
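For anyone reading along, the pattern fans out one ask per block manager and gathers the per-slave results into a single Future; a minimal self-contained sketch using plain Futures in place of Akka's ask, with illustrative names:

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Sketch: one future per "slave", collected into a Future[Seq[Int]].
val slaves = Seq("bm-1", "bm-2", "bm-3")
val removals: Future[Seq[Int]] =
  Future.sequence(slaves.map { id => Future { 1 } })  // 1 stands in for the removed-block count
println(Await.result(removals, 10.seconds).sum)       // total blocks reported removed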
We use the jarjar Ant plugin task to assemble everything into one fat jar.
Qiuzhuang
On Wed, Feb 26, 2014 at 11:26 AM, Evan chan wrote:
> Actually you can control exactly how sbt assembly merges or resolves
> conflicts. I believe the default settings however lead to order which
> cannot be controlled.
>
> I
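On the sbt-assembly side, conflict handling is driven by the merge-strategy setting; a minimal build.sbt sketch (assumes a recent sbt-assembly plugin; the cases shown are illustrative):

// Sketch: decide how duplicate entries from different jars are merged.
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard  // drop manifests/signatures
  case "reference.conf"              => MergeStrategy.concat   // concatenate Typesafe configs
  case _                             => MergeStrategy.first    // otherwise take the first copy
}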