Re: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder"

2015-07-10 Thread Mulugeta Mammo
No. Works perfectly. On Fri, Jul 10, 2015 at 3:38 PM, liangdianpeng wrote: > if the class inside the spark_XXX.jar was damaged > > > [Sent from NetEase Mail mobile] > > > On 2015-07-11 06:13 , Mulugeta Mammo wrote: > > Hi, > > My spark job runs without error, but once it completes

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder"

2015-07-10 Thread Mulugeta Mammo
Hi, My spark job runs without error, but once it completes I get this message and the app is logged as an "incomplete application" in my spark-history: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder" SLF4J: Defaulting to no-operation (NOP) logger implementation SLF4J: See http://www
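The warning means no SLF4J binding jar was found on the classpath of the JVM that prints it, so SLF4J falls back to the no-op logger. A minimal sbt sketch (Scala) that bundles exactly one binding is below; the artifact choice and version numbers are assumptions, so align them with whatever your cluster already ships:

    // build.sbt -- minimal sketch; versions are assumed, not taken from the thread
    libraryDependencies ++= Seq(
      "org.slf4j" % "slf4j-api"     % "1.7.12",
      "org.slf4j" % "slf4j-log4j12" % "1.7.12"  // exactly one binding on the classpath silences the NOP fallback
    )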

Re: Setting JVM heap start and max sizes, -Xms and -Xmx, for executors

2015-07-02 Thread Mulugeta Mammo
is time. > > -Todd > > > > On Thu, Jul 2, 2015 at 4:13 PM, Mulugeta Mammo > wrote: > >> thanks, but my use case requires I specify different start and max heap >> sizes. Looks like Spark sets start and max sizes to the same value. >> >> On Thu, Jul 2, 2015 a

Re: Setting JVM heap start and max sizes, -Xms and -Xmx, for executors

2015-07-02 Thread Mulugeta Mammo
he.org/docs/latest/configuration.html>: > spark.executor.memory (default 512m): Amount of memory to use per executor process, in > the same format as JVM memory strings (e.g. 512m, 2g). > > -Todd > > > > On Thu, Jul 2, 2015 at 3:36 PM, Mulugeta Mammo > wrote: > >> tried that one and it t

Re: Setting JVM heap start and max sizes, -Xms and -Xmx, for executors

2015-07-02 Thread Mulugeta Mammo
ns > > Which is documented in the configuration guide: > spark.apache.org/docs/latest/configuration.html > On 2 Jul 2015 9:06 pm, "Mulugeta Mammo" wrote: > >> Hi, >> >> I'm running Spark 1.4.0, I want to specify the start and max size (

Setting JVM heap start and max sizes, -Xms and -Xmx, for executors

2015-07-02 Thread Mulugeta Mammo
Hi, I'm running Spark 1.4.0 and I want to specify the start and max sizes (-Xms and -Xmx) of the JVM heap for my executors. I tried: executor.cores.memory="-Xms1g -Xmx8g" but it doesn't work. How do I specify this? Appreciate your help. Thanks,
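For reference, the route the replies in this thread point to is spark.executor.memory, which Spark 1.4 applies as both -Xms and -Xmx on the executor JVM, so only a single size is configurable. A minimal Scala sketch under that assumption (the app name and the 8g value are placeholders):

    // Minimal sketch, assuming Spark 1.4.x; "heap-sizing-example" and 8g are placeholders.
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("heap-sizing-example")
      .set("spark.executor.memory", "8g")   // executors are launched with -Xms8g -Xmx8g
      // spark.executor.extraJavaOptions is not a workaround: Spark rejects -Xms/-Xmx flags there

    val sc = new SparkContext(conf)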

Re: Can't build Spark

2015-06-02 Thread Mulugeta Mammo
> Scala version you used > > Thanks > > On Tue, Jun 2, 2015 at 2:50 PM, Mulugeta Mammo > wrote: > >> Building Spark is throwing errors, any ideas? >> >> >> [FATAL] Non-resolvable parent POM: Could not transfer artifact >> org.apache:apache:pom:

Can't build Spark

2015-06-02 Thread Mulugeta Mammo
Building Spark is throwing errors, any ideas? [FATAL] Non-resolvable parent POM: Could not transfer artifact org.apache:apache:pom:14 from/to central ( http://repo.maven.apache.org/maven2): Error transferring file: repo.maven.apache.org from http://repo.maven.apache.org/maven2/org/apache/apache/1

Building Spark for Hadoop 2.6.0

2015-06-01 Thread Mulugeta Mammo
Does this build Spark for Hadoop version 2.6.0? build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests clean package Thanks!

Re: Value for SPARK_EXECUTOR_CORES

2015-05-28 Thread Mulugeta Mammo
> spark.executor.cores. Memory fraction and safety fraction default to 0.2 > and 0.8 respectively. > > I'd test spark.executor.cores with 2, 4, 8 and 16 and see what makes your > job run faster. > > > -- > Ruslan Dautkhanov > > On Wed, May 27, 2015 at 6:46 PM, Mul
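A quick worked version of the fractions quoted above, assuming they refer to the Spark 1.x defaults spark.shuffle.memoryFraction = 0.2 and spark.shuffle.safetyFraction = 0.8 (the 8 GB executor size is only an illustration):

    // Illustrative arithmetic only; the fraction values are assumed Spark 1.x shuffle defaults.
    val executorMemoryGb = 8.0
    val memoryFraction   = 0.2   // assumed spark.shuffle.memoryFraction
    val safetyFraction   = 0.8   // assumed spark.shuffle.safetyFraction

    val shuffleMemoryGb = executorMemoryGb * memoryFraction * safetyFraction
    println(f"usable shuffle memory per executor: about $shuffleMemoryGb%.2f GB")  // 8 * 0.2 * 0.8 = 1.28 GB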

Hyperthreading

2015-05-28 Thread Mulugeta Mammo
Hi guys, Does SPARK_EXECUTOR_CORES assume hyperthreading? For example, if I have 4 cores with 2 threads per core, should SPARK_EXECUTOR_CORES be 4*2 = 8 or just 4? Thanks,

Value for SPARK_EXECUTOR_CORES

2015-05-27 Thread Mulugeta Mammo
My executor has the following spec (lscpu): CPU(s): 16 Core(s) per socket: 4 Socket(s): 2 Thread(s) per core: 2 The logical CPU count is obviously 4*2*2 = 16. My question is what value Spark expects in SPARK_EXECUTOR_CORES: the logical CPU count (16) or the total # of physical cores (4 * 2 = 8)? Thanks
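A worked version of that arithmetic (plain Scala for illustration, not a Spark API; whether to feed SPARK_EXECUTOR_CORES logical or physical cores is the tuning question the reply above suggests benchmarking):

    // Derived from the lscpu output quoted above.
    val sockets        = 2
    val coresPerSocket = 4
    val threadsPerCore = 2

    val physicalCores = sockets * coresPerSocket       // 2 * 4 = 8
    val logicalCpus   = physicalCores * threadsPerCore // 8 * 2 = 16

    // SPARK_EXECUTOR_CORES is a count of concurrent task slots per executor, so either
    // figure can be used; 8 and 16 are both worth benchmarking, per the earlier reply.
    println(s"physical cores = $physicalCores, logical CPUs = $logicalCpus")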