No. Works perfectly.
On Fri, Jul 10, 2015 at 3:38 PM, liangdianpeng
wrote:
> Could the class inside the spark_XXX.jar have been damaged?
>
>
> Sent from NetEase Mail mobile client
>
>
> On 2015-07-11 06:13, Mulugeta Mammo wrote:
>
> Hi,
>
> My spark job runs without error, but once it completes
Hi,
My Spark job runs without error, but once it completes I get this message
and the app is logged as an "incomplete application" in my Spark history server:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder"
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
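The SLF4J lines just mean that no SLF4J binding (for example slf4j-log4j12) was found on the classpath of the process that printed them, so logging falls back to a no-op implementation; on their own they are usually harmless. The "incomplete application" entry is normally a separate issue: the history server lists an application as incomplete while its event log still carries the .inprogress suffix, which typically means the job exited without stopping its SparkContext (sc.stop()). A minimal way to check, assuming event logging is enabled and spark.eventLog.dir is left at its default of /tmp/spark-events (adjust the path to your setting; the app IDs below are made up):

ls /tmp/spark-events/
# app-20150710153800-0001.inprogress   <- context never stopped; listed as incomplete
# app-20150710153800-0002              <- completed normally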
>
> -Todd
>
>
>
> On Thu, Jul 2, 2015 at 4:13 PM, Mulugeta Mammo
> wrote:
>
>> Thanks, but my use case requires that I specify different start and max heap
>> sizes. It looks like Spark sets the start and max sizes to the same value.
>>
>> On Thu, Jul 2, 2015 a
> Per the configuration guide <http://spark.apache.org/docs/latest/configuration.html>:
>
> spark.executor.memory (default: 512m) - Amount of memory to use per executor
> process, in the same format as JVM memory strings (e.g. 512m, 2g).
>
> -Todd
>
>
>
> On Thu, Jul 2, 2015 at 3:36 PM, Mulugeta Mammo
> wrote:
>
>> tried that one and it t
>
> Which is documented in the configuration guide:
> spark.apache.org/docs/latest/configuration.html
> On 2 Jul 2015 9:06 pm, "Mulugeta Mammo" wrote:
>
>> Hi,
>>
>> I'm running Spark 1.4.0, I want to specify the start and max size (
Hi,
I'm running Spark 1.4.0. I want to specify the start and max size (-Xms and
-Xmx) of the JVM heap for my executors. I tried:
executor.cores.memory="-Xms1g -Xmx8g"
but it doesn't work. How do I specify them?
Appreciate your help.
Thanks,
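For reference, the property the thread is circling around is spark.executor.memory (executor.cores.memory is not a real setting), and Spark hands that single value to the executor JVM as both -Xms and -Xmx, which is why separate start and max sizes cannot be requested this way. A minimal sketch of the usual ways to set it (the class and jar names are placeholders):

spark-submit --executor-memory 8g --class com.example.MyJob my-job.jar

# or, for every job, one line in conf/spark-defaults.conf:
# spark.executor.memory   8g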
> Scala version you used?
>
> Thanks
>
> On Tue, Jun 2, 2015 at 2:50 PM, Mulugeta Mammo
> wrote:
>
>> building Spark is throwing errors, any ideas?
>>
>>
>> [FATAL] Non-resolvable parent POM: Could not transfer artifact
>> org.apache:apache:pom:
building Spark is throwing errors, any ideas?
[FATAL] Non-resolvable parent POM: Could not transfer artifact
org.apache:apache:pom:14 from/to central (
http://repo.maven.apache.org/maven2): Error transferring file:
repo.maven.apache.org from
http://repo.maven.apache.org/maven2/org/apache/apache/1
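That error usually means Maven could not reach repo.maven.apache.org at all (DNS, firewall, or a required proxy), rather than anything being wrong with the Spark sources; if a proxy is needed it belongs in the <proxies> section of ~/.m2/settings.xml. A quick connectivity check against the exact file the build is failing on (URL reconstructed from the org.apache:apache:pom:14 coordinates above):

# a reachable repository should answer with HTTP 200
curl -I http://repo.maven.apache.org/maven2/org/apache/apache/14/apache-14.pom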
Does this build Spark for Hadoop version 2.6.0?
build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests clean
package
Thanks!
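For what it's worth, -Phadoop-2.6 together with -Dhadoop.version=2.6.0 is the documented way to build against Hadoop 2.6.x. One way to sanity-check the result afterwards, assuming a default Maven build of Spark 1.4 (the Scala directory and jar name may differ for your build):

# the assembly jar name encodes the Hadoop version it was built against
ls assembly/target/scala-2.10/
# e.g. spark-assembly-1.4.0-hadoop2.6.0.jar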
> ... spark.executor.cores. Memory fraction and safety fraction default to 0.2
> and 0.8 respectively.
>
> I'd test spark.executor.cores with 2, 4, 8 and 16 and see what makes your
> job run faster.
>
>
> --
> Ruslan Dautkhanov
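The truncated sentence above looks like the usual rule of thumb that memory available to a task is roughly spark.executor.memory x memory fraction x safety fraction / spark.executor.cores; assuming the 0.2 and 0.8 quoted are spark.shuffle.memoryFraction and its safety fraction in Spark 1.x, a worked example with an 8 GB executor:

8 GB x 0.2 x 0.8 / 2 cores  = ~640 MB per task
8 GB x 0.2 x 0.8 / 8 cores  = ~160 MB per task
8 GB x 0.2 x 0.8 / 16 cores = ~80 MB per task

which is why raising spark.executor.cores buys parallelism but leaves each task less memory.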
>
> On Wed, May 27, 2015 at 6:46 PM, Mulugeta Mammo wrote:
Hi guys,
Does SPARK_EXECUTOR_CORES assume hyper-threading? For example, if I
have 4 cores with 2 threads per core, should SPARK_EXECUTOR_CORES be
4*2 = 8 or just 4?
Thanks,
My executor has the following spec (lscpu):
CPU(s): 16
Core(s) per socket: 4
Socket(s): 2
Thread(s) per core: 2
The CPU count is obviously 4*2*2 = 16. My question is what value Spark is
expecting in SPARK_EXECUTOR_CORES: the CPU count (16) or the total number of
physical cores (4 * 2 = 8)?
Thanks
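With that lscpu output the box has 2 sockets x 4 cores x 2 threads = 16 logical CPUs and 4 x 2 = 8 physical cores. Spark's "cores" are really concurrent task slots rather than physical cores, so SPARK_EXECUTOR_CORES is normally set to however many logical CPUs you want each executor to keep busy (anything up to the 16 the OS reports), and benchmarking a few values, as suggested above, is the practical way to choose. A minimal sketch, with 8 as a placeholder starting point:

# conf/spark-env.sh (read in YARN mode)
export SPARK_EXECUTOR_CORES=8   # task slots per executor, not physical cores

# or per job on YARN:
# spark-submit --executor-cores 8 ...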