+1 (non-binding)

* Built the release from source.
* Compiled Java and Scala apps that interact with HDFS against it.
* Ran them in local mode.
* Ran them against a pseudo-distributed YARN cluster in both yarn-client
mode and yarn-cluster mode; a rough sketch of the kind of app and submit
commands is below.
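
For anyone repeating these checks, this is roughly the shape of the
Scala/HDFS smoke test I have in mind (the class name and HDFS paths are
illustrative placeholders, not the actual apps I compiled):

import org.apache.spark.{SparkConf, SparkContext}

object HdfsSmokeTest {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("HdfsSmokeTest")
    val sc = new SparkContext(conf)

    // Read a text file from HDFS (placeholder path) and count its lines.
    val lines = sc.textFile("hdfs:///tmp/smoke-test/input.txt")
    println("line count = " + lines.count())

    // Write a small result back to HDFS to exercise the write path.
    sc.parallelize(Seq("ok")).saveAsTextFile("hdfs:///tmp/smoke-test/output")

    sc.stop()
  }
}

Packaged as a jar, the same binary can then be run in each mode via
spark-submit with --master local, --master yarn-client, or
--master yarn-cluster.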


On Tue, May 13, 2014 at 9:09 PM, witgo <wi...@qq.com> wrote:

> You need to set:
> spark.akka.frameSize         5
> spark.default.parallelism    1
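
(For reference, a minimal sketch of one way to apply those two settings
programmatically; they could equally go in conf/spark-defaults.conf. The
app name is only illustrative.)

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("frameSize-repro")            // illustrative name
  .set("spark.akka.frameSize", "5")         // value suggested above
  .set("spark.default.parallelism", "1")    // value suggested above
val sc = new SparkContext(conf)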
>
>
>
>
>
> ------------------ Original ------------------
> From:  "Madhu";<ma...@madhu.com>;
> Date:  Wed, May 14, 2014 09:15 AM
> To:  "dev"<d...@spark.incubator.apache.org>;
>
> Subject:  Re: [VOTE] Release Apache Spark 1.0.0 (rc5)
>
>
>
> I just built rc5 on Windows 7 and tried to reproduce the problem described
> in
>
> https://issues.apache.org/jira/browse/SPARK-1712
>
> It works on my machine:
>
> 14/05/13 21:06:47 INFO DAGScheduler: Stage 1 (sum at <console>:17) finished in 4.548 s
> 14/05/13 21:06:47 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
> 14/05/13 21:06:47 INFO SparkContext: Job finished: sum at <console>:17, took 4.814991993 s
> res1: Double = 5.000005E11
>
> I used all defaults; no config files were changed.
> Not sure if that makes a difference...
>
>
>
> --
> View this message in context:
> http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Release-Apache-Spark-1-0-0-rc5-tp6542p6560.html
> Sent from the Apache Spark Developers List mailing list archive at
> Nabble.com.
