+1 (non-binding)
* Built the release from source.
* Compiled Java and Scala apps that interact with HDFS against it (a sketch of one such app follows below).
* Ran them in local mode.
* Ran them against a pseudo-distributed YARN cluster in both yarn-client mode and yarn-cluster mode.
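For reference, the smoke-test apps were along these lines (the object name
and HDFS path here are placeholders, not the exact code):

import org.apache.spark.{SparkConf, SparkContext}

// Minimal HDFS smoke test: read a file and count its lines.
// The hdfs:// path below is a placeholder; point it at a real file.
object HdfsSmokeTest {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("HdfsSmokeTest")
    val sc = new SparkContext(conf)
    val lines = sc.textFile("hdfs:///tmp/smoke-test.txt")
    println("line count: " + lines.count())
    sc.stop()
  }
}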
On Tue, May 13, 2014 at 9:09 PM, witgo wrote:
> Yo
I'm cancelling this vote in favor of rc6.
On Tue, May 13, 2014 at 8:01 AM, Sean Owen wrote:
> On Tue, May 13, 2014 at 2:49 PM, Sean Owen wrote:
>> On Tue, May 13, 2014 at 9:36 AM, Patrick Wendell wrote:
>>> The release files, including signatures, digests, etc. can be found at:
>>> http://peopl
This vote is cancelled in favor of rc6.
On Wed, May 14, 2014 at 1:04 PM, Patrick Wendell wrote:
> I'm cancelling this vote in favor of rc6.
>
> On Tue, May 13, 2014 at 8:01 AM, Sean Owen wrote:
>> On Tue, May 13, 2014 at 2:49 PM, Sean Owen wrote:
>>> On Tue, May 13, 2014 at 9:36 AM, Patrick Wen
When using map() and lookup() in conjunction, I get an exception (each
works fine independently). I'm using Spark 0.9.0 / Scala 2.10.3:
val a = sc.parallelize(Array(11))
val m = sc.parallelize(Array((11,21)))
a.map(m.lookup(_)(0)).collect
14/05/14 15:03:35 ERROR Executor: Exception in task ID 23
sc
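lookup() is a driver-side action, so calling it inside map() ships the RDD
reference into an executor closure, where it cannot be used; Spark does not
support nested RDD operations. A sketch of two common workarounds, reusing
the same a and m as above:

// Collect the small RDD into a driver-side Map and close over that:
val table = m.collectAsMap()
a.map(table(_)).collect()    // Array(21)

// Or, for larger tables, key both RDDs and join:
a.map(x => (x, ())).join(m).map { case (_, (_, v)) => v }.collect()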
Please vote on releasing the following candidate as Apache Spark version 1.0.0!
This patch has a few minor fixes on top of rc5. I've also built the
binary artifacts with Hive support enabled so people can test this
configuration. When we release 1.0 we might just release both vanilla
and Hive-enab
Hi Sandy,
I assume you are referring to the caching added to datanodes via the new
caching API on the NN? (To preemptively mmap blocks.)
I have not looked at it in detail, but does the NN tell us about this in
block locations?
If so, we can simply make those process-local instead of node-local for
executors on th
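As I understand it, the NN does surface this: with the HDFS centralized
cache management work (Hadoop 2.3+), getFileBlockLocations() reports cached
replicas through BlockLocation.getCachedHosts() alongside the usual
getHosts(). A rough sketch (the file path is a placeholder):

import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(sc.hadoopConfiguration)
val status = fs.getFileStatus(new Path("/some/file"))
// Cached hosts come back per block, next to the regular replica hosts.
for (loc <- fs.getFileBlockLocations(status, 0, status.getLen)) {
  println("hosts=" + loc.getHosts.mkString(",") +
    " cached=" + loc.getCachedHosts.mkString(","))
}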
Hi Matei,
Yes, I'm 100% positive the jar on the executors is the same version. I am
building and deploying everything myself. Additionally, while debugging the
issue, I forked Spark's git repo and added extra logging, which I could see
in the driver and executors. These debugging jars exhibit
SHA-1 is being end-of-lifed, so I’d actually say switch to SHA-512 for all
of them instead.
On May 13, 2014, at 6:49 AM, Sean Owen wrote:
> On Tue, May 13, 2014 at 9:36 AM, Patrick Wendell wrote:
>> The release files, including signatures, digests, etc. can be found at:
>> http://people.apache.org/
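For concreteness, a SHA-512 digest is easy to produce on the JVM as well (a
sketch; the artifact name is a placeholder, and release tooling would more
likely use gpg or shasum -a 512):

import java.security.MessageDigest
import java.nio.file.{Files, Paths}

// Read the artifact and print its SHA-512 digest as lowercase hex.
val bytes = Files.readAllBytes(Paths.get("spark-1.0.0.tgz"))
val sha512 = MessageDigest.getInstance("SHA-512").digest(bytes)
println(sha512.map("%02x".format(_)).mkString)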
The docs for how to run Spark on Mesos have changed very little since
0.6.0, but setting it up is much easier now than it was then. Does it make
sense to revamp them with the changes below?
You no longer need to build Mesos yourself, since pre-built versions are
available from Mesosphere: http://mesosphere.io/d
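The setup itself now mostly amounts to pointing Spark at the Mesos master
and at a Spark package the slaves can download. A sketch of the
current-style configuration (the master host and executor URI are
placeholders):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("mesos://host:5050")  // Mesos master; placeholder host
  .setAppName("MesosExample")
  // URI from which executors fetch the Spark package (placeholder):
  .set("spark.executor.uri", "hdfs:///spark/spark-1.0.0-bin.tgz")
val sc = new SparkContext(conf)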