We use fine-grained mode. Coarse-grained mode keeps JVMs around, which often
leads to OOMs; those in turn kill the entire executor, causing entire
stages to be retried. In fine-grained mode, only the task fails and
subsequently gets retried, without taking out an entire stage or worse.
On Tue, Nov 3
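For reference, a hedged sketch of the knob involved: on Spark-on-Mesos the choice
between the two modes is the `spark.mesos.coarse` property (the master URL and app
name below are placeholders, not taken from this thread).

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch: explicitly request Mesos fine-grained mode by disabling
    // coarse-grained scheduling. Master URL and app name are placeholders.
    val conf = new SparkConf()
      .setMaster("mesos://zk://zk-host:2181/mesos")
      .setAppName("fine-grained-example")
      .set("spark.mesos.coarse", "false") // false = one Mesos task per Spark task
    val sc = new SparkContext(conf)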
I'm also super interested in this. Flambo (our Clojure DSL) wraps the Java
API, and it would be great to have this.
On Tue, Apr 21, 2015 at 4:10 PM, Reynold Xin wrote:
> It can reuse. That's a good point and we should document it in the API
> contract.
>
>
> On Tue, Apr 21, 2015 at 4:06 PM, Punya
+1
On Sat, Jul 5, 2014 at 7:41 PM, Krishna Sankar wrote:
> +1
>
>- Compiled rc2 w/ CentOS 6.5, Yarn, Hadoop 2.2.0 - successful
>- Smoke Test (scala,python) (distributed cluster) - successful
>- We ran Java/SparkSQL (count, distinct et al.) on a ~250M-record RDD
>over HBase 0.98.3
s the ClassTag for
> java.lang.Object here — this is what our Java API does to say that type
> info is not known. So you can always pass that. Look at the Java code for
> how to get this ClassTag.
>
> Matei
>
> On Jun 1, 2014, at 4:33 PM, Soren Macbeth wrote:
>
> > I'm wri
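A minimal sketch of the trick being described, using the plain scala-reflect API
(nothing Spark-specific); from Java the same tag is reachable as
scala.reflect.ClassTag$.MODULE$.apply(Object.class).

    import scala.reflect.ClassTag

    // The "type info not known" tag: a ClassTag for java.lang.Object / AnyRef,
    // which can be passed wherever a ClassTag parameter is required but the
    // element type is not statically known.
    val objectTag: ClassTag[AnyRef] = ClassTag.AnyRef // equivalent to ClassTag.Object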
om Serializer?
> >
> > Matei
> >
> > On Jun 1, 2014, at 3:40 PM, Soren Macbeth wrote:
> >
> >>
> https://github.com/apache/spark/blob/v1.0.0/core/src/main/scala/org/apache/spark/serializer/Serializer.scala#L64-L66
> >>
> >> These changes to t
https://github.com/apache/spark/blob/v1.0.0/core/src/main/scala/org/apache/spark/serializer/Serializer.scala#L64-L66
These changes to the SerializerInstance make it really gross to call
serialize and deserialize from non-scala languages. I'm not sure what the
purpose of a ClassTag is, but if we co
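For context, a sketch of what a caller has to spell out against the 1.0
SerializerInstance API linked above when the implicit ClassTag is passed
explicitly (assuming the JavaSerializer(SparkConf) constructor; it stands in for
whatever serializer is configured).

    import java.nio.ByteBuffer
    import scala.reflect.ClassTag
    import org.apache.spark.SparkConf
    import org.apache.spark.serializer.JavaSerializer

    // Sketch: the ClassTag plumbing that the implicit parameter normally hides.
    val instance = new JavaSerializer(new SparkConf()).newInstance()
    val tag: ClassTag[AnyRef] = ClassTag.AnyRef // "type not known" tag

    val value: AnyRef = Vector(1, 2, 3)
    val bytes: ByteBuffer = instance.serialize(value)(tag)
    val back = instance.deserialize[AnyRef](bytes)(tag)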
ld systems, sbt and maven. sbt will
> >> download the correct version of scala, but with Maven you need to
> supply it
> >> yourself and set SCALA_HOME.
> >>
> >> It sounds like the instructions need to be updated-- perhaps create a
> JIRA?
> >>
Hello,
Following the instructions for building spark 1.0.0, I encountered the
following error:
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-antrun-plugin:1.7:run (default) on project
spark-core_2.10: An Ant BuildException has occured: Please set the
SCALA_HOME (or SCALA_LIBRARY_P
.18.1
On Mon, May 12, 2014 at 12:02 PM, Matei Zaharia wrote:
> Hey Soren, are you sure that the JAR you used on the executors is for the
> right version of Spark? Maybe they’re running an older version. The Kryo
> serializer should be initialized the same way on both.
>
> Matei
Hi,
What are the requirements of objects that are stored in RDDs?
I'm still struggling with an exception I've already posted about several
times. My questions are:
1) What interfaces are objects stored in RDDs expected to implement, if any?
2) Are collections (be they scala, java or otherwise) h
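A hedged sketch of the usual baseline, assuming the default JavaSerializer:
elements generally need to be java.io.Serializable (Scala case classes already
are), and Kryo relaxes that requirement. Class and field names below are
illustrative only.

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch: a plain class made java.io.Serializable so instances can be
    // stored in an RDD under the default JavaSerializer.
    class Record(val id: Long, val tags: Vector[String]) extends Serializable

    val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("rdd-elements"))
    val rdd = sc.parallelize(Seq(new Record(1L, Vector("a")), new Record(2L, Vector("b", "c"))))
    println(rdd.map(_.tags.size).reduce(_ + _)) // 3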
I finally managed to track down the source of the kryo issues that I was
having under mesos.
What happens is that, for a reason I haven't tracked down yet, a handful
of the scala collection classes from chill-scala don't get registered by the
mesos executors, but they do all get registered in th
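A hedged sketch of one way to pin the registration down on both sides: a custom
KryoRegistrator that runs chill-scala's AllScalaRegistrar, so the driver and the
executors see the same registrations in the same order (the registrator class
name is made up here, and chill-scala is assumed to be on the classpath).

    import com.esotericsoftware.kryo.Kryo
    import com.twitter.chill.AllScalaRegistrar
    import org.apache.spark.serializer.KryoRegistrator

    // Sketch: register the chill-scala collection serializers explicitly.
    class ChillScalaRegistrator extends KryoRegistrator {
      override def registerClasses(kryo: Kryo) {
        new AllScalaRegistrar()(kryo) // registers the Scala collection classes with Kryo
      }
    }

    // enabled via configuration, e.g.:
    //   spark.serializer        org.apache.spark.serializer.KryoSerializer
    //   spark.kryo.registrator  ChillScalaRegistrator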
at 10:24 PM, Reynold Xin wrote:
> Technically you only need to change the build file, and change part of a
> line in SparkEnv so you don't have to break your oath :)
>
>
>
> On Sun, May 4, 2014 at 10:22 PM, Soren Macbeth wrote:
>
> > that would violate my personal o
> On Sun, May 4, 2014 at 9:08 PM, Soren Macbeth wrote:
>
> > fwiw, it seems like it wouldn't be very difficult to integrate
> chill-scala,
> > since you're already using chill-java and probably get kryo serialization of
> > closures and all sorts of other scala
ithub.com/apache/spark/pull/642
>
>
> On Sun, May 4, 2014 at 3:54 PM, Soren Macbeth wrote:
>
> > Thanks for the reply!
> >
> > Ok, if that's the case, I'd recommend a note to that effect in the docs
> at
> > least.
> >
> > Just to giv
other projects are
> starting to use Kryo to serialize more Scala data structures, so I wouldn't
> be surprised if there is a way to work around this now. However, I don't
> have enough time to look into it at this point. If you do, please do post
> your findings. Thanks.
>
Apologies for the cross-list posts, but I've gotten zero response in the
user list and I guess this list is probably more appropriate.
According to the documentation, using the KryoSerializer for closures is
supported. However, when I try to set `spark.closure.serializer` to
`org.apache.spark.seri
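For context, a sketch of the configuration being attempted, using Spark 1.x
property names (whether the closure-serializer setting actually takes effect is
exactly the question here; the master URL is a placeholder).

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch: Kryo for data serialization and for closure serialization.
    val conf = new SparkConf()
      .setMaster("local[2]") // placeholder
      .setAppName("kryo-closures")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .set("spark.closure.serializer", "org.apache.spark.serializer.KryoSerializer")
    val sc = new SparkContext(conf)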