Hi Xiang,
This error also appears in client mode (perhaps the situation you were
referring to, which worked, was local mode?). However, the error is
expected and is not a bug.
This line in your snippet:
object Main extends A[String] { //...
is, after desugaring, equivalent to:
object M
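The desugared form is cut off above, so here is only a general sketch of the
underlying rule: statements in an object's body are compiled into the object's
constructor, which runs on first access in each JVM. The names below are
hypothetical, since the real definition of A isn't shown:

// Hypothetical stand-in for the A[T] in the snippet above.
class A[T]

object Main extends A[String] {
  // Everything in the object body, including whatever was elided by //...,
  // runs inside Main's constructor the first time Main is referenced.
  val words = Seq("a", "b", "c")
  println("Main initialised")
}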
Hi,
It is used jointly with a custom implementation of the `equals`
method. In Scala, you can override the `equals` method to change the
behaviour of `==` comparison. One example of this would be to compare
classes based on their parameter values (i.e. what case classes do).
Partitioners aren't case
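Roughly, a partitioner that wants value-based `==` would look like the sketch
below (a simplified illustration, not Spark's actual RangePartitioner code):

// A partitioner-like class that compares by its parameter values, the way
// a case class would, by overriding equals and hashCode together.
class MyRangePartitioner(val numPartitions: Int, val bounds: Array[Int]) {
  override def equals(other: Any): Boolean = other match {
    case p: MyRangePartitioner =>
      p.numPartitions == numPartitions && p.bounds.sameElements(bounds)
    case _ => false
  }

  // Must agree with equals: equal partitioners hash to the same value.
  override def hashCode: Int =
    31 * numPartitions + java.util.Arrays.hashCode(bounds)
}

Spark compares partitioners with == to decide whether two RDDs are
co-partitioned (and hence whether a shuffle can be skipped), which is why
value-based equality matters here.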
Pedantic note about hashCode and equals: the equality doesn't need to be
bidirectional; you just need to ensure that a.hashCode == b.hashCode when
a.equals(b). The bidirectional case is usually harder to satisfy due to the
possibility of collisions.
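A tiny illustration of that one-directional contract, using a classic
collision in the JVM's String hashing:

// Equal objects must agree on hashCode:
assert("Aa" == "Aa" && "Aa".hashCode == "Aa".hashCode)

// But agreeing hash codes do not imply equality: "Aa" and "BB"
// both hash to 2112 on the JVM, yet are clearly not equal.
assert("Aa".hashCode == "BB".hashCode)
assert("Aa" != "BB")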
Good info:
http://www.programcreek.com/2011/07/j
Andrew, you're correct; of course hashing is a one-way operation with
potential collisions.
On Wed, Sep 21, 2016 at 3:22 PM, Andrew Duffy wrote:
> Pedantic note about hashCode and equals: the equality doesn't need to be
> bidirectional, you just need to ensure that a.hashCode == b.hashCode when
> a
Thank you very much, sir! But what I want to know is whether the hashCode
overflow will cause trouble. Thank you!
Hi,
When we fetch Spark 2.0.0 as a Maven dependency, we automatically end up
with Hadoop 2.2 as a transitive dependency. I know multiple profiles are used to
generate the different tar.gz bundles that we can download. Is there by any
chance publications of Spark 2.0.0 with different classifier ac
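One common way to work around the transitive Hadoop 2.2 dependency, sketched
in sbt (the Hadoop artifact and version below are assumptions; use whatever
you actually run against):

// build.sbt sketch: drop the Hadoop client Spark pulls in transitively
// and depend explicitly on the Hadoop version you deploy on.
libraryDependencies ++= Seq(
  ("org.apache.spark" %% "spark-core" % "2.0.0")
    .exclude("org.apache.hadoop", "hadoop-client"),
  "org.apache.hadoop" % "hadoop-client" % "2.7.2"
)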
I am working on profiling TPCH queries for Spark 2.0. I see a lot of
temporary object creation (sometimes as much as the data size itself), which
is justified for the kind of processing Spark does. But from a production
perspective, is there a guideline on how much memory should be allocated
for process
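For orientation, the main knobs that control how executor heap is split
between Spark's own execution/storage memory and user objects look roughly
like this in Spark 2.0; the values below are placeholders, not
recommendations:

import org.apache.spark.sql.SparkSession

// Sketch of memory-related settings; the numbers are hypothetical and need
// to be tuned against the observed temporary-object churn.
val spark = SparkSession.builder()
  .appName("tpch-profiling")                      // hypothetical app name
  .config("spark.executor.memory", "8g")          // JVM heap per executor
  .config("spark.memory.fraction", "0.6")         // heap share for execution + storage
  .config("spark.memory.storageFraction", "0.5")  // part of that protected for cached data
  .getOrCreate()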
I'm working on packaging the 2.0.1 RC but encountered a problem: the R doc
fails to build. Can somebody take a look at the issue ASAP?
** knitting documentation of write.parquet
** knitting documentation of write.text
** knitting documentation of year
~/workspace/spark-release-docs/spark/R
~/workspace/s