it seemed to be related to an Aggregator, so for tests we replaced it with
an ordinary Dataset.reduce operation, and now we got:
java.lang.NegativeArraySizeException
        at org.apache.spark.unsafe.types.UTF8String.getBytes(UTF8String.java:229)
        at org.apache.spark.unsafe.types.UTF8Strin
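
A minimal sketch of that substitution, for anyone following along: the
Aggregator below is a made-up stand-in (the real one from the job is not
shown in this thread); only the Aggregator-vs-reduce pattern is the point.

import org.apache.spark.sql.expressions.Aggregator
import org.apache.spark.sql.{Encoder, Encoders}

// hypothetical Aggregator: sums the lengths of strings in a Dataset[String]
object SumLengths extends Aggregator[String, Long, Long] {
  def zero: Long = 0L
  def reduce(buf: Long, s: String): Long = buf + s.length
  def merge(b1: Long, b2: Long): Long = b1 + b2
  def finish(buf: Long): Long = buf
  def bufferEncoder: Encoder[Long] = Encoders.scalaLong
  def outputEncoder: Encoder[Long] = Encoders.scalaLong
}

// assuming a SparkSession `spark` and a Dataset[String] `ds`
import spark.implicits._

// Aggregator version
val viaAggregator = ds.select(SumLengths.toColumn).first()

// ordinary Dataset.reduce version used for the test
val viaReduce = ds.map(_.length.toLong).reduce(_ + _)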
They should get printed if you turn on debug level logging.
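
One quick way to turn on debug level logging from the driver, assuming an
active SparkContext named sc (which specific logger prints the output in
question is not stated here):

// verbose, but surfaces DEBUG output application-wide
sc.setLogLevel("DEBUG")

// alternatively, in conf/log4j.properties:
// log4j.rootCategory=DEBUG, console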
On Fri, May 27, 2016 at 1:00 PM, Koert Kuipers wrote:
hello all,
after getting our unit tests to pass on spark 2.0.0-SNAPSHOT we are now
trying to run some algorithms at scale on our cluster.
unfortunately this means that when i see errors i have a harder time
boiling them down to a small reproducible example.
today we are running an iterative alg
Created JIRA: https://issues.apache.org/jira/browse/SPARK-15605
2016-05-27 1:02 GMT-07:00 Yanbo Liang:
This is because we do not have excellent coverage for Java-friendly
wrappers.
I found we only implement JavaParams, which is the wrapper for Scala
Params. We still need Java-friendly wrappers for the other traits that
extend Scala Params.
For example, in Scala we have:
trait HasLabelCol extends
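
For reference, the shared param trait being described looks roughly like
this in the Spark codebase (a sketch from memory of the generated shared
params, not quoted from the original mail):

package org.apache.spark.ml.param.shared

import org.apache.spark.ml.param.{Param, Params}

// generated shared param trait; a plain Scala trait like this is awkward
// to extend from Java, hence the need for Java-friendly wrappers
private[ml] trait HasLabelCol extends Params {
  final val labelCol: Param[String] =
    new Param[String](this, "labelCol", "label column name")
  final def getLabelCol: String = $(labelCol)
}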