-1
Two blocker bugs have been found after this RC.
https://issues.apache.org/jira/browse/SPARK-12089 can cause data corruption
when an external sorter spills data.
https://issues.apache.org/jira/browse/SPARK-12155 can prevent tasks from
acquiring memory even when the executor can indeed allocate memory.
Yes,
We have already used ALS in our production environment, and we also want to try
SVD++, but it has no Python interface.
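For reference, here is a minimal sketch of how we would call the existing Scala
entry point (this assumes GraphX's SVDPlusPlus API; the ratings and
hyperparameter values below are just placeholders):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.Edge
import org.apache.spark.graphx.lib.SVDPlusPlus

object SvdPlusPlusSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SvdPlusPlusSketch"))

    // Toy ratings: Edge(userId, itemId, rating)
    val ratings = sc.parallelize(Seq(
      Edge(1L, 101L, 4.0), Edge(1L, 102L, 3.0),
      Edge(2L, 101L, 5.0), Edge(2L, 103L, 2.0)))

    // rank, maxIters, minVal, maxVal, and the four regularization parameters
    val conf = new SVDPlusPlus.Conf(10, 5, 0.0, 5.0, 0.007, 0.007, 0.005, 0.015)

    // Returns the trained graph (user/item factors on the vertices) plus the global mean rating
    val (model, mean) = SVDPlusPlus.run(ratings, conf)
    println(s"Global mean rating: $mean, vertices: ${model.vertices.count()}")

    sc.stop()
  }
}

A Python interface would presumably need a wrapper around this, similar to how
other MLlib algorithms are exposed to PySpark.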
Any ideas? Thanks
-Allen
From: Yanbo Liang [mailto:yblia...@gmail.com]
Sent: December 3, 2015 10:30
To: 张志强(旺轩)
Cc: dev@spark.apache.org
Subject: Re: query on SVD++
You mean
Thanks Josh, created https://issues.apache.org/jira/browse/SPARK-12166
On Mon, Dec 7, 2015 at 4:32 AM, Josh Rosen wrote:
> I agree that we should unset this in our tests. Want to file a JIRA and
> submit a PR to do this?
>
> On Thu, Dec 3, 2015 at 6:40 PM Jeff Zhang wrote:
>
>> I tried to run HiveSparkSubmitSuite on my local box, but it fails.
Can you write a script to download and install the JDBC driver to the local
Maven repository if it's not already present? If we had that, we could just
invoke it as part of dev/run-tests.
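A rough sketch of what such a script could look like, written here in Scala for
concreteness (the driver coordinates and download URL are placeholders; the real
ones would come from whoever owns the integration tests):

import java.io.File
import java.net.URL
import scala.sys.process._

object InstallJdbcDriver {
  // Placeholder coordinates and URL for the JDBC driver to install.
  val groupId = "com.example.jdbc"
  val artifactId = "example-jdbc-driver"
  val version = "1.0.0"
  val downloadUrl = "https://example.com/drivers/example-jdbc-driver-1.0.0.jar"

  def main(args: Array[String]): Unit = {
    val installed = new File(
      sys.props("user.home"),
      s".m2/repository/${groupId.replace('.', '/')}/$artifactId/$version/$artifactId-$version.jar")
    if (installed.exists()) {
      println(s"Driver already present at $installed, nothing to do")
    } else {
      // Download the jar to a temp file, then install it into the local Maven repository
      val jar = File.createTempFile(artifactId, ".jar")
      (new URL(downloadUrl) #> jar).!
      Seq("mvn", "install:install-file",
        s"-Dfile=${jar.getAbsolutePath}",
        s"-DgroupId=$groupId", s"-DartifactId=$artifactId",
        s"-Dversion=$version", "-Dpackaging=jar").!
    }
  }
}

dev/run-tests could then shell out to this (or an equivalent bash version) before
the JDBC integration tests run.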
On Thu, Dec 3, 2015 at 5:55 PM Luciano Resende wrote:
>
>
> On Mon, Nov 30, 2015 at 1:53 PM, Josh Rosen
> w
Dear all, for one project I need to implement something so that Spark can read
data from a C++ process.
For high performance, I really hope to implement this through shared memory
between the C++ process and the JVM process.
It seems it may be possible to use named memory-mapped files and JNI to do this.
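For the JVM side, a plain java.nio memory map may already be enough, without any
JNI. A minimal sketch, assuming the C++ producer mmaps an ordinary file (the path
and record layout below are hypothetical):

import java.nio.channels.FileChannel
import java.nio.file.{Paths, StandardOpenOption}

object SharedMemoryReader {
  def main(args: Array[String]): Unit = {
    // Hypothetical path: the C++ producer is assumed to mmap a file under /dev/shm
    val path = Paths.get("/dev/shm/spark_cxx_buffer")
    val channel = FileChannel.open(path, StandardOpenOption.READ)
    try {
      // Map the whole file read-only; the JVM and the C++ process then share the same pages
      val buf = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size())
      // Hypothetical layout: a 4-byte record count written by the producer, then the payload
      val numRecords = buf.getInt()
      println(s"Producer published $numRecords records, ${channel.size()} bytes total")
    } finally {
      channel.close()
    }
  }
}

Synchronization between the writer and the reader (e.g. a header flag or a
separate control channel) would still have to be designed separately.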
I agree that we should unset this in our tests. Want to file a JIRA and
submit a PR to do this?
On Thu, Dec 3, 2015 at 6:40 PM Jeff Zhang wrote:
> I tried to run HiveSparkSubmitSuite on my local box, but it fails. The
> cause is that Spark is still using my local single-node Hadoop cluster when
>
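A minimal sketch of the kind of fix being discussed, assuming the suite launches
spark-submit as a child process (the helper below is hypothetical, not the actual
SPARK-12166 patch):

import scala.collection.JavaConverters._

object TestEnvUtil {
  // Run a command with Hadoop-related variables removed from the child environment,
  // so the test does not silently pick up a local cluster's configuration.
  def runWithoutHadoopEnv(cmd: Seq[String]): Int = {
    val pb = new ProcessBuilder(cmd.asJava)
    val env = pb.environment()
    Seq("HADOOP_CONF_DIR", "YARN_CONF_DIR", "HADOOP_HOME").foreach(k => env.remove(k))
    pb.inheritIO()
    pb.start().waitFor()
  }
}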