Hi Xiangrui,
Could you please point to the IPM solver that you have positive results
with? I was planning to compare with CVX, KNITRO from Professor Nocedal,
and probably MOSEK... I don't have a CPLEX license, so I won't be able to
do that comparison...
My experiments so far tell me that ADMM-based
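(For context on the comparison being set up here: the standard scaled-form
ADMM iteration for minimizing f(x) + g(z) subject to Ax + Bz = c, per Boyd
et al., is

\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_x \; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^k - c + u^k\rVert_2^2 \\
z^{k+1} &= \operatorname*{arg\,min}_z \; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^k\rVert_2^2 \\
u^{k+1} &= u^k + Ax^{k+1} + Bz^{k+1} - c
\end{aligned}

so, unlike an interior-point method, each iteration needs only cheap
subproblem solves rather than a full Newton step.)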
I was actually talking to tgraves today at the summit about this.
Based on my understanding, the sizes we track and send (which are
unfortunately O(M*R) in total, regardless of how we change the
implementation -- whether we send them via the task or via the
MapOutputTracker) are only used to compute maxBytesInFlight.
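In case it helps to see where those sizes end up, here is a minimal sketch
of how per-block size estimates can cap concurrent fetches (the function and
variable names are mine, not Spark's; the real fetch iterator also groups
blocks by remote host):

// Sketch: chunk (blockId, estimatedSize) pairs into fetch requests so that
// several requests can be in flight without exceeding maxBytesInFlight.
def groupIntoRequests(
    blocks: Seq[(String, Long)],
    maxBytesInFlight: Long): Seq[Seq[(String, Long)]] = {
  // Target each request at a fraction of the cap so ~5 can run in parallel.
  val targetRequestSize = math.max(maxBytesInFlight / 5, 1L)
  val requests = scala.collection.mutable.ArrayBuffer.empty[Seq[(String, Long)]]
  var current = scala.collection.mutable.ArrayBuffer.empty[(String, Long)]
  var currentBytes = 0L
  for ((id, size) <- blocks) {
    current += ((id, size))
    currentBytes += size
    if (currentBytes >= targetRequestSize) {
      requests += current.toSeq
      current = scala.collection.mutable.ArrayBuffer.empty[(String, Long)]
      currentBytes = 0L
    }
  }
  if (current.nonEmpty) requests += current.toSeq
  requests.toSeq
}

The point being: without per-block sizes there is no way to decide where to
cut these request boundaries.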
> b) Instead of pulling this information, push it to executors as part
> of task submission. (What Patrick mentioned ?)
> (1) a.1 from above is still an issue for this.
I don't understand what problem a.1 is. In this case, we don't need to do
caching, right?
> (2) Serialized task size is also a concern
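To put rough, illustrative numbers on that size concern (figures assumed,
not from this thread): with M = 50,000 map tasks and 8 bytes per size entry,
each reduce task would carry 50,000 x 8 B ~= 400 KB of status data, and
across R = 50,000 reduce tasks that is ~20 GB of duplicated payload.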
Also this one, in the warning log:
E0702 11:35:08.869998 17840 slave.cpp:2310] Container
'af557235-2d5f-4062-aaf3-a747cb3cd0d1' for executor
'20140616-104524-1694607552-5050-26919-1' of framework
'20140702-113428-1694607552-5050-17766-' failed to start: Failed to
fetch URIs for container 'af557235-
Here is the log:
E0702 10:32:07.599364 14915 slave.cpp:2686] Failed to unmonitor container
for executor 20140616-104524-1694607552-5050-26919-1 of framework
20140702-102939-1694607552-5050-14846-: Not monitored
2014-07-02 1:45 GMT+08:00 Aaron Davidson:
> Can you post the logs from any of the dying executors?
Has anyone tried running PySpark driver code in Jython, preferably by
calling Python code within Java code?
calling python code within Java code?
I know CPython is the only interpreter tested because of the need to
support C extensions.
But in my case, C extensions would be called on the worker, not in the
driver.
And being able to
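For what it's worth, here is a minimal, untested sketch of the JVM-to-Jython
direction (the object name is mine; it assumes the jython-standalone jar is
on the classpath -- whether the PySpark driver itself runs under Jython is
exactly the open question):

// Sketch: execute Python code and exchange values from JVM code via Jython.
import org.python.util.PythonInterpreter
import org.python.core.PyInteger

object JythonSketch {
  def main(args: Array[String]): Unit = {
    val interp = new PythonInterpreter()
    interp.set("x", new PyInteger(21))   // push a JVM value into Python
    interp.exec("y = x * 2")             // executed by Jython, not CPython
    val y = interp.get("y", classOf[Integer])
    println(s"y = $y")                   // y = 42
  }
}

Since Jython runs in-process on the JVM, this avoids the Py4J socket hop,
but anything that needs a C extension would still have to stay on the
CPython workers.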
Can you post the logs from any of the dying executors?
On Tue, Jul 1, 2014 at 1:25 AM, qingyang li wrote:
> I am using Mesos 0.19 and Spark 0.9.0; the Mesos cluster is started. When I
> use spark-shell to submit a job, the tasks are always lost. Here is the
> log:
> --
> 14/07/01 16:24
Hi Bert,
There is a specific pull request process if you wish to share the code:
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark
I would be glad to benchmark your ANN implementation by running some of the
experiments that we run with the other ANN toolkits. I am also
We had considered both approaches (if I understood the suggestions right):
a) Pulling only the map output statuses for the tasks that run on the
reducer, by modifying the Actor. (Probably along the lines of what Aaron
described?)
The performance implications of this were bad:
1) We can't cache the serialized result
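To make point (1) concrete, here is a sketch of why the full-array pull is
cacheable while per-reducer filtering is not (class and method names are
mine, loosely modeled on MapOutputTracker):

import scala.collection.mutable

// Hypothetical status server; MapStatus shape is simplified.
case class MapStatus(location: String, sizes: Array[Byte])

class StatusServer(statuses: Map[Int, Array[MapStatus]]) {
  private val cache = mutable.Map.empty[Int, Array[Byte]]

  // Full-array pull: serialize once per shuffle, then replay the same
  // bytes to every reducer that asks.
  def getSerialized(shuffleId: Int): Array[Byte] = synchronized {
    cache.getOrElseUpdate(shuffleId, serialize(statuses(shuffleId)))
  }

  // Per-reducer pull: the payload differs for every reducer, so each
  // request pays the serialization cost again.
  def getSerializedFor(shuffleId: Int, reduceId: Int): Array[Byte] =
    serialize(statuses(shuffleId).map(s =>
      s.copy(sizes = Array(s.sizes(reduceId)))))

  private def serialize(a: Array[MapStatus]): Array[Byte] = {
    val bos = new java.io.ByteArrayOutputStream()
    val oos = new java.io.ObjectOutputStream(bos)
    oos.writeObject(a)
    oos.close()
    bos.toByteArray
  }
}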
I am using Mesos 0.19 and Spark 0.9.0; the Mesos cluster is started. When I
use spark-shell to submit a job, the tasks are always lost. Here is the
log:
--
14/07/01 16:24:27 INFO DAGScheduler: Host gained which was in lost list
earlier: bigdata005
14/07/01 16:24:27 INFO TaskSetManager: Sta
If I remember correctly, similar or identical errors happened with other
Hadoop versions. I need to rebuild with those and compare the logs.
On Tue, Jul 1, 2014 at 1:04 AM, Patrick Wendell wrote:
> Do those also happen if you run other Hadoop versions (e.g., try 1.0.4)?
>
> On Tue, Jul 1, 2014 at 1:0
Yeah, I created a JIRA a while back to piggy-back the map status info
on top of the task (I honestly think it will be a small change). There
isn't a good reason to broadcast the entire array, and it can be an
issue during large shuffles.
- Patrick
On Mon, Jun 30, 2014 at 7:58 PM, Aaron Davidson wrote:
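For reference on why the per-task payload can stay small: Spark stores each
tracked block size as a single log-scale byte. A sketch of that encoding
(modeled on MapStatus.compressSize in the 1.x codebase; treat the exact
constants as assumptions):

// One byte per block size, accurate to roughly 10%.
object SizeCompression {
  private val LOG_BASE = 1.1

  def compressSize(size: Long): Byte = {
    if (size == 0) 0.toByte
    else if (size <= 1L) 1.toByte
    else math.min(255, math.ceil(math.log(size) / math.log(LOG_BASE)).toInt).toByte
  }

  // Invert the encoding, within the ~10% error introduced above.
  def decompressSize(compressed: Byte): Long = {
    if (compressed == 0) 0L
    else math.pow(LOG_BASE, compressed & 0xFF).toLong
  }
}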
Do those also happen if you run other Hadoop versions (e.g., try 1.0.4)?
On Tue, Jul 1, 2014 at 1:00 AM, Taka Shinagawa wrote:
> Since Spark 1.0.0, I've been seeing multiple errors when running sbt test.
>
> I ran the following commands from Spark 1.0.1 RC1 on Mac OS X 10.9.2.
>
> $ sbt/sbt clean
>
Since Spark 1.0.0, I've been seeing multiple errors when running sbt test.
I ran the following commands from Spark 1.0.1 RC1 on Mac OS X 10.9.2.
$ sbt/sbt clean
$ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly
$ sbt/sbt test
I'm attaching the log file generated by the sbt test.
Here's the summary