Hi, Cheng Lian,
Thanks, printing stdout/stderr of the forked process is more reasonable.
On 2014/8/19 13:35, Cheng Lian wrote:
The exception indicates that the forked process didn't execute as expected,
thus the test case /should/ fail.
Instead of replacing the exception with a |logWarning|,
Hi,
We are running the snapshots (new Spark features) on YARN and I was
wondering whether the web UI is available in YARN mode...
The deployment documentation does not mention the web UI in YARN mode...
Is it available?
Thanks.
Deb
The exception indicates that the forked process didn't execute as
expected, thus the test case *should* fail.
Instead of replacing the exception with a logWarning, capturing and
printing stdout/stderr of the forked process can be helpful for diagnosis.
Currently the only information we have at h
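Cheng Lian's point above - capture and print the child process's stdout/stderr so that a non-zero exit carries diagnosable context instead of just an exit code - can be sketched outside Spark with Python's subprocess module. This is a minimal illustration of the idea only, not Spark's actual `Utils.executeAndGetOutput`; `run_and_capture` is a made-up helper name.

```python
import subprocess
import sys

def run_and_capture(cmd):
    # Run the forked process, capturing its stdout and stderr.
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        # Surface the child's output along with the exit code, so a
        # failing test carries enough context for diagnosis.
        raise RuntimeError(
            f"command {cmd!r} exited with {proc.returncode}\n"
            f"stdout:\n{proc.stdout}\n"
            f"stderr:\n{proc.stderr}"
        )
    return proc.stdout

# A child process that fails and writes to stderr: the raised error
# now contains the child's own diagnostics, not just the exit code.
try:
    run_and_capture(
        [sys.executable, "-c", "import sys; sys.stderr.write('boom'); sys.exit(2)"]
    )
except RuntimeError as e:
    print("boom" in str(e))
```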
hi, all
I noticed that Jenkins may also throw this error when running
tests (https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18688/consoleFull).
This is because in Utils.executeAndGetOutput the process's exit code is not 0;
maybe we should logWarning here rather than throw a
Hey Gary,
There are a couple of blockers in Spark core and SQL - but we're quite close.
The goal was to have rc1 on Friday (ish) of last week... I think by tonight
I will be able to cut one. If not, I'll cut a preview release tonight that
does a full package but doesn't trigger an official vote yet
I understand there must still be work being done that is preventing the
cutting of an RC; are the specific remaining items tracked just through Jira?
I've been trying different approaches to this: populating the trie on the
driver and serializing the instance to the executors, broadcasting the strings in
an array and populating the trie on the executors, and variants of what I'm
broadcasting or serializing. All approaches seem to have a memory is
I think there's some discussion of this at
https://issues.apache.org/jira/browse/SPARK-2387 and
https://github.com/apache/spark/pull/1328.
- Josh
On Mon, Aug 18, 2014 at 9:46 AM, zycodefish
wrote:
> Hi all,
>
> I'm reading the implementation of the shuffle in Spark.
> My understanding is that
Hi all,
I'm reading the implementation of the shuffle in Spark.
My understanding is that it does not overlap with the upstream stage.
Would it be helpful to overlap the computation of the upstream stage with the
shuffle (I mean the network copy, as in Hadoop)? If so, is there any plan to
implement it in t
Not sure exactly how you use it. My understanding is that in Spark it is
better to keep the overhead on the driver as low as possible. Is it possible to
broadcast the trie to the executors, do the computation there, and then
aggregate the counters (??) in the reduce phase?
Thanks.
Zhan Zhang
On Aug 18, 201
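Zhan's suggested pattern - build the lookup structure once on the driver, broadcast it, count matches on the executors, and merge the per-partition counters in the reduce phase - can be illustrated in plain Python. There is no Spark here; the prefix set, `count_matches`, and the sample partitions are all invented stand-ins for the broadcast trie and the real data.

```python
from collections import Counter
from functools import reduce

# "Driver" side: build the shared structure once. A plain set of
# prefixes stands in for the trie that would be broadcast.
prefixes = {"ab", "abc", "b"}

def count_matches(partition, shared):
    # "Executor" side: use the broadcast value read-only and emit a
    # local counter for this partition only.
    counts = Counter()
    for word in partition:
        for p in shared:
            if word.startswith(p):
                counts[p] += 1
    return counts

# Sample data split into two partitions.
partitions = [["abc", "abd"], ["bcd", "abcde"]]

# "Reduce" phase: merge the per-partition counters back on the driver,
# so only small counter objects travel, not the matched strings.
total = reduce(lambda a, b: a + b,
               (count_matches(p, prefixes) for p in partitions))
print(total)
```

In real Spark this would be `sc.broadcast(prefixes)` plus a `mapPartitions`/`reduce` over an RDD; the point of the sketch is only the shape of the data flow.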
If you are willing to compile it, "The markdown code can be compiled to
HTML using the [Jekyll tool](http://jekyllrb.com)." More in docs/README.md.
On Mon, Aug 18, 2014 at 9:00 AM, Stephen Boesch wrote:
> Which viewer is capable of seeing all of the content in the spark docs
> -including the (a
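For reference, the build step mentioned above amounts to something like the following command fragment (assuming Ruby and Jekyll are installed; docs/README.md in the Spark repository has the authoritative, version-specific instructions):

```shell
# From the root of a Spark checkout:
cd docs
jekyll build   # generated HTML ends up in docs/_site
```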
Hi Zhan,
Thanks for looking into this. I'm actually using the hash map as an example
of the simplest snippet of code that is failing for me. I know that this is
just the word count. In my actual problem I'm using a Trie data structure
to find substring matches.
On Sun, Aug 17, 2014 at 11:35 PM, Z
Which viewer is capable of seeing all of the content in the Spark docs -
including the (apparent) extensions?
An example page:
https://github.com/apache/spark/blob/master/docs/mllib-linear-methods.md
Local MD viewers/editors that I have tried include: mdcharm, retext and
haroopad: one of thes