Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/86#issuecomment-37381283
Updated patch takes review comments from @mridulm and @pwendell into
account.
spark.max.cores is now correctly handled. Jars passed in with --more-jars
are not add
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10507644
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -130,6 +130,16 @@ class SparkContext(
val isLocal = (master == "local" || ma
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10507796
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -130,6 +130,16 @@ class SparkContext(
val isLocal = (master == "local" ||
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/86#issuecomment-37382160
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/86#issuecomment-37382161
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/86#issuecomment-37382223
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13127/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/86#issuecomment-3738
Merged build finished.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/92
---
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/120#discussion_r10508210
--- Diff: docs/running-on-yarn.md ---
@@ -60,11 +60,11 @@ The command to launch the Spark application on the
cluster is as follows:
--jar \
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37383673
Updated patch addresses Patrick's comments
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37385544
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37385546
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37389759
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37389760
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13128/
---
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37394958
@pwendell Yes that's the thing. While the repo was down I could still build
the whole project from an empty repo. For artifacts like paho, where it's found
not in the 3 rep
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/123#issuecomment-37399739
Thank you. I also considered before that adding saveAsHBase to
PairRDDFunctions would bring a dependency problem. But I don't have another
idea to work around that. Now I realize that I
Github user haosdent closed the pull request at:
https://github.com/apache/spark/pull/123
---
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/123#issuecomment-37399830
Closing this; I will create a new issue if I finish the `spark-hbase`
`external` module. :-)
---
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/126#discussion_r10514209
--- Diff: core/src/main/scala/org/apache/spark/Dependency.scala ---
@@ -49,9 +49,28 @@ class ShuffleDependency[K, V](
@transient rdd: RDD[_ <: Produ
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37405632
Yes, the build failed for me with the error I put above.
Note that this PR would need to have the Maven build updated too.
---
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/120#discussion_r10516035
--- Diff: docs/running-on-yarn.md ---
@@ -60,11 +60,11 @@ The command to launch the Spark application on the
cluster is as follows:
--jar \
Github user qqsun8819 commented on the pull request:
https://github.com/apache/spark/pull/110#issuecomment-37407753
Modified the patch according to @aarondav's review.
---
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37411628
I'm getting a compile error building this against hadoop 0.23:
[ERROR]
yarn/alpha/src/main/scala/org/apache/spark/deploy/yarn/ExecutorLauncher.scala:231:
value
Folks,
I just want to point something out...
I haven't had time yet to sort it out and to think enough to give a
valuable, strict explanation of it -- even though, intuitively, I feel they
are a lot ===> need Spark people or time to move forward.
But here is the thing regarding *flatMap*.
Actually, it lo
On Wed, Mar 12, 2014 at 3:06 PM, andy petrella wrote:
> Folks,
>
> I just want to point something out...
> I haven't had time yet to sort it out and to think enough to give a
> valuable, strict explanation of it -- even though, intuitively, I feel they
> are a lot ===> need Spark people or time to move fo
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37415976
I fixed the above compile error and tried to run, but the executors returned
the following error:
Unknown/unsupported param List(--num-executor, 2)
Usage: org.a
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/91#discussion_r10521879
--- Diff: core/pom.xml ---
@@ -17,274 +17,260 @@
-->
http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instanc
GitHub user tgravescs opened a pull request:
https://github.com/apache/spark/pull/127
[SPARK-1232] Fix the hadoop 0.23 yarn build
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tgravescs/spark SPARK-1232
Alternatively you can r
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/127#issuecomment-37423955
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/127#issuecomment-37423957
Merged build started.
---
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/91#discussion_r10524194
--- Diff: core/pom.xml ---
@@ -17,274 +17,260 @@
-->
http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/127#issuecomment-37426702
Oh, you beat me to it. +1.
---
GitHub user tgravescs opened a pull request:
https://github.com/apache/spark/pull/128
[SPARK-1198] Allow pipes tasks to run in different sub-directories
This works as-is on Linux/Mac/etc., but doesn't cover working on Windows.
Here I use ln -sf for symlinks. Putting this up for co
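As a rough illustration of a more portable alternative to shelling out to
ln -sf, here is a minimal Scala sketch using java.nio.file; the linkInto
helper and the example paths are hypothetical and are not code from this PR,
and note that creating symlinks on Windows can still require elevated
privileges:

    import java.nio.file.{Files, Path, Paths}

    object SymlinkSketch {
      // Create a symlink to `target` inside `taskDir`, mimicking `ln -sf target link`.
      def linkInto(taskDir: Path, target: Path): Path = {
        val link = taskDir.resolve(target.getFileName)
        Files.deleteIfExists(link)             // the -f (force) part: remove any stale link
        Files.createSymbolicLink(link, target) // the -s part: create the symlink itself
      }

      def main(args: Array[String]): Unit = {
        // Hypothetical paths, purely for illustration.
        linkInto(Paths.get("/tmp/task_0"), Paths.get("/opt/data/input.txt"))
      }
    }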
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/127#issuecomment-37431117
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/127#issuecomment-37431119
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13129/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37431342
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37431340
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/129#issuecomment-37432091
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/129#issuecomment-37432092
Merged build started.
---
GitHub user tgravescs opened a pull request:
https://github.com/apache/spark/pull/129
[SPARK-1233] Fix running hadoop 0.23 due to java.lang.NoSuchFieldException:
DEFAULT_M...
...APREDUCE_APPLICATION_CLASSPATH
You can merge this pull request into a Git repository by running:
$
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/110#issuecomment-37432422
This looks good to me, but I will leave this PR for a little longer in case
anyone wants to raise questions about changing the behavior here.
---
Should we try to deprecate these types of configs for 1.0.0? We can start
by accepting both and giving a warning if you use the old one, and then
actually remove them in the next minor release. I think
"spark.speculation.enabled=true" is better than "spark.speculation=true",
and if we decide to use
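As a rough illustration of the accept-both-and-warn idea described above,
here is a minimal Scala sketch; the ConfCompat object, the resolve helper,
and the key mapping are hypothetical and are not Spark's actual
configuration API:

    object ConfCompat {
      // Hypothetical mapping from deprecated flat keys to their proposed replacements.
      private val deprecated = Map(
        "spark.speculation" -> "spark.speculation.enabled"
      )

      // Look up `key`, falling back to a deprecated alias and warning if it is used.
      def resolve(settings: Map[String, String], key: String): Option[String] = {
        settings.get(key).orElse {
          deprecated.collectFirst {
            case (oldKey, newKey) if newKey == key && settings.contains(oldKey) =>
              System.err.println(s"WARNING: $oldKey is deprecated; use $newKey instead.")
              settings(oldKey)
          }
        }
      }
    }

    // e.g. ConfCompat.resolve(Map("spark.speculation" -> "true"), "spark.speculation.enabled")
    // returns Some("true") and prints the deprecation warning.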
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/129#issuecomment-37434030
+1. Sorry again.
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10527981
--- Diff: docs/configuration.md ---
@@ -430,7 +441,7 @@ Apart from these, the following properties are also
available, and may be useful
spark.broadcas
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10527985
--- Diff: docs/configuration.md ---
@@ -393,6 +394,16 @@ Apart from these, the following properties are also
available, and may be useful
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37434570
Hey @sryza thanks for the review. I responded to your comment and also
added a unit test and a doc change to clarify the behavior wrt threads.
---
Github user hsaputra commented on the pull request:
https://github.com/apache/spark/pull/125#issuecomment-37435981
Hi @ScrapCodes,
As @aarondav mentioned, hopefully we do not need to have the RAT docs and
jars in the Spark source.
I missed the part about why we do not want this
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10528875
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -130,6 +130,16 @@ class SparkContext(
val isLocal = (master == "local" || ma
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37437138
@mateiz do you mind taking a look at this? Also, how would you feel about
turning this on by default? I think in pretty much every case we'd want the
jars added to be vis
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/127#issuecomment-37437658
This was removed by accident in #91. Looks good to me.
---
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/129#issuecomment-37438275
Looks like this was introduced in #102. Looks good to me.
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/127#issuecomment-37438389
Aha! I _knew_ that I added this originally. I'll merge this.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/127
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37439012
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/129#issuecomment-37438972
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13131/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/129#issuecomment-37438970
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37438959
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13130/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37438988
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37439159
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13133/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37439158
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37438986
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37438958
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37439013
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37440254
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13132/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37440253
Merged build finished.
---
GitHub user dianacarroll opened a pull request:
https://github.com/apache/spark/pull/130
[Spark-1234] clean up text in running-on-yarn.md yarn-client section
Clean up several minor typos, incomplete sentences and so on in the
"yarn-client" instructions of running-on-yarn.md. (This
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/130#issuecomment-37440531
Can one of the admins verify this patch?
---
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/130#discussion_r10530856
--- Diff: docs/running-on-yarn.md ---
@@ -99,16 +99,16 @@ With this mode, your application is actually run on the
remote machine where the
## Launch s
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/125#issuecomment-37441481
Right now we manually run RAT before making releases - but the proposal
here was to run it every time a PR is created. That will be much better since
we will catch licens
Github user manishamde commented on a diff in the pull request:
https://github.com/apache/spark/pull/79#discussion_r10531612
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/DecisionTree.scala ---
@@ -0,0 +1,1055 @@
+/*
+ * Licensed to the Apache Software Foundati
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10531729
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -767,6 +781,20 @@ class SparkContext(
case _ =>
path
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10531841
--- Diff: core/src/test/scala/org/apache/spark/TestUtils.scala ---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10531878
--- Diff: core/src/test/scala/org/apache/spark/TestUtils.scala ---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10531950
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -767,6 +781,20 @@ class SparkContext(
case _ =>
path
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10532017
--- Diff: docs/configuration.md ---
@@ -393,6 +393,16 @@ Apart from these, the following properties are also
available, and may be useful
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10532081
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -130,6 +130,18 @@ class SparkContext(
val isLocal = (master == "local" || m
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/129#issuecomment-37445337
Merged into master. Thanks!
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37446754
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37446755
Merged build started.
---
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/126#discussion_r10533790
--- Diff: core/src/main/scala/org/apache/spark/Dependency.scala ---
@@ -49,9 +49,28 @@ class ShuffleDependency[K, V](
@transient rdd: RDD[_ <: Product2[K,
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/124#discussion_r10534704
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockManagerMessages.scala ---
@@ -35,9 +35,9 @@ private[storage] object BlockManagerMessages {
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37450691
About turning this on by default, I'm afraid it will mess up uses of Spark
inside a servlet container or similar. Maybe we can keep it off at first.
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/101#issuecomment-37451940
ping
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10535551
--- Diff: core/src/test/scala/org/apache/spark/TestUtils.scala ---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/105#issuecomment-37451918
ping
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/35#issuecomment-37452003
ping
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37453910
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13134/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37453908
Merged build finished.
---
+1.
Not just for Typesafe Config, but if we want to consider hierarchical
configs like JSON rather than flat key mappings, it is necessary. It
is also clearer.
On Wed, Mar 12, 2014 at 9:58 AM, Aaron Davidson wrote:
> Should we try to deprecate these types of configs for 1.0.0? We can start
> by
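To make the contrast concrete, here is a minimal sketch of the hierarchical
style being discussed, using Typesafe Config's HOCON parser; the nested
layout and key names are illustrative only, not an actual Spark
configuration format:

    import com.typesafe.config.ConfigFactory

    object HierarchicalConfSketch extends App {
      // A nested block instead of repeating the "spark.speculation." prefix on every key.
      val conf = ConfigFactory.parseString(
        """
          |spark {
          |  speculation {
          |    enabled  = true
          |    quantile = 0.75
          |  }
          |}
        """.stripMargin)

      // Dotted paths still resolve into the nested structure, so flat-style lookups keep working.
      println(conf.getBoolean("spark.speculation.enabled")) // true
      println(conf.getDouble("spark.speculation.quantile")) // 0.75
    }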
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10536660
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -130,6 +130,18 @@ class SparkContext(
val isLocal = (master == "local" ||
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/44#issuecomment-37456818
Thanks, I've merged this into master.
---
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37459180
The Jenkins failure seems unrelated to this change. Can someone kick it
again, perhaps?
---
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37459778
Jenkins, retest this please
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37461155
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37461164
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37461163
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37461154
Merged build triggered.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/44
---
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/126#discussion_r10540258
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -50,23 +54,26 @@ private[spark] class
MapOutputTrackerMasterActor(tracker: MapO
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/126#discussion_r10540325
--- Diff: core/src/main/scala/org/apache/spark/ContextCleaner.scala ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) un