Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/92#issuecomment-37380473
Thanks, merged.
---
Github user prabinb commented on the pull request:
https://github.com/apache/spark/pull/92#issuecomment-37380006
@pwendell rebased with the latest master, should be able to merge now.
---
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/125#issuecomment-37379933
@pwendell thoughts ?
---
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/125#issuecomment-37379720
We did not want to have this in our builds (Maven or SBT), and running this is
so trivial that it might not even need that. I am not sure about the dynamics
of a release, b
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/93#issuecomment-37379571
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13126/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/93#issuecomment-37379570
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37379575
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13125/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37379573
Merged build finished.
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/86#discussion_r10506918
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) u
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37379097
Hi Matei,
Does this mean that when key is None, it would do the same thing as
top? If not, then we would need a max-heap, since a min-heap will only keep the N
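For background on the heap question: a bounded min-heap is the standard way to keep the N largest elements in one pass, since the smallest retained element is always at the root and is cheap to evict. A minimal sketch in Scala, with illustrative names rather than Spark's actual implementation:
```
import scala.collection.mutable

// Keep the n largest elements seen so far. Reversing the ordering turns
// Scala's max-oriented PriorityQueue into a min-heap, so the smallest
// retained element sits at the head and is evicted when a larger one arrives.
def topN[T](xs: Iterator[T], n: Int)(implicit ord: Ordering[T]): Seq[T] = {
  val heap = mutable.PriorityQueue.empty[T](ord.reverse)
  xs.foreach { x =>
    if (heap.size < n) {
      heap.enqueue(x)
    } else if (ord.gt(x, heap.head)) {
      heap.dequeue()
      heap.enqueue(x)
    }
  }
  heap.dequeueAll.reverse // largest first
}
```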
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/86#discussion_r10506816
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) u
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10506498
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -130,6 +130,16 @@ class SparkContext(
val isLocal = (master == "local" ||
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10506504
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -130,6 +130,16 @@ class SparkContext(
val isLocal = (master == "local" ||
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10506499
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -130,6 +130,16 @@ class SparkContext(
val isLocal = (master == "local" ||
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37377699
@rxin Sorry for the comment bomb, but before merging this let's check
whether it bumps any transitive dependencies. I can take a look at that.
---
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10506355
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -130,6 +130,16 @@ class SparkContext(
val isLocal = (master == "local" || ma
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37377565
Btw the reason I'm +1 on this is that we should try to upgrade these things
going into 1.0 so we don't end up with super old versions of everything as the
1.X wears on. I
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10506306
--- Diff: docs/configuration.md ---
@@ -393,6 +394,16 @@ Apart from these, the following properties are also
available, and may be useful
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37377469
@srowen yep, the repo is there for the convenience of CDH users, mostly for
historical reasons (for a while CDH was the only way you could get the Hadoop 2
code). What I don't u
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/93#issuecomment-37377373
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/93#issuecomment-37377371
Merged build triggered.
---
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10506244
--- Diff: docs/configuration.md ---
@@ -375,7 +376,7 @@ Apart from these, the following properties are also
available, and may be useful
spark.akka.heartb
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/102
---
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10506248
--- Diff: docs/configuration.md ---
@@ -430,7 +441,7 @@ Apart from these, the following properties are also
available, and may be useful
spark.broadcast.b
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10506241
--- Diff: docs/configuration.md ---
@@ -111,6 +111,7 @@ Apart from these, the following properties are also
available, and may be useful
it if you confi
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/119#discussion_r10506221
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -726,6 +736,8 @@ class SparkContext(
* Adds a JAR dependency for all tasks to be
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/102#issuecomment-37377235
Looks good, thanks @sryza, and thanks to @gzm55 for pointing out the issue.
@sryza, why do you keep changing APIs?
I'll merge this.
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37377281
That sounds good. @pwendell should make the call here ...
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37377166
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13124/
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/102#discussion_r10506184
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
@@ -379,9 +381,48 @@ object ClientBase {
// Based on code
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37377165
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37377103
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37377104
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37377100
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37377101
Merged build started.
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/119#issuecomment-37377049
@velvia @sryza... I added some tests for this. Do you remember what the
corner cases were?
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-37374819
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13123/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/102#issuecomment-37374814
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13121/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-37374813
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-37374818
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/102#issuecomment-37374812
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-37374815
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13122/
---
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/102#issuecomment-37374693
Unfortunately, the alpha/stable distinction doesn't fully capture the
differences here because the APIs are different between the 0.23 Hadoop line
and the 2.0 line, both of
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/102#discussion_r10505297
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
@@ -379,9 +381,48 @@ object ClientBase {
// Based on code fro
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/102#discussion_r10505246
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
@@ -379,9 +381,48 @@ object ClientBase {
// Based on code
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-37372997
@pwendell and @kayousterhout - Thanks for taking the first pass through
this patch. I have made the relevant changes we discussed and updated to master
(non-trivial as t
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-37372828
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-37372829
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37372715
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37372716
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13120/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-37372656
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-37372657
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37372650
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37372652
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/102#issuecomment-37372653
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/102#issuecomment-37372654
Merged build started.
---
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37372615
I added fast distance computation and updated the implementation of KMeans.
Squared norms of the points are pre-computed and cached in order to compute
distance faster for
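The speedup rests on the identity ||a - b||^2 = ||a||^2 + ||b||^2 - 2 (a . b): once squared norms are precomputed and cached, each distance evaluation costs only a dot product. A minimal sketch of the idea (illustrative code, not the PR's implementation):
```
// Squared Euclidean distance from cached squared norms.
// Caveat: this form can lose precision when a and b are nearly equal.
def dot(a: Array[Double], b: Array[Double]): Double = {
  var sum = 0.0
  var i = 0
  while (i < a.length) { sum += a(i) * b(i); i += 1 }
  sum
}

def fastSquaredDistance(
    a: Array[Double], aNormSq: Double,
    b: Array[Double], bNormSq: Double): Double =
  aNormSq + bNormSq - 2.0 * dot(a, b)
```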
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10504565
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -106,121 +114,154 @@ private[spark] class JobProgressListener(val sc:
ASM is such a mess. And their suggested solution, that everyone should
shade it, sounds pretty awful to me (it is not uncommon to have ASM shaded 15
times in a single project). But I guess you are right that shading is the
only way to deal with it at this point...
On Mar 11, 2014 5:35 PM, "Kevin Markey" w
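For readers unfamiliar with shading: it relocates a dependency's classes into a private package at assembly time, so each project's copy of ASM cannot conflict with any other. A minimal sbt sketch, assuming a sbt-assembly version that supports shade rules (the target package name is illustrative):
```
// build.sbt, with the sbt-assembly plugin enabled
import sbtassembly.AssemblyPlugin.autoImport._

assemblyShadeRules in assembly := Seq(
  // Rewrite org.objectweb.asm.* into a private namespace inside our jar.
  ShadeRule.rename("org.objectweb.asm.**" -> "myproject.shaded.asm.@1").inAll
)
```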
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37372450
@fommil We will mention native libraries in the documentation once this PR
gets merged.
---
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10504506
--- Diff: core/src/main/scala/org/apache/spark/ui/env/EnvironmentUI.scala
---
@@ -19,76 +19,74 @@ package org.apache.spark.ui.env
import javax.ser
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10504478
--- Diff: core/src/main/scala/org/apache/spark/storage/StorageUtils.scala
---
@@ -43,14 +47,18 @@ case class StorageStatus(blockManagerId:
BlockManagerId, m
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10504474
--- Diff: core/src/main/scala/org/apache/spark/storage/StorageUtils.scala
---
@@ -17,13 +17,17 @@
package org.apache.spark.storage
+impo
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10504458
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockManagerMasterActor.scala ---
@@ -307,97 +318,93 @@ class BlockManagerMasterActor(val isLocal: Boo
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10504421
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/SparkListenerBus.scala ---
@@ -40,35 +37,16 @@ private[spark] class SparkListenerBus extends Logging
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10504393
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -54,87 +54,53 @@ import org.apache.spark.util.{MetadataCleaner,
MetadataC
We have a corporate Maven repository in-house, and of course we also use
Maven Central. sbt can handle retrieving from and publishing to Maven
repositories just fine. We have Maven, Ant/Ivy, and sbt projects depending
on each other's artifacts; not sure I see the issue there.
On Tue, Mar 11, 2014 at
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10504353
--- Diff: core/src/main/scala/org/apache/spark/SparkEnv.scala ---
@@ -229,10 +243,60 @@ object SparkEnv extends Logging {
broadcastManager,
Nope, I did not test implicit feedback yet... I will get into more detailed
debugging and generate the test case, hopefully next week...
On Mar 11, 2014 7:02 PM, "Xiangrui Meng" wrote:
> Hi Deb, did you use ALS with implicit feedback? -Xiangrui
>
> On Mon, Mar 10, 2014 at 1:17 PM, Xiangrui Meng wrote:
>
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/102#issuecomment-37371001
Updated patch takes out the SBT stuff, adds the comment requested by
@pwendell, and uses reflection to work around the incompatibilities pointed out
by @gzm55
---
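On the reflection workaround: the usual pattern for bridging incompatible Hadoop/YARN APIs is to look methods up by name at runtime instead of linking against them at compile time. A hedged sketch of that pattern (the helper below is illustrative, not the patch's actual code):
```
import java.lang.reflect.Method

// Invoke a no-argument method if this Hadoop version provides it;
// return None when running against an older API that lacks it.
def invokeIfPresent(target: AnyRef, methodName: String): Option[AnyRef] = {
  try {
    val m: Method = target.getClass.getMethod(methodName)
    Some(m.invoke(target))
  } catch {
    case _: NoSuchMethodException => None
  }
}
```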
Line 376 should be correct as it is computing
\sum_i (c_i - 1) x_i x_i^T = \sum_i (alpha * r_i) x_i x_i^T.
Are you computing some metrics to tell which recommendation is better? -Xiangrui
On Tue, Mar 11, 2014 at 6:38 PM, Xiangrui Meng wrote:
> Hi Michael,
>
> I can help check the current impleme
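For readers following the math: in the standard implicit-feedback ALS formulation (Hu, Koren, and Volinsky), the confidence weight is c_i = 1 + alpha * r_i, which is exactly what makes the two sums above equal:
```
c_i = 1 + \alpha r_i
\quad\Longrightarrow\quad
\sum_i (c_i - 1)\, x_i x_i^T = \sum_i \alpha r_i \, x_i x_i^T
```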
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/126#issuecomment-37369954
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/126#issuecomment-37369955
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13119/
---
Hi Deb, did you use ALS with implicit feedback? -Xiangrui
On Mon, Mar 10, 2014 at 1:17 PM, Xiangrui Meng wrote:
> Choosing lambda = 0.1 shouldn't lead to the error you got. This is
> probably a bug. Do you mind sharing a small amount of data that can
> re-produce the error? -Xiangrui
>
> On Fri,
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/126#issuecomment-37358288
HAHA! I was already working on adding that try-catch. Realized that a bit
late after the PR.
And yes, super.finalize() is a good call.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/16#issuecomment-37363745
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/16#issuecomment-37363746
Merged build started.
---
Hi Michael,
I can help check the current implementation. Would you please go to
https://spark-project.atlassian.net/browse/SPARK and create a ticket
about this issue with component "MLlib"? Thanks!
Best,
Xiangrui
On Tue, Mar 11, 2014 at 3:18 PM, Michael Allman wrote:
> Hi,
>
> I'm implementing
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/126#issuecomment-37366999
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/126#issuecomment-37366998
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/16#issuecomment-37366958
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/16#issuecomment-37366959
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13118/
---
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/126#issuecomment-37357590
Hey TD, I just glanced at this quickly, but I have two small suggestions
about the way you are using finalizers. First, errors in finalizers aren't
reported by the GC thr
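The two suggestions amount to a standard defensive pattern: catch and report errors inside finalize() yourself, because the GC thread silently discards them, and always chain to super.finalize(). A minimal sketch (class name is illustrative):
```
class CleanupHolder {
  private def doCleanup(): Unit = { /* release external resources here */ }

  override protected def finalize(): Unit = {
    try {
      doCleanup()
    } catch {
      // Exceptions escaping finalize() are swallowed by the GC thread,
      // so report them explicitly.
      case t: Throwable => System.err.println(s"Error in finalizer: $t")
    } finally {
      super.finalize() // always chain up
    }
  }
}
```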
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/102#discussion_r10498244
--- Diff: project/SparkBuild.scala ---
@@ -236,7 +236,8 @@ object SparkBuild extends Build {
"com.novocode" % "junit-interface" % "0.10" %
Github user fommil commented on the pull request:
https://github.com/apache/spark/pull/117#issuecomment-37353456
@mengxr cool :-D
---
Pardon my late entry into the fray here, but we've just struggled
through some library conflicts that could have been avoided and whose
story shed some light on this question.
We have been integrating Spark with a number of other components. We
discovered several conflicts, most easily elimina
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/126#issuecomment-37346329
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/126#issuecomment-37346330
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13117/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/126#issuecomment-37340125
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/126#issuecomment-37340124
Merged build triggered.
---
GitHub user tdas opened a pull request:
https://github.com/apache/spark/pull/126
[SPARK-1103] [WIP] Automatic garbage collection of RDD, shuffle and
broadcast data
This PR allows Spark to automatically clean up metadata and data related to
persisted RDDs, shuffles and broadcast vari
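One common way to implement this kind of automatic cleanup is to register a weak reference per tracked object and let a daemon thread react once the object becomes unreachable. A rough sketch of that pattern only, not necessarily this PR's implementation:
```
import java.lang.ref.{ReferenceQueue, WeakReference}

// When a tracked object becomes unreachable, its weak reference is
// enqueued and the daemon thread below cleans up its metadata by id.
class TrackedRef(obj: AnyRef, val id: Long, q: ReferenceQueue[AnyRef])
  extends WeakReference[AnyRef](obj, q)

class Cleaner(cleanup: Long => Unit) {
  private val queue = new ReferenceQueue[AnyRef]
  // Keep the weak references themselves strongly reachable.
  @volatile private var refs = List.empty[TrackedRef]

  def register(obj: AnyRef, id: Long): Unit = synchronized {
    refs ::= new TrackedRef(obj, id, queue)
  }

  private val worker = new Thread("cleanup") {
    override def run(): Unit = {
      while (true) {
        queue.remove() match { // blocks until something is collected
          case ref: TrackedRef => cleanup(ref.id)
          case _ =>
        }
      }
    }
  }
  worker.setDaemon(true)
  worker.start()
}
```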
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/93#discussion_r10490986
--- Diff: python/pyspark/rdd.py ---
@@ -628,6 +656,31 @@ def mergeMaps(m1, m2):
m1[k] += v
return m1
return
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/108
---
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37332343
What exactly do you want tried? I built it for hadoop2 with sbt and ran a
couple of the examples on YARN (SparkPi and SparkHdfsLR). Those worked fine and
the UI still wo
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/123#issuecomment-37329995
Hey @haosdent thanks for giving this example. Unfortunately including all
of HBase's dependencies inside of Spark is not something that we can do.
Would you mind
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37329386
Hi Prashant,
For this feature I think it would be better to use a "key" function instead
of a boolean flag for the order. So make the API like this:
```
def
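The signature is cut off above, so purely as an illustration of the key-function style being suggested (not necessarily the exact API Matei proposed):
```
// Order elements by a derived key instead of a boolean ascending flag.
def takeOrderedBy[T, K](xs: Seq[T], num: Int)(keyFunc: T => K)
    (implicit ord: Ordering[K]): Seq[T] =
  xs.sortBy(keyFunc)(ord).take(num)

// Usage: the three elements with the smallest absolute value.
// takeOrderedBy(Seq(5, -7, 2, -1), 3)(math.abs(_: Int))
```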
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37328825
Sandy thanks for doing this clean-up. Looks great. Just some minor
comments. /cc @tgraves
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/120#discussion_r10487009
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMasterArguments.scala
---
@@ -50,16 +50,16 @@ class ApplicationMasterArguments(v
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/120#discussion_r10486909
--- Diff: docs/running-on-yarn.md ---
@@ -60,11 +60,11 @@ The command to launch the Spark application on the
cluster is as follows:
--jar \
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/120#discussion_r10486920
--- Diff: docs/running-on-yarn.md ---
@@ -100,12 +100,12 @@ With yarn-client mode, the application will be
launched locally, just like runni