Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/49#issuecomment-36398886
I'll give it a try. Any reason we don't just tie this to the yarn-alpha
profile? Or does it not apply to the hadoop 2.0.2-type builds?
---
If your project
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/33#discussion_r10215150
--- Diff: core/src/main/scala/org/apache/spark/SecurityManager.scala ---
@@ -0,0 +1,259 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/33#discussion_r10215322
--- Diff: core/src/main/scala/org/apache/spark/SecurityManager.scala ---
@@ -0,0 +1,259 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/33#discussion_r10216010
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -135,6 +135,8 @@ class SparkContext(
val isLocal = (master == "
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/33#discussion_r10216556
--- Diff: core/src/main/scala/org/apache/spark/network/Connection.scala ---
@@ -18,25 +18,27 @@
package org.apache.spark.network
import
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/33#discussion_r10216724
--- Diff:
core/src/main/scala/org/apache/spark/network/ConnectionManager.scala ---
@@ -557,7 +754,54 @@ private[spark] class ConnectionManager(port: Int
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/33#discussion_r10220939
--- Diff:
core/src/main/scala/org/apache/spark/network/SecurityMessage.scala ---
@@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/33#discussion_r10221207
--- Diff: core/src/main/scala/org/apache/spark/network/Connection.scala ---
@@ -18,25 +18,27 @@
package org.apache.spark.network
import
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/33#discussion_r10221836
--- Diff: docs/configuration.md ---
@@ -477,6 +505,21 @@ Apart from these, the following properties are also
available, and may be useful
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/33#issuecomment-36542188
Thanks for the detailed review, Patrick. I've updated based on the comments
except for renaming the Handlers to Servlet and changing to use SparkConf. I
will
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/33#discussion_r1063
--- Diff: core/src/main/scala/org/apache/spark/SecurityManager.scala ---
@@ -0,0 +1,259 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/33#discussion_r1084
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -41,56 +46,103 @@ private[spark] object JettyUtils extends Logging {
type
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/33#issuecomment-36764964
Yes, I haven't made that change yet. I should have a patch up in a few hours
with that changed. I'm just doing some final testing on it. I'm changing
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/33#issuecomment-36781865
filed https://spark-project.atlassian.net/browse/SPARK-1191
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/86#discussion_r10331680
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkApp.scala ---
@@ -0,0 +1,178 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/33#issuecomment-36825493
Pass securityManager and SparkConf around where we can. Note the changes
to JettyUtils and calling functions. Switch to use SparkConf for getting and
setting configs
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/33#issuecomment-36826678
@pwendell Ok this should now be ready for review.
GitHub user tgravescs opened a pull request:
https://github.com/apache/spark/pull/94
SPARK-1195: set map_input_file environment variable in PipedRDD
Hadoop uses the config mapreduce.map.input.file to indicate the input
filename to the map when the input split is of type FileSplit
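The mechanism behind SPARK-1195 can be illustrated without Spark itself: PipedRDD launches the external command as a child process, and an entry added to the child's environment makes a per-task value (such as the input file name) visible to the command, much like Hadoop streaming's `map_input_file`. The sketch below, in plain Scala with `java.lang.ProcessBuilder`, is illustrative only and assumes a POSIX `printenv`; it is not the actual PipedRDD code.

```scala
object PipeEnvSketch {
  // Launch a child process with an extra environment variable set and
  // read back what the child observes. PipedRDD uses the same underlying
  // mechanism (ProcessBuilder's environment map) to export per-task
  // values; the variable name here is just an example.
  def readEnvThroughChild(name: String, value: String): String = {
    val pb = new ProcessBuilder("printenv", name)
    pb.environment().put(name, value)
    val proc = pb.start()
    val out = scala.io.Source.fromInputStream(proc.getInputStream).mkString.trim
    proc.waitFor()
    out
  }

  def main(args: Array[String]): Unit = {
    println(readEnvThroughChild("map_input_file", "hdfs://example/part-00000"))
  }
}
```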
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/95#discussion_r10361302
--- Diff: docs/running-on-yarn.md ---
@@ -82,35 +84,30 @@ For example:
./bin/spark-class org.apache.spark.deploy.yarn.Client
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/95#issuecomment-36938497
Looks good to me (with the doc fixes commented on).
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/33#discussion_r10361820
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -135,6 +135,8 @@ class SparkContext(
val isLocal = (master == "
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/94#discussion_r10365444
--- Diff: core/src/test/scala/org/apache/spark/PipedRDDSuite.scala ---
@@ -89,4 +97,37 @@ class PipedRDDSuite extends FunSuite with
SharedSparkContext
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/94#issuecomment-36948463
Adding a routine to HadoopPartition sounds good.
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/94#discussion_r10368475
--- Diff: core/src/test/scala/org/apache/spark/PipedRDDSuite.scala ---
@@ -89,4 +97,37 @@ class PipedRDDSuite extends FunSuite with
SharedSparkContext
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/33#issuecomment-36955086
Nope, nothing else to address in this. I'll merge it shortly.
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/33#issuecomment-36955323
I committed this, thanks for all the reviews Patrick!
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/94#discussion_r10384191
--- Diff: core/src/test/scala/org/apache/spark/PipedRDDSuite.scala ---
@@ -89,4 +97,37 @@ class PipedRDDSuite extends FunSuite with
SharedSparkContext
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37183820
Is there a particular reason for the upgrade (bug, feature, etc.)? One
reason I ask is that changing the version can affect users. Can you create a
JIRA for this also?
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37230471
I generally use mvn to build. Right now I seem to be having issues
building at all due to what appears to be the Cloudera repository being down.
Error resolving
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37332343
What exactly do you want tried? I built it for hadoop2 with sbt and ran a
couple of the examples (SparkPi and SparkHdfsLR) on YARN. Those worked fine and
the UI still
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/113#issuecomment-37405632
Yes the build failed for me with the error I put above.
Note that this pr would need to have the maven build updated too.
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/120#discussion_r10516035
--- Diff: docs/running-on-yarn.md ---
@@ -60,11 +60,11 @@ The command to launch the Spark application on the
cluster is as follows:
--jar
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37411628
I'm getting a compile error building this against hadoop 0.23:
[ERROR]
yarn/alpha/src/main/scala/org/apache/spark/deploy/yarn/ExecutorLauncher.scal
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37415976
I fixed the above compile error and tried to run, but the executors return
the following error:
Unknown/unsupported param List(--num-executor, 2)
Usage
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/91#discussion_r10521879
--- Diff: core/pom.xml ---
@@ -17,274 +17,260 @@
-->
http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2
GitHub user tgravescs opened a pull request:
https://github.com/apache/spark/pull/127
[SPARK-1232] Fix the hadoop 0.23 yarn build
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tgravescs/spark SPARK-1232
Alternatively you can
GitHub user tgravescs opened a pull request:
https://github.com/apache/spark/pull/128
[SPARK-1198] Allow pipes tasks to run in different sub-directories
This works as-is on Linux/Mac/etc. but doesn't cover working on Windows.
Here I use ln -sf for symlinks. Putting this u
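The idea in SPARK-1198 can be sketched in plain Scala: give each piped task its own sub-directory and symlink any needed files into it so the external command can reference them by relative name. This sketch uses `java.nio.file.Files.createSymbolicLink` rather than shelling out to `ln -sf`; the names are illustrative, not the actual Spark implementation.

```scala
import java.nio.file.{Files, Path}

object TaskDirSketch {
  // Create a per-task working directory and symlink a target file into
  // it. A command run with this directory as its working directory can
  // then open the file by its bare name. (Sketch only; Spark's change
  // used `ln -sf`, which this java.nio call approximates on POSIX.)
  def linkIntoTaskDir(taskDir: Path, target: Path): Path = {
    Files.createDirectories(taskDir)
    val link = taskDir.resolve(target.getFileName)
    Files.deleteIfExists(link) // mirror the -f (force) behavior of ln -sf
    Files.createSymbolicLink(link, target.toAbsolutePath)
    link
  }
}
```

Like the original change, this relies on symlink support and so would still need separate handling on Windows.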
GitHub user tgravescs opened a pull request:
https://github.com/apache/spark/pull/129
[SPARK-1233] Fix running hadoop 0.23 due to java.lang.NoSuchFieldException:
DEFAULT_M...
...APREDUCE_APPLICATION_CLASSPATH
You can merge this pull request into a Git repository by running
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/128#issuecomment-37459180
Jenkins failure seems unrelated to this change. Can someone kick it again
perhaps?
Github user tgravescs closed the pull request at:
https://github.com/apache/spark/pull/129
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/128#discussion_r10563706
--- Diff: core/pom.xml ---
@@ -184,13 +184,12 @@
metrics-graphite
- org.apache.derby
- derby
- test
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/120#discussion_r10566868
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientArguments.scala
---
@@ -133,11 +148,11 @@ class ClientArguments(val args: Array
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/120#issuecomment-37543156
Looks good to me. I made the small comment about perhaps leaving the
--am-class out of the usage but I'm ok either way.
GitHub user tgravescs opened a pull request:
https://github.com/apache/spark/pull/33
Add Security to Spark - Akka, Http, ConnectionManager, UI use servlets
Resubmitted pull request; was
https://github.com/apache/incubator-spark/pull/332.
You can merge this pull request into a Git
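The core of the security work in PR #33 is a shared secret that both sides of a connection hold. A minimal sketch of that pattern, with a constant-time comparison to avoid leaking information through timing, is shown below. All names here are illustrative assumptions, not the actual SecurityManager API from the pull request.

```scala
object SecretAuthSketch {
  // Compare two byte arrays in time independent of where they first
  // differ, so an attacker cannot probe the secret byte by byte.
  def constantTimeEquals(a: Array[Byte], b: Array[Byte]): Boolean = {
    if (a.length != b.length) return false
    var diff = 0
    for (i <- a.indices) diff |= (a(i) ^ b(i))
    diff == 0
  }

  // Accept a connection only if the presented secret matches the
  // expected one. (Sketch of the shared-secret idea; the real change
  // wires this through Akka, HTTP, ConnectionManager, and the UI.)
  def authenticate(presented: String, expected: String): Boolean =
    constantTimeEquals(presented.getBytes("UTF-8"), expected.getBytes("UTF-8"))
}
```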
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/29#issuecomment-36253998
That's really unfortunate that hadoop 1.x doesn't support it, as I would
prefer to use addCredentials since it also handles secrets.
Our only o
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/28#issuecomment-36255394
I personally like the way 4 spaces looks too. The style guide isn't clear
on what it's supposed to be. I'll assume it falls under the 4-space rule
similar to
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/33#issuecomment-36275335
It's ready for review. I believe I've addressed all the comments from the
previous PR.
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/29#issuecomment-36286358
I believe secrets are mostly for users adding secrets for other services.
Secrets are also used by the MR2 framework. Secret keys are supported in MR1
via Credentials
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/28#issuecomment-36362791
I committed this. Thanks Sandy!
GitHub user tgravescs opened a pull request:
https://github.com/apache/spark/pull/47
Update dev merge script to use spark.git instead of incubator-spark
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tgravescs/spark
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/29#issuecomment-36367636
+1. Looks Good. Thanks Sandy!
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/29#issuecomment-36368483
@sryza it looks like this is no longer mergeable due to other check-ins.
Can you please update it to the latest?
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/6#issuecomment-36370069
I disagree with this change going in and breaking the YARN hadoop 0.23
build. If we are going to support maven and hadoop 0.23, you should be able to
build it without