Github user kimballa commented on the pull request:
https://github.com/apache/spark/pull/65#issuecomment-36487843
Aha, thanks for pointing that out. Should be fixed now.
On Sun, Mar 2, 2014 at 11:48 PM, Reynold Xin wrote:
> Thanks for doing this.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/65#issuecomment-36487798
I merged this one too.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/65#issuecomment-36487753
Thanks for doing this.
BTW one thing I noticed is that your git commit's email is different from
the ones you registered on GitHub, so your commits don't actually show up as linked to your GitHub account.
Github user kimballa commented on the pull request:
https://github.com/apache/spark/pull/64#issuecomment-36487693
n.p. rebased and pushed to a new branch; see pull req #65.
GitHub user kimballa opened a pull request:
https://github.com/apache/spark/pull/65
SPARK-1173. (#2) Fix typo in Java streaming example.
Companion commit to pull request #64, fix the typo on the Java side of the
docs.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/64#issuecomment-36487507
Actually you will need to submit another PR. I've already merged this one
(but github is laggy because it is waiting for the asf git bot to synchronize).
Sorry about the confusion.
Github user kimballa commented on the pull request:
https://github.com/apache/spark/pull/64#issuecomment-36487463
Here you go
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/55#issuecomment-36487061
Also @ryanlecompte since you changed the implementation to tail recursion.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36486941
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12964/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36486939
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36486867
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36486866
Merged build triggered.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/64#issuecomment-36486817
There's also a typo in the Java version of the doc. If you don't mind
fixing that as well ... :)
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36486838
Jenkins, add to whitelist.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/64#issuecomment-36486791
Thanks Aaron. I've merged this.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-36486544
Build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-36486545
Build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/64#issuecomment-36486542
Can one of the admins verify this patch?
GitHub user kimballa opened a pull request:
https://github.com/apache/spark/pull/64
SPARK-1173. Improve scala streaming docs.
Clarify imports to add implicit conversions to DStream and
fix other small typos in the streaming intro documentation.
Tested by inspecting output
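For readers outside the thread: the "implicit conversions to DStream" mentioned above are the ones that add pair operations such as reduceByKey to a DStream of key/value tuples, which in the 0.9-era Scala API had to be brought into scope with an explicit import. A minimal sketch of the pattern the doc change clarifies, assuming the Spark 0.9 streaming API (illustrative only, not the literal text of the PR):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    // This import supplies the implicit conversion that adds pair operations
    // (e.g. reduceByKey) to DStream[(K, V)] in this version of the API.
    import org.apache.spark.streaming.StreamingContext._

    object StreamingWordCount {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setMaster("local[2]").setAppName("StreamingWordCount")
        val ssc = new StreamingContext(conf, Seconds(1))

        val lines = ssc.socketTextStream("localhost", 9999)
        val words = lines.flatMap(_.split(" "))
        // Without the StreamingContext._ import above, reduceByKey would not
        // resolve on this DStream of (String, Int) pairs.
        val counts = words.map(word => (word, 1)).reduceByKey(_ + _)
        counts.print()

        ssc.start()
        ssc.awaitTermination()
      }
    }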
Github user jyotiska commented on the pull request:
https://github.com/apache/spark/pull/34#issuecomment-36484436
I think namedtuple will be more suitable. @mateiz What do you think?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12#issuecomment-36484337
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12962/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12#issuecomment-36484335
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/60#issuecomment-36484334
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/60#issuecomment-36484336
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12961/
Github user iven commented on the pull request:
https://github.com/apache/spark/pull/60#issuecomment-36484214
If checkpointing is enabled both in Spark and on the Mesos slaves, the slaves will
periodically write Spark framework and executor information to disk. Yes, it's used
for Spark to survive slave restarts.
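For context, the "checkpoint" being discussed is Mesos framework checkpointing: the framework registers with the checkpoint flag set, and slaves that also run with checkpointing enabled persist framework and executor state to disk, so running executors can survive a slave restart. A rough sketch at the Mesos Java API level of the two fields this PR exposes (role and checkpoint); the values shown are placeholders, not Spark's actual defaults:

    import org.apache.mesos.Protos.FrameworkInfo

    object MesosFrameworkInfoSketch {
      def main(args: Array[String]): Unit = {
        // The FrameworkInfo a scheduler backend registers with the Mesos master.
        val framework = FrameworkInfo.newBuilder()
          .setUser("")          // empty user lets Mesos pick the current user
          .setName("Spark")
          .setRole("spark")     // resources are offered against this role
          .setCheckpoint(true)  // slave persists framework/executor state to disk,
                                // so running executors survive a slave restart
          .build()
        println(framework)
      }
    }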
Github user kellrott commented on the pull request:
https://github.com/apache/spark/pull/50#issuecomment-36483912
Thank you for the notes. I'll start working on fixing things.
I'd like to keep this patch 'simple', and limit the scope to DISK_ONLY, and
get it accepted before thinki
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/50#discussion_r10201307
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -59,24 +59,45 @@ private class MemoryStore(blockManager: BlockManager, maxMemory:
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/50#discussion_r10201291
--- Diff: core/src/main/scala/org/apache/spark/storage/DiskStore.scala ---
@@ -52,11 +52,21 @@ private class DiskStore(blockManager: BlockManager, diskManager: D
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/50#discussion_r10201264
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ---
@@ -549,34 +555,43 @@ private[spark] class BlockManager(
var marke
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/50#discussion_r10201268
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ---
@@ -549,34 +555,43 @@ private[spark] class BlockManager(
var marke
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/50#issuecomment-36483263
Hey Kyle, thanks for bringing this to the new repo. I looked through it and
made a few comments. Another concern though is that it would be good to make
this work for MEMORY
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/62#issuecomment-36483177
Okay -- That seems like a separate conversation. This change looks good to
me.
Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/62#issuecomment-36483058
Unfortunately this isn't very useful for getting network bandwidth...if you
consider a simple case where two shuffle reads (for one task) occur
simultaneously a
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/50#discussion_r10201195
--- Diff: core/src/main/scala/org/apache/spark/CacheManager.scala ---
@@ -71,10 +71,21 @@ private[spark] class CacheManager(blockManager: BlockManager) extends L
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/50#discussion_r10201182
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ---
@@ -534,8 +539,9 @@ private[spark] class BlockManager(
// If we're s
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/50#discussion_r10201181
--- Diff: core/src/main/scala/org/apache/spark/serializer/JavaSerializer.scala ---
@@ -23,9 +23,27 @@ import java.nio.ByteBuffer
import org.apache.spark.Spa
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/50#discussion_r10201177
--- Diff: core/src/main/scala/org/apache/spark/CacheManager.scala ---
@@ -71,10 +71,21 @@ private[spark] class CacheManager(blockManager: BlockManager) extends L
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/50#discussion_r10201097
--- Diff: core/src/test/scala/org/apache/spark/storage/FlatmapIteratorSuite.scala ---
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/50#discussion_r10201080
--- Diff: core/src/test/scala/org/apache/spark/storage/FlatmapIteratorSuite.scala ---
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/50#discussion_r10201076
--- Diff: core/src/test/scala/org/apache/spark/storage/FlatmapIteratorSuite.scala ---
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/60#issuecomment-36482490
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/60#issuecomment-36482491
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12#issuecomment-36482497
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12#issuecomment-36482496
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/61#issuecomment-36482475
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12960/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/61#issuecomment-36482474
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/62#issuecomment-36482478
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12959/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/62#issuecomment-36482477
Merged build finished.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/34#issuecomment-36482065
@mateiz Do you think this is a good case for a
[namedtuple](http://docs.python.org/2/library/collections.html#collections.namedtuple)?
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/62#issuecomment-36481963
Hmm -- I have been confused by this before, but if I am reading the comment
right, this could be useful to get an estimate of the raw network bandwidth
used for shuffles.
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/58#issuecomment-36481941
Anyway maybe let's do it like this: if you test it with this change and see
that all the commands (stop, resume, etc) still work, then we can keep it. But
we should also add
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/58#issuecomment-36481890
I think the better way to fix this is not to allow users to start a cluster
without slaves, but to allow them to log in to a cluster that has lost all of its slaves?
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/58#issuecomment-36481848
Well I typically go to the EC2 console and just start the master node, get
the hostname and login using ssh. IMHO the scripts are useful for setting up a
whole cluster, so
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/58#issuecomment-36481811
In that case you can still log into the master with ssh. I guess we could
make the "login" command work then, I'm just wondering whether removing this
check here can cause o
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/60#issuecomment-36481768
Jenkins, test this please
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/58#issuecomment-36481775
I opened the request in JIRA on the assumption that a master-only cluster
is a usable and valid cluster to work with. If that is not the case, then yeah
I guess we should
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/60#issuecomment-36481666
CC @benh
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/60#issuecomment-36481635
What does "checkpoint" do exactly, and why not have it on by default? Is it
for Spark to survive slave restarts?
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/60#issuecomment-36481638
Jenkins, this is ok to test
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/58#issuecomment-36481449
Why do we want to support this then? Maybe we should just make the
spark-ec2 script not let you launch a cluster without slaves.
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/12#issuecomment-36481419
Jenkins, this is ok to test
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/34#issuecomment-36481386
@jyotiska instead of passing a dictionary around, could you make a class to
represent a call site? It will be a bit clearer and harder to get wrong when
someone's changing t
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36481016
Can one of the admins verify this patch?
GitHub user CodingCat opened a pull request:
https://github.com/apache/spark/pull/63
simplify the implementation of CoarseGrainedSchedulerBackend
There are 5 main data structures in the class. After reading the source
code, I found that some of them are actually not used, and some of th
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/62#issuecomment-36480712
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/62#issuecomment-36480711
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/61#issuecomment-36480713
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/60#issuecomment-36480715
Can one of the admins verify this patch?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/61#issuecomment-36480714
Merged build started.
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/61#issuecomment-36480674
Yeah, this was an internal review comment :-)
Thanks !
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/17#issuecomment-36480448
Hey @ScrapCodes I thought about this a bit more - do you mind moving the
java8-tests folder into a new folder called `/extras`? We can put things there
that aren't include
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/43#discussion_r10200502
--- Diff: core/src/main/scala/org/apache/spark/storage/DiskStore.scala ---
@@ -84,12 +84,27 @@ private class DiskStore(blockManager: BlockManager, diskManager:
GitHub user kayousterhout opened a pull request:
https://github.com/apache/spark/pull/62
Remove the remoteFetchTime metric.
This metric is confusing: it adds up all of the time to fetch
shuffle inputs, but fetches often happen in parallel, so
remoteFetchTime can be much longer than the time the task actually spends fetching.
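A toy illustration of that point (a standalone sketch, not Spark code): two fetches that each take about two seconds and run in parallel sum to roughly four seconds, even though the task only waits about two seconds of wall-clock time, which is why adding up per-fetch durations overstates the cost.

    import java.util.concurrent.Executors
    import scala.concurrent.{Await, ExecutionContext, Future}
    import scala.concurrent.duration._

    object RemoteFetchTimeDemo {
      def main(args: Array[String]): Unit = {
        val pool = Executors.newFixedThreadPool(2)
        implicit val ec: ExecutionContext = ExecutionContext.fromExecutorService(pool)

        def timedFetch(): Future[Long] = Future {
          val start = System.nanoTime()
          Thread.sleep(2000)                     // simulate a 2-second remote fetch
          (System.nanoTime() - start) / 1000000  // this fetch's duration, in ms
        }

        val wallStart = System.nanoTime()
        val perFetchMs = Await.result(Future.sequence(Seq(timedFetch(), timedFetch())), 1.minute)
        val wallClockMs = (System.nanoTime() - wallStart) / 1000000

        println(s"sum of per-fetch times (what the old metric added up): ${perFetchMs.sum} ms")
        println(s"wall-clock time actually spent waiting: $wallClockMs ms")
        pool.shutdown()
      }
    }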
GitHub user kayousterhout opened a pull request:
https://github.com/apache/spark/pull/61
Removed accidentally checked in comment
It looks like this comment was added a while ago by @mridulm as part of a
merge and was accidentally checked in. We should remove it.
GitHub user iven opened a pull request:
https://github.com/apache/spark/pull/60
Add role and checkpoint support for Mesos backend
This commit also changes Mesos version to 0.14.0, which is the earliest
version supporting these features.
Github user CodingCat closed the pull request at:
https://github.com/apache/spark/pull/59
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/44#issuecomment-36478827
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12958/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/56#issuecomment-36478828
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/44#issuecomment-36478826
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/56#issuecomment-36478829
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12957/
Github user pwendell closed the pull request at:
https://github.com/apache/spark/pull/57
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/56
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/58#issuecomment-36478687
With only a master, no service is actually working (in a distributed
fashion), but this patch is just to allow the user to log in to a master-only cluster.
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/58#issuecomment-36478558
Just curious -- does this start any of the services correctly? I think
HDFS and Spark won't have any worker nodes in this mode, so the only thing that
will work is Spark wit
GitHub user CodingCat opened a pull request:
https://github.com/apache/spark/pull/59
SPARK-1166: clean vpc_id if the group was just now created
Reported in https://spark-project.atlassian.net/browse/SPARK-1166
In some very weird situations (when the newly created group master_group
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/58#issuecomment-36477517
oh, fixed,
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/58#issuecomment-36477477
Reported in
[SPARK-1156](https://spark-project.atlassian.net/browse/SPARK-1156), not
SPARK-1159. :)
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/58#issuecomment-36477187
Can one of the admins verify this patch?
GitHub user CodingCat opened a pull request:
https://github.com/apache/spark/pull/58
SPARK-1156: allow user to login into a cluster without slaves
Reported in https://spark-project.atlassian.net/browse/SPARK-1159
The current spark-ec2 script doesn't allow user to login to a
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/44#issuecomment-36477086
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/44#issuecomment-36477087
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/57#issuecomment-36476905
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/57#issuecomment-36476906
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12956/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/56#issuecomment-36476861
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/56#issuecomment-36476860
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/57#issuecomment-36476859
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/57#issuecomment-36476857
Merged build triggered.
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/44#discussion_r10199454
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -950,6 +952,8 @@ class SparkContext(
resultHandler: (Int, U) => Unit,
GitHub user pwendell opened a pull request:
https://github.com/apache/spark/pull/57
Add Jekyll tag to isolate "production-only" doc components. (0.9 version)
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/pwendell/spark jekyll-p
GitHub user pwendell opened a pull request:
https://github.com/apache/spark/pull/56
Add Jekyll tag to isolate "production-only" doc components.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/pwendell/spark jekyll-prod