The docs for how to run Spark on Mesos have changed very little since
0.6.0, but setting it up is much easier now than it was then.  Does it
make sense to revamp them with the changes below?


You no longer need to build Mesos yourself, as pre-built versions are
available from Mesosphere: http://mesosphere.io/downloads/

Also, the current instructions guide you toward compiling your own
distribution of Spark, when the prebuilt versions of Spark work just as
well.


I'd like to split that portion of the documentation into two sections: a
build-from-scratch section and a use-prebuilt section.  The new outline
would look something like this:


*Running Spark on Mesos*

Installing Mesos
- using prebuilt (recommended)
 - pointer to mesosphere's packages
- from scratch
 - (similar to current)


Connecting Spark to Mesos
- loading distribution into an accessible location
- Spark settings

Mesos Run Modes
- (same as current)

Running Alongside Hadoop
- (trim this down)
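
To make the "Connecting Spark to Mesos" section concrete, here's a rough
sketch of the use-prebuilt path.  All hostnames, paths, and version
numbers below are illustrative placeholders, not tested instructions:

```shell
# Sketch of the prebuilt workflow (hostnames/paths are placeholders).

# 1. Install Mesos from Mesosphere's prebuilt packages on each node
#    (see http://mesosphere.io/downloads/ for the actual packages).

# 2. Load a prebuilt Spark distribution into a location every Mesos
#    slave can reach, e.g. HDFS:
hadoop fs -put spark-1.0.0-bin-hadoop2.tgz /tmp/spark-1.0.0-bin-hadoop2.tgz

# 3. Spark settings (e.g. in conf/spark-env.sh): point Spark at the
#    Mesos native library and at the uploaded distribution.
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI=hdfs://namenode:9000/tmp/spark-1.0.0-bin-hadoop2.tgz

# 4. Launch against the Mesos master with a mesos:// URL:
./bin/spark-shell --master mesos://mesos-master:5050
```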



Does that work for people?


Thanks!
Andrew


PS The docs have stayed basically the same across releases:

http://spark.apache.org/docs/0.6.0/running-on-mesos.html
http://spark.apache.org/docs/0.6.2/running-on-mesos.html
http://spark.apache.org/docs/0.7.3/running-on-mesos.html
http://spark.apache.org/docs/0.8.1/running-on-mesos.html
http://spark.apache.org/docs/0.9.1/running-on-mesos.html
https://people.apache.org/~pwendell/spark-1.0.0-rc3-docs/running-on-mesos.html
