I don't think this is quite the right question. Spark can be deployed on
different cluster manager frameworks such as standalone, YARN, and Mesos.
Spark can't run without a cluster manager, which means Spark depends on
one of these frameworks.

The data management layer, on the other hand, sits upstream of Spark and is
independent of it, though Spark does provide APIs to access different data
management layers.
Which data store to use should be driven by your upstream application; it is
not determined by Spark.
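To make the distinction concrete, here is a rough sketch of how the cluster
manager is chosen at deploy time via the --master flag, independently of any
data layer (hostnames, ports, and the JAR name are placeholders, not real
endpoints):

```shell
# Same application JAR, different cluster managers:
spark-submit --master spark://master-host:7077 my-app.jar   # Spark standalone
spark-submit --master yarn my-app.jar                        # Hadoop YARN
spark-submit --master mesos://mesos-host:5050 my-app.jar     # Apache Mesos
```

The choice of data store (HDFS, Cassandra, MongoDB, etc.) happens separately,
inside the application code through Spark's data source APIs, regardless of
which manager the job was submitted to.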


On Wed, Jun 24, 2015 at 3:46 AM, commtech <michael.leon...@opco.com> wrote:

> Hi,
>
> I work at a large financial institution in New York. We're looking into
> Spark and trying to learn more about the deployment/use cases for real-time
> analytics with Spark. When would it be better to deploy standalone Spark
> versus Spark on top of a more comprehensive data management layer (Hadoop,
> Cassandra, MongoDB, etc.)? If you do deploy on top of one of these, are
> there different use cases where one of these data management layers is
> better or worse?
>
> Any color would be very helpful. Thank you in advance.
>
> Sincerely,
> Michael
>
>
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/When-to-use-underlying-data-management-layer-versus-standalone-Spark-tp23455.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>
