Just to add a bit more: there are various scenarios where traditional
Hadoop makes more sense than Spark. For example, you may have a
long-running processing job for which you do not want to tie up too many
of the cluster's resources. Another example is running a distributed
extraction job against multiple data sources via Hadoop streaming.

Another good call out about utilizing Scala with Spark is that most of
the Spark codebase itself is written in Scala.
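One reason Scala feels natural here is that Spark's RDD API mirrors the
transformations on Scala's own collections (flatMap, map, filter, and so
on). A minimal word-count sketch using plain Scala collections (the
`WordCount` object and sample input are my own illustration, not from
this thread; against Spark you would apply the same style of
transformations to an RDD obtained from a SparkContext):

```scala
// Word count expressed with plain Scala collections. Spark's RDD API
// offers the same flatMap/filter style of chained transformations,
// so code like this translates almost line-for-line to Spark.
object WordCount {
  def count(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split("\\s+"))      // split each line into words
      .filter(_.nonEmpty)            // drop empty tokens
      .groupBy(identity)             // group identical words together
      .map { case (w, ws) => (w, ws.size) } // word -> occurrence count

  def main(args: Array[String]): Unit = {
    val result = count(Seq("spark and hadoop", "spark"))
    println(result("spark")) // prints 2
  }
}
```

The equivalent Java (pre-Java-8 especially) requires considerably more
boilerplate, which is why many on this list find Scala worth learning
for Spark.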
On Sat, Nov 22, 2014 at 08:12 Denny Lee <denny.g....@gmail.com> wrote:

> There are various scenarios where traditional Hadoop makes more sense than
> Spark. For example, if you have a long running processing job in which you
> do not want to utilize too many resources of the cluster. Another example
> could be that you want to run a distributed extraction job against multiple
> data sources via Hadoop streaming.
> On Sat, Nov 22, 2014 at 07:36 Guillermo Ortiz <konstt2...@gmail.com>
> wrote:
>
>> Hello,
>>
>> I'm a newbie with Spark but I've been working with Hadoop for a while.
>> I have two questions.
>>
>> Is there any case where MR is better than Spark? I don't know in which
>> cases I should use Spark rather than MR. When is MR faster than Spark?
>>
>> The other question is: I know Java, so is it worth learning Scala for
>> programming Spark, or is Java enough? I have written a small piece of
>> code in Java because I feel more confident with it, but it seems that
>> I'm missing something.
>>
