AMPLab Big Data benchmark, but confirmation from the hands-on community is
invaluable, thank you.

I understand a lot of it simply has to do with what-do-you-value-more
weightings, and we'll do prototypes/benchmarks if we have to, just wasn't
sure if there were any other "key assumptions/requirements/gotchas" to
consider.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-v-Redshift-tp18112.html
>> if anyone who has used both in 2014 would care to provide commentary
>> about the sweet spot use cases / gotchas for non trivial use (eg a simple
>> filter scan isn't really interesting). Soft issues like operational
>> maintenance and time spent developing v out of the box are interesting
>> too...
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-v-Redshift-tp18112.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.