I agree with that. My anecdotal impression is that Hadoop 1.x usage
out there is maybe a couple of percent, so we should shift toward
2.x, at least as the default.
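
For reference, the change itself should be tiny. A rough sketch,
assuming spark_ec2.py defines the flag via optparse roughly like this
(the option name and help text here are illustrative, not copied
verbatim from the file at that commit):

    from optparse import OptionParser

    parser = OptionParser(
        usage="spark-ec2 [options] <action> <cluster_name>")

    # Proposed change: bump the default Hadoop major version from "1"
    # to "2" so new clusters launch with Hadoop 2.x unless the user
    # explicitly overrides it.
    parser.add_option(
        "--hadoop-major-version", default="2",  # was default="1"
        help="Major version of Hadoop (default: %default)")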

On Sun, Mar 1, 2015 at 10:59 PM, Nicholas Chammas
<nicholas.cham...@gmail.com> wrote:
> https://github.com/apache/spark/blob/fd8d283eeb98e310b1e85ef8c3a8af9e547ab5e0/ec2/spark_ec2.py#L162-L164
>
> Is there any reason we shouldn't update the default Hadoop major version in
> spark-ec2 to 2?
>
> Nick
