How does the idea of dropping support for Hadoop 1.x in Spark 1.5
strike everyone? Really, I mean Hadoop < 2.2, since 2.2 seems to me
more consistent with the modern 2.x line than 2.1 or 2.0.

The argument against is simply that someone out there might still be
using these versions.

The argument for is simplification -- fewer gotchas in trying to keep
supporting older Hadoop, of which we've seen several lately. We get to
chop out a bit of shim code (a sketch of the kind of thing I mean is
below) and update to use non-deprecated APIs. Along with removing
support for Java 6, it might be a reasonable time to draw a line under
older Hadoop as well.
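For concreteness, here's roughly the sort of reflective shim that only
exists because, for example, TaskAttemptContext is a class in Hadoop 1.x
but an interface in 2.x. This is an illustrative sketch, not the exact
code in the tree; with a Hadoop 2.2+ floor, code like this could just
construct TaskAttemptContextImpl directly.

  import org.apache.hadoop.conf.Configuration
  import org.apache.hadoop.mapreduce.{TaskAttemptContext, TaskAttemptID}

  def newTaskAttemptContext(
      conf: Configuration,
      id: TaskAttemptID): TaskAttemptContext = {
    // Hadoop 2.x: concrete impl class; Hadoop 1.x: TaskAttemptContext
    // itself is the concrete class. Resolve whichever is on the classpath.
    val klass =
      try {
        Class.forName("org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl")
      } catch {
        case _: ClassNotFoundException =>
          Class.forName("org.apache.hadoop.mapreduce.TaskAttemptContext")
      }
    val ctor = klass.getConstructor(classOf[Configuration], classOf[TaskAttemptID])
    ctor.newInstance(conf, id).asInstanceOf[TaskAttemptContext]
  }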

I'm just gauging feeling now: for, against, indifferent?
I favor it, but would not push hard on it if there are objections.
