So from my point of view, I'd do it maybe 1-2 years after all the major Hadoop
vendors have stopped supporting Java 6. We're not there yet, but we will be
soon. The reason is that the cost of staying on Java 6 is much smaller to us
(as developers) than the cost of fragmenting the Spark community.
A concrete plan and a definite version upon which the upgrade would be
applied sound like they would benefit the community. If you plan far enough
out (as Hadoop has done) and give the community enough notice, I can't
see it being a problem, as they would have ample time to upgrade.
On Sat, Oct
Hadoop, for better or worse, depends on an ancient version of Jetty
(6), which even lives under a different package. So Spark (or anyone trying
to use a newer Jetty) is lucky on that front...
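
(To make the package split concrete, here's a quick Scala sketch, purely
illustrative; the renamed aliases are mine. Both generations can sit on one
classpath precisely because their package roots differ:

    // Jetty 6, pulled in via Hadoop, lives under org.mortbay:
    import org.mortbay.jetty.{Server => Jetty6Server}
    // Jetty 7 and later moved to org.eclipse, so the classes never collide:
    import org.eclipse.jetty.server.{Server => NewJettyServer}
)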
IIRC Hadoop is planning to move to Java 7-only starting with 2.7. Java
7 is also supposed to be EOL some time next year.
I'd also wait a bit until these are gone. Jetty is, by the way, unfortunately
a much hairier topic, because the Hadoop libraries also depend on Jetty. I think
it will be hard to update. However, a patch that shades Jetty might be nice to
have, if that doesn't require shading a lot of other stuff.
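
For what it's worth, a rough sketch of what such a shading patch could look
like using sbt-assembly's ShadeRule (this assumes a plugin version with
shading support, and the org.sparkproject.jetty target package is just a
placeholder I made up):

    // build.sbt (sketch): relocate the Jetty classes Spark uses so they
    // can't clash with the Jetty 6 classes that come in through Hadoop.
    assemblyShadeRules in assembly := Seq(
      ShadeRule.rename("org.eclipse.jetty.**" -> "org.sparkproject.jetty.@1").inAll
    )

The same relocation could be done with the maven-shade-plugin; the open
question is how much of the transitive dependency tree would have to be
rewritten along with it.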
My experience is that there are still a lot of Java 6 clusters out there.
Also, distros that bundle Spark still support Java 6.
On Oct 17, 2014 8:01 PM, "Andrew Ash" wrote:
> Hi Spark devs,
>
> I've heard a few times that keeping support for Java 6 is a priority for
> Apache Spark. Given that Java