Re: Moving to JDK7, JDK8 and new major releases

2014-06-28 Thread Chris Nauroth
Following up on ecosystem, I just took a look at the Apache trunk pom.xml
files for HBase, Flume and Oozie.  All are specifying 1.6 for source and
target in the maven-compiler-plugin configuration, so there may be
additional follow-up required here.  (For example, if HBase has made a
statement that its client will continue to support JDK6, then it wouldn't
be practical for them to link to a JDK7 version of hadoop-common.)
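
For reference, the pin Chris describes looks like this in each downstream pom.xml (a representative maven-compiler-plugin fragment, not copied from any one project; exact plugin versions vary):

```xml
<!-- Representative fragment: HBase, Flume and Oozie currently pin
     bytecode to Java 6 along these lines in their pom.xml files. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <source>1.6</source>
    <target>1.6</target>
  </configuration>
</plugin>
```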

+1 for the whole plan though.  We can work through these details.

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Fri, Jun 27, 2014 at 3:10 PM, Karthik Kambatla 
wrote:

> +1 to making 2.6 the last JDK6 release.
>
> If we want, 2.7 could be a parallel release or one soon after 2.6. We could
> upgrade other dependencies that require JDK7 as well.
>
>
> On Fri, Jun 27, 2014 at 3:01 PM, Arun C. Murthy 
> wrote:
>
> > Thanks everyone for the discussion. Looks like we have come to a
> pragmatic
> > and progressive conclusion.
> >
> > In terms of execution of the consensus plan, I think a little bit of
> > caution is in order.
> >
> > Let's give downstream projects more of a runway.
> >
> > I propose we inform HBase, Pig, Hive etc. that we are considering making
> > 2.6 (not 2.5) the last JDK6 release and solicit their feedback. Once they
> > are comfortable we can pull the trigger in 2.7.
> >
> > thanks,
> > Arun
> >
> >
> > > On Jun 27, 2014, at 11:34 AM, Karthik Kambatla 
> > wrote:
> > >
> > > As someone else already mentioned, we should announce one future
> release
> > > (may be, 2.5) as the last JDK6-based release before making the move to
> > JDK7.
> > >
> > > I am comfortable calling 2.5 the last JDK6 release.
> > >
> > >
> > > On Fri, Jun 27, 2014 at 11:26 AM, Andrew Wang <
> andrew.w...@cloudera.com>
> > > wrote:
> > >
> > >> Hi all, responding to multiple messages here,
> > >>
> > >> Arun, thanks for the clarification regarding MR classpaths. It sounds
> > like
> > >> the story there is improved and still improving.
> > >>
> > >> However, I think we still suffer from this at least on the HDFS side.
> We
> > >> have a single JAR for all of HDFS, and our clients need to have all
> the
> > fun
> > >> deps like Guava on the classpath. I'm told Spark sticks a newer Guava
> at
> > >> the front of the classpath and the HDFS client still works okay, but
> > this
> > >> is more happy coincidence than anything else. While we're leaking
> deps,
> > >> we're in a scary situation.
> > >>
> > >> API compat to me means that an app should be able to run on a new
> minor
> > >> version of Hadoop and not have anything break. MAPREDUCE-4421 sounds
> > like
> > >> it allows you to run e.g. 2.3 MR jobs on a 2.4 YARN cluster, but what
> > >> should also be possible is running an HDFS 2.3 app with HDFS 2.4 JARs
> > and
> > >> have nothing break. If we muck with the classpath, my understanding is
> > that
> > >> this could break.
> > >>
> > >> Owen, bumping the minimum JDK version in a minor release like this
> > should
> > >> be a one-time exception as Tucu stated. A number of people have
> pointed
> > out
> > >> how painful a forced JDK upgrade is for end users, and it's not
> > something
> > >> we should be springing on them in a minor release unless we're *very*
> > >> confident like in this case.
> > >>
> > >> Chris, thanks for bringing up the ecosystem. For CDH5, we standardized
> > on
> > >> JDK7 across the CDH stack, so I think that's an indication that most
> > >> ecosystem projects are ready to make the jump. Is that sufficient in
> > your
> > >> mind?
> > >>
> > >> For the record, I'm also +1 on the Tucu plan. Is it too late to do
> this
> > for
> > >> 2.5? I'll offer to help out with some of the mechanics.
> > >>
> > >> Thanks,
> > >> Andrew
> > >>
> > >> On Wed, Jun 25, 2014 at 4:18 PM, Chris Nauroth <
> > cnaur...@hortonworks.com>
> > >> wrote:
> > >>
> > >>> I understood the plan for avoiding JDK7-specific features in our
> code,
> > >> and
> > >>> your suggestion to add an extra Jenkins job is a great way to guard
> > >> against
> > >>> that.  The thing I haven't seen discussed yet is how downstream
> > projects
> > >>> will continue to consume our built artifacts.  If a downstream
> project
> > >>> upgrades to pick up a bug fix, and the jar switches to 1.7 class
> files,
> > >> but
> > >>> their project is still building with 1.6, then it would be a nasty
> > >>> surprise.
> > >>>
> > >>> These are the options I see:
> > >>>
> > >>> 1. Make sure all other projects upgrade first.  This doesn't sound
> > >>> feasible, unless all other ecosystem projects have moved to JDK7
> > already.
> > >>> If not, then waiting on a single long pole project would hold up our
> > >>> migration indefinitely.
> > >>>
> > >>> 2. We switch to JDK7, but run javac with -target 1.6 until the whole
> > >>> ecosystem upgrades.  I find this undesirable, because in a certain
> > sense,
> > >>> it still leaves a bit of 1.6 lingering in the project.  (I'll assume
> > that
> > >>> end-of-life for JDK6 also mean
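
The "jar switches to 1.7 class files" surprise described above is mechanical: javac stamps each class file's header with a major version (50 for JDK6, 51 for JDK7), and an older JVM refuses to load anything newer, failing with UnsupportedClassVersionError. A minimal sketch (not Hadoop code) of reading that version from the header:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: the class-file header holds the bytecode level -- magic number
// 0xCAFEBABE, then a two-byte minor and two-byte major version. A JDK6 JVM
// rejects major versions above 50 with UnsupportedClassVersionError.
public class ClassVersionCheck {

    /** Reads the class-file major version: 50 = JDK6, 51 = JDK7. */
    static int majorVersion(InputStream in) {
        try (DataInputStream data = new DataInputStream(in)) {
            if (data.readInt() != 0xCAFEBABE) {
                throw new IllegalArgumentException("not a class file");
            }
            data.readUnsignedShort();        // minor version
            return data.readUnsignedShort(); // major version
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // A JDK7-compiled class starts with this header (major version 51).
        byte[] jdk7Header = {(byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE, 0, 0, 0, 51};
        System.out.println("major version: " + majorVersion(new ByteArrayInputStream(jdk7Header)));
    }
}
```

The same check against a real jar entry tells you whether option 2's `-target 1.6` is actually being honored in a published artifact.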

Re: Moving to JDK7, JDK8 and new major releases

2014-06-28 Thread Steve Loughran
Guava is a separate problem, and I think we should have a separate
discussion: "what can we do about Guava?" That's more traumatic than a JDK
update, I fear, as Guava releases care a lot less about compatibility.
I don't worry about a JDK update removing a class like StringBuffer just
because StringBuilder is better.


On 27 June 2014 19:26, Andrew Wang  wrote:

> Hi all, responding to multiple messages here,
>
> Arun, thanks for the clarification regarding MR classpaths. It sounds like
> the story there is improved and still improving.
>
> However, I think we still suffer from this at least on the HDFS side. We
> have a single JAR for all of HDFS, and our clients need to have all the fun
> deps like Guava on the classpath. I'm told Spark sticks a newer Guava at
> the front of the classpath and the HDFS client still works okay, but this
> is more happy coincidence than anything else. While we're leaking deps,
> we're in a scary situation.
>

very good point.


>
> API compat to me means that an app should be able to run on a new minor
> version of Hadoop and not have anything break. MAPREDUCE-4421 sounds like
> it allows you to run e.g. 2.3 MR jobs on a 2.4 YARN cluster, but what
> should also be possible is running an HDFS 2.3 app with HDFS 2.4 JARs and
> have nothing break. If we muck with the classpath, my understanding is that
> this could break.
>
>
I think this is possible by having the app upload all the JARs... I need to
experiment here myself.

>
>
> Chris, thanks for bringing up the ecosystem. For CDH5, we standardized on
> JDK7 across the CDH stack, so I think that's an indication that most
> ecosystem projects are ready to make the jump. Is that sufficient in your
> mind?
>
>
+1 - we've had no complaints about things not working on Java 7; it's been
out a long time. If you look at our own code, the main things that broke
were tests (due to JUnit test-case ordering) and not much else.
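
For context on that test breakage: JDK7 stopped returning `Class#getDeclaredMethods()` results in any predictable order (the order was never specified), so JUnit tests that silently relied on declaration order began running in a different sequence. A small illustration, with a hypothetical `Example` class rather than real Hadoop test code:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// JDK7 changed the (unspecified) order of Class#getDeclaredMethods(), which
// is what broke order-dependent JUnit tests. Sorting names explicitly gives
// a deterministic order regardless of JDK; better still, tests should not
// depend on execution order at all.
public class MethodOrder {

    static class Example {
        void testCreate() {}
        void testUpdate() {}
        void testDelete() {}
    }

    /** Deterministic, JDK-independent listing of a class's declared methods. */
    static List<String> sortedMethodNames(Class<?> clazz) {
        List<String> names = new ArrayList<>();
        for (Method m : clazz.getDeclaredMethods()) {
            names.add(m.getName());
        }
        Collections.sort(names);
        return names;
    }

    public static void main(String[] args) {
        System.out.println(sortedMethodNames(Example.class));
    }
}
```

JUnit 4.11 later added deterministic method ordering for the same reason.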



> For the record, I'm also +1 on the Tucu plan. Is it too late to do this for
> 2.5? I'll offer to help out with some of the mechanics.
>
>
