Thanks John and Brahma for reporting issues with CHANGES.txt. IMO, we can move ahead with the current RC0 and address these issues in a commit after the release, but I'm not against cutting another RC to fix CHANGES.txt. What do you think?
Thanks,
Sangjin

On Fri, Sep 30, 2016 at 9:25 PM, Brahma Reddy Battula <
brahmareddy.batt...@hotmail.com> wrote:

> Thanks Sangjin!
>
> +1 (non-binding)
>
> --Downloaded the source and compiled
> --Installed an HA cluster
> --Verified basic fsshell commands
> --Did regression testing on the issues handled by me
> --Ran pi, terasort, and Slive jobs; all work fine.
>
> Happy to see HDFS-9530.
>
> Read through the commit log; the following commits seem to be missing from
> CHANGES.txt:
>
> HDFS-10653, HADOOP-13290, HDFS-10544, HADOOP-13255, HADOOP-13189
>
> --Brahma Reddy Battula
>
> ------------------------------
> *From:* sjl...@gmail.com <sjl...@gmail.com> on behalf of Sangjin Lee <
> sj...@apache.org>
> *Sent:* Wednesday, September 28, 2016 1:58 AM
> *To:* common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> *Subject:* [VOTE] Release Apache Hadoop 2.6.5 (RC0)
>
> Hi folks,
>
> I have created a release candidate RC0 for the Apache Hadoop 2.6.5 release
> (the next maintenance release in the 2.6.x release line). Below are the
> details of this release candidate:
>
> The RC is available for validation at:
> http://home.apache.org/~sjlee/hadoop-2.6.5-RC0/.
>
> The RC tag in git is release-2.6.5-RC0 and its git commit is
> 6939fc935fba5651fdb33386d88aeb8e875cf27a.
>
> The maven artifacts are staged via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1048/.
>
> You can find my public key at
> http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS.
>
> Please try the release and vote. The vote will run for the usual 5 days.
> Huge thanks to Chris Trezzo for spearheading the release management and
> doing all the work!
>
> Thanks,
> Sangjin
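
For anyone validating the RC before voting, a minimal verification sketch is shown below, assuming the release tarball, its detached .asc signature, and the KEYS file have already been downloaded from the URLs in the vote email; the local file names and the checksum step are assumptions for illustration, not values published with this RC.

    # Minimal sketch of local RC verification before voting.
    # Assumptions (hypothetical, not taken from the thread): the tarball, its
    # detached .asc signature, and the KEYS file are in the current directory,
    # downloaded from the URLs given in the vote email.
    import hashlib
    import subprocess

    TARBALL = "hadoop-2.6.5.tar.gz"      # hypothetical local file name
    SIGNATURE = TARBALL + ".asc"         # detached GPG signature for the tarball

    def md5sum(path, chunk_size=1 << 20):
        """Compute the MD5 digest of a file without loading it all into memory."""
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # 1. Import the release manager's public key from the Hadoop KEYS file.
    subprocess.run(["gpg", "--import", "KEYS"], check=True)

    # 2. Verify the detached signature against the tarball.
    subprocess.run(["gpg", "--verify", SIGNATURE, TARBALL], check=True)

    # 3. Print the checksum so it can be compared against the digest
    #    published alongside the RC artifacts, if one is provided.
    print("md5:", md5sum(TARBALL), "(compare with the published checksum)")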