[jira] [Resolved] (HDFS-12821) Block invalid IOException causes the DFSClient domain socket being disabled

2017-11-16 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HDFS-12821.
---
Resolution: Duplicate

> Block invalid IOException causes the DFSClient domain socket being disabled
> ---
>
> Key: HDFS-12821
> URL: https://issues.apache.org/jira/browse/HDFS-12821
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.4.0, 2.6.0
>Reporter: Gang Xie
>
> We use HDFS 2.4 and 2.6, and recently hit an issue where the DFSClient domain
> socket is disabled when the datanode throws a "block invalid" exception.
> The block is invalidated for some reason on the datanode, which is fine. Then
> the DFSClient tries to access this block on that datanode via the domain
> socket. This triggers an IOException. On the DFSClient side, when it gets an
> IOException with error code 'ERROR', it disables the domain socket and falls
> back to TCP. Worst of all, it seems to never recover the socket.
> I think this is a defect: with such a "block invalid" exception, we should 
> not disable the domain socket, because there is nothing wrong with the domain 
> socket service itself.
> Any thoughts?
> The code:
> {code}
> private ShortCircuitReplicaInfo requestFileDescriptors(DomainPeer peer,
> Slot slot) throws IOException {
>   ShortCircuitCache cache = clientContext.getShortCircuitCache();
>   final DataOutputStream out =
>   new DataOutputStream(new BufferedOutputStream(peer.getOutputStream()));
>   SlotId slotId = slot == null ? null : slot.getSlotId();
>   new Sender(out).requestShortCircuitFds(block, token, slotId, 1);
>   DataInputStream in = new DataInputStream(peer.getInputStream());
>   BlockOpResponseProto resp = BlockOpResponseProto.parseFrom(
>   PBHelper.vintPrefixed(in));
>   DomainSocket sock = peer.getDomainSocket();
>   switch (resp.getStatus()) {
>   case SUCCESS:
> byte buf[] = new byte[1];
> FileInputStream fis[] = new FileInputStream[2];
> sock.recvFileInputStreams(fis, buf, 0, buf.length);
> ShortCircuitReplica replica = null;
> try {
>   ExtendedBlockId key =
>   new ExtendedBlockId(block.getBlockId(), block.getBlockPoolId());
>   replica = new ShortCircuitReplica(key, fis[0], fis[1], cache,
>   Time.monotonicNow(), slot);
> } catch (IOException e) {
>   // This indicates an error reading from disk, or a format error.  Since
>   // it's not a socket communication problem, we return null rather than
>   // throwing an exception.
>   LOG.warn(this + ": error creating ShortCircuitReplica.", e);
>   return null;
> } finally {
>   if (replica == null) {
> IOUtils.cleanup(DFSClient.LOG, fis[0], fis[1]);
>   }
> }
> return new ShortCircuitReplicaInfo(replica);
>   case ERROR_UNSUPPORTED:
> if (!resp.hasShortCircuitAccessVersion()) {
>   LOG.warn("short-circuit read access is disabled for " +
>   "DataNode " + datanode + ".  reason: " + resp.getMessage());
>   clientContext.getDomainSocketFactory()
>   .disableShortCircuitForPath(pathInfo.getPath());
> } else {
>   LOG.warn("short-circuit read access for the file " +
>   fileName + " is disabled for DataNode " + datanode +
>   ".  reason: " + resp.getMessage());
> }
> return null;
>   case ERROR_ACCESS_TOKEN:
> String msg = "access control error while " +
> "attempting to set up short-circuit access to " +
> fileName + resp.getMessage();
> if (LOG.isDebugEnabled()) {
>   LOG.debug(this + ":" + msg);
> }
> return new ShortCircuitReplicaInfo(new InvalidToken(msg));
>   default:
> LOG.warn(this + ": unknown response code " + resp.getStatus() +
> " while attempting to set up short-circuit access. " +
> resp.getMessage());
> clientContext.getDomainSocketFactory()
> .disableShortCircuitForPath(pathInfo.getPath());
> // <<= the domain socket path gets disabled here even for block-level errors
> return null;
>   }
> {code}
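A minimal sketch of the behavior being proposed (this is NOT the actual DFSClient code; `shouldDisableDomainSocket` and the message matching are invented for illustration): classify the ERROR response before deciding to disable the domain-socket path, so that block-scoped failures leave short-circuit reads enabled.

```java
// Hypothetical sketch only -- not the real DFSClient logic. It illustrates
// classifying an ERROR response before disabling the domain socket.
public class DomainSocketErrorSketch {
  /**
   * Returns true only when the error implicates the domain socket transport
   * itself. Block-level errors (e.g. an invalidated block) say nothing about
   * the socket, so short-circuit reads should stay enabled for the path.
   */
  static boolean shouldDisableDomainSocket(String errorMessage) {
    if (errorMessage == null) {
      return true; // no detail: stay conservative, like the current code
    }
    String msg = errorMessage.toLowerCase();
    // Block-scoped failures leave the domain socket service healthy.
    return !(msg.contains("invalid") || msg.contains("replica not found"));
  }

  public static void main(String[] args) {
    // An invalidated block should not poison the socket path...
    System.out.println(shouldDisableDomainSocket("Block blk_123 is invalid"));
    // ...while a genuine transport failure still should.
    System.out.println(shouldDisableDomainSocket("connection reset by peer"));
  }
}
```

In the `default:` branch above, such a check would sit before the call to `disableShortCircuitForPath`, falling back to TCP for this one read without marking the whole path bad.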



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12824) ViewFileSystem should support EC.

2017-11-16 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-12824:


 Summary: ViewFileSystem should support EC.
 Key: HDFS-12824
 URL: https://issues.apache.org/jira/browse/HDFS-12824
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding, fs
Affects Versions: 3.0.0-alpha1
Reporter: Harshakiran Reddy


Currently, ViewFileSystem does not support EC; it throws an 
IllegalArgumentException.






[jira] [Created] (HDFS-12825) After Block Corrupted, FSCK Report printing the Direct configuration.

2017-11-16 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-12825:


 Summary: After Block Corrupted, FSCK Report printing the Direct 
configuration.  
 Key: HDFS-12825
 URL: https://issues.apache.org/jira/browse/HDFS-12825
 Project: Hadoop HDFS
  Issue Type: Wish
  Components: hdfs
Affects Versions: 3.0.0-alpha1
Reporter: Harshakiran Reddy
Priority: Minor


Scenario:
1. Corrupt the block on any datanode.
2. Take the *fsck* report for that file.

Actual Output:
==
The fsck report prints the raw configuration key:

{{dfs.namenode.replication.min}}

Expected Output:

It should print {{MINIMAL BLOCK REPLICATION}}.






[jira] [Created] (HDFS-12826) Document Saying the RPC port, But it's required IPC port in Balancer Document.

2017-11-16 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-12826:


 Summary: Document Saying the RPC port, But it's required IPC port 
in Balancer Document.
 Key: HDFS-12826
 URL: https://issues.apache.org/jira/browse/HDFS-12826
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover, documentation
Affects Versions: 3.0.0-beta1
Reporter: Harshakiran Reddy
Priority: Minor


In {{Adding a new Namenode to an existing HDFS cluster}}, the refreshNamenodes 
command requires the IPC port, but the documentation says the RPC port.

http://hadoop.apache.org/docs/r3.0.0-beta1/hadoop-project-dist/hadoop-hdfs/Federation.html#Balancer

{noformat} 
bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
-refreshNamenodes host-name:65110
refreshNamenodes: Unknown protocol: 
org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
bin.:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
-refreshNamenodes
Usage: hdfs dfsadmin [-refreshNamenodes datanode-host:ipc_port]
bin>:~/hdfsdata/HA/install/hadoop/datanode/bin> ./hdfs dfsadmin 
-refreshNamenodes host-name:50077
bin>:~/hdfsdata/HA/install/hadoop/datanode/bin>
{noformat} 






[jira] [Created] (HDFS-12827) Update the description about Replica Placement: The First Baby Steps in HDFS Architecture documentation

2017-11-16 Thread Suri babu Nuthalapati (JIRA)
Suri babu Nuthalapati created HDFS-12827:


 Summary: Update the description about Replica Placement: The First 
Baby Steps in HDFS Architecture documentation
 Key: HDFS-12827
 URL: https://issues.apache.org/jira/browse/HDFS-12827
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Suri babu Nuthalapati
Priority: Minor


The placement should be this: 
https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html

HDFS’s placement policy is to put one replica on one node in the local rack, 
another on a node in a different (remote) rack, and the last on a different 
node in the same remote rack.

The Hadoop Definitive Guide says the same, and I have tested and observed the 
same behavior as above.


But in the documentation for versions after r2.5.2, it is described as:
http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html

HDFS’s placement policy is to put one replica on one node in the local rack, 
another on a different node in the local rack, and the last on a different node 
in a different rack. 
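The r1.2.1 wording can be expressed as code. This is an illustrative sketch only, not Hadoop's actual BlockPlacementPolicyDefault; the `Node` class and `chooseTargets` are invented names: first replica on the writer's node, second on a node in a different (remote) rack, third on a different node in that same remote rack.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the r1.2.1 placement description (invented names, not the
// real Hadoop placement policy implementation).
public class PlacementSketch {
  static class Node {
    final String name, rack;
    Node(String name, String rack) { this.name = name; this.rack = rack; }
    public String toString() { return name + "@" + rack; }
  }

  static List<Node> chooseTargets(Node writer, List<Node> cluster) {
    List<Node> targets = new ArrayList<>();
    targets.add(writer); // replica 1: the local (writer's) node
    Node second = null, third = null;
    for (Node n : cluster) { // replica 2: first node on a different rack
      if (!n.rack.equals(writer.rack)) { second = n; break; }
    }
    targets.add(second);
    for (Node n : cluster) { // replica 3: another node on that same remote rack
      if (n != second && n.rack.equals(second.rack)) { third = n; break; }
    }
    targets.add(third);
    return targets;
  }

  public static void main(String[] args) {
    Node writer = new Node("dn1", "rack1");
    List<Node> cluster = List.of(writer,
        new Node("dn2", "rack1"), new Node("dn3", "rack2"),
        new Node("dn4", "rack2"), new Node("dn5", "rack3"));
    // Yields one replica on the local rack and two on the same remote rack.
    System.out.println(chooseTargets(writer, cluster));
  }
}
```

The r2.5.2 wording differs only in where the second replica goes (a different node in the *local* rack), which is exactly the discrepancy this JIRA raises.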







Re: [VOTE] Release Apache Hadoop 2.9.0 (RC3)

2017-11-16 Thread Wangda Tan
+1 (Binding).

Built from source, deployed pseudo cluster, ran sample jobs.

Thanks,
Wangda

On Wed, Nov 15, 2017 at 8:34 PM, Brahma Reddy Battula 
wrote:

> +1 ( non-binding)
>
> -Built from the source
> -Installed 3 Node HA cluster and a pseudo cluster
> -Verified through hdfs shell commands
> -Verified HDFS router federation
> -Ran sample jobs like pi,Slive
>
>
>
> --Brahma Reddy
>
>
> On Tue, Nov 14, 2017 at 5:40 AM, Arun Suresh  wrote:
>
> > Hi Folks,
> >
> > Apache Hadoop 2.9.0 is the first release of Hadoop 2.9 line and will be
> the
> > starting release for Apache Hadoop 2.9.x line - it includes 30 New
> Features
> > with 500+ subtasks, 407 Improvements, 790 Bug fixes new fixed issues
> since
> > 2.8.2.
> >
> > More information about the 2.9.0 release plan can be found here:
> > https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-Version2.9
> >
> > New RC is available at: https://home.apache.org/~asuresh/hadoop-2.9.0-RC3/
> >
> > The RC tag in git is: release-2.9.0-RC3, and the latest commit id is:
> > 756ebc8394e473ac25feac05fa493f6d612e6c50.
> >
> > The maven artifacts are available via repository.apache.org at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1068/
> >
> > We are carrying over the votes from the previous RC given that the delta
> is
> > the license fix.
> >
> > Given the above - we are also going to stick with the original deadline
> for
> > the vote : ending on Friday 17th November 2017 2pm PT time.
> >
> > Thanks,
> > -Arun/Subru
> >
>
>
>
> --
>
>
>
> --Brahma Reddy Battula
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-11-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/594/

[Nov 15, 2017 8:32:53 AM] (sunilg) YARN-7464. Introduce filters in Nodes page 
of new YARN UI. Contributed
[Nov 15, 2017 6:03:29 PM] (templedf) HADOOP-14876. Create downstream developer 
docs from the compatibility
[Nov 15, 2017 6:03:29 PM] (templedf) YARN-6953. Clean up
[Nov 15, 2017 6:03:29 PM] (templedf) YARN-7414. FairScheduler#getAppWeight() 
should be moved into
[Nov 15, 2017 6:32:02 PM] (jlowe) YARN-7361. Improve the docker container 
runtime documentation.
[Nov 16, 2017 12:44:06 AM] (xiao) HADOOP-15023. ValueQueue should also validate 
(int) (lowWatermark *
[Nov 16, 2017 3:20:37 AM] (cdouglas) Revert "HDFS-12681. Fold 
HdfsLocatedFileStatus into HdfsFileStatus."
[Nov 16, 2017 8:14:21 AM] (sunilg) YARN-7492. Set up SASS for new YARN UI 
styling. Contributed by Vasudevan
[Nov 16, 2017 8:19:53 AM] (wwei) HDFS-12814. Add blockId when warning slow 
mirror/disk in BlockReceiver.
[Nov 16, 2017 3:58:06 PM] (billie) YARN-7486. Race condition in service AM that 
can cause NPE. Contributed


Re: [VOTE] Release Apache Hadoop 2.9.0 (RC3)

2017-11-16 Thread Junping Du
I did the following verification work:

- build succeeded from source
- verified signature
- deployed a pseudo cluster and ran some simple MR jobs
- checked the HDFS/YARN daemons' UI
- verified that the commit history matches JIRA's fix versions (which will be 
included in the site documentation).

Most things work fine, except that I found ~200 JIRAs claimed to be included in 
2.9.0 that do not actually show up in the commit log, for the following cases:
a. commits are missing in 2.9.0
b. the JIRA is marked as resolved but not fixed, so it shouldn't have a fix version
c. the JIRA is an umbrella which doesn't include particular commits
d. the JIRA commit message is lacking the JIRA number or has the wrong number
e. the JIRA commit is included in a whole commit due to branch merge work, like 
ATS v2.

While c, d, and e are something we have to live with, I hope we can improve a. 
and b. next time.


Assuming the documentation issue is not a release blocker, I give my binding +1.


Thanks,

Junping


From: Arun Suresh 
Sent: Monday, November 13, 2017 4:10 PM
To: yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; Hadoop Common; 
Hdfs-dev
Cc: Subramaniam Krishnan
Subject: [VOTE] Release Apache Hadoop 2.9.0 (RC3)

Hi Folks,

Apache Hadoop 2.9.0 is the first release of the Hadoop 2.9 line and will be the
starting release for the Apache Hadoop 2.9.x line - it includes 30 New Features
with 500+ subtasks, 407 Improvements, and 790 Bug fixes among issues fixed
since 2.8.2.

More information about the 2.9.0 release plan can be found here:
https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-Version2.9

New RC is available at: https://home.apache.org/~asuresh/hadoop-2.9.0-RC3/

The RC tag in git is: release-2.9.0-RC3, and the latest commit id is:
756ebc8394e473ac25feac05fa493f6d612e6c50.

The maven artifacts are available via repository.apache.org at:
https://repository.apache.org/content/repositories/orgapachehadoop-1068/

We are carrying over the votes from the previous RC given that the delta is
the license fix.

Given the above - we are also going to stick with the original deadline for
the vote : ending on Friday 17th November 2017 2pm PT time.

Thanks,
-Arun/Subru





Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-16 Thread John Zhuge
+1 (binding)

   - Verified checksums of all tarballs
   - Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
   - Passed all S3A and ADL integration tests
   - Deployed both binary and built source to a pseudo cluster, passed the
   following sanity tests in insecure, SSL, and SSL+Kerberos mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount (skipped in SSL+Kerberos mode)
  - KMS and HttpFS basic
  - Balancer start/stop


On Tue, Nov 14, 2017 at 1:34 PM, Andrew Wang 
wrote:

> Hi folks,
>
> Thanks as always to the many, many contributors who helped with this
> release. I've created RC0 for Apache Hadoop 3.0.0. The artifacts are
> available here:
>
> http://people.apache.org/~wang/3.0.0-RC0/
>
> This vote will run 5 days, ending on Nov 19th at 1:30pm Pacific.
>
> 3.0.0 GA contains 291 fixed JIRA issues since 3.0.0-beta1. Notable
> additions include the merge of YARN resource types, API-based configuration
> of the CapacityScheduler, and HDFS router-based federation.
>
> I've done my traditional testing with a pseudo cluster and a Pi job. My +1
> to start.
>
> Best,
> Andrew
>



-- 
John


[jira] [Created] (HDFS-12828) OIV ReverseXML Processor Fails With Escaped Characters

2017-11-16 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-12828:
--

 Summary: OIV ReverseXML Processor Fails With Escaped Characters
 Key: HDFS-12828
 URL: https://issues.apache.org/jira/browse/HDFS-12828
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.8.0
Reporter: Erik Krogen


The HDFS OIV ReverseXML processor fails if the XML file contains escaped 
characters:
{code}
ekrogen at ekrogen-ld1 in 
~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk!
± $HADOOP_HOME/bin/hdfs dfs -fs hdfs://localhost:9000/ -ls /
Found 4 items
drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:48 /foo
drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:49 /foo"
drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:50 /foo`
drwxr-xr-x   - ekrogen supergroup  0 2017-11-16 14:49 /foo&
{code}
Then after doing {{saveNamespace}} on that NameNode...
{code}
ekrogen at ekrogen-ld1 in 
~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk!
± $HADOOP_HOME/bin/hdfs oiv -i 
/tmp/hadoop-ekrogen/dfs/name/current/fsimage_008 -o 
/tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -p XML
ekrogen at ekrogen-ld1 in 
~/dev/hadoop/trunk/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT on trunk!
± $HADOOP_HOME/bin/hdfs oiv -i 
/tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml -o 
/tmp/hadoop-ekrogen/dfs/name/current/fsimage_008.xml.rev -p 
ReverseXML
OfflineImageReconstructor failed: unterminated entity ref starting with &
org.apache.hadoop.hdfs.util.XMLUtils$UnmanglingError: unterminated entity ref 
starting with &
at 
org.apache.hadoop.hdfs.util.XMLUtils.unmangleXmlString(XMLUtils.java:232)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:383)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildrenHelper(OfflineImageReconstructor.java:379)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.loadNodeChildren(OfflineImageReconstructor.java:418)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.access$1000(OfflineImageReconstructor.java:95)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor$INodeSectionProcessor.process(OfflineImageReconstructor.java:524)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.processXml(OfflineImageReconstructor.java:1710)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageReconstructor.run(OfflineImageReconstructor.java:1765)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:191)
at 
org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:134)
{code}
See attachments for relevant fsimage XML file.
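The failure mode can be sketched in a few lines (hypothetical helper names, not the actual `XMLUtils` API): in XML, every `&` must begin an entity reference terminated by `;`, so a file name like `/foo&` written without escaping produces exactly the "unterminated entity ref starting with &" error above.

```java
// Sketch only -- escape() and entitiesTerminated() are invented names, not
// the real org.apache.hadoop.hdfs.util.XMLUtils methods.
public class EntityRefSketch {
  /** Minimal escape of the XML special characters for text content. */
  static String escape(String s) {
    return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
  }

  /** True when every '&' begins a ';'-terminated entity reference. */
  static boolean entitiesTerminated(String s) {
    for (int i = s.indexOf('&'); i >= 0; i = s.indexOf('&', i + 1)) {
      int semi = s.indexOf(';', i + 1);
      int nextAmp = s.indexOf('&', i + 1);
      if (semi < 0 || (nextAmp >= 0 && nextAmp < semi)) {
        return false; // unterminated entity ref starting with &
      }
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(entitiesTerminated("/foo&"));         // raw name: broken
    System.out.println(entitiesTerminated(escape("/foo&"))); // escaped: fine
  }
}
```

The fix presumably belongs on the XML-writing side (escape names like `/foo&`, `/foo"`, `/foo\`` when emitting the image) so that ReverseXML can unmangle them.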






[jira] [Created] (HDFS-12829) Moving logging APIs over to slf4j in hdfs

2017-11-16 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-12829:
-

 Summary: Moving logging APIs over to slf4j in hdfs
 Key: HDFS-12829
 URL: https://issues.apache.org/jira/browse/HDFS-12829
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham









Re: [VOTE] Release Apache Hadoop 2.9.0 (RC3)

2017-11-16 Thread Konstantinos Karanasos
+1.

Did the following:
1) set up a 6-node cluster;
2) tried some simple HDFS commands and generated some Gridmix data;
3) ran some Gridmix jobs;
4) ran (3) after enabling opportunistic containers (used a mix of
guaranteed and opportunistic containers for each job);
5) ran (4) but this time enabling distributed scheduling of opportunistic
containers.

All of the above worked with no issues.

Thanks again guys for all the effort!

Konstantinos

On Thu, Nov 16, 2017 at 1:44 PM, Junping Du  wrote:

> I did following verification work:
>
> - build succeed from source
> - verify signature
> - deploy a pseudo cluster and run some simple MR jobs
> - Check HDFS/YARN daemons' UI
> - Verify commit history matching with JIRA's fix version (which will be
> included in site document).
>
> Most works fine, except I found ~200 JIRA are claimed to be included in
> 2.9.0 but actually not show up in commit log for following cases:
> a. commits are missing in 2.9.0
> b. JIRA is marked as resolved but not fixed which shouldn't have fix
> version
> c. JIRA is umbrella which doesn't include particular commits
> d. JIRA commit message is lacking of JIRA number or have wrong number
> e. JIRA commit is included in a whole commit due to branch merge work,
> like ATS v2.
>
> While c,d,e is something we have to live with, but I hope we can enhance
> a. and b. next time.
>
>
> Assume document issue is not a blocker for release, I give my bind +1.
>
>
> Thanks,
>
> Junping
>
> 
> From: Arun Suresh 
> Sent: Monday, November 13, 2017 4:10 PM
> To: yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; Hadoop
> Common; Hdfs-dev
> Cc: Subramaniam Krishnan
> Subject: [VOTE] Release Apache Hadoop 2.9.0 (RC3)
>
> Hi Folks,
>
> Apache Hadoop 2.9.0 is the first release of Hadoop 2.9 line and will be the
> starting release for Apache Hadoop 2.9.x line - it includes 30 New Features
> with 500+ subtasks, 407 Improvements, 790 Bug fixes new fixed issues since
> 2.8.2.
>
> More information about the 2.9.0 release plan can be found here:
> https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-Version2.9
>
> New RC is available at: https://home.apache.org/~asuresh/hadoop-2.9.0-RC3/
>
> The RC tag in git is: release-2.9.0-RC3, and the latest commit id is:
> 756ebc8394e473ac25feac05fa493f6d612e6c50.
>
> The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1068/
>
> We are carrying over the votes from the previous RC given that the delta is
> the license fix.
>
> Given the above - we are also going to stick with the original deadline for
> the vote : ending on Friday 17th November 2017 2pm PT time.
>
> Thanks,
> -Arun/Subru
>
>
>
>
-- 
Konstantinos


[DISCUSS] Apache Hadoop 2.7.5 Release Plan

2017-11-16 Thread Konstantin Shvachko
Hi developers,

We have accumulated about 30 commits on branch-2.7. Those are mostly
valuable bug fixes, minor optimizations and test corrections. I would like
to propose to make a quick maintenance release 2.7.5.

If there are no objections I'll start preparations.

Thanks,
--Konstantin


[jira] [Resolved] (HDFS-12827) Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture documentation

2017-11-16 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDFS-12827.
---
Resolution: Not A Problem
  Assignee: Bharat Viswanadham

> Need Clarity on Replica Placement: The First Baby Steps in HDFS Architecture 
> documentation
> --
>
> Key: HDFS-12827
> URL: https://issues.apache.org/jira/browse/HDFS-12827
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Suri babu Nuthalapati
>Assignee: Bharat Viswanadham
>Priority: Minor
>
> The placement should be this: 
> https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a node in a different (remote) rack, and the last on a different 
> node in the same remote rack.
> Hadoop Definitive guide says the same and I have tested and saw the same 
> behavior as above.
> 
> But the documentation in the versions after r2.5.2 it was mentioned as:
> http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> HDFS’s placement policy is to put one replica on one node in the local rack, 
> another on a different node in the local rack, and the last on a different 
> node in a different rack. 






[jira] [Created] (HDFS-12830) Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails

2017-11-16 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-12830:


 Summary: Ozone: TestOzoneRpcClient#testPutKeyRatisThreeNodes fails
 Key: HDFS-12830
 URL: https://issues.apache.org/jira/browse/HDFS-12830
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Yiqun Lin
Assignee: Yiqun Lin


The test {{TestOzoneRpcClient#testPutKeyRatisThreeNodes}} always fails in the 
feature branch. Stack trace:
{noformat}
2017-11-16 11:44:28,512 [IPC Server handler 7 on 43551] ERROR 
ratis.RatisManagerImpl (RatisManagerImpl.java:getPipeline(130))  - Get 
pipeline call failed. We are not able to find free nodes or operational 
pipeline.
2017-11-16 11:44:28,513 [IPC Server handler 7 on 43551] WARN  ipc.Server 
(Server.java:logException(2721)) - IPC Server handler 7 on 43551, call Call#679 
Retry#0 
org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock from 
172.17.0.2:42671
java.lang.NullPointerException
at 
org.apache.hadoop.scm.container.common.helpers.ContainerInfo.getProtobuf(ContainerInfo.java:132)
at 
org.apache.hadoop.ozone.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:221)
at 
org.apache.hadoop.ozone.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:190)
at 
org.apache.hadoop.ozone.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:292)
at 
org.apache.hadoop.ozone.scm.StorageContainerManager.allocateBlock(StorageContainerManager.java:1047)
at 
org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:107)
at 
{noformat}

The warning {{Get pipeline call failed. We are not able to find free nodes or 
operational pipeline.}} points to the cause of the failure. This was broken by 
the change in HDFS-12756, which no longer sets the datanode count and so falls 
back to the default value:
{code}
-cluster = new MiniOzoneCluster.Builder(conf).numDataNodes(5)
+cluster = new MiniOzoneClassicCluster.Builder(conf)
{code}
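A toy sketch of the regression (invented names, not MiniOzoneCluster's real API, and the default of 1 is an assumption for illustration): a builder whose datanode count silently falls back to a default too small for a three-node Ratis pipeline.

```java
// Hypothetical builder sketch illustrating the HDFS-12830 failure mode:
// dropping .numDataNodes(5) leaves too few nodes for REPLICATION.THREE.
public class ClusterBuilderSketch {
  static final int DEFAULT_DATANODES = 1; // assumed default for illustration

  static class Builder {
    private int numDataNodes = DEFAULT_DATANODES;
    Builder numDataNodes(int n) { this.numDataNodes = n; return this; }
    int build() { return numDataNodes; }
  }

  /** A three-way Ratis pipeline needs at least three usable datanodes. */
  static boolean canAllocateRatisThree(int dataNodes) {
    return dataNodes >= 3;
  }

  public static void main(String[] args) {
    int defaults = new Builder().build();                 // count not reset
    int explicit = new Builder().numDataNodes(5).build(); // count set explicitly
    System.out.println(canAllocateRatisThree(defaults));
    System.out.println(canAllocateRatisThree(explicit));
  }
}
```

The fix would be to carry the explicit `.numDataNodes(5)` over to the new builder in the test setup.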






Re: [VOTE] Release Apache Hadoop 2.9.0 (RC3)

2017-11-16 Thread Akira Ajisaka

+1

* Downloaded source tarball and verified checksum and signature
* Compiled the source with OpenJDK 1.8.0_151 and CentOS 7.4
* Deployed a pseudo cluster and ran some simple MR jobs

I noticed ISA-L build options are documented in BUILDING.txt
but the options do not exist in 2.x releases.
Filed HADOOP-15045 to fix this issue.
I think this issue doesn't block the release.

Thanks and regards,
Akira

On 2017/11/14 9:10, Arun Suresh wrote:

Hi Folks,

Apache Hadoop 2.9.0 is the first release of the Hadoop 2.9 line and will be the
starting release for the Apache Hadoop 2.9.x line - it includes 30 New Features
with 500+ subtasks, 407 Improvements, and 790 Bug fixes among issues fixed
since 2.8.2.

More information about the 2.9.0 release plan can be found here:
https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-Version2.9

New RC is available at: https://home.apache.org/~asuresh/hadoop-2.9.0-RC3/

The RC tag in git is: release-2.9.0-RC3, and the latest commit id is:
756ebc8394e473ac25feac05fa493f6d612e6c50.

The maven artifacts are available via repository.apache.org at:
https://repository.apache.org/content/repositories/orgapachehadoop-1068/

We are carrying over the votes from the previous RC given that the delta is
the license fix.

Given the above - we are also going to stick with the original deadline for
the vote : ending on Friday 17th November 2017 2pm PT time.

Thanks,
-Arun/Subru






Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-11-16 Thread Akira Ajisaka

Hi Andrew,

Signatures are missing. Would you upload them?

Thanks,
Akira

On 2017/11/15 6:34, Andrew Wang wrote:

Hi folks,

Thanks as always to the many, many contributors who helped with this
release. I've created RC0 for Apache Hadoop 3.0.0. The artifacts are
available here:

http://people.apache.org/~wang/3.0.0-RC0/

This vote will run 5 days, ending on Nov 19th at 1:30pm Pacific.

3.0.0 GA contains 291 fixed JIRA issues since 3.0.0-beta1. Notable
additions include the merge of YARN resource types, API-based configuration
of the CapacityScheduler, and HDFS router-based federation.

I've done my traditional testing with a pseudo cluster and a Pi job. My +1
to start.

Best,
Andrew


