[jira] [Created] (HDDS-258) Helper methods to generate NodeReport and ContainerReport for testing
Nanda kumar created HDDS-258:
--------------------------------

Summary: Helper methods to generate NodeReport and ContainerReport for testing
Key: HDDS-258
URL: https://issues.apache.org/jira/browse/HDDS-258
Project: Hadoop Distributed Data Store
Issue Type: Improvement
Components: SCM
Reporter: Nanda kumar
Assignee: Nanda kumar
Fix For: 0.2.1

Having helper methods to generate NodeReport and ContainerReport for testing SCM will make our lives easier.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
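A test helper of this kind might look like the sketch below. This is purely illustrative: the `NodeReport` stand-in class, its fields, and the helper name are assumptions, not the actual HDDS protobuf types or the patch that was committed.

```java
import java.util.Random;

public class TestReportHelpers {

    // Hypothetical stand-in for the real NodeReport protobuf (assumption).
    static final class NodeReport {
        final long capacity;
        final long used;

        NodeReport(long capacity, long used) {
            this.capacity = capacity;
            this.used = used;
        }

        long remaining() {
            return capacity - used;
        }
    }

    // Fixed seed so generated reports are reproducible across test runs.
    private static final Random RANDOM = new Random(42);

    // Helper: fabricate a node report whose used space never exceeds capacity.
    static NodeReport randomNodeReport(long capacity) {
        long used = (long) (RANDOM.nextDouble() * capacity);
        return new NodeReport(capacity, used);
    }

    public static void main(String[] args) {
        NodeReport report = randomNodeReport(1_000_000L);
        // Invariants any generated report should satisfy.
        assert report.used >= 0 && report.used <= report.capacity;
        assert report.remaining() == report.capacity - report.used;
        System.out.println("generated report invariants hold");
    }
}
```

A fixed random seed is a deliberate choice here: randomized report contents exercise more code paths in SCM tests while still keeping failures reproducible.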
[jira] [Created] (HDFS-13742) Make HDFS Upgrade Domains more dynamic / automation friendly for scripted additions when adding datanodes etc
Hari Sekhon created HDFS-13742:
--------------------------------

Summary: Make HDFS Upgrade Domains more dynamic / automation friendly for scripted additions when adding datanodes etc
Key: HDFS-13742
URL: https://issues.apache.org/jira/browse/HDFS-13742
Project: Hadoop HDFS
Issue Type: Improvement
Components: balancer & mover, hdfs, namenode, rolling upgrades, scripts, shell, tools
Affects Versions: 3.1.0
Reporter: Hari Sekhon

Improvement request: change HDFS Upgrade Domain management from a static JSON file to online, scriptable command and REST API based management, for better automation when scripting datanode additions to a cluster.

http://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-hdfs/HdfsUpgradeDomain.html
[jira] [Created] (HDDS-259) Implement ContainerReportPublisher and NodeReportPublisher
Nanda kumar created HDDS-259:
--------------------------------

Summary: Implement ContainerReportPublisher and NodeReportPublisher
Key: HDDS-259
URL: https://issues.apache.org/jira/browse/HDDS-259
Project: Hadoop Distributed Data Store
Issue Type: Improvement
Components: Ozone Datanode
Reporter: Nanda kumar
Assignee: Nanda kumar

In HddsDatanode, {{ReportPublisher}} will publish the reports that are sent as part of the heartbeat. {{ContainerReportPublisher}} and {{NodeReportPublisher}} have to be implemented in order to send container and node reports to SCM.
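The publisher pattern described above can be sketched as follows. This is a minimal illustration of the idea, not the real HDDS classes: the report type is simplified to `String`, and the class and method names are only assumed to resemble the eventual implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Each publisher knows how to build one kind of report for the heartbeat.
abstract class ReportPublisher<T> {
    abstract T getReport();
}

// Simplified stand-ins for the real publishers (assumption: the real ones
// return protobuf messages, not strings).
class NodeReportPublisher extends ReportPublisher<String> {
    @Override
    String getReport() {
        return "node-report";
    }
}

class ContainerReportPublisher extends ReportPublisher<String> {
    @Override
    String getReport() {
        return "container-report";
    }
}

public class HeartbeatSketch {

    // Collect every registered publisher's report into one heartbeat payload.
    static List<String> buildHeartbeat(List<ReportPublisher<String>> publishers) {
        List<String> reports = new ArrayList<>();
        for (ReportPublisher<String> publisher : publishers) {
            reports.add(publisher.getReport());
        }
        return reports;
    }

    public static void main(String[] args) {
        List<ReportPublisher<String>> pubs = List.of(
            new NodeReportPublisher(), new ContainerReportPublisher());
        List<String> heartbeat = buildHeartbeat(pubs);
        assert heartbeat.equals(List.of("node-report", "container-report"));
        System.out.println(heartbeat);
    }
}
```

The value of the abstraction is that the heartbeat loop never needs to know which report types exist; adding a new report means adding a new `ReportPublisher` subclass, not touching the heartbeat code.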
[jira] [Created] (HDFS-13743) Router throws NullPointerException due to the invalid initialization of MountTableResolver
Takanobu Asanuma created HDFS-13743:
------------------------------------

Summary: Router throws NullPointerException due to the invalid initialization of MountTableResolver
Key: HDFS-13743
URL: https://issues.apache.org/jira/browse/HDFS-13743
Project: Hadoop HDFS
Issue Type: Sub-task
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma

When {{dfs.federation.router.default.nameserviceId}} isn't set and no other default name service is found, clients can't submit requests to the router because of a {{NullPointerException}}.

# Client side
{noformat}
$ hadoop fs -ls hdfs://router:/
ls: java.lang.NullPointerException
{noformat}
# Router log
{noformat}
java.lang.NullPointerException
        at java.util.TreeMap.getEntry(TreeMap.java:347)
        at java.util.TreeMap.containsKey(TreeMap.java:232)
        at java.util.TreeSet.contains(TreeSet.java:234)
        at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2287)
        at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:2239)
        at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1163)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:966)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1689)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
{noformat}

The cause of this error is that the initialization of {{MountTableResolver}} doesn't work properly.
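The failure mechanism in the stack trace above can be reproduced in isolation: `TreeMap.getEntry` (and therefore `TreeSet.contains`, as on TreeMap.java:347 in the trace) throws `NullPointerException` when handed a null key under natural ordering, which is consistent with a default nameservice ID resolving to null. A minimal demonstration (the variable names are illustrative, not router code):

```java
import java.util.TreeSet;

public class NullKeyDemo {
    public static void main(String[] args) {
        TreeSet<String> nameservices = new TreeSet<>();
        nameservices.add("ns0");

        boolean threw = false;
        try {
            // A null default nameservice ID reaching TreeSet.contains()
            // raises NPE: natural-ordering TreeMap rejects null keys
            // in getEntry(), exactly as in the router stack trace.
            nameservices.contains(null);
        } catch (NullPointerException e) {
            threw = true;
        }
        assert threw;
        System.out.println("NPE from TreeSet.contains(null): " + threw);
    }
}
```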
[jira] [Created] (HDFS-13744) OIV tool should better handle control characters present in file or directory names
Zsolt Venczel created HDFS-13744:
---------------------------------

Summary: OIV tool should better handle control characters present in file or directory names
Key: HDFS-13744
URL: https://issues.apache.org/jira/browse/HDFS-13744
Project: Hadoop HDFS
Issue Type: Improvement
Components: hdfs, tools
Affects Versions: 3.0.3, 2.7.6, 2.8.4, 2.9.1, 2.6.5
Reporter: Zsolt Venczel
Assignee: Zsolt Venczel

In certain cases, when control characters or white space are present in file or directory names, OIV tool processors can export data in a misleading format. In the examples below, EXAMPLE_NAME is used as both a file name and a directory name, where the directory name has a line feed character at the end (the actual production case has multiple line feeds and multiple spaces).

* CSV processor case:
** misleading example:
{code:java}
/user/data/EXAMPLE_NAME
,0,2017-04-24 04:34,1969-12-31 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
/user/data/EXAMPLE_NAME,2016-08-26 03:00,2017-05-16 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
{code}
** expected example as suggested by https://tools.ietf.org/html/rfc4180#section-2:
{code:java}
"/user/data/EXAMPLE_NAME%x0D",0,2017-04-24 04:34,1969-12-31 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
"/user/data/EXAMPLE_NAME",2016-08-26 03:00,2017-05-16 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
{code}
* XML processor case:
** misleading example:
{code:java}
<inode><id>479867791</id><type>DIRECTORY</type><name>EXAMPLE_NAME
</name><mtime>1493033668294</mtime><permission>user:group:0775</permission></inode>
<inode><id>113632535</id><type>FILE</type><name>EXAMPLE_NAME</name><replication>3</replication><mtime>1472205657504</mtime><atime>1494954320141</atime><preferredBlockSize>134217728</preferredBlockSize><permission>user:group:0674</permission></inode>
{code}
** expected example as specified in https://www.w3.org/TR/REC-xml/#sec-line-ends:
{code:java}
<inode><id>479867791</id><type>DIRECTORY</type><name>EXAMPLE_NAME&#xA;</name><mtime>1493033668294</mtime><permission>user:group:0775</permission></inode>
<inode><id>479867791</id><type>DIRECTORY</type><name>EXAMPLE_NAME
</name><mtime>1493033668294</mtime><permission>user:group:0775</permission></inode>
{code}
* JSON: The OIV Web Processor behaves correctly and produces the following:
{code:java}
{
  "FileStatuses": {
    "FileStatus": [
      {
        "fileId": 113632535,
        "accessTime": 1494954320141,
        "replication": 3,
        "owner": "user",
        "length": 520,
        "permission": "674",
        "blockSize": 134217728,
        "modificationTime": 1472205657504,
        "type": "FILE",
        "group": "group",
        "childrenNum": 0,
        "pathSuffix": "EXAMPLE_NAME"
      },
      {
        "fileId": 479867791,
        "accessTime": 0,
        "replication": 0,
        "owner": "user",
        "length": 0,
        "permission": "775",
        "blockSize": 0,
        "modificationTime": 1493033668294,
        "type": "DIRECTORY",
        "group": "group",
        "childrenNum": 0,
        "pathSuffix": "EXAMPLE_NAME\n"
      }
    ]
  }
}
{code}
[jira] [Created] (HDFS-13745) libhdfs++: Fix race in FileSystem destructor
James Clampffer created HDFS-13745:
-----------------------------------

Summary: libhdfs++: Fix race in FileSystem destructor
Key: HDFS-13745
URL: https://issues.apache.org/jira/browse/HDFS-13745
Project: Hadoop HDFS
Issue Type: Task
Components: native
Reporter: James Clampffer
Assignee: James Clampffer

Whatever happens to hold the last shared_ptr to the IoService will run ~IoService when that shared_ptr goes out of scope. IoService's destructor is responsible for joining all worker threads in the pool. Most callbacks now own a weak_ptr that can be promoted to a shared_ptr in order to post new async tasks. If a callback object is the last thing holding the IoService shared_ptr, it's going to try to join the thread pool from inside one of the thread pool's own threads.
[jira] [Created] (HDDS-260) Support in Datanode for sending ContainerActions to SCM
Nanda kumar created HDDS-260:
--------------------------------

Summary: Support in Datanode for sending ContainerActions to SCM
Key: HDDS-260
URL: https://issues.apache.org/jira/browse/HDDS-260
Project: Hadoop Distributed Data Store
Issue Type: Improvement
Components: Ozone Datanode
Reporter: Nanda kumar
Assignee: Nanda kumar
Fix For: 0.2.1

The Datanode sends {{ContainerActions}} to inform SCM that it should take action. Datanode-side support is needed for sending ContainerActions as part of the heartbeat.
[jira] [Created] (HDFS-13746) Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh
Siyao Meng created HDFS-13746:
------------------------------

Summary: Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh
Key: HDFS-13746
URL: https://issues.apache.org/jira/browse/HDFS-13746
Project: Hadoop HDFS
Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Siyao Meng
Assignee: Siyao Meng

In https://issues.apache.org/jira/browse/HDFS-13723, increasing the sleep() duration helps, but the problem still appears occasionally, which is annoying.

Solution: use a loop that retries the check up to 100 (MAX_RETRIES) times, waiting 50 ms between retries, before declaring failure.
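The proposed retry loop can be sketched as below. The method and constant names are illustrative rather than the actual test code; Hadoop's test utilities provide a similar polling helper (GenericTestUtils.waitFor), which a real patch would likely reuse.

```java
import java.util.function.BooleanSupplier;

public class RetryingAssert {

    static final int MAX_RETRIES = 100;
    static final long RETRY_INTERVAL_MS = 50;

    // Poll the condition up to MAX_RETRIES times, sleeping between attempts.
    // Returns true as soon as the condition holds, false if it never does.
    static boolean waitFor(BooleanSupplier condition) throws InterruptedException {
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(RETRY_INTERVAL_MS);
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a condition that only becomes true after a short delay,
        // like a group mapping refresh propagating asynchronously.
        long deadline = System.currentTimeMillis() + 200;
        boolean ok = waitFor(() -> System.currentTimeMillis() >= deadline);
        assert ok : "condition should hold well within 100 * 50 ms";
        System.out.println("condition met after retries");
    }
}
```

Polling with a bounded retry count trades a slightly longer worst-case runtime (here up to 5 seconds) for a much lower flake rate than a single fixed sleep, since the test exits as soon as the condition holds.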
[jira] [Created] (HDDS-261) Fix TestOzoneConfigurationFields
Hanisha Koneru created HDDS-261:
--------------------------------

Summary: Fix TestOzoneConfigurationFields
Key: HDDS-261
URL: https://issues.apache.org/jira/browse/HDDS-261
Project: Hadoop Distributed Data Store
Issue Type: Bug
Reporter: Hanisha Koneru

HDDS-187 added a config key, {{hdds.command.status.report.interval}}, to {{HddsConfigKeys}}. This class also needs to be added to the {{configurationClasses}} field in {{TestOzoneConfigurationFields}} so that the above-mentioned config key is loaded into configurationMemberVariables.

{code:java}
configurationClasses =
    new Class[] {OzoneConfigKeys.class, ScmConfigKeys.class,
        OMConfigKeys.class};{code}
[jira] [Resolved] (HDDS-261) Fix TestOzoneConfigurationFields
[ https://issues.apache.org/jira/browse/HDDS-261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hanisha Koneru resolved HDDS-261.
---------------------------------
Resolution: Duplicate

Sorry, this is a duplicate of HDDS-255.

> Fix TestOzoneConfigurationFields
> --------------------------------
>
>                 Key: HDDS-261
>                 URL: https://issues.apache.org/jira/browse/HDDS-261
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Hanisha Koneru
>            Priority: Minor
>
> HDDS-187 added a config key - {{hdds.command.status.report.interval}} to
> {{HddsConfigKeys}}. This class needs to be added to the
> {{configurationClasses}} field in {{TestOzoneConfigurationFields}} also so
> that the above mentioned config key is loaded into
> configurationMemberVariables.
> {code:java}
> configurationClasses =
>     new Class[] {OzoneConfigKeys.class, ScmConfigKeys.class,
>         OMConfigKeys.class};{code}
[jira] [Created] (HDDS-262) Send SCM healthy and failed volumes in the heartbeat
Bharat Viswanadham created HDDS-262:
------------------------------------

Summary: Send SCM healthy and failed volumes in the heartbeat
Key: HDDS-262
URL: https://issues.apache.org/jira/browse/HDDS-262
Project: Hadoop Distributed Data Store
Issue Type: Improvement
Reporter: Bharat Viswanadham

The current code only sends volumes which are successfully created during datanode startup. For any volume where an error occurred during HddsVolume object creation, we should move that volume to the failedVolume map. This should be sent to SCM as part of NodeReports.
Re: Hadoop 3.2 Release Plan proposal
On Tue, Jul 17, 2018 at 7:21 PM Steve Loughran wrote:
>
> On 16 Jul 2018, at 23:45, Sunil G <sun...@apache.org> wrote:
>
> > I would also like to take this opportunity to come up with a detailed
> > plan.
> >
> > - Feature freeze date: all features should be merged by August 10, 2018.
> >
> > Please let me know if I missed any features targeted to 3.2 per this
>
> Well, there are these big todo lists for S3 & S3Guard.
>
> https://issues.apache.org/jira/browse/HADOOP-15226
> https://issues.apache.org/jira/browse/HADOOP-15220
>
> There's a bigger bit of work coming on for Azure Datalake Gen 2
> https://issues.apache.org/jira/browse/HADOOP-15407
>
> I don't think this is quite ready yet, I've been doing work on it, but if
> we have a 3 week deadline, I'm going to expect some timely reviews on
> https://issues.apache.org/jira/browse/HADOOP-15546
>
> I've uprated that to a blocker feature; will review the S3 & S3Guard JIRAs
> to see which of those are blocking. Then there are some pressing "guava,
> java 9 prep" tasks.

I can help with this part if you like.

> > timeline. I would like to volunteer myself as release manager of the
> > 3.2.0 release.
>
> well volunteered!

Yes, thank you for stepping up.

> I think this raises a good q: what timetable should we have for the 3.2 &
> 3.3 releases; if we do want a faster cadence, then having the outline time
> from the 3.2 to the 3.3 release means that there's less concern about
> things not making the 3.2 deadline
>
> -Steve

Good idea to mitigate the short deadline.

-AF
[jira] [Created] (HDFS-13747) Statistic for list_located_status is incremented incorrectly by listStatusIterator
Todd Lipcon created HDFS-13747:
--------------------------------

Summary: Statistic for list_located_status is incremented incorrectly by listStatusIterator
Key: HDFS-13747
URL: https://issues.apache.org/jira/browse/HDFS-13747
Project: Hadoop HDFS
Issue Type: Bug
Components: hdfs-client
Affects Versions: 3.0.3
Reporter: Todd Lipcon
Re: Hadoop 3.2 Release Plan proposal
Thanks Sunil for volunteering to be RM of the 3.2 release, +1 for that.

To the concerns from Steve: it is a good idea to keep the door open to get important changes / features in before the cutoff. I would prefer to keep the proposed release date to make sure things happen earlier instead of at the last minute, and we all know that releases always get delayed :). I'm also fine if we want to take another several weeks.

Regarding the 3.3 release, I would suggest doing that before Thanksgiving. Do you think that is good, or too early / late?

Eric, YARN-8220 will be replaced by YARN-8135; if YARN-8135 can get merged in time, we probably won't need YARN-8220.

Sunil, could you update https://cwiki.apache.org/confluence/display/HADOOP/Roadmap with the proposed plan as well? We can fill in the feature list first, before getting consensus on the timing.

Thanks,
Wangda