[jira] [Created] (HDFS-11068) [SPS]: Provide unique trackID to track the block movement sends to coordinator

2016-10-27 Thread Rakesh R (JIRA)
Rakesh R created HDFS-11068:
---

 Summary: [SPS]: Provide unique trackID to track the block movement 
sends to coordinator
 Key: HDFS-11068
 URL: https://issues.apache.org/jira/browse/HDFS-11068
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R


Presently, DatanodeManager uses the constant value -1 as the 
[trackID|https://github.com/apache/hadoop/blob/HDFS-10285/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java#L1607],
which is only a temporary placeholder. As per the discussion with 
[~umamaheswararao], one proposal is to use the 
{{BlockCollectionId/InodeFileId}} instead.
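
A minimal sketch of the idea (illustrative only; the actual SPS interfaces 
on the HDFS-10285 branch may differ): keying the tracking state by the inode 
file ID gives each file's block-movement batch a unique, stable trackID that 
the coordinator can report results against, unlike the shared -1 placeholder.

{code:title=BlockMovementTracker.java (hypothetical sketch)|borderStyle=solid}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch, not branch code: track per-file block-movement
 * results by inode file ID instead of the constant -1.
 */
public class BlockMovementTracker {
  // trackID (inode file ID) -> number of block movements still pending
  private final Map<Long, Integer> pending = new ConcurrentHashMap<>();

  /** Register a movement batch for a file; the inode file ID is the trackID. */
  public long startTracking(long inodeFileId, int blockCount) {
    pending.put(inodeFileId, blockCount);
    return inodeFileId; // unique per file, unlike the -1 placeholder
  }

  /** Called when the coordinator reports one block movement as finished. */
  public void movementFinished(long trackId) {
    pending.computeIfPresent(trackId, (id, n) -> n > 1 ? n - 1 : null);
  }

  /** True once every movement registered under this trackID has finished. */
  public boolean isComplete(long trackId) {
    return !pending.containsKey(trackId);
  }
}
{code}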






Re: [DISCUSS] HADOOP-13603 - Remove package line length checkstyle rule

2016-10-27 Thread Shane Kumpf
Thank you to everyone for the discussion.

To summarize, it appears there are no objections to moving forward with
HADOOP-13603, which will remove the package line length checkstyle rule.
The global 80-character line length limit will not be removed or expanded
at this time.

Suppression of other checkstyle rules can be accomplished via annotations,
where appropriate, as of HADOOP-13411 (thank you for this work, it will be
quite useful!), so additional rule changes are not warranted at this time.
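
For reference, a minimal example of the HADOOP-13411 suppression mechanism
John describes further down the thread (the class and method names here are
made up for illustration):

    // Illustrative only: suppress the LineLength checkstyle rule for one
    // method, using the annotation support added by HADOOP-13411.
    public class LongLineExample {
      @SuppressWarnings("checkstyle:linelength")
      public void methodWithUnavoidablyLongLines() {
        // lines in this method are exempt from the line length check
      }
    }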

I will move forward on HADOOP-13603. Please continue to share feedback
and/or let me know if you disagree with the summary above.

Thank you!
-Shane Kumpf

On Fri, Oct 21, 2016 at 12:49 PM, Andrew Wang wrote:

> Thanks for the clarification Akira, I'm fine with removing it for the
> package line too (and imports if that's a problem), +1.
>
> On Fri, Oct 21, 2016 at 2:02 AM, Akira Ajisaka wrote:
>
> > This discussion was split into two separate topics.
> >
> > 1) Remove line length checkstyle rule for package line
> > 2) Remove line length checkstyle rule for the entire source code
> >
> > 1) I'm +1 for removing the rule for package line. I can provide a trivial
> > patch shortly in HADOOP-13603.
> >
> > 2) As Andrew said, the discussion was done in 2015. If we really want to
> > change the rule, we need to discuss again.
> >
> > Regards,
> > Akira
> >
> >
> > On 10/21/16 07:12, Andrew Wang wrote:
> >
> >> I don't think anything has really changed since we had this discussion
> >> in 2015 [1]. Github and gerrit and IDEs existed then too, and we decided
> >> to leave it at 80 characters due to split screens and readability.
> >>
> >> I personally still like 80 chars for these same reasons.
> >>
> >> [1]
> >> https://lists.apache.org/thread.html/3e1785cbbe14dcab9bb970fa0f534811cfe00795a8cd1100580f27dc@1430849118@%3Ccommon-dev.hadoop.apache.org%3E
> >>
> >> On Thu, Oct 20, 2016 at 7:46 AM, John Zhuge wrote:
> >>
> >>> With HADOOP-13411, it is possible to suppress any checkstyle warning
> >>> with an annotation.
> >>>
> >>> In this case, just add the following annotation before the class or
> >>> method:
> >>>
> >>> @SuppressWarnings("checkstyle:linelength")
> >>>
> >>> However this will not work if the warning is widespread in different
> >>> classes or methods.
> >>>
> >>> Thanks,
> >>> John Zhuge
> >>>
> >>> John Zhuge
> >>> Software Engineer, Cloudera
> >>>
> >>> On Thu, Oct 20, 2016 at 3:22 AM, Steve Loughran <ste...@hortonworks.com>
> >>> wrote:
> >>>
> >>>> On 19 Oct 2016, at 14:52, Shane Kumpf wrote:
> >>>>>
> >>>>> All,
> >>>>>
> >>>>> I would like to start a discussion on the possibility of removing the
> >>>>> package line length checkstyle rule (HADOOP-13603).
> >>>>>
> >>>>> While working on various aspects of YARN container runtimes, all of my
> >>>>> pre-commit jobs would fail as the package line length exceeded 80
> >>>>> characters. While I'm all for automated checks, I feel checks need to
> >>>>> be enforceable and provide value. Fixing the package line length error
> >>>>> does not improve readability or maintainability of the code, and IMO
> >>>>> should be removed.
> >>>>
> >>>> I kind of agree here
> >>>>
> >>>> working on other projects with wider line lengths (100, 120) means that
> >>>> you find going back to 80 chars so restrictive; and as we adopt java 8
> >>>> code with closures, your nesting gets even more complex. Trying to fit
> >>>> things into 80 char width often adds lots of line breaks which can make
> >>>> the code messier than it need be.
> >>>>
> >>>> the argument against wider lines has historically been that it helped
> >>>> side-by-side patch reviews. But we have so much patch review software
> >>>> these days: github, gerrit, IDEs. I don't think we need to stay in
> >>>> punched-card width code limits just because it worked with a review
> >>>> process of 6+ years ago
> >>>>
> >>>>> While on this topic, are there other automated checks that are
> >>>>> difficult to enforce or you feel are not providing value (perhaps the
> >>>>> 150 line method length)?
> >>>>
> >>>> I like that as a warning sign of complexity...it's not a hard veto
> >>>> after all.


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-10-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/

[Oct 26, 2016 6:59:39 AM] (rkanter) YARN-5753. fix NPE in 
AMRMClientImpl.getMatchingRequests() (haibochen
[Oct 26, 2016 11:39:09 AM] (weichiu) HADOOP-13659. Upgrade jaxb-api version. 
Contributed by Sean Mackrory.
[Oct 26, 2016 1:07:53 PM] (kihwal) HDFS-11050. Change log level to 'warn' when 
ssl initialization fails and
[Oct 26, 2016 2:16:13 PM] (kihwal) HDFS-11053. Unnecessary superuser check in 
versionRequest(). Contributed
[Oct 26, 2016 3:27:26 PM] (cnauroth) HADOOP-13614. Purge some 
superfluous/obsolete S3 FS tests that are
[Oct 26, 2016 3:55:42 PM] (cnauroth) HADOOP-13502. Split 
fs.contract.is-blobstore flag into more descriptive
[Oct 26, 2016 5:32:35 PM] (lei) HDFS-10638. Modifications to remove the 
assumption that StorageLocation
[Oct 26, 2016 6:31:00 PM] (sjlee) YARN-5433. Audit dependencies for Category-X. 
Contributed by Sangjin
[Oct 26, 2016 9:08:54 PM] (weichiu) Addendum patch for HADOOP-13514 Upgrade 
maven surefire plugin to 2.19.1.
[Oct 26, 2016 9:11:38 PM] (liuml07) HDFS-10921. TestDiskspaceQuotaUpdate 
doesn't wait for NN to get out of
[Oct 26, 2016 9:25:03 PM] (wang) HADOOP-8299. ViewFileSystem link slash mount 
point crashes with
[Oct 27, 2016 2:39:02 AM] (aengineer) HDFS-11038. DiskBalancer: support running 
multiple commands in single
[Oct 27, 2016 6:04:07 AM] (rohithsharmaks) YARN-4555. 
TestDefaultContainerExecutor#testContainerLaunchError fails
[Oct 27, 2016 6:27:17 AM] (rohithsharmaks) YARN-4363. In TestFairScheduler, 
testcase should not create




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.ha.TestZKFailoverController
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
   hadoop.yarn.server.TestContainerManagerSecurity
   hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked

Timed out junit tests :

   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager
   org.apache.hadoop.hdfs.server.blockmanagement.TestReplicationPolicy
   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
   org.apache.hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS
   org.apache.hadoop.tools.TestHadoopArchives

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [120K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [120K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [276K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/patch-unit-hadoop-tools_hadoop-archives.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
  [16K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/207/artifact/out/patch-asflicense-problems.txt
  [4.0K]


[jira] [Created] (HDFS-11069) Tighten the authorization of datanode RPC

2016-10-27 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-11069:
-

 Summary: Tighten the authorization of datanode RPC
 Key: HDFS-11069
 URL: https://issues.apache.org/jira/browse/HDFS-11069
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, security
Reporter: Kihwal Lee


The current implementation of {{checkSuperuserPrivilege()}} allows the datanode 
user from any node to be recognized as a super user.  If one datanode is 
compromised, the intruder can issue {{shutdownDatanode()}}, {{evictWriters()}}, 
{{triggerBlockReport()}}, etc. against all other datanodes.

This needs to be tightened to allow only the local datanode user.
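
A rough sketch of the intended check (illustrative only; the real fix would
live in the datanode's RPC authorization path and use the caller's UGI):

{code:title=LocalDataNodeCheck.java (hypothetical sketch)|borderStyle=solid}
/**
 * Hypothetical sketch, not the actual patch: only the user the local
 * datanode process runs as may invoke the privileged datanode RPCs,
 * rather than any user that merely runs a datanode somewhere else.
 */
public class LocalDataNodeCheck {
  private final String localDataNodeUser; // user this datanode runs as

  public LocalDataNodeCheck(String localDataNodeUser) {
    this.localDataNodeUser = localDataNodeUser;
  }

  /** Reject remote callers that are not the local datanode user. */
  public void checkLocalDataNodeUser(String callerUser) {
    if (!localDataNodeUser.equals(callerUser)) {
      throw new SecurityException("Access denied for user " + callerUser
          + ": only the local datanode user may invoke this operation");
    }
  }
}
{code}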






[jira] [Reopened] (HDFS-10455) Logging the username when deny the setOwner operation

2016-10-27 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reopened HDFS-10455:
---

Reverted the commits.

> Logging the username when deny the setOwner operation
> -
>
> Key: HDFS-10455
> URL: https://issues.apache.org/jira/browse/HDFS-10455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Attachments: HDFS-10455.000.patch, HDFS-10455.002.patch
>
>
> The attached patch adds the user name to the log message when a setOwner 
> operation is denied because the user has insufficient permissions.
> The same practice is used in {{FSPermissionChecker}} methods such as 
> {{checkOwner()}} and {{checkSuperuserPrivilege()}}.
> {code:title=FSDirAttrOp.java|borderStyle=solid}
>   if (!pc.isSuperUser()) {
>     if (username != null && !pc.getUser().equals(username)) {
> -     throw new AccessControlException("Non-super user cannot change owner");
> +     throw new AccessControlException("User " + pc.getUser()
> +         + " is not a super user (non-super user cannot change owner).");
>     }
>     if (group != null && !pc.containsGroup(group)) {
> -     throw new AccessControlException("User does not belong to " + group);
> +     throw new AccessControlException("User " + pc.getUser()
> +         + " does not belong to " + group);
>     }
> {code}






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2016-10-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/137/

[Oct 26, 2016 1:07:53 PM] (kihwal) HDFS-11050. Change log level to 'warn' when 
ssl initialization fails and
[Oct 26, 2016 2:16:13 PM] (kihwal) HDFS-11053. Unnecessary superuser check in 
versionRequest(). Contributed
[Oct 26, 2016 3:27:26 PM] (cnauroth) HADOOP-13614. Purge some 
superfluous/obsolete S3 FS tests that are
[Oct 26, 2016 3:55:42 PM] (cnauroth) HADOOP-13502. Split 
fs.contract.is-blobstore flag into more descriptive
[Oct 26, 2016 5:32:35 PM] (lei) HDFS-10638. Modifications to remove the 
assumption that StorageLocation
[Oct 26, 2016 6:31:00 PM] (sjlee) YARN-5433. Audit dependencies for Category-X. 
Contributed by Sangjin
[Oct 26, 2016 9:08:54 PM] (weichiu) Addendum patch for HADOOP-13514 Upgrade 
maven surefire plugin to 2.19.1.
[Oct 26, 2016 9:11:38 PM] (liuml07) HDFS-10921. TestDiskspaceQuotaUpdate 
doesn't wait for NN to get out of
[Oct 26, 2016 9:25:03 PM] (wang) HADOOP-8299. ViewFileSystem link slash mount 
point crashes with
[Oct 27, 2016 2:39:02 AM] (aengineer) HDFS-11038. DiskBalancer: support running 
multiple commands in single
[Oct 27, 2016 6:04:07 AM] (rohithsharmaks) YARN-4555. 
TestDefaultContainerExecutor#testContainerLaunchError fails
[Oct 27, 2016 6:27:17 AM] (rohithsharmaks) YARN-4363. In TestFairScheduler, 
testcase should not create
[Oct 27, 2016 6:46:59 AM] (iwasakims) HADOOP-13017. Implementations of 
InputStream.read(buffer, offset, bytes)
[Oct 27, 2016 6:50:15 AM] (vinayakumarb) HDFS-9929. Duplicate keys in 
NAMENODE_SPECIFIC_KEYS (Contributed by
[Oct 27, 2016 7:30:57 AM] (aajisaka) HDFS-11049. The description of 
dfs.block.replicator.classname is not
[Oct 27, 2016 8:11:49 AM] (varunsaxena) YARN-5752. 
TestLocalResourcesTrackerImpl#testLocalResourceCache times
[Oct 27, 2016 8:25:17 AM] (varunsaxena) YARN-5710. Fix inconsistent naming in 
class ResourceRequest (Yufei Gu
[Oct 27, 2016 8:32:29 AM] (varunsaxena) YARN-5686. DefaultContainerExecutor 
random working dir algorigthm skews
[Oct 27, 2016 8:46:03 AM] (vinayakumarb) HDFS-10769. BlockIdManager.clear 
doesn't reset the counter for
[Oct 27, 2016 10:06:59 AM] (varunsaxena) MAPREDUCE-6798. Fix intermittent 
failure of
[Oct 27, 2016 11:14:00 AM] (vinayakumarb) HDFS-8492. DN should notify NN when 
client requests a missing block
[Oct 27, 2016 11:40:02 AM] (naganarasimha_gr) YARN-3848. 
TestNodeLabelContainerAllocation is timing out. Contributed
[Oct 27, 2016 11:51:01 AM] (varunsaxena) YARN-5757. RM REST API documentation 
is not up to date (Miklos Szegedi
[Oct 27, 2016 12:33:13 PM] (naganarasimha_gr) MAPREDUCE-6541. Exclude scheduled 
reducer memory when calculating
[Oct 27, 2016 12:52:07 PM] (naganarasimha_gr) YARN-5420. Delete




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.timelineservice.storage.common.TestRowKeys 
   hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters 
   hadoop.yarn.server.timelineservice.storage.common.TestSeparator 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   
hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.applica

[jira] [Created] (HDFS-11070) NPE in BlockSender due to race condition

2016-10-27 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-11070:
--

 Summary: NPE in BlockSender due to race condition
 Key: HDFS-11070
 URL: https://issues.apache.org/jira/browse/HDFS-11070
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Wei-Chiu Chuang


Saw the following NPE in a unit test:
{quote}
2016-10-27 14:42:58,450 ERROR DataNode - 127.0.0.1:51987:DataXceiver error processing READ_BLOCK operation  src: /127.0.0.1:52429 dst: /127.0.0.1:51987
java.lang.NullPointerException
  at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:284)
  at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:581)
  at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:150)
  at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:102)
  at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
  at java.lang.Thread.run(Thread.java:745)
{quote}

The NPE occurred here:
{code:title=BlockSender.<init>}
  // Obtain a reference before reading data
  this.volumeRef = datanode.data.getVolume(block).obtainReference();
{code}

Right before the NPE there were a few debug messages indicating that the 
replica had been appended to and updated:
{quote}
2016-10-27 14:42:58,442 DEBUG DataNode - block=BP-1071315328-172.16.1.88-1477604513635:blk_1073741825_1192, replica=FinalizedReplica, blk_1073741825_1192, FINALIZED
  getNumBytes()     = 192
  getBytesOnDisk()  = 192
  getVisibleLength()= 192
  getVolume()       = /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1
  getBlockURI()     = file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-1071315328-172.16.1.88-1477604513635/current/finalized/subdir0/subdir0/blk_1073741825
2016-10-27 14:42:58,442 INFO  FsDatasetImpl - Appending to FinalizedReplica, blk_1073741825_1192, FINALIZED
  getNumBytes()     = 192
  getBytesOnDisk()  = 192
  getVisibleLength()= 192
  getVolume()       = /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1
  getBlockURI()     = file:/Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-1071315328-172.16.1.88-1477604513635/current/finalized/subdir0/subdir0/blk_1073741825
2016-10-27 14:42:58,442 DEBUG FsDatasetCache - Block with id 1073741825, pool BP-1071315328-172.16.1.88-1477604513635 does not need to be uncached, because it is not currently in the mappableBlockMap.
2016-10-27 14:42:58,450 DEBUG LocalReplica - Renaming /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-1071315328-172.16.1.88-1477604513635/current/finalized/subdir0/subdir0/blk_1073741825_1192.meta to /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-1071315328-172.16.1.88-1477604513635/current/rbw/blk_1073741825_1193.meta
2016-10-27 14:42:58,450 DEBUG LocalReplica - Renaming /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-1071315328-172.16.1.88-1477604513635/current/finalized/subdir0/subdir0/blk_1073741825 to /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-1071315328-172.16.1.88-1477604513635/current/rbw/blk_1073741825, file length=192
2016-10-27 14:42:58,450 DEBUG DataNode - writeTo blockfile is /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-1071315328-172.16.1.88-1477604513635/current/rbw/blk_1073741825 of size 192
2016-10-27 14:42:58,450 DEBUG DataNode - writeTo metafile is /Users/weichiu/sandbox/hadoop/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/dfs/data/data1/current/BP-1071315328-172.16.1.88-1477604513635/current/rbw/blk_1073741825_1193.meta of size 11
{quote}

The block object's genstamp should have been the same as that of the on-disk 
replica. However, the log suggests the replica's genstamp may have been 
updated after the following check:
{code:title=BlockSender.<init>}
  try(AutoCloseableLock lock = datanode.data.acquireDatasetLock()) {
replica = getReplica(block, datanode);
replicaVisibleLength = replica.getVisibleLength();
  }
  // if there is a write in progress
  ChunkChecksum chunkChecksum = null;
  if (replica.getState() == ReplicaState.RBW) {
final ReplicaInPipeline rbw = (ReplicaInPipeline) replica;
waitForMinLength(rbw, startOffset + length);
chunkChecksum = rbw.getLastChecksumAndDataLen();
  }
{code}

In summary, I think the assumption here is not valid: a write in progress 
may begin after that check.
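
A generic, self-contained illustration of this check-then-act pattern (not 
Hadoop code): thread A checks the replica under one genstamp, thread B 
concurrently appends and bumps the genstamp (a new key), and A's later lookup 
under the stale key returns null, mirroring {{datanode.data.getVolume(block)}}:

{code:title=StaleKeyRace.java (illustrative)|borderStyle=solid}
import java.util.concurrent.ConcurrentHashMap;

public class StaleKeyRace {
  public static void main(String[] args) throws Exception {
    ConcurrentHashMap<String, String> replicas = new ConcurrentHashMap<>();
    replicas.put("blk_1073741825_1192", "FINALIZED");

    Thread reader = new Thread(() -> {
      // Check: the replica looks FINALIZED under the old genstamp.
      String state = replicas.get("blk_1073741825_1192");
      try { Thread.sleep(50); } catch (InterruptedException ignored) { }
      // Act: the writer has renamed the replica by now, so this is null.
      String again = replicas.get("blk_1073741825_1192");
      System.out.println("check=" + state + ", act=" + again);
    });

    Thread writer = new Thread(() -> {
      // Append in progress: the replica moves to RBW under a new genstamp.
      replicas.remove("blk_1073741825_1192");
      replicas.put("blk_1073741825_1193", "RBW");
    });

    reader.start();
    Thread.sleep(10); // let the reader pass its check first
    writer.start();
    reader.join();
    writer.join();
  }
}
{code}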




[jira] [Created] (HDFS-11071) Ozone: SCM: Move SCM config keys to ScmConfig

2016-10-27 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-11071:
---

 Summary: Ozone: SCM: Move SCM config keys to ScmConfig
 Key: HDFS-11071
 URL: https://issues.apache.org/jira/browse/HDFS-11071
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Anu Engineer
Priority: Minor
 Fix For: HDFS-7240


Move the SCM-specific keys from {{OzoneConfigKeys}} to {{ScmConfigKeys}}.






[jira] [Created] (HDFS-11072) Add ability to unset and change directory EC policy

2016-10-27 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-11072:
--

 Summary: Add ability to unset and change directory EC policy
 Key: HDFS-11072
 URL: https://issues.apache.org/jira/browse/HDFS-11072
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: erasure-coding
Affects Versions: 3.0.0-alpha1
Reporter: Andrew Wang


Since the directory-level EC policy simply applies to files at create time, it 
makes sense to make it more similar to storage policies and allow changing and 
unsetting the policy.
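
By analogy with the storage-policy API, the resulting operations might look 
like the following (hypothetical sketch; the method names mirror 
{{setStoragePolicy}}/{{unsetStoragePolicy}} and are not committed API):

{code:title=Proposed shape (illustrative)|borderStyle=solid}
/** Hypothetical sketch of the proposed directory-level EC policy operations. */
public interface ErasureCodingPolicyOps {
  /** Set or change a directory's EC policy; affects files created afterwards. */
  void setErasureCodingPolicy(String path, String policyName);

  /** Unset the directory's EC policy so it inherits from its ancestors. */
  void unsetErasureCodingPolicy(String path);
}
{code}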






[jira] [Resolved] (HDFS-1499) mv the namenode NameSpace and BlocksMap to hbase to save the namenode memory

2016-10-27 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-1499.
-
Resolution: Duplicate

Resolving the old JIRA since many similar JIRAs have been raised, including 
HDFS-8286.

> mv the namenode NameSpace and BlocksMap to hbase to save the namenode memory
> 
>
> Key: HDFS-1499
> URL: https://issues.apache.org/jira/browse/HDFS-1499
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: dl.brain.ln
>
> The NameNode stores all its metadata in the main memory of the machine on 
> which it is deployed. As the file count and block count grow, the namenode 
> machine cannot hold any more files and blocks in its memory, which restricts 
> the growth of the HDFS cluster. Many people are talking and thinking about 
> this problem. Google's next version of GFS uses Bigtable to store the 
> metadata of the DFS, and that seems to work. What if we used HBase the same 
> way?
> In the namenode structure, the namespace of the filesystem and the maps of 
> block -> datanodes and datanode -> blocks, which are kept in memory, consume 
> most of the namenode's heap. What if we stored those data structures in 
> HBase to reduce the namenode's memory footprint?


