[jira] [Reopened] (HDFS-12794) Ozone: Parallelize ChunkOutputStream Writes to container

2018-01-16 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reopened HDFS-12794:
--

This patch has been reverted in HDFS-7240. Reopening it.

> Ozone: Parallelize ChunkOutputStream Writes to container
> ---
>
> Key: HDFS-12794
> URL: https://issues.apache.org/jira/browse/HDFS-12794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: HDFS-7240
>
> Attachments: HDFS-12794-HDFS-7240.001.patch, 
> HDFS-12794-HDFS-7240.002.patch, HDFS-12794-HDFS-7240.003.patch, 
> HDFS-12794-HDFS-7240.004.patch, HDFS-12794-HDFS-7240.005.patch, 
> HDFS-12794-HDFS-7240.006.patch, HDFS-12794-HDFS-7240.007.patch, 
> HDFS-12794-HDFS-7240.008.patch, HDFS-12794-HDFS-7240.009.patch
>
>
> The ChunkOutputStream writes are synchronous in nature. Once one chunk of
> data is written, the next chunk write is blocked until the previous chunk
> has been written to the container.
> The ChunkOutputStream writes should be made asynchronous, and close() on the
> OutputStream should ensure that all dirty buffers are flushed to the
> container.
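A minimal sketch of the asynchronous pattern the description asks for (all names here, such as AsyncChunkWriter and writeToContainer, are hypothetical; this is not one of the attached patches): chunk writes are submitted to an executor, and close() blocks until every outstanding write completes.

{code:java}
// Sketch only: hypothetical AsyncChunkWriter, not one of the attached
// HDFS-12794 patches. Assumes a single writer thread.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class AsyncChunkWriter implements AutoCloseable {
  private final ExecutorService executor = Executors.newFixedThreadPool(4);
  private final List<CompletableFuture<Void>> pending = new ArrayList<>();

  /** Submit a chunk write without blocking on the previous chunk. */
  void writeChunkAsync(byte[] chunk) {
    pending.add(CompletableFuture.runAsync(() -> writeToContainer(chunk),
        executor));
  }

  /** close() acts as the flush: wait for every outstanding chunk write. */
  @Override
  public void close() {
    CompletableFuture.allOf(pending.toArray(new CompletableFuture[0])).join();
    executor.shutdown();
  }

  private void writeToContainer(byte[] chunk) {
    // Stand-in for the real container write RPC.
  }
}
{code}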






[jira] [Created] (HDFS-13023) Journal Sync not working on a Kerberos cluster

2018-01-16 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-13023:
-

 Summary: Journal Sync not working on a Kerberos cluster
 Key: HDFS-13023
 URL: https://issues.apache.org/jira/browse/HDFS-13023
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


2018-01-10 01:15:40,517 INFO  server.JournalNodeSyncer (JournalNodeSyncer.java:syncWithJournalAtIndex(235)) - Syncing Journal /0.0.0.0:8485 with xxx, journal id: mycluster
2018-01-10 01:15:40,583 ERROR server.JournalNodeSyncer (JournalNodeSyncer.java:syncWithJournalAtIndex(259)) - Could not sync with Journal at xxx/xxx:8485
com.google.protobuf.ServiceException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User nn/xxx (auth:PROXY) via jn/xxx (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is only accessible by nn/x...@example.com
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:242)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy16.getEditLogManifest(Unknown Source)
    at org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.syncWithJournalAtIndex(JournalNodeSyncer.java:254)
    at org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.syncJournals(JournalNodeSyncer.java:230)
    at org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.lambda$startSyncJournalsDaemon$0(JournalNodeSyncer.java:190)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User nn/xxx (auth:PROXY) via jn/xxx (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol: this service is only accessible by nn/xxx
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1491)
    at org.apache.hadoop.ipc.Client.call(Client.java:1437)
    at org.apache.hadoop.ipc.Client.call(Client.java:1347)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
    ... 6 more
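For context, the error message indicates a protocol-level principal restriction: the QJournalProtocol ACL only admits the NameNode principal. A minimal, self-contained sketch of that kind of check follows; the types and names are hypothetical (this is not Hadoop's actual ServiceAuthorizationManager), and it only illustrates why a Kerberos-proxied caller can fail an exact-principal ACL.

{code:java}
// Hypothetical sketch of a protocol-level principal ACL; illustrative
// only, not Hadoop's actual authorization code.
class ProtocolPrincipalAcl {
  private final String allowedPrincipal; // e.g. "nn/host@EXAMPLE.COM"

  ProtocolPrincipalAcl(String allowedPrincipal) {
    this.allowedPrincipal = allowedPrincipal;
  }

  /**
   * effectiveUser is who the call runs as; realUser is the principal that
   * actually authenticated over Kerberos (non-null only for proxied calls).
   * With an exact-match ACL like this, a call proxied through the
   * JournalNode principal fails even though the effective user is the
   * NameNode.
   */
  void authorize(String effectiveUser, String realUser) {
    String authenticated = (realUser != null) ? realUser : effectiveUser;
    if (!allowedPrincipal.equals(authenticated)) {
      throw new SecurityException("User " + effectiveUser
          + (realUser != null ? " (auth:PROXY) via " + realUser : "")
          + " is not authorized: this service is only accessible by "
          + allowedPrincipal);
    }
  }
}
{code}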






[jira] [Created] (HDFS-13024) Ozone: ContainerStateMachine should synchronize operations between createContainer op and writeChunk

2018-01-16 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-13024:


 Summary: Ozone: ContainerStateMachine should synchronize 
operations between createContainer op and writeChunk
 Key: HDFS-13024
 URL: https://issues.apache.org/jira/browse/HDFS-13024
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240


This issue happens after HDFS-12853. With HDFS-12853, the writeChunk op has been divided into two stages: 1) the log write phase (where the state machine data is written) and 2) applyTransaction.

With a 3-node Ratis ring, the Ratis leader appends the log entry to its own log and forwards it to its followers. However, there is no guarantee as to when the followers will apply the log entry to the state machine in {{applyTransaction}}.

This issue happens in the following order:

1) Leader accepts the create container call
2) Leader appends the entry to its log and forwards it to the followers
3) Followers append the entry to their logs and ack to the Raft leader (note that the transaction still has not been applied)
4) Leader applies the transaction and replies
5) The write chunk call is sent to the leader
6) Leader forwards the call to the followers
7) Followers try to apply the log entry via {{Dispatcher#dispatch}}; however, the create container call from 3) still has not been applied
8) The write chunk call fails on the followers

One way to order the two apply phases is sketched below.
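As referenced above, a minimal sketch of one possible synchronization: writeChunk's apply phase waits on a per-container future that createContainer completes. The class and method names are hypothetical, and this is not necessarily the approach of the eventual fix.

{code:java}
// Sketch only: hypothetical ordering between the createContainer and
// writeChunk apply phases; names are illustrative, not the actual patch.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class ContainerOpOrdering {
  // One future per container id, completed once createContainer is applied.
  private final ConcurrentMap<Long, CompletableFuture<Void>> created =
      new ConcurrentHashMap<>();

  private CompletableFuture<Void> futureFor(long containerId) {
    return created.computeIfAbsent(containerId,
        id -> new CompletableFuture<>());
  }

  /** Called when the createContainer transaction is applied locally. */
  void onCreateContainerApplied(long containerId) {
    futureFor(containerId).complete(null);
  }

  /** writeChunk's apply phase blocks until the container exists locally. */
  void beforeWriteChunkApply(long containerId) {
    futureFor(containerId).join();
  }
}
{code}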






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-01-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/

[Jan 16, 2018 2:28:29 AM] (rohithsharmaks) YARN-6736. Consider writing to both 
ats v1 & v2 from RM for smoother
[Jan 16, 2018 10:51:02 AM] (brahma) HDFS-8693. refreshNamenodes does not 
support adding a new standby to a




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
internal representation by returning Resource.resources At Resource.java:by 
returning Resource.resources At Resource.java:[line 234] 
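For readers unfamiliar with this FindBugs category (EI_EXPOSE_REP, "may expose internal representation by returning"), a minimal illustration with a hypothetical class: the flagged getter returns a mutable internal field directly, and the conventional fix is a defensive copy.

{code:java}
// Hypothetical class illustrating the EI_EXPOSE_REP pattern; not the
// actual YARN Resource code.
class Holder {
  private final int[] resources = new int[4];

  // Flagged pattern: hands out the internal array, so callers can mutate
  // the object's private state.
  int[] getResourcesUnsafe() {
    return resources;
  }

  // Conventional fix: return a defensive copy.
  int[] getResources() {
    return resources.clone();
  }
}
{code}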

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 
   hadoop.hdfs.TestDFSClientRetries 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   hadoop.hdfs.web.TestWebHDFSAcl 
   hadoop.hdfs.TestParallelUnixDomainRead 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics 
   hadoop.hdfs.TestWriteReadStripedFile 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.TestReplication 
   hadoop.hdfs.TestSafeModeWithStripedFile 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 
   hadoop.hdfs.TestErasureCodingMultipleRacks 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestDecommissionWithStriped 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem 
   hadoop.hdfs.nfs.nfs3.TestViewfsWithNfs3 
   hadoop.hdfs.nfs.nfs3.TestExportsTable 
   hadoop.yarn.webapp.TestWebApp 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/diff-compile-javac-root.txt
  [280K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/whitespace-eol.txt
  [9.2M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/whitespace-tabs.txt
  [292K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [640K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
  [308K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-nfs.txt
  [32K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [44K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/659/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8

[jira] [Created] (HDFS-13025) [SPS]: Implement a mechanism to scan the files for external SPS

2018-01-16 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-13025:
--

 Summary: [SPS]: Implement a mechanism to scan the files for 
external SPS
 Key: HDFS-13025
 URL: https://issues.apache.org/jira/browse/HDFS-13025
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G


The HDFS-12911 modularization introduces a FileIDCollector interface for scanning the files. That will let us plug in different scanning mechanisms if needed.

For internal SPS, we have INode-based scanning. For external SPS, we should go via client-API scanning, as we cannot access NN-internal structures.

This task is to implement the scanning plugin for external SPS.
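As a rough sketch of what client-API scanning could look like, the code below walks a directory tree through the public FileSystem API only, with no NN-internal INode access. ExternalFileCollector and submitForSatisfying are hypothetical names, and this stand-in does not reflect the actual FileIDCollector contract from HDFS-12911.

{code:java}
// Hypothetical external-SPS scanner; ExternalFileCollector and
// submitForSatisfying are stand-ins, not the HDFS-12911 interface.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

class ExternalFileCollector {
  private final FileSystem fs;

  ExternalFileCollector(Configuration conf) throws IOException {
    this.fs = FileSystem.get(conf);
  }

  /** Walk a tree via the client API only; no NN-internal INode access. */
  void scan(Path dir) throws IOException {
    RemoteIterator<FileStatus> it = fs.listStatusIterator(dir);
    while (it.hasNext()) {
      FileStatus status = it.next();
      if (status.isDirectory()) {
        scan(status.getPath());                // recurse into subdirectories
      } else {
        submitForSatisfying(status.getPath()); // hand off to the SPS queue
      }
    }
  }

  private void submitForSatisfying(Path file) {
    // Stand-in for queueing the file for storage-policy satisfaction.
  }
}
{code}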


