[jira] [Created] (HDDS-1948) S3 MPU can't be created with octet-stream content-type

2019-08-10 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1948:
--

 Summary: S3 MPU can't be created with octet-stream content-type 
 Key: HDDS-1948
 URL: https://issues.apache.org/jira/browse/HDDS-1948
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton


This problem was reported offline by [~shaneku...@gmail.com].

When aws-sdk-go is used to access the S3 gateway of Ozone, it sends the Multi 
Part Upload initialize message with an "application/octet-stream" Content-Type.

This Content-Type is not sent by the aws-cli, which was used as the reference 
when reimplementing the S3 endpoint.

The problem is that we use the same REST endpoint for the initialize and 
complete Multipart Upload requests. For completion we need the 
CompleteMultipartUploadRequest parameter, which is parsed from the body.

For initialization the body is empty, so it can't be deserialized to a 
CompleteMultipartUploadRequest.

The workaround is to set a specific content type from a filter, which allows us 
to create two separate REST methods for the initialize and completion messages.
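A minimal sketch of the filter idea, with hypothetical names (this is not the actual Ozone filter): an MPU-initiate request is recognized by its "uploads" query parameter, and its client-sent Content-Type is replaced with a synthetic value so that initiate and complete can be dispatched to different REST methods.

```java
// Sketch only: hypothetical names, not the real Ozone S3 gateway code.
// The client-sent type (often application/octet-stream from aws-sdk-go)
// is irrelevant for initiate requests, which have an empty body.
public class MultipartContentTypeRewriter {

  // Synthetic media type used only for internal dispatch.
  static final String MPU_INITIATE_TYPE = "application/x-mpu-initiate";

  /**
   * Returns the Content-Type the dispatcher should see.
   *
   * @param originalType    Content-Type sent by the client (may be null)
   * @param hasUploadsParam true if the URL carries the "uploads" query
   *                        parameter that marks an MPU-initiate request
   */
  static String effectiveContentType(String originalType,
                                     boolean hasUploadsParam) {
    if (hasUploadsParam) {
      return MPU_INITIATE_TYPE;
    }
    return originalType;
  }
}
```

In a JAX-RS setup this logic would live in a request filter that overwrites the header before resource matching; the helper above only shows the decision itself.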



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14718) HttpFS: Sort LISTSTATUS response by key names

2019-08-10 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-14718:
-

 Summary: HttpFS: Sort LISTSTATUS response by key names
 Key: HDFS-14718
 URL: https://issues.apache.org/jira/browse/HDFS-14718
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: httpfs
Reporter: Siyao Meng
Assignee: Siyao Meng


Example: see the description of HDFS-14665.

Root cause:
WebHDFS is [using a 
TreeMap|https://github.com/apache/hadoop/blob/99bf1dc9eb18f9b4d0338986d1b8fd2232f1232f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java#L120]
 to serialize HdfsFileStatus, while HttpFS [uses a 
LinkedHashMap|https://github.com/apache/hadoop/blob/6fcc5639ae32efa5a5d55a6b6cf23af06fc610c3/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java#L107]
 to serialize FileStatus.

Questions:
Why the difference? Is it intentional, e.g. due to some performance concern in 
HttpFS?
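The ordering difference can be reproduced in isolation: a TreeMap iterates its keys in sorted order, while a LinkedHashMap preserves insertion order. The field names below are illustrative, not the full HdfsFileStatus field set.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

// Demonstrates the ordering difference behind the inconsistent JSON output:
// TreeMap iterates keys in sorted order, LinkedHashMap in insertion order.
public class MapOrderingDemo {

  // Joins the map's keys in iteration order, which is what a JSON
  // serializer walking the map would emit.
  static String keysInOrder(Map<String, Object> m) {
    return String.join(",", m.keySet());
  }

  // Inserts a few status-like fields in a deliberately non-sorted order.
  static Map<String, Object> fill(Map<String, Object> m) {
    m.put("pathSuffix", "f");
    m.put("accessTime", 0L);
    m.put("length", 1L);
    return m;
  }
}
```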






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-08-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/409/

[Aug 9, 2019 10:49:55 PM] (cliang) HDFS-12975. [SBN read] Changes to the 
NameNode to support reads from
[Aug 9, 2019 10:49:55 PM] (cliang) HDFS-12977. [SBN read] Add stateId to RPC 
headers. Contributed by Plamen
[Aug 9, 2019 10:49:55 PM] (cliang) HDFS-13331. [SBN read] Add lastSeenStateId 
to RpcRequestHeader.
[Aug 9, 2019 10:49:55 PM] (cliang) HDFS-13286. [SBN read] Add haadmin commands 
to transition between
[Aug 9, 2019 10:49:55 PM] (cliang) HDFS-13578. [SBN read] Add ReadOnly 
annotation to methods in
[Aug 9, 2019 10:49:55 PM] (cliang) HDFS-13399. [SBN read] Make Client field 
AlignmentContext non-static.
[Aug 9, 2019 10:49:55 PM] (cliang) HDFS-13607. [SBN read] Edit Tail Fast Path 
Part 1: Enhance JournalNode
[Aug 9, 2019 10:49:55 PM] (cliang) HDFS-13608. [SBN read] Edit Tail Fast Path 
Part 2: Add ability for
[Aug 9, 2019 10:49:55 PM] (cliang) HDFS-13609. [SBN read] Edit Tail Fast Path 
Part 3: NameNode-side changes
[Aug 9, 2019 10:49:55 PM] (cliang) HDFS-13706. [SBN read] Rename client context 
to ClientGSIContext.
[Aug 9, 2019 10:49:55 PM] (cliang) HDFS-12976. [SBN read] Introduce 
ObserverReadProxyProvider. Contributed
[Aug 9, 2019 10:49:55 PM] (cliang) HDFS-13665. [SBN read] Move RPC response 
serialization into
[Aug 9, 2019 10:49:55 PM] (cliang) HDFS-13610. [SBN read] Edit Tail Fast Path 
Part 4: Cleanup. Integration
[Aug 9, 2019 10:49:56 PM] (cliang) HDFS-13688. [SBN read] Introduce msync API 
call. Contributed by Chen
[Aug 9, 2019 10:49:56 PM] (cliang) HDFS-13789. Reduce logging frequency of
[Aug 9, 2019 10:49:56 PM] (cliang) HDFS-13767. Add msync server implementation. 
Contributed by Chen Liang.
[Aug 9, 2019 10:49:56 PM] (cliang) HDFS-13851. Remove AlignmentContext from
[Aug 9, 2019 10:49:56 PM] (cliang) HDFS-13782. ObserverReadProxyProvider should 
work with
[Aug 9, 2019 10:49:56 PM] (cliang) HDFS-13779. [SBN read] Implement proper 
failover and observer failure
[Aug 9, 2019 10:49:56 PM] (cliang) HDFS-13880. Add mechanism to allow certain 
RPC calls to bypass sync.
[Aug 9, 2019 10:49:56 PM] (cliang) HDFS-13778. [SBN read] 
TestStateAlignmentContextWithHA should use real
[Aug 9, 2019 10:49:56 PM] (cliang) HDFS-13749. [SBN read] Use getServiceStatus 
to discover observer
[Aug 9, 2019 10:49:56 PM] (cliang) HDFS-13898. [SBN read] Throw retriable 
exception for getBlockLocations
[Aug 9, 2019 10:49:56 PM] (cliang) HDFS-13791. Limit logging frequency of edit 
tail related statements.
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-13961. [SBN read] TestObserverNode 
refactoring. Contributed by
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-13523. Support observer nodes in 
MiniDFSCluster. Contributed by
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-13925. Unit Test for transitioning 
between different states.
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-13924. [SBN read] Handle 
BlockMissingException when reading from
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-14016. [SBN read] 
ObserverReadProxyProvider should enable observer
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-14035. NN status discovery does not 
leverage delegation token.
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-14017. [SBN read] 
ObserverReadProxyProviderWithIPFailover should
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-14067. [SBN read] Allow manual failover 
between standby and
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-14094. [SBN read] Fix the order of 
logging arguments in
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-14120. [SBN read] ORFPP should also 
clone DT for the virtual IP.
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-14131. [SBN read] Create user guide for 
Consistent Reads from
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-14142. Move ipfailover config key out 
of HdfsClientConfigKeys.
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-13873. [SBN read] ObserverNode should 
reject read requests when it
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-14138. [SBN read] Description errors in 
the comparison logic of
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-14146. [SBN read] Handle exceptions 
from and prevent handler
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-14116. [SBN read] Fix class cast error 
in NNThroughputBenchmark
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-14149. [SBN read] Fix annotations on 
new interfaces/classes for SBN
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-14160. [SBN read] 
ObserverReadInvocationHandler should implement
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-14154. [SBN read] Document 
dfs.ha.tail-edits.period in user guide.
[Aug 9, 2019 10:49:57 PM] (cliang) HDFS-14170. [SBN read] Fix checkstyle 
warnings related to SBN reads.
[Aug 9, 2019 10:49:58 PM] (cliang) [SBN read] Addendum: Misc fix to be Java 7 
compatible
[Aug 9, 2019 10:49:58 PM] (cliang) HDFS-14250. [SBN read]. msync should always 
direct to active NameNode to
[Aug 9, 2019 10:49:58 PM] (cliang) HDFS-14317. Ensure checkpoints are created 
when in-progress edit log
[Aug 9, 2019 10:49:58 PM] (cliang) HDFS-14272. [SBN read] Make 
Obser

[jira] [Created] (HDDS-1949) Missing or error-prone test cleanup

2019-08-10 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1949:
---

 Summary: Missing or error-prone test cleanup
 Key: HDDS-1949
 URL: https://issues.apache.org/jira/browse/HDDS-1949
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


Some integration tests do not clean up after themselves, and some clean up 
only if the test succeeds.
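The failure mode can be sketched without any Ozone types: cleanup placed after the assertions runs only on success, while cleanup in a finally block (the equivalent of a JUnit @After method) runs on both success and failure. All names here are illustrative.

```java
// Sketch of the cleanup problem (no real Ozone test types).
public class CleanupSketch {

  static boolean cleanedUp;

  // Error-prone pattern: cleanup sits after the assertion, so it is
  // skipped whenever the test body throws.
  static void fragileTest(boolean fail) {
    cleanedUp = false;
    createTestData();
    if (fail) {
      throw new AssertionError("test failure");
    }
    deleteTestData(); // never reached on failure
  }

  // Robust pattern: cleanup in finally (or an @After method) always runs.
  static void robustTest(boolean fail) {
    cleanedUp = false;
    createTestData();
    try {
      if (fail) {
        throw new AssertionError("test failure");
      }
    } finally {
      deleteTestData(); // runs on success and on failure
    }
  }

  static void createTestData() { }
  static void deleteTestData() { cleanedUp = true; }
}
```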






[jira] [Created] (HDDS-1950) S3 MPU part list can't be called if there are no parts

2019-08-10 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1950:
--

 Summary: S3 MPU part list can't be called if there are no parts
 Key: HDDS-1950
 URL: https://issues.apache.org/jira/browse/HDDS-1950
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: S3
Reporter: Elek, Marton


If an S3 multipart upload is created but no part has been uploaded, the part 
list can't be retrieved because the server throws HTTP 500:

Create an MPU:

{code}
aws s3api --endpoint http://localhost: create-multipart-upload 
--bucket=docker --key=testkeu 
{
"Bucket": "docker",
"Key": "testkeu",
"UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
}
{code}

List the parts:

{code}
aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
--key=testkeu 
--upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
{code}

It throws an exception on the server side because in 
{{KeyManagerImpl.listParts}} the ReplicationType is retrieved from the first part:

{code}
HddsProtos.ReplicationType replicationType =
partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
{code}

No first part exists yet in this case, so the lookup fails.
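A possible guard, sketched with simplified types (the map below stands in for partKeyInfoMap; the actual fix in KeyManagerImpl may differ): check for emptiness before calling firstEntry() and fall back to a default replication type.

```java
import java.util.TreeMap;

// Simplified sketch: a String stands in for the part's replication type,
// and the TreeMap for the real part-number-to-PartKeyInfo map.
public class ListPartsSketch {

  // Returns the replication type for the part list, tolerating the
  // no-parts-uploaded-yet case instead of calling firstEntry() on an
  // empty map (which would cause a NullPointerException -> HTTP 500).
  static String replicationTypeOf(TreeMap<Integer, String> partKeyInfoMap,
                                  String defaultType) {
    if (partKeyInfoMap.isEmpty()) {
      // No parts uploaded yet: nothing to derive the type from.
      return defaultType;
    }
    return partKeyInfoMap.firstEntry().getValue();
  }
}
```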






[jira] [Created] (HDDS-1951) Wrong symbolic release name on 0.4.1 branch

2019-08-10 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1951:
--

 Summary: Wrong symbolic release name on 0.4.1 branch
 Key: HDDS-1951
 URL: https://issues.apache.org/jira/browse/HDDS-1951
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton


The symbolic release name should be Biscayne instead of Crater Lake, according 
to the roadmap:

https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Road+Map







[jira] [Created] (HDDS-1952) TestMiniChaosOzoneCluster may run until OOME

2019-08-10 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1952:
---

 Summary: TestMiniChaosOzoneCluster may run until OOME
 Key: HDDS-1952
 URL: https://issues.apache.org/jira/browse/HDDS-1952
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Doroszlai, Attila


{{TestMiniChaosOzoneCluster}} runs a load generator on a cluster for a nominal 
1 minute, but it may run indefinitely until the JVM crashes with an 
OutOfMemoryError.

In the 0.4.1 nightly build it crashed in 29 of 30 runs (and no tests were 
executed in the remaining run due to some other error).

Latest:
https://github.com/elek/ozone-ci/blob/3f553ed6ad358ba61a302967617de737d7fea01a/byscane/byscane-nightly-wggqd/integration/output.log#L5661-L5662

When it crashes, it leaves GBs of data lying around.
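One way to bound such a run, sketched with a simulated clock (hypothetical structure, not the actual chaos-cluster code): have the load loop check a wall-clock deadline rather than only a completion condition, so the run cannot exceed the intended duration even if that condition is never met.

```java
// Sketch of bounding a runaway load generator. Time is passed in and
// advanced explicitly here so the loop is deterministic; a real generator
// would use System.currentTimeMillis() and do actual I/O per iteration.
public class BoundedLoadSketch {

  // Runs simulated load iterations until the deadline is reached and
  // returns how many iterations fit into the time budget.
  static int runFor(long maxMillis, long nowStart) {
    long deadline = nowStart + maxMillis;
    int iterations = 0;
    long now = nowStart;
    while (now < deadline) {
      iterations++;    // one unit of simulated load
      now += 10;       // pretend each iteration takes 10 ms
    }
    return iterations;
  }
}
```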






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-08-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1224/

[Aug 9, 2019 4:29:00 AM] (github) HDDS-1884. Support Bucket ACL operations for 
OM HA. (#1202)
[Aug 9, 2019 4:38:31 AM] (bharat) HDDS-1934. TestSecureOzoneCluster may fail 
due to port conflict (#1254)
[Aug 9, 2019 6:11:16 AM] (abmod) YARN-9694. UI always show default-rack for all 
the nodes while running
[Aug 9, 2019 7:34:23 AM] (snemeth) YARN-9727: Allowed Origin pattern is 
discouraged if regex contains *.
[Aug 9, 2019 7:49:18 AM] (snemeth) YARN-9094: Remove unused interface method:
[Aug 9, 2019 7:59:19 AM] (snemeth) YARN-9096: Some GpuResourcePlugin and 
ResourcePluginManager methods are
[Aug 9, 2019 8:18:34 AM] (snemeth) YARN-9092. Create an object for cgroups 
mount enable and cgroups mount
[Aug 9, 2019 8:37:54 AM] (rakeshr) HDFS-14700. Clean up pmem cache before 
setting pmem cache capacity.
[Aug 9, 2019 10:13:46 AM] (snemeth) SUBMARINE-57. Add more elaborate message if 
submarine command is not
[Aug 9, 2019 10:35:02 AM] (sunilg) YARN-9715. [UI2] yarn-container-log URI need 
to be encoded to avoid
[Aug 9, 2019 11:38:13 AM] (stevel) HADOOP-16315. ABFS: transform full UPN for 
named user in AclStatus
[Aug 9, 2019 1:52:37 PM] (gabor.bota) HADOOP-16499. S3A retry policy to be 
exponential (#1246). Contributed by
[Aug 9, 2019 3:33:08 PM] (gabor.bota) HADOOP-16481. 
ITestS3GuardDDBRootOperations.test_300_MetastorePrune
[Aug 9, 2019 4:55:30 PM] (abmodi) YARN-9732. 
yarn.system-metrics-publisher.enabled=false is not honored by
[Aug 9, 2019 6:12:17 PM] (eyang) YARN-9527.  Prevent rogue Localizer Runner 
from downloading same file
[Aug 9, 2019 6:28:38 PM] (xyao) HDDS-1906. 
TestScmSafeMode#testSCMSafeModeRestrictedOp is failing.
[Aug 9, 2019 10:37:29 PM] (weichiu) HDFS-14195. OIV: print out storage policy 
id in oiv Delimited output.
[Aug 9, 2019 10:41:37 PM] (weichiu) HDFS-14623. In NameNode Web UI, for Head 
the file (first 32K) old data
[Aug 10, 2019 1:00:22 AM] (weichiu) HDFS-12125. Document the missing EC 
removePolicy command (#1258)
[Aug 10, 2019 1:40:28 AM] (weichiu) HDFS-13359. DataXceiver hung due to the 
lock in




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.security.token.delegation.TestZKDelegationTokenSecretManager 
   hadoop.hdfs.server.datanode.TestLargeBlockReport 
   hadoop.hdfs.server.federation.router.TestRouterFaultTolerant 
   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.fs.azure.metrics.TestNativeAzureFileSystemMetricsSystem 
   hadoop.yarn.sls.TestReservationSystemInvariants 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1224/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1224/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop