[jira] [Created] (HDFS-13914) DN UI logs link is broken when https is enabled after HDFS-13902 fixed

2018-09-13 Thread Jianfei Jiang (JIRA)
Jianfei Jiang created HDFS-13914:


 Summary: DN UI logs link is broken when https is enabled after 
HDFS-13902 fixed
 Key: HDFS-13914
 URL: https://issues.apache.org/jira/browse/HDFS-13914
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jianfei Jiang
Assignee: Jianfei Jiang
 Attachments: HDFS-13914_001.patch

The bug that the DN UI logs link is broken when HTTPS is enabled was fixed by 
HDFS-13581; however, after the fix for HDFS-13902, this bug appears again.






[jira] [Created] (HDDS-442) ozone oz doesn't work with rest and without a hostname

2018-09-13 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-442:
-

 Summary: ozone oz doesn't work with rest and without a hostname
 Key: HDDS-442
 URL: https://issues.apache.org/jira/browse/HDDS-442
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
 Fix For: 0.2.1


Starting {{ozone oz}} commands from the *datanode*:

RPC works without a hostname:

{{ozone oz volume create o3:///test2 --user hadoop --root}}

REST doesn't work:

{{ozone oz volume create http:///test2 --user hadoop --root}}

Error:
{quote}2018-09-13 08:43:08 ERROR OzoneClientFactory:294 - Couldn't create 
protocol class org.apache.hadoop.ozone.client.rest.RestClient exception: 
 java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
 at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
 at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRestClient(OzoneClientFactory.java:247)
 at org.apache.hadoop.ozone.web.ozShell.Handler.verifyURI(Handler.java:95)
 at 
org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.call(CreateVolumeHandler.java:66)
 at 
org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.call(CreateVolumeHandler.java:38)
 at picocli.CommandLine.execute(CommandLine.java:919)
 at picocli.CommandLine.access$700(CommandLine.java:104)
 at picocli.CommandLine$RunLast.handle(CommandLine.java:1083)
 at picocli.CommandLine$RunLast.handle(CommandLine.java:1051)
 at 
picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959)
 at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242)
 at picocli.CommandLine.parseWithHandler(CommandLine.java:1181)
 at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:61)
 at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:52)
 at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:77)
 Caused by: org.apache.http.conn.HttpHostConnectException: Connect to 
0.0.0.0:9874 [/0.0.0.0] failed: Connection refused (Connection refused)
 at 
org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:158)
 at 
org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
 at 
org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
 at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
 at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
 at 
org.apache.hadoop.ozone.client.rest.RestClient.executeHttpRequest(RestClient.java:848)
 at 
org.apache.hadoop.ozone.client.rest.RestClient.getOzoneRestServerAddress(RestClient.java:180)
 at org.apache.hadoop.ozone.client.rest.RestClient.<init>(RestClient.java:156)
 ... 19 more
 Caused by: java.net.ConnectException: Connection refused (Connection refused)
 at java.net.PlainSocketImpl.socketConnect(Native Method)
 at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:204)
 at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
 at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
 at java.net.Socket.connect(Socket.java:589)
 at 
org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
 at 
org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:141)
{quote}
Most probably a wrong configuration key is used to get the address of the 
OzoneManager; 0.0.0.0 seems to be the listener (bind) address rather than an 
address a client can connect to.
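
A minimal sketch of the kind of guard that could avoid dialing the bind 
address (class and method names here are illustrative, not the actual 
RestClient code):

{code:java}
import java.net.InetSocketAddress;

/**
 * Illustrative only: a bind-all address such as 0.0.0.0 is valid for a
 * server listener but useless as a client target, so fall back to
 * localhost before connecting. The real fix belongs in RestClient's
 * address resolution.
 */
public final class ClientAddressSketch {

  private ClientAddressSketch() {
  }

  public static InetSocketAddress toClientAddress(String host, int port) {
    if (host == null || host.isEmpty() || "0.0.0.0".equals(host)) {
      // The configured value is a listener/bind address, not something
      // a client can connect to.
      host = "localhost";
    }
    return InetSocketAddress.createUnresolved(host, port);
  }
}
{code}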






[jira] [Created] (HDDS-443) Create reusable ProgressBar utility for freon tests

2018-09-13 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-443:
-

 Summary: Create reusable ProgressBar utility for freon tests
 Key: HDDS-443
 URL: https://issues.apache.org/jira/browse/HDDS-443
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: test
Reporter: Elek, Marton


Since HDDS-398 we support multiple types of freon tests. But to add more 
tests we need common utilities for generic tasks.

One of the most important is a reusable ProgressBar utility.

Currently the ProgressBar class is part of the RandomKeyGenerator. It should 
be moved out of that class, and all of the thread start/stop logic should be 
moved into the ProgressBar. For example:

{code:java}
ProgressBar bar = new ProgressBar(System.out, () -> ..., 200);
bar.start(); // thread should be started here
bar.stop();  // thread should be stopped here
{code}
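
A minimal sketch of what such a reusable ProgressBar could look like. The 
constructor shape follows the snippet above; everything else is an assumption, 
not the final API:

{code:java}
import java.io.PrintStream;
import java.util.function.Supplier;

/**
 * Sketch of a reusable progress bar that owns its reporting thread, so
 * callers only need start()/stop().
 */
public class ProgressBar {

  private final PrintStream out;
  private final Supplier<Long> currentValue;
  private final long maxValue;
  private final Thread reporter;
  private volatile boolean running;

  public ProgressBar(PrintStream out, Supplier<Long> currentValue,
      long maxValue) {
    this.out = out;
    this.currentValue = currentValue;
    this.maxValue = maxValue;
    this.reporter = new Thread(this::report, "progress-bar");
  }

  public void start() {
    running = true;
    reporter.start(); // the thread is started here, not by the caller
  }

  public void stop() {
    running = false;
    reporter.interrupt(); // the thread is stopped here
    try {
      reporter.join();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }

  private void report() {
    while (running && currentValue.get() < maxValue) {
      out.printf("Progress: %d/%d%n", currentValue.get(), maxValue);
      try {
        Thread.sleep(1000L);
      } catch (InterruptedException e) {
        return; // stop() interrupts the sleep
      }
    }
  }
}
{code}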

 






[jira] [Created] (HDDS-444) Add rest service to the s3gateway

2018-09-13 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-444:
-

 Summary: Add rest service to the s3gateway
 Key: HDDS-444
 URL: https://issues.apache.org/jira/browse/HDDS-444
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton


The next step after HDDS-441 is to add a REST server to the s3gateway 
service.

For the HTTP server the obvious choice is to use 
org.apache.hadoop.http.HttpServer2. We also have an 
org.apache.hadoop.hdds.server.BaseHttpServer which helps to create the 
HttpServer2.

In Hadoop, Jersey 1.19 is usually used. I prefer to exclude the Jersey 
dependency from the s3gateway and add the latest Jersey 2. Hopefully it can 
also be initialized easily, similar to HttpServer2.addJerseyResourcePackage.

The trickiest part is the resource handling. By default, Jersey takes the 
JAX-RS resource classes as input and creates new instances of the specified 
resource classes itself.

But with this approach we can't inject other components (such as the 
OzoneClient) into the resource classes. In Hadoop, usually a singleton is used 
or the reference object is injected into the ServletContext. Both of these are 
just workarounds and make testing harder.

I propose to use some lightweight managed dependency injection:
 # If we can use a Jetty API to instantiate the resource classes, that would 
be the easiest option.
 # Using a simple CDI framework like Dagger would also help. Dagger is very 
lightweight; it doesn't support request-scoped objects, just simple @Inject 
annotations, but hopefully we won't need fancy new features.
 # The most complex solution would be to use CDI or Guice. CDI seems to be 
the more natural choice for JAX-RS. It can be checked how easy it is to 
integrate Weld into the Jetty + Jersey combo.

The expected end result of this task is a new HttpServer subcomponent inside 
the s3gateway which can be started/stopped. We need a simple example service 
(for example a /health endpoint which returns the string 'OK') that 
demonstrates how our own utilities (such as the OzoneClient) can be injected 
into the REST resources. A sketch of such a resource follows below.
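
A minimal sketch of such a /health resource. The JAX-RS annotations are 
standard; how the OzoneClient actually gets injected is exactly the open 
question above:

{code:java}
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.apache.hadoop.ozone.client.OzoneClient;

/**
 * Example resource for the s3gateway. The @Inject only works once one
 * of the DI options above (Jetty API, Dagger, Weld/Guice) is wired in;
 * otherwise Jersey instantiates the class itself and the field stays
 * null.
 */
@Path("/health")
public class HealthResource {

  @Inject
  private OzoneClient ozoneClient;

  @GET
  @Produces(MediaType.TEXT_PLAIN)
  public String health() {
    // A real check could ping Ozone through the injected client here.
    return "OK";
  }
}
{code}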






[jira] [Created] (HDDS-445) Create a logger to print out all of the incoming requests

2018-09-13 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-445:
-

 Summary: Create a logger to print out all of the incoming requests
 Key: HDDS-445
 URL: https://issues.apache.org/jira/browse/HDDS-445
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
 Environment: For the HTTP server of HDDS-444 we need an option to 
print out all the HttpRequests (header + body).

To create a 100% S3-compatible interface, we need to test it with multiple 
external tools (such as s3cli). While mitmproxy is always our best friend, to 
make it easier to identify problems we need a method to log all the incoming 
requests with a logger which can be turned on.

Most probably we already have such a filter in Hadoop/Jetty; the only thing 
we need is to configure it.
Reporter: Elek, Marton
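
A minimal sketch of such a logging filter, using the standard servlet Filter 
API (logging the request body would additionally need a caching request 
wrapper, which is omitted here):

{code:java}
import java.io.IOException;
import java.util.Enumeration;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Logs the method, URI and headers of every incoming request at DEBUG
 * level, so it can be "turned on" by raising the log level.
 */
public class RequestLoggingFilter implements Filter {

  private static final Logger LOG =
      LoggerFactory.getLogger(RequestLoggingFilter.class);

  @Override
  public void init(FilterConfig filterConfig) {
  }

  @Override
  public void doFilter(ServletRequest request, ServletResponse response,
      FilterChain chain) throws IOException, ServletException {
    if (LOG.isDebugEnabled() && request instanceof HttpServletRequest) {
      HttpServletRequest http = (HttpServletRequest) request;
      LOG.debug("{} {}", http.getMethod(), http.getRequestURI());
      Enumeration<String> names = http.getHeaderNames();
      while (names.hasMoreElements()) {
        String name = names.nextElement();
        LOG.debug("  {}: {}", name, http.getHeader(name));
      }
    }
    chain.doFilter(request, response);
  }

  @Override
  public void destroy() {
  }
}
{code}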









[jira] [Created] (HDDS-446) Provide shaded artifact to start ozone service as a datanode plugin

2018-09-13 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-446:
-

 Summary: Provide shaded artifact to start ozone service as a 
datanode plugin
 Key: HDDS-446
 URL: https://issues.apache.org/jira/browse/HDDS-446
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: 0.2.1


The Ozone datanode service can be started in two different ways:
 # as a standalone process
 # as a datanode plugin

We tested the second scenario by compiling ozone and hadoop trunk together 
and creating a combined artifact. But we no longer have a full hadoop 
distribution in the ozone release package, and we had no answer for how the 
plugin could be started with an existing hadoop release.

We need a well defined way to add the datanode-service to the classpath of the 
hadoop datanode.

I propose to create a lightweight shaded artifact (where the shading only 
combines all the classes together, without package-name relocation) to make 
it easier to extend an existing hadoop installation.

In this patch I add the shade plugin execution to the ozone 
objectstore-service, and the shaded file will be copied to the distribution; 
a sketch of such a plugin configuration follows.
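
A minimal sketch of the kind of shade execution meant here, as it could 
appear in the objectstore-service pom.xml (illustrative only; the exact 
configuration in the patch may differ, and note there is no relocation 
section):

{code:xml}
<!-- Illustrative shade execution: without a <relocations> section the
     classes are only combined into one jar, with no package-name
     rewriting. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
{code}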

A new docker-compose based execution environment (ozone-hdfs) 
demonstrates/tests how it could work.

Tested with hadoop 3.1.0; the datanode service could be started without any 
problem.






[jira] [Created] (HDDS-447) separate ozone-dist and hadoop-dist projects

2018-09-13 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-447:
-

 Summary: separate ozone-dist and hadoop-dist projects
 Key: HDDS-447
 URL: https://issues.apache.org/jira/browse/HDDS-447
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton


Currently we use the same hadoop-dist project to create both the ozone and 
the hadoop distribution.

To decouple the ozone and hadoop builds it would be great to create two 
different dist projects.

It can be done easily without modifying the current distribution layout:

hadoop-dist should be cloned to hadoop-ozone/dist; then from 
hadoop-dist/pom.xml we can remove the hdds/ozone related items, and from 
hadoop-ozone/dist/pom.xml we can remove the core hadoop related parts.






[jira] [Created] (HDDS-448) Move NodeStat to NodeStateManager from SCMNodeManager.

2018-09-13 Thread LiXin Ge (JIRA)
LiXin Ge created HDDS-448:
-

 Summary: Move NodeStat to NodeStateManager from SCMNodeManager.
 Key: HDDS-448
 URL: https://issues.apache.org/jira/browse/HDDS-448
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: LiXin Ge


This issue tries to make the SCMNodeManager clear and clean, as the stat 
information should be kept by the NodeStateManager (NodeStateMap). It is also 
described by [~nandakumar131] as a {{TODO}}.






[jira] [Created] (HDDS-449) Add a NULL check to protect DeadNodeHandler#onMessage

2018-09-13 Thread LiXin Ge (JIRA)
LiXin Ge created HDDS-449:
-

 Summary: Add a NULL check to protect DeadNodeHandler#onMessage
 Key: HDDS-449
 URL: https://issues.apache.org/jira/browse/HDDS-449
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: LiXin Ge
Assignee: LiXin Ge


Add a NULL check to protect against the situation below (which may only 
happen in unit tests):
 1. A new datanode registers with SCM.
 2. There are no containers allocated on the new datanode yet.
 3. The new datanode dies and an event is fired to {{DeadNodeHandler}}.
 4. In {{DeadNodeHandler#onMessage}}, the lookup in {{node2ContainerMap}} 
finds nothing and {{containers}} will be {{null}}.
 5. A NullPointerException will be thrown in the following iteration over 
{{containers}}, like:
{noformat}
[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.535 s 
<<< FAILURE! - in org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler
[ERROR] 
testStatisticsUpdate(org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler)  Time 
elapsed: 0.33 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.hadoop.hdds.scm.node.DeadNodeHandler.onMessage(DeadNodeHandler.java:68)
at 
org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler.testStatisticsUpdate(TestDeadNodeHandler.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
{noformat}
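
A minimal sketch of the proposed guard (the types and field names are 
illustrative stand-ins for the actual SCM classes):

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.UUID;

/**
 * Sketch of the proposed guard in DeadNodeHandler#onMessage.
 */
public class DeadNodeGuardSketch {

  private final Map<UUID, Set<Long>> node2ContainerMap;

  public DeadNodeGuardSketch(Map<UUID, Set<Long>> node2ContainerMap) {
    this.node2ContainerMap = node2ContainerMap;
  }

  public void onDeadNode(UUID datanodeUuid) {
    Set<Long> containers = node2ContainerMap.get(datanodeUuid);
    if (containers == null || containers.isEmpty()) {
      // The node died before any container was allocated on it (seen
      // in unit tests); return instead of iterating over null.
      return;
    }
    for (Long containerId : containers) {
      // existing per-container handling goes here
    }
  }
}
{code}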






[jira] [Created] (HDDS-450) Generate BlockCommitSequenceId in ContainerStateMachine for every commit operation in Ratis Leader

2018-09-13 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-450:


 Summary: Generate BlockCommitSequenceId in ContainerStateMachine 
for every commit operation in Ratis Leader
 Key: HDDS-450
 URL: https://issues.apache.org/jira/browse/HDDS-450
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee


BlockCommitSequenceId will be a monotonically increasing long value generated 
in the Ratis leader for every commit operation of a block, and it will be 
replicated over to the followers. This id will be updated in OM and reported 
to SCM, which will help identify the most consistent copy of an open 
container replica in case a majority of the datanodes fail.
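
A minimal sketch of generating such an id on the leader, assuming a simple 
AtomicLong (restart recovery and the Ratis replication path are out of scope 
here):

{code:java}
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch: one monotonically increasing id per block commit, handed out
 * on the Ratis leader and shipped to the followers with the log entry.
 */
public class BlockCommitSequenceIdGenerator {

  private final AtomicLong sequenceId;

  public BlockCommitSequenceIdGenerator(long lastPersistedId) {
    // Resume from the last persisted id so the sequence never moves
    // backwards across leader restarts.
    this.sequenceId = new AtomicLong(lastPersistedId);
  }

  /** Called once per block commit on the leader. */
  public long nextId() {
    return sequenceId.incrementAndGet();
  }
}
{code}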






[jira] [Created] (HDDS-451) PutKey failed due to error "Rejecting write chunk request. Chunk overwrite without explicit request"

2018-09-13 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-451:
---

 Summary: PutKey failed due to error "Rejecting write chunk 
request. Chunk overwrite without explicit request"
 Key: HDDS-451
 URL: https://issues.apache.org/jira/browse/HDDS-451
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.2.1
Reporter: Nilotpal Nandi


Steps taken:
 # Ran a Put Key command to write 50 GB of data. The Put Key client operation 
failed after 17 mins.

Error seen in ozone.log:
{noformat}
2018-09-13 12:11:53,734 [ForkJoinPool.commonPool-worker-20] DEBUG 
(ChunkManagerImpl.java:85) - writing 
chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
 chunk stage:COMMIT_DATA chunk 
file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_1
 tmp chunk file
2018-09-13 12:11:56,576 [pool-3-thread-60] DEBUG (ChunkManagerImpl.java:85) - 
writing 
chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
 chunk stage:WRITE_DATA chunk 
file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
 tmp chunk file
2018-09-13 12:11:56,739 [ForkJoinPool.commonPool-worker-20] DEBUG 
(ChunkManagerImpl.java:85) - writing 
chunk:bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
 chunk stage:COMMIT_DATA chunk 
file:/tmp/hadoop-root/dfs/data/hdds/de0a9e01-4a12-40e3-b567-51b9bd83248e/current/containerDir0/16/chunks/bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2
 tmp chunk file
2018-09-13 12:12:21,410 [Datanode State Machine Thread - 0] DEBUG 
(DatanodeStateMachine.java:148) - Executing cycle Number : 206
2018-09-13 12:12:51,411 [Datanode State Machine Thread - 0] DEBUG 
(DatanodeStateMachine.java:148) - Executing cycle Number : 207
2018-09-13 12:12:53,525 [BlockDeletingService#1] DEBUG 
(TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
container, there is no pending deletion block contained in remaining containers.
2018-09-13 12:12:55,048 [Datanode ReportManager Thread - 1] DEBUG 
(ContainerSet.java:191) - Starting container report iteration.
2018-09-13 12:13:02,626 [pool-3-thread-1] ERROR (ChunkUtils.java:244) - 
Rejecting write chunk request. Chunk overwrite without explicit request. 
ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
 offset=0, len=16777216}
2018-09-13 12:13:03,035 [pool-3-thread-1] INFO (ContainerUtils.java:149) - 
Operation: WriteChunk : Trace ID: 54834b29-603d-4ba9-9d68-0885215759d8 : 
Message: Rejecting write chunk request. OverWrite flag 
required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
 offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED
2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] ERROR 
(ChunkUtils.java:244) - Rejecting write chunk request. Chunk overwrite without 
explicit request. 
ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
 offset=0, len=16777216}
2018-09-13 12:13:03,037 [ForkJoinPool.commonPool-worker-11] INFO 
(ContainerUtils.java:149) - Operation: WriteChunk : Trace ID: 
54834b29-603d-4ba9-9d68-0885215759d8 : Message: Rejecting write chunk request. 
OverWrite flag 
required.ChunkInfo{chunkName='bd80b58a5eba888200a4832a0f2aafb3_stream_5f3b2505-6964-45c9-a7ad-827388a1e6a0_chunk_2,
 offset=0, len=16777216} : Result: OVERWRITE_FLAG_REQUIRED
 
{noformat}
 






[jira] [Created] (HDFS-13915) replace datanode failed because of NameNodeRpcServer#getAdditionalDatanode returning excessive datanodeInfo

2018-09-13 Thread Jiandan Yang (JIRA)
Jiandan Yang  created HDFS-13915:


 Summary: replace datanode failed because of  
NameNodeRpcServer#getAdditionalDatanode returning excessive datanodeInfo
 Key: HDFS-13915
 URL: https://issues.apache.org/jira/browse/HDFS-13915
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Jiandan Yang 
Assignee: Jiandan Yang 


Consider the following situation:
1. Create a file with the ALLSSD policy.

2. [SSD,SSD,DISK] is returned due to lack of SSD space.

3. The client calls NameNodeRpcServer#getAdditionalDatanode when recovering 
the write pipeline and replacing a bad datanode.

4. BlockPlacementPolicyDefault#chooseTarget calls 
StoragePolicy#chooseStorageTypes(3, [SSD,DISK], none, false), but 
chooseStorageTypes returns [SSD,SSD].

5. numOfReplicas = requiredStorageTypes.size() sets numOfReplicas to 2, and 
two additional datanodes are chosen.

6. BlockPlacementPolicyDefault#chooseTarget returns four datanodes to the 
client.

7. DataStreamer#findNewDatanode finds nodes.length != original.length + 1 and 
throws an IOException, which finally leads to a write failure.

The client warn log is:
{code:java}

WARN [DataStreamer for file 
/home/yarn/opensearch/in/data/120141286/0_65535/table/ucs_process/MANIFEST-093545
 block BP-1742758844-11.138.8.184-1483707043031:blk_7086344902_6012765313] 
org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception

java.io.IOException: Failed to replace a bad datanode on the existing pipeline 
due to no more good datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[11.138.5.4:50010,DS-04826cfc-1885-4213-a58b-8606845c5c42,SSD],
 
DatanodeInfoWithStorage[11.138.5.9:50010,DS-f6d8eb8b-2550-474b-a692-c991d7a6f6b3,SSD],
 
DatanodeInfoWithStorage[11.138.5.153:50010,DS-f5d77ca0-6fe3-4523-8ca8-5af975f845b6,SSD],
 
DatanodeInfoWithStorage[11.138.9.156:50010,DS-0d15ea12-1bad--84f7-1a4917a1e194,DISK]],
 
original=[DatanodeInfoWithStorage[11.138.5.4:50010,DS-04826cfc-1885-4213-a58b-8606845c5c42,SSD],
 
DatanodeInfoWithStorage[11.138.9.156:50010,DS-0d15ea12-1bad--84f7-1a4917a1e194,DISK]]).
 The current failed datanode replacement policy is DEFAULT, and a client may 
configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' 
in its configuration.

{code}






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-09-13 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/895/

[Sep 12, 2018 5:28:47 AM] (aajisaka) HADOOP-15584. Move httpcomponents versions 
in pom.xml. Contributed by
[Sep 12, 2018 9:11:12 AM] (elek) HDDS-425. Move unit test of the genconf tool 
to hadoop-ozone/tools
[Sep 12, 2018 10:30:59 AM] (sunilg) YARN-6855. [YARN-3409] CLI Proto 
Modifications to support Node
[Sep 12, 2018 10:30:59 AM] (sunilg) YARN-6856. [YARN-3409] Support CLI for Node 
Attributes Mapping.
[Sep 12, 2018 10:30:59 AM] (sunilg) YARN-7842. PB changes to carry 
node-attributes in NM heartbeat.
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-7840. Update PB for prefix support of 
node attributes. Contributed
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-7757. Refactor NodeLabelsProvider to 
be more generic and reusable
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-6858. Attribute Manager to store and 
provide node attributes in RM.
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-7856. Validate Node Attributes from 
NM. Contributed by Weiwei Yang.
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-7965. NodeAttributeManager add/get API 
is not working properly.
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-7871. Node attributes reporting from 
NM to RM. Contributed by
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-7988. Refactor FSNodeLabelStore code 
for Node Attributes store
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-8094. Support configuration based Node 
Attribute provider.
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-8092. Expose Node Attributes info via 
RM nodes REST API.
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-8033. CLI Integration with 
NodeAttributesManagerImpl. Contributed
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-8117. Fix TestRMWebServicesNodes test 
failure. Contributed by Bibin
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-7875. Node Attribute store for storing 
and recovering attributes.
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-8100. Support API interface to query 
cluster attributes and
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-8104. Add API to fetch node to 
attribute mapping. Contributed by
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-7892. Revisit NodeAttribute class 
structure. Contributed by 
[Sep 12, 2018 10:31:00 AM] (sunilg) YARN-8351. Node attribute manager logs are 
flooding RM logs. Contributed
[Sep 12, 2018 10:31:01 AM] (sunilg) YARN-8103. Add CLI interface to query node 
attributes. Contributed by
[Sep 12, 2018 10:31:01 AM] (sunilg) YARN-8574. Allow dot in attribute values. 
Contributed by Bibin A
[Sep 12, 2018 10:31:01 AM] (sunilg) YARN-7863. Modify placement constraints to 
support node attributes.
[Sep 12, 2018 10:31:01 AM] (sunilg) YARN-8721. Relax NE node-attribute check 
when attribute doesn't exist on
[Sep 12, 2018 10:31:01 AM] (sunilg) YARN-7865. Node attributes documentation. 
Contributed by Naganarasimha G
[Sep 12, 2018 10:31:01 AM] (sunilg) YARN-8739. Fix jenkins issues for Node 
Attributes branch. Contributed by
[Sep 12, 2018 10:31:01 AM] (sunilg) YARN-8740. Clear node attribute path after 
each test run. Contributed by
[Sep 12, 2018 1:01:03 PM] (msingh) HDDS-433. 
ContainerStateMachine#readStateMachineData should properly
[Sep 12, 2018 3:12:38 PM] (mackrorysd) HADOOP-15635. s3guard set-capacity 
command to fail fast if bucket is
[Sep 12, 2018 5:38:36 PM] (aengineer) HDDS-428. OzoneManager lock optimization. 
Contributed by Nanda Kumar.
[Sep 12, 2018 5:58:39 PM] (liuml07) HADOOP-15750. Remove obsolete S3A test 
ITestS3ACredentialsInURL.
[Sep 12, 2018 6:18:55 PM] (templedf) HDFS-13846. Safe blocks counter is not 
decremented correctly if the
[Sep 12, 2018 6:40:24 PM] (aengineer) HDDS-436. Allow SCM chill mode to be 
disabled by configuration.
[Sep 12, 2018 6:46:35 PM] (gifuma) YARN-8658. [AMRMProxy] Metrics for 
AMRMClientRelayer inside
[Sep 12, 2018 9:12:28 PM] (skumpf) YARN-8768. Javadoc error in node attributes. 
Contributed by Sunil
[Sep 12, 2018 9:19:01 PM] (aengineer) HDDS-395. TestOzoneRestWithMiniCluster 
fails with "Unable to read ROCKDB
[Sep 12, 2018 11:36:01 PM] (fabbri) HADOOP-14734 add option to tag DDB table(s) 
created. (Contributed by




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   Unread field:FSBasedSubmarineStorageImpl.java:[line 39] 
   Found reliance on default encoding in 
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters,
 TaskType, Component):in 
org.apache.hadoo

[jira] [Created] (HDDS-452) 'ozone scm' with incorrect argument first logs all the STARTUP_MSG

2018-09-13 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-452:
--

 Summary: 'ozone scm' with incorrect argument first logs all the 
STARTUP_MSG
 Key: HDDS-452
 URL: https://issues.apache.org/jira/browse/HDDS-452
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari
Assignee: Namit Maheshwari
 Fix For: 0.2.1, 0.3.0


 {{bin/ozone om}} with an incorrect argument first logs the entire STARTUP_MSG banner:
{code:java}

➜ ozone-0.2.1-SNAPSHOT bin/ozone om -hgfj
2018-09-07 12:56:12,391 [main] INFO - STARTUP_MSG:
/
STARTUP_MSG: Starting OzoneManager
STARTUP_MSG: host = HW11469.local/10.22.16.67
STARTUP_MSG: args = [-hgfj]
STARTUP_MSG: version = 3.2.0-SNAPSHOT
STARTUP_MSG: classpath = 
/private/tmp/ozone-0.2.1-SNAPSHOT/etc/hadoop:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/curator-framework-2.12.0.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/curator-client-2.12.0.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/xz-1.0.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/snappy-java-1.0.5.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/jetty-servlet-9.3.19.v20170502.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/commons-logging-1.1.3.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/avro-1.7.7.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/zookeeper-3.4.9.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/jackson-annotations-2.9.5.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/kerb-server-1.0.1.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/jetty-security-9.3.19.v20170502.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/log4j-1.2.17.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/commons-cli-1.2.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/netty-3.10.5.Final.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/jetty-xml-9.3.19.v20170502.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/httpclient-4.5.2.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/kerby-config-1.0.1.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/guava-11.0.2.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/json-smart-2.3.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/kerb-util-1.0.1.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/commons-compress-1.4.1.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/jetty-http-9.3.19.v20170502.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/jersey-json-1.19.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/jersey-servlet-1.19.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/commons-io-2.5.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/jackson-core-asl-1
.9.13.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/stax2-api-3.1.4.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/hadoop-annotations-3.2.0-SNAPSHOT.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/jsp-api-2.1.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/commons-codec-1.11.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/asm-5.0.4.jar:/private/tmp/ozone-0.2.1-SNAPSHOT/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/private/tmp/

[jira] [Created] (HDDS-453) om and scm should use picocli to parse arguments

2018-09-13 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-453:
--

 Summary: om and scm should use picocli to parse arguments
 Key: HDDS-453
 URL: https://issues.apache.org/jira/browse/HDDS-453
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Manager, SCM
Reporter: Arpit Agarwal


SCM and OM can use picocli to parse command-line arguments.

Suggested in HDDS-415 by [~anu].
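
A minimal picocli sketch (4.x style) of what such an entry point could look 
like. The class name is illustrative, and the options mirror the usage text 
shown by 'ozone scm -help' in HDDS-457:

{code:java}
import java.util.concurrent.Callable;

import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

/**
 * Illustrative picocli entry point: an unknown flag produces a short
 * usage message instead of the full STARTUP_MSG banner (see HDDS-452).
 */
@Command(name = "ozone scm",
    description = "Starts or initializes the Storage Container Manager.",
    mixinStandardHelpOptions = true)
public class ScmCliSketch implements Callable<Integer> {

  @Option(names = "-init", description = "Initialize SCM.")
  private boolean init;

  @Option(names = "-genclusterid", description = "Generate a cluster id.")
  private boolean genClusterId;

  @Override
  public Integer call() {
    // The real start/init logic would go here.
    return 0;
  }

  public static void main(String[] args) {
    System.exit(new CommandLine(new ScmCliSketch()).execute(args));
  }
}
{code}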






[jira] [Created] (HDDS-454) TestChunkStreams.testErrorReadGroupInputStream & TestChunkStreams.testReadGroupInputStream are failing

2018-09-13 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-454:


 Summary: TestChunkStreams.testErrorReadGroupInputStream & 
TestChunkStreams.testReadGroupInputStream are failing
 Key: HDDS-454
 URL: https://issues.apache.org/jira/browse/HDDS-454
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Client
Reporter: Nanda kumar


TestChunkStreams.testErrorReadGroupInputStream and 
TestChunkStreams.testReadGroupInputStream test cases are failing with the 
error below.

{code}
[ERROR] 
testErrorReadGroupInputStream(org.apache.hadoop.ozone.om.TestChunkStreams)  
Time elapsed: 0.058 s  <<< ERROR!
java.lang.UnsupportedOperationException
at 
org.apache.hadoop.ozone.om.TestChunkStreams$2.getPos(TestChunkStreams.java:188)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupInputStream$ChunkInputStreamEntry.getPos(ChunkGroupInputStream.java:245)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupInputStream$ChunkInputStreamEntry.getRemaining(ChunkGroupInputStream.java:217)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupInputStream.read(ChunkGroupInputStream.java:118)
at 
org.apache.hadoop.ozone.om.TestChunkStreams.testErrorReadGroupInputStream(TestChunkStreams.java:214)

[ERROR] testReadGroupInputStream(org.apache.hadoop.ozone.om.TestChunkStreams)  
Time elapsed: 0.001 s  <<< ERROR!
java.lang.UnsupportedOperationException
at 
org.apache.hadoop.ozone.om.TestChunkStreams$1.getPos(TestChunkStreams.java:134)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupInputStream$ChunkInputStreamEntry.getPos(ChunkGroupInputStream.java:245)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupInputStream$ChunkInputStreamEntry.getRemaining(ChunkGroupInputStream.java:217)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupInputStream.read(ChunkGroupInputStream.java:118)
at 
org.apache.hadoop.ozone.om.TestChunkStreams.testReadGroupInputStream(TestChunkStreams.java:159)
{code}






[jira] [Created] (HDDS-455) genconf tool must use picocli

2018-09-13 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-455:
--

 Summary: genconf tool must use picocli
 Key: HDDS-455
 URL: https://issues.apache.org/jira/browse/HDDS-455
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


Like the ozone shell, the genconf tool should use picocli to be consistent 
with other CLI usage in the ozone world.






[jira] [Created] (HDDS-456) TestOzoneShell#init is breaking due to Null Pointer Exception

2018-09-13 Thread Dinesh Chitlangia (JIRA)
Dinesh Chitlangia created HDDS-456:
--

 Summary: TestOzoneShell#init is breaking due to Null Pointer 
Exception
 Key: HDDS-456
 URL: https://issues.apache.org/jira/browse/HDDS-456
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Dinesh Chitlangia
Assignee: Dinesh Chitlangia


Run TestOzoneShell in an IDE and all tests will fail with the following stacktrace:

 
{noformat}
java.lang.NullPointerException
 at java.util.Objects.requireNonNull(Objects.java:203)
 at java.util.Arrays$ArrayList.<init>(Arrays.java:3813)
 at java.util.Arrays.asList(Arrays.java:3800)
 at 
org.apache.hadoop.util.StringUtils.createStartupShutdownMessage(StringUtils.java:746)
 at 
org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:714)
 at 
org.apache.hadoop.util.StringUtils.startupShutdownMessage(StringUtils.java:707)
 at org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:308)
 at 
org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:419)
 at 
org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:348)
 at org.apache.hadoop.ozone.ozShell.TestOzoneShell.init(TestOzoneShell.java:146)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
 at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
 at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
 at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
 at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
 at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70){noformat}






[jira] [Created] (HDDS-457) ozone om -help, scm -help commands can't run unless the service is stopped

2018-09-13 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-457:
-

 Summary: ozone om -help, scm -help commands can't run unless the 
service is stopped
 Key: HDDS-457
 URL: https://issues.apache.org/jira/browse/HDDS-457
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Namit Maheshwari


{code:java}
➜ ozone-0.3.0-SNAPSHOT bin/ozone om -help
om is running as process 89242. Stop it first.

➜ ozone-0.3.0-SNAPSHOT bin/ozone scm -help
scm is running as process 73361. Stop it first.
{code}
It runs fine once the service is stopped:
{code:java}
➜ ozone-0.3.0-SNAPSHOT bin/ozone --daemon stop scm
➜ ozone-0.3.0-SNAPSHOT bin/ozone scm -help
Usage:
ozone scm [genericOptions] [ -init [ -clusterid  ] ]
ozone scm [genericOptions] [ -genclusterid ]
ozone scm [ -help ]


Generic options supported are:
-conf  specify an application configuration file
-D  define a value for a given property
-fs  specify default filesystem URL to use, 
overrides 'fs.defaultFS' property from configurations.
-jt  specify a ResourceManager
-files  specify a comma-separated list of files to be copied to the 
map reduce cluster
-libjars  specify a comma-separated list of jar files to be included 
in the classpath
-archives  specify a comma-separated list of archives to be 
unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]

{code}
 

Ideally the help command should run fine without the service being stopped.






Next Hadoop Contributors Meetup on September 25th

2018-09-13 Thread Jason Lowe
I am happy to announce that Oath will be hosting the next Hadoop
Contributors meetup on Tuesday, September 25th at Yahoo Building G, 589
Java Drive, Sunnyvale CA from 8:00AM to 6:00PM.

The agenda will look roughly as follows:

08:00AM - 08:30AM Arrival and Check-in
08:30AM - 12:00PM A series of brief talks with some of the topics including:
  - HDFS scalability and security
  - Use cases and future directions for Docker on YARN
  - Submarine (Deep Learning on YARN)
  - Hadoop in the cloud
  - Oath's use of machine learning, Vespa, and Storm
11:45AM - 12:30PM Lunch Break
12:30PM - 02:00PM Brief talks series resume
02:00PM - 04:30PM Parallel breakout sessions to discuss topics suggested by
attendees.  Some proposed topics include:
  - Improved security credentials management for long-running YARN
applications
  - Improved management of parallel development lines
  - Proposals for the next bug bash
  - Tez shuffle handler and DAG aware scheduler overview
04:30PM - 06:00PM Closing Reception

RSVP at https://www.meetup.com/Hadoop-Contributors/events/254012512/ is
REQUIRED to attend and spots are limited.  Security will be checking the
attendee list as you enter the building.

We will host a Google Hangouts/Meet so people who are interested but unable
to attend in person can participate remotely.  Details will be posted to
the meetup event.

Hope to see you there!

Jason


[jira] [Created] (HDDS-458) numberofKeys is 0 for all containers even when keys are present

2018-09-13 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-458:
---

 Summary: numberofKeys is 0 for all containers even when keys are 
present
 Key: HDDS-458
 URL: https://issues.apache.org/jira/browse/HDDS-458
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM Client
Affects Versions: 0.2.1
Reporter: Nilotpal Nandi


 

The numberOfKeys field is 0 for all containers even when keys are present:

 
{noformat}
[root@ctr-e138-1518143905142-459606-01-05 bin]# ./ozone scmcli list 
--count=40 --start=1 | grep numberOfKeys
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,
 "numberOfKeys" : 0,{noformat}
 

 

 
{noformat}
[root@ctr-e138-1518143905142-459606-01-05 bin]# ./ozone oz key list 
/fs-volume/fs-bucket/ | grep keyName
2018-09-13 19:10:33,502 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
 "keyName" : "15GBFILE"
 "keyName" : "15GBFILE1"
 "keyName" : "1GB1"
 "keyName" : "1GB10"
 "keyName" : "1GB11"
 "keyName" : "1GB12"
 "keyName" : "1GB13"
 "keyName" : "1GB14"
 "keyName" : "1GB15"
 "keyName" : "1GB2"
 "keyName" : "1GB3"
 "keyName" : "1GB4"
 "keyName" : "1GB5"
 "keyName" : "1GB6"
 "keyName" : "1GB7"
 "keyName" : "1GB8"
 "keyName" : "1GB9"
 "keyName" : "1GBsecond1"
 "keyName" : "1GBsecond10"
 "keyName" : "1GBsecond11"
 "keyName" : "1GBsecond12"
 "keyName" : "1GBsecond13"
 "keyName" : "1GBsecond14"
 "keyName" : "1GBsecond15"
 "keyName" : "1GBsecond2"
 "keyName" : "1GBsecond3"
 "keyName" : "1GBsecond4"
 "keyName" : "1GBsecond5"
 "keyName" : "1GBsecond6"
 "keyName" : "1GBsecond7"
 "keyName" : "1GBsecond8"
 "keyName" : "1GBsecond9"
 "keyName" : "2GBFILE"
 "keyName" : "2GBFILE2"
 "keyName" : "50GBFILE2"
 "keyName" : "passwd1"{noformat}
 






[jira] [Created] (HDFS-13916) SnapshotDiff not completely implemented for supporting WebHdfs

2018-09-13 Thread Xun REN (JIRA)
Xun REN created HDFS-13916:
--

 Summary: SnapshotDiff not completely implemented for supporting 
WebHdfs
 Key: HDFS-13916
 URL: https://issues.apache.org/jira/browse/HDFS-13916
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: distcp, webhdfs
Affects Versions: 3.1.1, 3.0.1
Reporter: Xun REN


[~ljain] has worked on 
https://issues.apache.org/jira/browse/HDFS-13052 to make it possible to run 
DistCp with SnapshotDiff over WebHdfsFileSystem. However, the patch does not 
modify the actual Java class that is used when launching the command 
"hadoop distcp ...".

 

You can check in the latest version here:

[https://github.com/apache/hadoop/blob/branch-3.1.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java#L96-L100]

In the method "preSyncCheck" of the class "DistCpSync", we still check if the 
file system is DFS. 

So I propose to change the DistCpSync class to take into consideration what 
was committed by Lokesh Jain.
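
A minimal sketch of the relaxed check (the method name follows preSyncCheck 
in DistCpSync at the link above, but this is illustrative, not the final 
patch):

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;

/**
 * Sketch: instead of requiring a DistributedFileSystem, accept any
 * filesystem that can serve a snapshot diff report, including
 * WebHdfsFileSystem (per HDFS-13052).
 */
public final class SnapshotDiffSupportSketch {

  private SnapshotDiffSupportSketch() {
  }

  static void checkSnapshotDiffCapable(FileSystem fs) throws IOException {
    if (!(fs instanceof DistributedFileSystem
        || fs instanceof WebHdfsFileSystem)) {
      throw new IOException(fs.getUri()
          + " does not support snapshot-diff-based copy.");
    }
  }
}
{code}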






[jira] [Created] (HDDS-459) Ozone website should support SSL

2018-09-13 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-459:
--

 Summary: Ozone website should support SSL
 Key: HDDS-459
 URL: https://issues.apache.org/jira/browse/HDDS-459
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: website
Reporter: Arpit Agarwal


The Ozone website at http://ozone.hadoop.apache.org should support SSL.

Browsers complain that the certificate is invalid, even though it appears to 
be a wildcard cert.


