[jira] [Reopened] (HDFS-14497) Write lock held by metasave impact following RPC processing

2019-08-25 Thread He Xiaoqiao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao reopened HDFS-14497:


> Write lock held by metasave impact following RPC processing
> ---
>
> Key: HDFS-14497
> URL: https://issues.apache.org/jira/browse/HDFS-14497
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14497-addendum.001.patch, HDFS-14497.001.patch
>
>
> NameNode metasave currently holds the global write lock, so subsequent RPC 
> read/write requests and internal NameNode threads can stall if they try to 
> acquire the global read/write lock, since they must wait until metasave 
> releases it.
> I propose changing the write lock to a read lock so that read requests can 
> be processed normally. Allowing concurrent reads should not change the 
> information that metasave tries to collect.
> Note that we must ensure only one thread executes metaSave at a time; 
> otherwise the output streams could hit exceptions, especially when both 
> streams hold the same file handle or share the same output stream.
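
A minimal sketch of the proposed change, assuming a hypothetical wrapper class
(the names below are illustrative, not the actual FSNamesystem code): take the
read lock instead of the write lock, and serialize callers on the metaSave
method itself so only one thread ever writes to the output stream.
{code:java}
import java.io.PrintWriter;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class MetaSaveSketch {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock(true);

  // synchronized ensures only one metaSave runs at a time, so two threads
  // never interleave writes on the same file handle or output stream.
  public synchronized void metaSave(PrintWriter out) {
    fsLock.readLock().lock(); // read lock: other read requests proceed normally
    try {
      // Dump namespace/block state here. metaSave only reads this state,
      // so holding the read lock is sufficient for a consistent snapshot.
      out.println("... metasave output ...");
    } finally {
      fsLock.readLock().unlock();
      out.flush();
    }
  }
}
{code}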





[jira] [Created] (HDFS-14775) Add Timestamp for longest FSN write/read lock held log

2019-08-25 Thread Chen Zhang (Jira)
Chen Zhang created HDFS-14775:
-

 Summary: Add Timestamp for longest FSN write/read lock held log
 Key: HDFS-14775
 URL: https://issues.apache.org/jira/browse/HDFS-14775
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Chen Zhang


HDFS-13946 improved the log for the longest read/write lock held time; it's a 
very useful improvement.

In some conditions, we need to locate the detailed call information (user, IP, 
path, etc.) for the longest lock holder, but the default throttle interval 
(10s) is too long to find the corresponding audit log. We need to add a 
timestamp to the {{longestWriteLockHeldStackTrace}} log message.
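
A hedged sketch of the idea (field and method names here are illustrative, not
the actual FSNamesystemLock code): record the wall-clock time at which the
longest hold began and include it in the report, so the stack trace can be
lined up with the corresponding audit-log entries.
{code:java}
import java.text.SimpleDateFormat;
import java.util.Date;

public class LongestLockHeldReport {
  private volatile long longestWriteLockHeldIntervalMs;  // duration of longest hold
  private volatile long longestWriteLockHeldStartTimeMs; // when that hold began
  private volatile String longestWriteLockHeldStackTrace;

  String buildReport() {
    // The start timestamp is what lets operators find the matching audit
    // log line (user, ip, path, ...) within the 10s throttle window.
    String ts = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss,SSS")
        .format(new Date(longestWriteLockHeldStartTimeMs));
    return "Longest write-lock hold of " + longestWriteLockHeldIntervalMs
        + " ms started at " + ts + ":\n" + longestWriteLockHeldStackTrace;
  }
}
{code}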






[jira] [Created] (HDFS-14776) Log more detail for slow RPC

2019-08-25 Thread Chen Zhang (Jira)
Chen Zhang created HDFS-14776:
-

 Summary: Log more detail for slow RPC
 Key: HDFS-14776
 URL: https://issues.apache.org/jira/browse/HDFS-14776
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Chen Zhang


The current implementation only logs the processing time:
{code:java}
if ((rpcMetrics.getProcessingSampleCount() > minSampleSize) &&
(processingTime > threeSigma)) {
  LOG.warn("Slow RPC : {} took {} {} to process from client {}",
  methodName, processingTime, RpcMetrics.TIMEUNIT, call);
  rpcMetrics.incrSlowRpc();
}
{code}
We need to log more details to help locate the problem (e.g. how long the call 
spent acquiring the lock, holding the lock, or doing other work).
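
A rough sketch of what the extra detail could look like; the lockWaitTime and
lockHeldTime values are hypothetical per-call measurements the patch would
still have to collect, and only the surrounding check and log call come from
the snippet above:
{code:java}
if ((rpcMetrics.getProcessingSampleCount() > minSampleSize) &&
    (processingTime > threeSigma)) {
  // Break the total processing time down so a slow call can be attributed
  // to waiting for the lock, holding the lock, or other work.
  LOG.warn("Slow RPC : {} took {} {} to process from client {}"
      + " (lock wait: {} {}, lock held: {} {})",
      methodName, processingTime, RpcMetrics.TIMEUNIT, call,
      lockWaitTime, RpcMetrics.TIMEUNIT,
      lockHeldTime, RpcMetrics.TIMEUNIT);
  rpcMetrics.incrSlowRpc();
}
{code}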






[jira] [Resolved] (HDFS-14776) Log more detail for slow RPC

2019-08-25 Thread Chen Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang resolved HDFS-14776.
---
Resolution: Abandoned

> Log more detail for slow RPC
> 
>
> Key: HDFS-14776
> URL: https://issues.apache.org/jira/browse/HDFS-14776
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Priority: Major
>
> The current implementation only logs the processing time:
> {code:java}
> if ((rpcMetrics.getProcessingSampleCount() > minSampleSize) &&
> (processingTime > threeSigma)) {
>   LOG.warn("Slow RPC : {} took {} {} to process from client {}",
>   methodName, processingTime, RpcMetrics.TIMEUNIT, call);
>   rpcMetrics.incrSlowRpc();
> }
> {code}
> We need to log more details to help locate the problem (e.g. how long the 
> call spent acquiring the lock, holding the lock, or doing other work).






Re: Make EOL branches to public?

2019-08-25 Thread Konstantin Shvachko
I would also suggest making an explicit commit to the branch stating it is
EOL.

Thanks,
--Konstantin

On Tue, Aug 20, 2019 at 7:59 PM Wangda Tan  wrote:

> Thank you all for suggestions. Let me send a vote email to mark 2.6, 2.7,
> 3.0 EOL.
>
> - Wangda
>
> On Wed, Aug 21, 2019 at 9:34 AM Akira Ajisaka  wrote:
>
> > +1
> >
> > Thank you for the discussion.
> >
> > -Akira
> >
> > On Wed, Aug 21, 2019 at 5:51 AM Wei-Chiu Chuang 
> > wrote:
> > >
> > > +1
> > > I feel like one year of inactivity is a good sign that the community is
> > > not interested in the branch any more.
> > >
> > > On Fri, Aug 16, 2019 at 3:14 AM Wangda Tan  wrote:
> > >
> > > > Hi folks,
> > > >
> > > > Want to hear your thoughts about what we should do to make some
> > > > branches EOL. We discussed this a couple of times before on the dev
> > > > lists and the PMC list, but we couldn't settle on a formal EOL
> > > > process. According to the discussion, it is hard to decide based on
> > > > time, like "After the 1st release, EOL in 2 years", because community
> > > > members still want to maintain it and developers still want to get a
> > > > newer version released.
> > > >
> > > > However, without a public place to figure out which releases are EOL,
> > > > it is very hard for users to choose the right release to upgrade to
> > > > and develop against.
> > > >
> > > > So I want to propose that we agree on a public EOL wiki page and
> > > > create a process to EOL a release:
> > > >
> > > > The process I'm thinking of is very simple: if nobody volunteers to
> > > > do a maintenance release in the short to mid term (say 3 months to 1
> > > > or 1.5 years), we declare the release EOL. After EOL, the community
> > > > can still choose to do a security-only release.
> > > >
> > > > Here's a list which I can think about:
> > > >
> > > > 1) 2.6.x (or any release older than 2.6) (last released in Oct 2016)
> > > > 2) 2.7.x (last released in Apr 2018)
> > > > 3) 3.0.x (last released in May 2018)
> > > >
> > > > Thoughts?
> > > >
> > > > Thanks,
> > > > Wangda
> > > >
> >
>


Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-08-25 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
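
For context, a tiny self-contained illustration (not the actual ColumnRWHelper
code) of the unbox/rebox pattern FindBugs flags here, and the usual fix:
{code:java}
import java.util.Map;

class ReboxExample {
  Long readTimestamp(Map<String, Long> results, String key) {
    // Flagged pattern: the Long from the map is unboxed to long by the
    // cast and immediately reboxed on return (and NPEs if the value is
    // null):
    //   return (long) results.get(key);
    // Fix: return the boxed value as-is; no unbox/rebox, null-safe.
    return results.get(key);
  }
}
{code}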

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.cli.TestRMAdminCLI 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [236K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [40K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/424/artifact/out/patch-unit-ha

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-08-25 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
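
As an aside, a minimal sketch (not the actual WorkerId code) of the defensive
equals the last two warnings ask for: check for null and verify the runtime
type before casting.
{code:java}
public final class WorkerIdExample {
  private final String id;

  public WorkerIdExample(String id) {
    this.id = id;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    // instanceof is false for null, covering both FindBugs complaints:
    // the missing null check and the unchecked assumption about the type.
    if (!(obj instanceof WorkerIdExample)) {
      return false;
    }
    WorkerIdExample other = (WorkerIdExample) obj;
    return id.equals(other.id);
  }

  @Override
  public int hashCode() {
    return id.hashCode();
  }
}
{code}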

Failed CTEST tests :

   test_test_libhdfs_ops_hdfs_static 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_libhdfs_threaded_hdfspp_test_shim_static 
   test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static 
   libhdfs_mini_stress_valgrind_hdfspp_test_static 
   memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static 
   test_libhdfs_mini_stress_hdfspp_test_shim_static 
   test_hdfs_ext_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.tools.TestDFSZKFailoverController 
   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   hadoop.fs.ozone.contract.ITestOzoneContractMkdir 
   hadoop.fs.ozone.TestOzoneFSInputStream 
   hadoop.fs.ozone.contract.ITestOzoneContractRootDir 
   hadoop.fs.ozone.contract.ITestOzoneContractRename 
   hadoop.fs.ozone.contract.ITestOzoneContractDistCp 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/artifact/out/diff-patch-pylint.txt
  [220K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/artifact/out/xml.txt
  [16K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [68K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1239/artifact/out/branc

[jira] [Created] (HDDS-2031) Choose datanode for pipeline creation based on network topology

2019-08-25 Thread Sammi Chen (Jira)
Sammi Chen created HDDS-2031:


 Summary: Choose datanode for pipeline creation based on network 
topology
 Key: HDDS-2031
 URL: https://issues.apache.org/jira/browse/HDDS-2031
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Sammi Chen
Assignee: Sammi Chen


There are regular heartbeats between the datanodes in a pipeline. Choose 
datanodes based on network topology to guarantee data reliability while 
reducing heartbeat network traffic and latency.
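
A hedged sketch of the idea with made-up types (NodeInfo, a toy distance
function) rather than the real SCM APIs: keep replicas in distinct racks for
reliability, but among valid candidates prefer the topologically nearest, so
the pipeline's heartbeats cross as few network hops as possible.
{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class NodeInfo {
  final String id;
  final String rack; // network location, e.g. /dc1/rack3

  NodeInfo(String id, String rack) {
    this.id = id;
    this.rack = rack;
  }
}

class TopologyAwarePipelineChooser {
  /** Pick an anchor plus (count - 1) nearby nodes from other racks. */
  List<NodeInfo> choose(List<NodeInfo> healthy, NodeInfo anchor, int count) {
    List<NodeInfo> picked = new ArrayList<>();
    picked.add(anchor);
    List<NodeInfo> candidates = new ArrayList<>(healthy);
    candidates.remove(anchor);
    // Nearest-first: fewer hops means lower heartbeat latency and traffic.
    candidates.sort(Comparator.comparingInt((NodeInfo n) -> distance(anchor, n)));
    for (NodeInfo n : candidates) {
      if (picked.size() == count) {
        break;
      }
      // Distinct racks so a single rack failure cannot lose all replicas.
      boolean rackTaken = false;
      for (NodeInfo p : picked) {
        if (p.rack.equals(n.rack)) {
          rackTaken = true;
          break;
        }
      }
      if (!rackTaken) {
        picked.add(n);
      }
    }
    return picked;
  }

  // Toy distance: same rack 0, same datacenter 2, otherwise 4.
  private int distance(NodeInfo a, NodeInfo b) {
    if (a.rack.equals(b.rack)) {
      return 0;
    }
    return a.rack.split("/")[1].equals(b.rack.split("/")[1]) ? 2 : 4;
  }
}
{code}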






Re: [VOTE] Move Submarine source code, documentation, etc. to a separate Apache Git repo

2019-08-25 Thread Xun Liu
Thank you for the voting initiated by Wangda.

I am very much looking forward to Submarine having its own independent git 
repository, through which we can manage and publish an independent Submarine 
website in addition to managing the documentation and code.

+1

Xun Liu
Best Regards,

> On Aug 24, 2019, at 10:05 AM, Wangda Tan  wrote:
> 
> Hi devs,
> 
> This is a voting thread to move Submarine source code, documentation from
> Hadoop repo to a separate Apache Git repo, based on the discussions at
> https://lists.apache.org/thread.html/e49d60b2e0e021206e22bb2d430f4310019a8b29ee5020f3eea3bd95@%3Cyarn-dev.hadoop.apache.org%3E
> 
> Contributors who have permissions to push to Hadoop Git repository will
> have permissions to push to the new Submarine repository.
> 
> This voting thread will run for 7 days and will end at Aug 30th.
> 
> Please let me know if you have any questions.
> 
> Thanks,
> Wangda Tan






Re: [VOTE] Move Submarine source code, documentation, etc. to a separate Apache Git repo

2019-08-25 Thread Akira Ajisaka
+1

Thanks,
Akira

On Sat, Aug 24, 2019 at 11:06 AM Wangda Tan  wrote:
>
> Hi devs,
>
> This is a voting thread to move Submarine source code, documentation from
> Hadoop repo to a separate Apache Git repo, based on the discussions at
> https://lists.apache.org/thread.html/e49d60b2e0e021206e22bb2d430f4310019a8b29ee5020f3eea3bd95@%3Cyarn-dev.hadoop.apache.org%3E
>
> Contributors who have permissions to push to Hadoop Git repository will
> have permissions to push to the new Submarine repository.
>
> This voting thread will run for 7 days and will end at Aug 30th.
>
> Please let me know if you have any questions.
>
> Thanks,
> Wangda Tan




[jira] [Created] (HDDS-2032) Ozone client retry writes in case of any ratis/stateMachine exceptions

2019-08-25 Thread Shashikant Banerjee (Jira)
Shashikant Banerjee created HDDS-2032:
-

 Summary: Ozone client retry writes in case of any 
ratis/stateMachine exceptions
 Key: HDDS-2032
 URL: https://issues.apache.org/jira/browse/HDDS-2032
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.5.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


Currently, the Ozone client retries writes on a different pipeline or 
container only for some specific exceptions. If it sees an exception such as 
DISK_FULL, CONTAINER_UNHEALTHY, or any corruption, it just aborts the write. 
In general, every such exception should be retriable in the Ozone client, and 
for certain exceptions it should take more specific action, such as excluding 
the affected containers or pipelines while retrying, or informing SCM of a 
corrupt replica.
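
A hedged sketch of that classification with made-up names (these enums and
types do not exist in the Ozone client); the point is mapping each failure to
a retry action instead of aborting the write outright:
{code:java}
// Illustrative only -- not the real Ozone client exception hierarchy.
enum FailureKind { DISK_FULL, CONTAINER_UNHEALTHY, CORRUPTION, OTHER }

enum RetryAction {
  RETRY_EXCLUDE_CONTAINER, // retry, but exclude the failing container
  RETRY_EXCLUDE_PIPELINE,  // retry, but exclude the failing pipeline
  RETRY_AND_REPORT_TO_SCM, // retry and inform SCM of the corrupt replica
  RETRY                    // plain retry on another pipeline/container
}

class RetryPolicySketch {
  RetryAction classify(FailureKind kind) {
    switch (kind) {
      case DISK_FULL:
        return RetryAction.RETRY_EXCLUDE_CONTAINER;
      case CONTAINER_UNHEALTHY:
        return RetryAction.RETRY_EXCLUDE_PIPELINE;
      case CORRUPTION:
        return RetryAction.RETRY_AND_REPORT_TO_SCM;
      default:
        // Every failure is retriable by default; nothing aborts the write.
        return RetryAction.RETRY;
    }
  }
}
{code}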






[jira] [Created] (HDDS-2033) Support join multiple pipelines on datanode

2019-08-25 Thread Sammi Chen (Jira)
Sammi Chen created HDDS-2033:


 Summary: Support join multiple pipelines on datanode
 Key: HDDS-2033
 URL: https://issues.apache.org/jira/browse/HDDS-2033
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Sammi Chen









Re: [VOTE] Move Submarine source code, documentation, etc. to a separate Apache Git repo

2019-08-25 Thread kevin su
+1 (non-binding)

Thanks,
Kevin

Akira Ajisaka  於 2019年8月26日 週一 下午1:09寫道:

> +1
>
> Thanks,
> Akira
>
> On Sat, Aug 24, 2019 at 11:06 AM Wangda Tan  wrote:
> >
> > Hi devs,
> >
> > This is a voting thread to move Submarine source code, documentation from
> > Hadoop repo to a separate Apache Git repo, based on the discussions at
> > https://lists.apache.org/thread.html/e49d60b2e0e021206e22bb2d430f4310019a8b29ee5020f3eea3bd95@%3Cyarn-dev.hadoop.apache.org%3E
> >
> > Contributors who have permissions to push to Hadoop Git repository will
> > have permissions to push to the new Submarine repository.
> >
> > This voting thread will run for 7 days and will end at Aug 30th.
> >
> > Please let me know if you have any questions.
> >
> > Thanks,
> > Wangda Tan
>
>
>


[jira] [Created] (HDDS-2034) Add create pipeline command dispatcher and handle

2019-08-25 Thread Sammi Chen (Jira)
Sammi Chen created HDDS-2034:


 Summary: Add create pipeline command dispatcher and handle
 Key: HDDS-2034
 URL: https://issues.apache.org/jira/browse/HDDS-2034
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Sammi Chen









[jira] [Created] (HDDS-2035) Improve CLI listPipeline

2019-08-25 Thread Sammi Chen (Jira)
Sammi Chen created HDDS-2035:


 Summary: Improve CLI listPipeline
 Key: HDDS-2035
 URL: https://issues.apache.org/jira/browse/HDDS-2035
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Sammi Chen


1. Filter pipelines by datanode



