Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2022-07-25 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/

No changes




-1 overall


The following subsystems voted -1:
hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.yarn.server.resourcemanager.scheduler.TestSchedulingWithAllocationRequestId 
   hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.mapreduce.v2.app.webapp.TestAMWebServices 
   hadoop.mapreduce.v2.app.TestFetchFailure 
   hadoop.mapreduce.v2.app.webapp.TestAMWebServicesJobConf 
   hadoop.mapreduce.v2.app.webapp.TestAMWebServicesAttempts 
   hadoop.mapreduce.v2.hs.webapp.TestHsWebServices 
   hadoop.streaming.TestStreamingSeparator 
   hadoop.tools.TestIntegration 
   hadoop.tools.TestHadoopArchives 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/diff-compile-javac-root.txt
  [488K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/patch-mvnsite-root.txt
  [564K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/diff-patch-shellcheck.txt
  [72K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/patch-javadoc-root.txt
  [40K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [220K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [428K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [124K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/733/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
  [184K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-

how to proceed with HDFS-10719

2022-07-25 Thread Jan Van Besien
Hi,

I left a comment in HDFS-10719 where I explain that I think the problem still 
exists and hence I think the ticket was incorrectly closed as duplicate of 
HDFS-4210. See comment in HDFS-10719 for details.

How do I proceed with this: can someone reopen this ticket or do I log a new 
one?

I will try to apply the patch today to verify if it actually fixes the problem 
I observed, and will report back.

thanks in advance,
Jan Van Besien - NGDATA
-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: how to proceed with HDFS-10719

2022-07-25 Thread Jan Van Besien
I just realized this is what HDFS-4957 is about.

Jan Van Besien - NGDATA


From: Jan Van Besien
Sent: Monday, July 25, 2022 09:09
To: hdfs-dev@hadoop.apache.org
Subject: how to proceed with HDFS-10719

Hi,

I left a comment in HDFS-10719 where I explain that I think the problem still 
exists and hence I think the ticket was incorrectly closed as duplicate of 
HDFS-4210. See comment in HDFS-10719 for details.

How do I proceed with this: can someone reopen this ticket or do I log a new 
one?

I will try to apply the patch today to verify if it actually fixes the problem 
I observed, and will report back.

thanks in advance,
Jan Van Besien - NGDATA




[jira] [Resolved] (HDFS-16655) OIV: print out erasure coding policy name in oiv Delimited output

2022-07-25 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He resolved HDFS-16655.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk. Thanks [~max2049] for your contributions.

> OIV: print out erasure coding policy name in oiv Delimited output
> -
>
> Key: HDFS-16655
> URL: https://issues.apache.org/jira/browse/HDFS-16655
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.4.0
>Reporter: Max  Xie
>Assignee: Max  Xie
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> By adding the erasure coding policy name to the oiv output, it will help with 
> oiv post-analysis to get an overview of all folders/files with a specified EC 
> policy and to apply internal regulations based on this information. In 
> particular, it will be convenient for the platform to calculate the real 
> storage size of EC files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)




[jira] [Created] (HDFS-16682) [SBN Read] make estimated transactions configurable

2022-07-25 Thread zhengchenyu (Jira)
zhengchenyu created HDFS-16682:
--

 Summary: [SBN Read] make estimated transactions configurable
 Key: HDFS-16682
 URL: https://issues.apache.org/jira/browse/HDFS-16682
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: zhengchenyu
Assignee: zhengchenyu


In GlobalStateIdContext, ESTIMATED_TRANSACTIONS_PER_SECOND and 
ESTIMATED_SERVER_TIME_MULTIPLIER should be configurable.

These parameters depend on each cluster's load. Additionally, making them 
configurable will help us simulate an Observer NameNode that has fallen far 
behind.






[jira] [Created] (HDFS-16683) All method metrics related to the rpc protocol should be initialized

2022-07-25 Thread Shuyan Zhang (Jira)
Shuyan Zhang created HDFS-16683:
---

 Summary: All method metrics related to the rpc protocol should be 
initialized
 Key: HDFS-16683
 URL: https://issues.apache.org/jira/browse/HDFS-16683
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: When an RPC protocol is used, the metrics of 
protocol-related methods should be initialized; otherwise, metric information 
will be incomplete. For example, when we call 
HAServiceProtocol#monitorHealth(), only the metrics of monitorHealth() are 
initialized, and the metrics of transitionToStandby() are still not reported. 
This incompleteness caused a little trouble for our monitoring system.
The root cause is that the parameter passed by RpcEngine to 
MutableRatesWithAggregation#init(java.lang.Class) is always XXXProtocolPB, 
which inherits from BlockingInterface and does not declare any methods itself. 
We should fix this bug.
Reporter: Shuyan Zhang
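The fix direction can be sketched outside Hadoop: given a protocol interface, every method can be enumerated up front via reflection so that each one could be pre-registered with a zero-valued metric before its first call. This is a minimal illustration with hypothetical names, not Hadoop's actual MutableRatesWithAggregation code:

```java
import java.lang.reflect.Method;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical sketch: enumerate every method of a protocol interface so
// a metrics registry could pre-register each one with a zero value,
// instead of registering lazily on first invocation.
class ProtocolMethods {
    static Set<String> methodNames(Class<?> protocol) {
        Set<String> names = new TreeSet<>();
        for (Method m : protocol.getMethods()) {
            names.add(m.getName());
        }
        return names;
    }

    public static void main(String[] args) {
        // For the java.lang.Comparable interface this prints [compareTo]
        System.out.println(methodNames(Comparable.class));
    }
}
```

The key point is that the Class passed in must be the interface that actually declares the protocol methods; reflecting over an empty marker interface yields nothing to register.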









[jira] [Created] (HDFS-16684) Exclude self from JournalNodeSyncer when using a bind host

2022-07-25 Thread Steve Vaughan (Jira)
Steve Vaughan created HDFS-16684:


 Summary: Exclude self from JournalNodeSyncer when using a bind host
 Key: HDFS-16684
 URL: https://issues.apache.org/jira/browse/HDFS-16684
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: journal-node
 Environment: Running with Java 11 and bind addresses set to 0.0.0.0.
Reporter: Steve Vaughan


The JournalNodeSyncer will include the local instance in syncing when using a 
bind host (e.g. 0.0.0.0).  There is a mechanism that is supposed to exclude the 
local instance, but it doesn't recognize the meta-address as a local address.

Running with bind addresses set to 0.0.0.0, the JournalNodeSyncer will log 
attempts to sync with itself as part of the normal syncing rotation.  For an HA 
configuration running 3 JournalNodes, the "other" list used by the 
JournalNodeSyncer will include 3 proxies.
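The local-address check involved here can be illustrated with plain JDK networking. This sketch (names are mine, not Hadoop's) mirrors the usual approach: treat wildcard and loopback addresses as local, and otherwise ask whether any local interface owns the address. A peer's advertised address will not match the wildcard the server is actually bound to, which is how the self-exclusion can go wrong.

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;

// Hypothetical sketch of an "is this address mine?" test, similar in
// spirit to what a syncer needs in order to exclude itself from its
// peer list.
class LocalAddressCheck {
    static boolean isLocalAddress(InetAddress addr) {
        // Wildcard (0.0.0.0) and loopback addresses are always local.
        if (addr.isAnyLocalAddress() || addr.isLoopbackAddress()) {
            return true;
        }
        // Otherwise, local only if some interface is bound to it.
        try {
            return NetworkInterface.getByInetAddress(addr) != null;
        } catch (SocketException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isLocalAddress(InetAddress.getLoopbackAddress()));
    }
}
```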






[jira] [Created] (HDFS-16685) DataNode registration fails because getHostName returns an IP address

2022-07-25 Thread Steve Vaughan (Jira)
Steve Vaughan created HDFS-16685:


 Summary: DataNode registration fails because getHostName returns 
an IP address
 Key: HDFS-16685
 URL: https://issues.apache.org/jira/browse/HDFS-16685
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
 Environment: Run in Kubernetes using Java 11.  
Reporter: Steve Vaughan


The call to dnAddress.getHostName() can return an IP address encoded as a 
string, which is then rejected because unresolved addresses can result in 
performance impacts due to repetitive DNS lookups later.  We can detect when 
this situation occurs, and perform a DNS reverse name lookup to fix the issue.

Bouncing a DataNode in a managed environment results in a new IP address 
allocation, and the new instance fails to register with the NameNode.
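The detection described above can be sketched with the plain JDK: if the "hostname" string is really an IP literal, a reverse DNS lookup can recover a name. Method names here are illustrative, not the actual patch; note also that getCanonicalHostName() falls back to the IP string when reverse lookup fails.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical sketch: if getHostName() handed us an IP literal,
// fall back to a reverse DNS lookup for the canonical name.
class HostnameFixup {
    static String ensureHostname(String host) {
        try {
            InetAddress addr = InetAddress.getByName(host);
            // An IP literal round-trips to itself; a real hostname does not.
            if (host.equals(addr.getHostAddress())) {
                return addr.getCanonicalHostName();
            }
        } catch (UnknownHostException e) {
            // Unresolvable input: keep the original string.
        }
        return host;
    }

    public static void main(String[] args) {
        System.out.println(ensureHostname("localhost"));
    }
}
```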






[jira] [Created] (HDFS-16686) GetJournalEditServlet fails to authorize valid Kerberos request

2022-07-25 Thread Steve Vaughan (Jira)
Steve Vaughan created HDFS-16686:


 Summary: GetJournalEditServlet fails to authorize valid Kerberos 
request
 Key: HDFS-16686
 URL: https://issues.apache.org/jira/browse/HDFS-16686
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: journal-node
 Environment: Running in Kubernetes using Java 11 in an HA 
configuration.  JournalNodes run on separate pods and have their own Kerberos 
principal "jn/@".
Reporter: Steve Vaughan


GetJournalEditServlet uses request.getRemoteUser() to determine the 
remoteShortName for Kerberos authorization, which fails to match when the 
JournalNode uses its own Kerberos principal (e.g. jn/@).

This can be fixed by using the UserGroupInformation provided by the base 
DfsServlet class using the getUGI(request, conf) call.
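For reference, the short name of a Kerberos principal is the part before the first '/' or '@', which is why a service principal like "jn/host@REALM" never equals its own short name under a naive string comparison. A tiny illustrative parser follows; Hadoop's real mapping additionally applies auth_to_local rules, so this is only a conceptual sketch:

```java
// Hypothetical sketch: derive the short name from a full Kerberos
// principal, e.g. "jn/host.example.com@EXAMPLE.COM" -> "jn".
// Hadoop's actual mapping also applies auth_to_local rewrite rules.
class PrincipalShortName {
    static String shortName(String principal) {
        String p = principal;
        int at = p.indexOf('@');
        if (at >= 0) {
            p = p.substring(0, at);   // drop the realm
        }
        int slash = p.indexOf('/');
        if (slash >= 0) {
            p = p.substring(0, slash); // drop the instance (hostname)
        }
        return p;
    }

    public static void main(String[] args) {
        System.out.println(shortName("jn/host.example.com@EXAMPLE.COM"));
    }
}
```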






[jira] [Created] (HDFS-16687) RouterFsckServlet replicates code from DfsServlet base class

2022-07-25 Thread Steve Vaughan (Jira)
Steve Vaughan created HDFS-16687:


 Summary: RouterFsckServlet replicates code from DfsServlet base 
class
 Key: HDFS-16687
 URL: https://issues.apache.org/jira/browse/HDFS-16687
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: federation
Reporter: Steve Vaughan


RouterFsckServlet replicates the method "getUGI(HttpServletRequest request, 
Configuration conf)" from DfsServlet instead of just extending DfsServlet.






[jira] [Created] (HDFS-16688) Unresolved Hosts during startup are not synced by JournalNodes

2022-07-25 Thread Steve Vaughan (Jira)
Steve Vaughan created HDFS-16688:


 Summary: Unresolved Hosts during startup are not synced by 
JournalNodes
 Key: HDFS-16688
 URL: https://issues.apache.org/jira/browse/HDFS-16688
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: journal-node
 Environment: Running in Kubernetes using Java 11, with an HA 
configuration.
Reporter: Steve Vaughan


During the JournalNode startup, it builds the list of servers in the 
JournalNode set, ignoring hostnames that cannot be resolved.  In environments 
with dynamic IP address allocations this means that the JournalNodeSyncer will 
never sync with hosts that aren't resolvable during startup.
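The startup behavior above maps to a JDK detail: an InetSocketAddress built for an unresolvable host stays unresolved forever unless it is rebuilt, so a peer dropped at startup is never retried. A hedged sketch of later re-resolution (names are illustrative, not the actual JournalNodeSyncer code):

```java
import java.net.InetSocketAddress;

// Hypothetical sketch: instead of discarding peers whose hostnames do
// not resolve at startup, keep them and retry resolution later by
// rebuilding the InetSocketAddress.
class ReResolve {
    static InetSocketAddress reResolve(InetSocketAddress addr) {
        if (addr.isUnresolved()) {
            // Constructing a new instance triggers a fresh DNS lookup.
            return new InetSocketAddress(addr.getHostString(), addr.getPort());
        }
        return addr;
    }

    public static void main(String[] args) {
        InetSocketAddress a =
            InetSocketAddress.createUnresolved("localhost", 8485);
        System.out.println(reResolve(a).isUnresolved());
    }
}
```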






Is there a 3.3.4 release planned?

2022-07-25 Thread Lei Zhang
Hi,

Is there a 3.3.4 release planned?

Lei Zhang


Re: Is there a 3.3.4 release planned?

2022-07-25 Thread Ashutosh Gupta
Hi Lei

Releasing 3.3.4 is already in process.

- Ashutosh

On Mon, 25 Jul, 2022, 4:37 pm Lei Zhang,  wrote:

> Hi,
>
> Is there a 3.3.4 release planned?
>
> Lei Zhang
>


3.2.x upgrade to 3.3

2022-07-25 Thread Jason Wen
Hi All,

I see a significant change in 3.3: it upgrades to protobuf 3.7.1

  *   HADOOP-16557 | Major 
| [pb-upgrade] Upgrade protobuf.version to 3.7.1
Will this cause any compatibility issues when upgrading from Hadoop 3.2.x to 
3.3? And will rolling upgrade be supported in this upgrade case?

Thanks,
Jason


Re: [External Sender] Re: Is there a 3.3.4 release planned?

2022-07-25 Thread Jason Wen
How about 3.4 release?

-Jason

From: Ashutosh Gupta 
Date: Monday, July 25, 2022 at 8:51 AM
To: Lei Zhang 
Cc: Hdfs-dev 
Subject: [External Sender] Re: Is there a 3.3.4 release planned?
Hi Lei

Releasing 3.3.4 is already in process.

- Ashutosh

On Mon, 25 Jul, 2022, 4:37 pm Lei Zhang,  wrote:

> Hi,
>
> Is there a 3.3.4 release planned?
>
> Lei Zhang
>


Re: Is there a 3.3.4 release planned?

2022-07-25 Thread Lei Zhang
Hi, Ashutosh

Thanks for your reply, this is really good news.

Lei Zhang

Ashutosh Gupta  于2022年7月25日周一 23:52写道:

> Hi Lei
>
> Releasing 3.3.4 is already in process.
>
> - Ashutosh
>
> On Mon, 25 Jul, 2022, 4:37 pm Lei Zhang,  wrote:
>
> > Hi,
> >
> > Is there a 3.3.4 release planned?
> >
> > Lei Zhang
> >
>


[jira] [Created] (HDFS-16689) Standby NameNode crashes when transitioning to Active with in-progress tailer

2022-07-25 Thread ZanderXu (Jira)
ZanderXu created HDFS-16689:
---

 Summary: Standby NameNode crashes when transitioning to Active 
with in-progress tailer
 Key: HDFS-16689
 URL: https://issues.apache.org/jira/browse/HDFS-16689
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: ZanderXu
Assignee: ZanderXu


Standby NameNode crashes when transitioning to Active with an in-progress 
tailer. The error message is like below:


{code:java}
Caused by: java.lang.IllegalStateException: Cannot start writing at txid X when 
there is a stream available for read: ByteStringEditLog[X, Y], 
ByteStringEditLog[X, 0]
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.openForWrite(FSEditLog.java:344)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.openForWrite(FSEditLogAsync.java:113)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1423)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:2132)
... 36 more
{code}

After tracing, I found there is a critical bug in 
*EditlogTailer#catchupDuringFailover()* when *DFS_HA_TAILEDITS_INPROGRESS_KEY* 
is true, because *catchupDuringFailover()* tries to replay all missed edits 
from the JournalNodes with *onlyDurableTxns=true*. It may not be able to replay 
any edits when there are some abnormal JournalNodes. 

Reproduction steps, suppose:
- There are 2 NameNodes, namely NN0 and NN1, whose states are Active and 
Standby respectively. And there are 3 JournalNodes, namely JN0, JN1 and 
JN2. 
- NN0 tries to sync 3 edits to the JNs starting at txid 3, but only 
successfully syncs them to JN1 and JN2. JN0 is abnormal, e.g. due to GC, a bad 
network or a restart.
- NN1's lastAppliedTxId is 2, and at that moment we try to fail over the 
active role from NN0 to NN1. 
- NN1 only gets two responses, from JN0 and JN1, when it tries to select 
inputStreams with *fromTxnId=3* and *onlyDurableTxns=true*, and the reported 
txid counts are 0 and 3 respectively. JN2 is abnormal, e.g. due to GC, a bad 
network or a restart.
- NN1 cannot replay any edits with *fromTxnId=3* from the JournalNodes because 
*maxAllowedTxns* is 0.
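The arithmetic in this scenario follows from what "durable" means: a transaction only counts if a majority of JournalNodes confirm it, so with two reachable nodes out of three reporting counts of 0 and 3, nothing is durably replayable. A hedged sketch of that calculation (my own simplification, not the actual QuorumJournalManager code):

```java
import java.util.Arrays;

// Hypothetical simplification: with onlyDurableTxns=true, only edits
// present on a majority of JournalNodes may be replayed, i.e. take the
// quorum-th largest per-node transaction count.
class DurableTxns {
    static long durableTxnCount(long[] countsFromResponders, int totalNodes) {
        int quorum = totalNodes / 2 + 1;       // majority, e.g. 2 of 3
        long[] sorted = countsFromResponders.clone();
        Arrays.sort(sorted);                   // ascending
        int idx = sorted.length - quorum;      // quorum-th largest count
        return idx < 0 ? 0 : sorted[idx];
    }

    public static void main(String[] args) {
        // The scenario from the report: JN0 reports 0 txns, JN1 reports 3,
        // JN2 is unreachable -> nothing is durably replayable.
        System.out.println(durableTxnCount(new long[]{0, 3}, 3)); // 0
    }
}
```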


So I think the Standby NameNode should run *catchupDuringFailover()* with 
*onlyDurableTxns=false*, so that it can replay all missed edits from the 
JournalNodes.






[jira] [Created] (HDFS-16690) Automatically format new unformatted JournalNodes using JournalNodeSyncer

2022-07-25 Thread Steve Vaughan (Jira)
Steve Vaughan created HDFS-16690:


 Summary: Automatically format new unformatted JournalNodes using 
JournalNodeSyncer 
 Key: HDFS-16690
 URL: https://issues.apache.org/jira/browse/HDFS-16690
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: journal-node
 Environment: Demonstrated in a Kubernetes environment running Java 11.
 # Start new cluster, but short 1 JN (minimum quorum, and the missing JN won’t 
resolve). VERIFY:
 - NN formats the 2 existing JNs and stabilizes.  NOTE: Formatting using just a 
quorum will be a separate submission
 - Messages show sync between JN-0 and JN-1, and NN -> JN.
 # Scale the JN stateful set to add the missing JN. VERIFY:
 - New JN starts
 - All other JNs and all NNs report the IP address change (IP address 
resolution).  NOTE: requires HADOOP-18365 and HDFS-16688
 - Messages show sync between all JNs, and NN -> JN
 - New JN is formatted at least once (possibly by multiple other JNs)
 - New JN storage directory is formatted only once
 - New JN joins the cluster (lastWriterEpoch is non-zero)
Reporter: Steve Vaughan


If an unformatted JournalNode is added to an existing JournalNode set, 
instances of the JournalNodeSyncer are unable to sync to the new node.  When a 
sync receives a JournalNotFormattedException, we can initiate a format 
operation, and then retry the synchronization.

Conceptually this means that the JournalNodes and their data can be managed 
independently from the rest of the system, as the JournalNodes will incorporate 
new JournalNode instances.  Once the new JournalNode is formatted, it can 
participate in shared edits from the NameNodes. 

I've been testing an update to the InterQJournalProtocol to add a format call 
like that used by the NameNode.  Current tests include starting an HA cluster 
from scratch, but with 2 JournalNode instances.  Once the cluster is up, I can 
add the 3rd JournalNode (which is unformatted), and the other 2 JournalNodes 
will eventually attempt to sync which results in a formatting and subsequent 
sync.






[jira] [Created] (HDFS-16691) Use quorum instead of requiring full JN set for NN format

2022-07-25 Thread Steve Vaughan (Jira)
Steve Vaughan created HDFS-16691:


 Summary: Use quorum instead of requiring full JN set for NN format
 Key: HDFS-16691
 URL: https://issues.apache.org/jira/browse/HDFS-16691
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
 Environment: Demonstrated in a Kubernetes environment running Java 11. 
 Using an HA configuration:
 # Start new cluster, but short 1 JN (minimum quorum, and the missing JN won’t 
resolve). VERIFY:
- NN formats the 2 existing JN and stabilizes
- Messages show sync between JN-0 and JN-1, and NN -> JN
 # Scale JN stateful set to add missing JN.  NOTE: Requires HDFS-16690
Reporter: Steve Vaughan


Currently a format request fails if any of the JournalNodes is unresolvable.  
For dynamic cluster environments where a JournalNode may not be available 
during the initial formatting step but JournalNodes can self-heal, it makes 
sense to allow the format to succeed when a quorum of JournalNodes is available.






[jira] [Resolved] (HDFS-16681) Do not pass GCC flags for MSVC in libhdfspp

2022-07-25 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HDFS-16681.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4615 to trunk.

> Do not pass GCC flags for MSVC in libhdfspp
> ---
>
> Key: HDFS-16681
> URL: https://issues.apache.org/jira/browse/HDFS-16681
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The tests in the HDFS native client use the *-Wno-missing-field-initializers* 
> flag to ignore warnings about uninitialized members - 
> https://github.com/apache/hadoop/blob/8f83d9f56d775c73af6e3fa1d6a9aa3e64eebc37/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/CMakeLists.txt#L27-L28
> {code}
> set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-missing-field-initializers")
> set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wno-missing-field-initializers")
> {code}
> This leads to the following error on Visual C++.
> {code}
> [exec] 
> "E:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\ALL_BUILD.vcxproj"
>  (default target) (1) ->
> [exec] 
> "E:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\x-platform\x_platform_dirent_test_obj.vcxproj"
>  (default target) (24) ->
> [exec]   cl : command line error D8021: invalid numeric argument 
> '/Wno-missing-field-initializers' 
> [E:\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\main\native\libhdfspp\tests\x-platform\x_platform_dirent_test_obj.vcxproj]
> {code}
> Thus, we need to pass this flag only when the compiler isn't Visual C++.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2022-07-25 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/932/

No changes




-1 overall


The following subsystems voted -1:
blanks pathlen xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/932/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/932/artifact/out/results-compile-javac-root.txt
 [540K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/932/artifact/out/blanks-eol.txt
 [14M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/932/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/932/artifact/out/results-checkstyle-root.txt
 [14M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/932/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/932/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/932/artifact/out/results-shellcheck.txt
 [28K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/932/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/932/artifact/out/results-javadoc-javadoc-root.txt
 [400K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org


Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2022-07-25 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/339/

[Jul 23, 2022, 2:19:37 PM] (noreply) HDFS-15079. RBF: Namenode needs to use the 
actual client Id and callId when going through RBF proxy. (#4530)
[Jul 23, 2022, 2:58:45 PM] (noreply) YARN-11203. Fix typo in 
hadoop-yarn-server-router module. (#4510). Contributed by fanshilun.
[Jul 23, 2022, 5:50:15 PM] (noreply) HDFS-16467. Ensure Protobuf generated 
headers are included first (#4601)
[Jul 23, 2022, 5:52:13 PM] (noreply) HDFS-16680. Skip libhdfspp Valgrind tests 
on Windows (#4611)




-1 overall


The following subsystems voted -1:
blanks pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory))
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory))
 Redundant null check at DataStorage.java:[line 695] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:[line 138] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:[line 75] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:[line 85] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$$PmemMappedRegion,,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$$PmemMappedRegion,,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:[line 130] 
   
org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts
  doesn't override java.util.ArrayList.equals(Object) At 
RollingWindowManager.java:At RollingWindowManager.java:[line 1] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResou