[ https://issues.apache.org/jira/browse/HDFS-16266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Takanobu Asanuma resolved HDFS-16266.
-------------------------------------
    Fix Version/s: 3.4.0
       Resolution: Fixed

> Add remote port information to HDFS audit log
> ---------------------------------------------
>
>                 Key: HDFS-16266
>                 URL: https://issues.apache.org/jira/browse/HDFS-16266
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: tomscut
>            Assignee: tomscut
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 8.5h
>  Remaining Estimate: 0h
>
> In our production environment, we occasionally encounter a problem where a
> user submits an abnormal computation task that triggers a sudden flood of
> requests, driving the NameNode's queueTime and processingTime very high and
> creating a large backlog of tasks.
> We usually locate and kill the specific Spark, Flink, or MapReduce task
> based on metrics and audit logs. The audit log currently records the IP and
> UGI but no port information, so it is sometimes difficult to pinpoint the
> specific client process. I therefore propose adding the remote port to the
> audit log so that we can easily track the upstream process.
> Some projects, such as HBase and Alluxio, already include port information
> in their audit logs. I think it is worth adding it to the HDFS audit log as
> well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
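[Editor's illustration: the field names below (allowed, ugi, ip, cmd, src, dst, perm, proto) follow the standard NameNode audit log layout, but the exact placement of the port and all values shown are hypothetical, not quoted from the patch. With the remote port appended to the ip field, an audit entry might look like:]

```
allowed=true  ugi=sparkuser (auth:SIMPLE)  ip=/192.168.1.10:39152  cmd=create  src=/user/sparkuser/output/part-00000  dst=null  perm=sparkuser:hadoop:rw-r--r--  proto=rpc
```

[With the port present, an operator can match 192.168.1.10:39152 against the socket table on the client host (e.g. via ss or netstat) to find the owning process and kill the offending task.]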