[ https://issues.apache.org/jira/browse/HDFS-16868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Xiaoqiao He resolved HDFS-16868.
--------------------------------
    Fix Version/s: 3.4.0
     Hadoop Flags: Reviewed
       Resolution: Fixed

> Fix audit log duplicate issue when an ACE occurs in FSNamesystem.
> -----------------------------------------------------------------
>
>                 Key: HDFS-16868
>                 URL: https://issues.apache.org/jira/browse/HDFS-16868
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Beibei Zhao
>            Assignee: Beibei Zhao
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>
> checkSuperuserPrivilege calls logAuditEvent and rethrows the ACE when an AccessControlException occurs:
> {code:java}
> // This method logs operationName without super user privilege.
> // It should be called without holding FSN lock.
> void checkSuperuserPrivilege(String operationName, String path)
>     throws IOException {
>   if (isPermissionEnabled) {
>     try {
>       FSPermissionChecker.setOperationType(operationName);
>       FSPermissionChecker pc = getPermissionChecker();
>       pc.checkSuperuserPrivilege(path);
>     } catch (AccessControlException ace) {
>       logAuditEvent(false, operationName, path);
>       throw ace;
>     }
>   }
> }
> {code}
> Its callers, such as metaSave, call it before entering the try block, like this:
> {code:java}
> /**
>  * Dump all metadata into specified file
>  * @param filename
>  */
> void metaSave(String filename) throws IOException {
>   String operationName = "metaSave";
>   checkSuperuserPrivilege(operationName);
>   ......
>   try {
>     ......
>     metaSave(out);
>     ......
>   } finally {
>     readUnlock(operationName, getLockReportInfoSupplier(null));
>   }
>   logAuditEvent(true, operationName, null);
> }
> {code}
> However, setQuota, addCachePool, modifyCachePool, removeCachePool, createEncryptionZone and reencryptEncryptionZone catch the ACE and log the same message again, which I think needlessly duplicates the audit log entry:
> {code:java}
> /**
>  * Set the namespace quota and storage space quota for a directory.
>  * See {@link ClientProtocol#setQuota(String, long, long, StorageType)}
>  * for the contract.
>  *
>  * Note: This does not support ".inodes" relative path.
>  */
> void setQuota(String src, long nsQuota, long ssQuota, StorageType type)
>     throws IOException {
>   ......
>   try {
>     if (!allowOwnerSetQuota) {
>       checkSuperuserPrivilege(operationName, src);
>     }
>     ......
>   } catch (AccessControlException ace) {
>     logAuditEvent(false, operationName, src);
>     throw ace;
>   }
>   getEditLog().logSync();
>   logAuditEvent(true, operationName, src);
> }
> {code}
> Maybe we should move the checkSuperuserPrivilege call out of the try block, as metaSave and the other callers do.
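For illustration only, a minimal self-contained sketch of the proposed shape, not the actual FSNamesystem code or the committed patch: the class AuditDedupSketch, its auditLog list, the isSuperuser flag and the local AccessControlException stand-in are all invented for this example. It shows that once the privilege check sits outside the caller's try/catch, a denied setQuota produces exactly one audit entry, written by checkSuperuserPrivilege itself.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified model of the proposed pattern; names are placeholders.
class AuditDedupSketch {

  /** Local stand-in for org.apache.hadoop.security.AccessControlException. */
  static class AccessControlException extends Exception {
    AccessControlException(String msg) { super(msg); }
  }

  private final List<String> auditLog = new ArrayList<>();
  private final boolean isSuperuser;

  AuditDedupSketch(boolean isSuperuser) { this.isSuperuser = isSuperuser; }

  /** Mimics logAuditEvent(boolean, String, String): appends one audit line. */
  private void logAuditEvent(boolean allowed, String operationName, String path) {
    auditLog.add("allowed=" + allowed + " cmd=" + operationName + " src=" + path);
  }

  /** Mimics checkSuperuserPrivilege: audits the denial once, then rethrows. */
  private void checkSuperuserPrivilege(String operationName, String path)
      throws AccessControlException {
    if (!isSuperuser) {
      logAuditEvent(false, operationName, path); // the single audit entry for the denial
      throw new AccessControlException("Superuser privilege is required for " + operationName);
    }
  }

  /** setQuota-like caller after the proposed change: the check is outside any try/catch. */
  void setQuota(String src) throws AccessControlException {
    final String operationName = "setQuota";
    // Privilege check first; on failure it both audits and throws, so no catch block
    // in this method calls logAuditEvent again (this removes the duplicate entry).
    checkSuperuserPrivilege(operationName, src);
    // ... the quota update itself would happen here ...
    logAuditEvent(true, operationName, src); // success path audited once
  }

  public static void main(String[] args) {
    AuditDedupSketch fsn = new AuditDedupSketch(false);
    try {
      fsn.setQuota("/dir");
    } catch (AccessControlException ace) {
      // expected: the denial was already audited inside checkSuperuserPrivilege
    }
    fsn.auditLog.forEach(System.out::println); // prints a single "allowed=false" line
  }
}
{code}

The design point is simply that the failure-path audit is centralized in checkSuperuserPrivilege, so callers only audit the success path and each denied operation is logged exactly once.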