Arpit Agarwal created HDDS-138:
----------------------------------

             Summary: createVolume bug with non-existent user
                 Key: HDDS-138
                 URL: https://issues.apache.org/jira/browse/HDDS-138
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Arpit Agarwal
When createVolume is invoked for a non-existent user, it fails with a {{PartialGroupNameException}}. However, the volume still appears to be created and the volume name cannot be reused:

{code}
hadoop@9a70d9aa6bf9:~$ ozone oz -createVolume /vol4 -user nosuchuser
2018-05-31 20:40:16 WARN NativeCodeLoader:60 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-05-31 20:40:17 WARN ShellBasedUnixGroupsMapping:210 - unable to return groups for user nosuchuser
PartialGroupNameException The user name 'nosuchuser' is not found. id: ‘nosuchuser’: no such user
id: ‘nosuchuser’: no such user
        at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:294)
        at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:207)
        at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
        at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:51)
        at org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:384)
        at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:319)
        at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:269)
        at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
        at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
        at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
        at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
        at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
        at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
        at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
        at org.apache.hadoop.security.Groups.getGroups(Groups.java:227)
        at org.apache.hadoop.security.UserGroupInformation.getGroups(UserGroupInformation.java:1545)
        at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1533)
        at org.apache.hadoop.ozone.client.rpc.RpcClient.createVolume(RpcClient.java:190)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
        at com.sun.proxy.$Proxy11.createVolume(Unknown Source)
        at org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:77)
        at org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.execute(CreateVolumeHandler.java:98)
        at org.apache.hadoop.ozone.web.ozShell.Shell.dispatch(Shell.java:395)
        at org.apache.hadoop.ozone.web.ozShell.Shell.run(Shell.java:135)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
        at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:114)
2018-05-31 20:40:17 INFO RpcClient:210 - Creating Volume: vol4, with nosuchuser as owner and quota set to 1152921504606846976 bytes.
hadoop@9a70d9aa6bf9:~$ ozone oz -createVolume /vol4 -user hadoop
2018-05-31 20:40:23 WARN NativeCodeLoader:60 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-05-31 20:40:24 INFO RpcClient:210 - Creating Volume: vol4, with hadoop as owner and quota set to 1152921504606846976 bytes.
Command Failed : Volume creation failed, error:VOLUME_ALREADY_EXISTS
{code}
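
From the trace, the {{PartialGroupNameException}} appears to be caught and only logged as a WARN inside {{ShellBasedUnixGroupsMapping}}, so no error ever reaches {{RpcClient.createVolume}} and the client goes on to create the volume for the non-existent owner. Below is a minimal, standalone sketch of that suspected behaviour; it is not the actual RpcClient code, and the class name {{GroupLookupSketch}} and the hard-coded user are purely illustrative:

{code:java}
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical illustration of the group lookup a client-side createVolume
// might perform for the requested owner (not the real RpcClient code path).
public class GroupLookupSketch {
  public static void main(String[] args) throws Exception {
    UserGroupInformation ugi =
        UserGroupInformation.createRemoteUser("nosuchuser");

    // For a user unknown to the OS, the shell-based group mapping logs the
    // PartialGroupNameException (the WARN + stack trace in the report) and
    // the lookup likely resolves to an empty array instead of throwing.
    String[] groups = ugi.getGroupNames();
    System.out.println("groups resolved for nosuchuser: " + groups.length);

    // A createVolume call keyed off this lookup would therefore continue,
    // which matches the "Creating Volume: vol4 ..." INFO line above.
  }
}
{code}

If that reading is right, a fix would presumably need to validate the owner (or surface the group-lookup failure) before the volume is persisted, rather than only warning after the fact.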