Eric Yang created HDDS-1609:
-------------------------------

             Summary: Remove hard coded uid from Ozone docker image
                 Key: HDDS-1609
                 URL: https://issues.apache.org/jira/browse/HDDS-1609
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Eric Yang
The hadoop-runner image is hard coded to [USER hadoop|https://github.com/apache/hadoop/blob/docker-hadoop-runner-jdk11/Dockerfile#L45], and the hadoop user is hard coded to uid 1000. This arrangement complicates development environments where the host user's uid is not 1000: data written to external bind mount locations is owned by uid 1000, which can prevent the development environment from cleaning up test data.

Docker documentation states that "The best way to prevent privilege-escalation attacks from within a container is to configure your container’s applications to run as unprivileged users." From an Ozone architecture point of view, there is no reason for the Ozone daemons to require a privileged or hard coded user.

h3. Solution 1

It would be best to support running the docker container as the host user, to reduce friction. The user should be able to run:
{code}
docker run -u $(id -u):$(id -g) ...
{code}
or, in a docker-compose file:
{code}
user: "${UID}:${GID}"
{code}
With this approach, the user will be nameless inside the docker container, and some commands may warn that the user does not have a name. This can be resolved by bind mounting /etc/passwd, or a file that looks like /etc/passwd, containing the host user's entry.

h3. Solution 2

Move the hard coded user into the range between 199 and 500. The default Linux profile reserves uids between 199 and 500 for service users, with a umask that keeps data private to the service user, or group writable if the service shares a group with other service users. Register the service user with the Linux vendors to ensure that a uid is reserved for the Hadoop user. This is a longer route to pursue, and may not be fruitful.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
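The nameless-user workaround in Solution 1 could be sketched as follows. This is only an illustration: the generated file name (/tmp/passwd.hadoop), the home directory (/opt/hadoop), and the image name are assumptions, not the actual hadoop-runner layout.

{code}
# Build a one-line passwd file describing the host user, so commands inside
# the container can resolve the host uid to a user name.
uid=$(id -u); gid=$(id -g); user=$(id -un)
# Home directory /opt/hadoop is an assumption for illustration only.
printf '%s:x:%s:%s::/opt/hadoop:/bin/bash\n' "$user" "$uid" "$gid" > /tmp/passwd.hadoop

# Then start the container as the host user and bind mount the file
# over /etc/passwd (image name is illustrative):
#   docker run -u "$uid:$gid" \
#       -v /tmp/passwd.hadoop:/etc/passwd:ro \
#       apache/hadoop-runner ...
{code}

Because the container process runs with the host uid/gid, anything it writes to a bind mount is owned by the host user, and the mounted passwd entry stops the "user has no name" warnings.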