Hi Team,

Currently, the HDFS NFS gateway supports exporting only one directory.

Example:
<property>
  <name>nfs.export.point</name>
  <value>/user</value>
</property>

This property lets us export a particular directory.

Code Block (the RpcProgramMountd constructor):

public RpcProgramMountd(NfsConfiguration config,
    DatagramSocket registrationSocket, boolean allowInsecurePorts)
    throws IOException {
  // Note that RPC cache is not enabled
  super("mountd", "localhost", config.getInt(
      NfsConfigKeys.DFS_NFS_MOUNTD_PORT_KEY,
      NfsConfigKeys.DFS_NFS_MOUNTD_PORT_DEFAULT),
      PROGRAM, VERSION_1, VERSION_3, registrationSocket, allowInsecurePorts);
  exports = new ArrayList<String>();
  exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
      NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
  this.hostsMatcher = NfsExports.getInstance(config);
  this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
  UserGroupInformation.setConfiguration(config);
  SecurityUtil.login(config, NfsConfigKeys.DFS_NFS_KEYTAB_FILE_KEY,
      NfsConfigKeys.DFS_NFS_KERBEROS_PRINCIPAL_KEY);
  this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
}

Export List:
exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
    NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));

The current code supports exposing only one directory; based on our example,
only /user can be exported.

Most production environments expect multiple directories to be exported, so
that they can be mounted by different clients.

Example:

<property>
  <name>nfs.export.point</name>
  <value>/user,/data/web_crawler,/app-logs</value>
</property>

Here I have three directories to be exported:

1) /user
2) /data/web_crawler
3) /app-logs
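
A rough sketch of how the constructor could read such a comma-separated
list (just an illustration of the idea, not a tested patch; the helper name
parseExportPoints is mine, it relies on Configuration's getTrimmedStrings()
to split on commas, and it assumes the NfsConfigKeys constants already used
in the constructor above; the mount/export handling elsewhere in the gateway
would also have to be updated to cope with more than one entry):

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;

// Hypothetical helper: read nfs.export.point as a comma-separated list.
// The existing single-export default is kept as the fallback.
static List<String> parseExportPoints(Configuration config) {
  List<String> exports = new ArrayList<String>();
  for (String exportPoint : config.getTrimmedStrings(
      NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
      NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT)) {
    exports.add(exportPoint);
  }
  return exports;
}

The constructor could then do exports = parseExportPoints(config) instead of
the single exports.add(...) call shown above.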

This would let us mount specific directories for particular clients (say
client A wants to write data to /app-logs; the Hadoop admin can mount
/app-logs and hand it over to that client).
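
With more than one export point, the mount (MNT) handling would also need to
compare the path a client asks for against every configured export rather
than a single value. A hypothetical check along these lines (the method is
mine, not existing gateway code):

// Hypothetical: accept a mount request only if the requested path is one of
// the configured exports or a subdirectory of one of them.
static boolean isExported(java.util.List<String> exports, String path) {
  for (String export : exports) {
    if ("/".equals(export)              // a root export covers everything
        || path.equals(export)
        || path.startsWith(export + "/")) {
      return true;
    }
  }
  return false;
}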

Please advise.


I have created a JIRA for this issue:
https://issues.apache.org/jira/browse/HDFS-10721.


--Senthil
