Pradeep Nayak Udupi Kadbet created HADOOP-12345:
---------------------------------------------------

             Summary: Credential length in CredentialsSys.java incorrect
                 Key: HADOOP-12345
                 URL: https://issues.apache.org/jira/browse/HADOOP-12345
             Project: Hadoop Common
          Issue Type: Bug
          Components: nfs
    Affects Versions: 2.7.0
            Reporter: Pradeep Nayak Udupi Kadbet
            Priority: Blocker
             Fix For: 2.7.0


Hi -

There is a bug in the way hadoop-nfs sets the credential length in the 
"Credentials" field of the NFS RPC packet when using AUTH_SYS.

In CredentialsSys.java, when writing the credentials into the XDR object, we 
set the length as follows:

 // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
 mCredentialsLength = 20 + mHostName.getBytes().length;

(The 20 corresponds to 4 bytes for mStamp, 4 bytes for mUID, 4 bytes for mGID, 
4 bytes for the length field of the hostname, and 4 bytes for the count of aux 
GIDs.) This part is correct.

However, when we add the length of the hostname, we do not account for the 
extra pad bytes required when the hostname length is not a multiple of 4 (XDR 
pads opaque data to 4-byte boundaries). As a result, when the NFS server reads 
the packet it returns GARBAGE_ARGS, because the uid field is not at the offset 
where the server expects it. I can reproduce this issue consistently on 
machines where the hostname length is not a multiple of 4.

A possible fix is to do something like this:

 int pad = mHostName.getBytes().length % 4;
 if (pad != 0) {
   pad = 4 - pad;
 }
 // mStamp + mHostName.length + mHostName + mUID + mGID + mAuxGIDs.count
 mCredentialsLength = 20 + mHostName.getBytes().length + pad;
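To illustrate the length calculation, here is a small standalone sketch (not the actual patch; the class and method names are made up for the example) showing how the pad bytes change the total for hostnames of different lengths:

```java
// Sketch only: computes the AUTH_SYS credential length the way the
// proposed fix would, including XDR 4-byte padding for the hostname.
public class CredLengthDemo {
    // 20 = mStamp(4) + hostname length field(4) + mUID(4) + mGID(4)
    //      + aux GID count(4)
    static final int FIXED_BYTES = 20;

    static int credentialsLength(String hostName) {
        int nameLen = hostName.getBytes().length;
        // XDR pads opaque data to a multiple of 4 bytes.
        // Note: nameLen % 4 by itself is NOT the pad size.
        int pad = (4 - nameLen % 4) % 4;
        return FIXED_BYTES + nameLen + pad;
    }

    public static void main(String[] args) {
        // "host1" is 5 bytes, so 3 pad bytes: 20 + 5 + 3 = 28
        System.out.println(credentialsLength("host1"));
        // "host" is 4 bytes, so no padding: 20 + 4 + 0 = 24
        System.out.println(credentialsLength("host"));
    }
}
```

With the unpadded formula, "host1" would yield 25, which is why servers reject the packet only for hostnames whose length is not a multiple of 4.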

I would be happy to submit the patch, but I need some help getting it 
committed into mainline. I haven't contributed to Hadoop before.

Cheers!
Pradeep



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
