mukvin created HDFS-17625:
-----------------------------

             Summary: Failed to read expected SASL data transfer protection 
handshake
                 Key: HDFS-17625
                 URL: https://issues.apache.org/jira/browse/HDFS-17625
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 3.3.6
            Reporter: mukvin


I am using libhdfspp to connect to a secure (Kerberos) HDFS.

I found that I can read the data correctly with the command `hdfs dfs -cat /user/data.csv`.

But if I use libhdfspp/examples/cat to cat /user/data.csv, I get the following error:
```
$ ./cat /user/test_tbl1.csv
Error reading the file: Connection reset by peer
[WARN  ][BlockReader   ][Fri Sep 13 20:57:02 2024][Thread id = 139632020002560][libhdfspp/lib/connection/datanodeconnection.h:50]    Error disconnecting socket: shutdown() threwshutdown: Transport endpoint is not connected
```
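For reference, the cat example boils down to the standard libhdfs-style read loop. Below is a minimal sketch using the hdfs.h-compatible C API that libhdfspp exposes (the NameNode host/port are placeholders, and the real example may differ):

```c
#include <fcntl.h>      /* O_RDONLY */
#include <stdio.h>
#include "hdfs/hdfs.h"  /* libhdfs-compatible API shipped with libhdfspp */

int main(void) {
    /* "localhost"/8020 are placeholders for the NameNode address. */
    struct hdfsBuilder *bld = hdfsNewBuilder();
    hdfsBuilderSetNameNode(bld, "localhost");
    hdfsBuilderSetNameNodePort(bld, 8020);
    hdfsFS fs = hdfsBuilderConnect(bld);
    if (!fs) { fprintf(stderr, "connect failed\n"); return 1; }

    hdfsFile f = hdfsOpenFile(fs, "/user/test_tbl1.csv", O_RDONLY, 0, 0, 0);
    if (!f) { fprintf(stderr, "open failed\n"); return 1; }

    char buf[4096];
    tSize n;
    /* The Kerberos-authenticated NameNode RPC succeeds; the "Connection
     * reset by peer" above happens here, on the first block read from
     * the DataNode. */
    while ((n = hdfsRead(fs, f, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    hdfsCloseFile(fs, f);
    hdfsDisconnect(fs);
    return 0;
}
```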


Meanwhile, the DataNode log shows:
```
2024-09-13 20:57:02,346 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Failed to read expected SASL data transfer protection handshake from client at /127.0.0.1:59037. Perhaps the client is running an older version of Hadoop which does not support SASL data transfer protection
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.InvalidMagicNumberException: Received 1c51a1 instead of deadbeef from client.
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:374)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:308)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:135)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:236)
    at java.lang.Thread.run(Thread.java:750)
```
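If I read the exception right, the DataNode expects the very first bytes on the data transfer socket to be the 4-byte magic number 0xdeadbeef that opens the SASL handshake, and "Received 1c51a1" means my client sent a plain, unprotected request instead. A rough sketch of the framing implied by the log (my reading of it, not the actual Hadoop code):

```c
#include <arpa/inet.h>  /* htonl */
#include <stdint.h>
#include <unistd.h>     /* write */

/* Sketch: when dfs.data.transfer.protection is in effect, the first
 * 4 bytes a client writes on the DataNode socket must be the
 * big-endian magic 0xDEADBEEF, with the SASL negotiation messages
 * following. A client unaware of the setting starts with its plain op
 * request instead, and the server reports those bytes ("1c51a1...")
 * as a bad magic number. */
int send_sasl_magic(int sockfd) {
    const uint32_t magic = htonl(0xDEADBEEFu);
    if (write(sockfd, &magic, sizeof(magic)) != (ssize_t)sizeof(magic))
        return -1;
    /* ...protobuf-framed SASL negotiation would follow here... */
    return 0;
}
```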

And here is the hdfs-site.xml:
```
$ cat hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.rpc-address</name>
        <value>0.0.0.0:8020</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.block.access.token.enable</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.namenode.keytab.file</name>
        <value>/data/1/hadoop-kerberos/keytabs/hdfs.keytab</value>
    </property>
    <property>
        <name>dfs.namenode.kerberos.principal</name>
        <value>hdfs/had...@xxx.com</value>
    </property>
    <property>
        <name>dfs.namenode.kerberos.https.principal</name>
        <value>hdfs/had...@xxx.com</value>
    </property>
    <property>
        <name>dfs.secondary.namenode.keytab.file</name>
        <value>/data/1/hadoop-kerberos/keytabs/hdfs.keytab</value>
    </property>
    <property>
        <name>dfs.secondary.namenode.kerberos.principal</name>
        <value>hdfs/had...@xxx.com</value>
    </property>
    <property>
        <name>dfs.secondary.namenode.kerberos.https.principal</name>
        <value>hdfs/had...@xxx.com</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir.perm</name>
        <value>700</value>
    </property>
    <property>
        <name>dfs.datanode.keytab.file</name>
        <value>/data/1/hadoop-kerberos/keytabs/hdfs.keytab</value>
    </property>
    <property>
        <name>dfs.datanode.kerberos.principal</name>
        <value>hdfs/had...@xxx.com</value>
    </property>
    <property>
        <name>dfs.datanode.kerberos.https.principal</name>
        <value>hdfs/had...@xxx.com</value>
    </property>
    <property>
        <name>dfs.encrypt.data.transfer</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.data.transfer.protection</name>
        <value>integrity</value>
    </property>
    <property>
        <name>dfs.http.policy</name>
        <value>HTTPS_ONLY</value>
    </property>
    <property>
        <name>dfs.datanode.address</name>
        <value>0.0.0.0:61004</value>
    </property>
    <property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:61006</value>
    </property>
    <property>
        <name>dfs.datanode.https.address</name>
        <value>0.0.0.0:61010</value>
    </property>
 
    <!-- extra added -->
    <property>
        <name>dfs.client.https.need-auth</name>
        <value>false</value>
    </property>
</configuration>
```
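Note that `dfs.data.transfer.protection=integrity` has to be visible to the libhdfspp client as well, not only to the DataNode; if the client process never loads this key, it skips the SASL handshake exactly as in the log above. To rule out a config-loading problem, the key can also be forced programmatically (a sketch, assuming the libhdfs-compatible builder API is available in the libhdfspp build; host/port are placeholders):

```c
#include "hdfs/hdfs.h"  /* libhdfs-compatible API */

/* Sketch: mirror the server's data transfer protection setting on the
 * client before connecting, in case the client-side hdfs-site.xml is
 * not being picked up. */
hdfsFS connect_with_sasl(void) {
    struct hdfsBuilder *bld = hdfsNewBuilder();
    hdfsBuilderSetNameNode(bld, "localhost");  /* placeholder */
    hdfsBuilderSetNameNodePort(bld, 8020);     /* placeholder */
    hdfsBuilderConfSetStr(bld, "hadoop.security.authentication", "kerberos");
    hdfsBuilderConfSetStr(bld, "dfs.data.transfer.protection", "integrity");
    return hdfsBuilderConnect(bld);
}
```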

And the core-site.xml:

```
$ cat core-site.xml
<configuration>
<property><name>fs.default.name</name><value>hdfs://0.0.0.0</value></property>
<property><name>fs.defaultFS</name><value>hdfs://0.0.0.0</value></property>
<property><name>hadoop.tmp.dir</name><value>/data/1/hadoop-kerberos/temp_data/336</value></property>
<property><name>hadoop.security.authentication</name><value>kerberos</value></property>
<property><name>hadoop.security.authorization</name><value>true</value></property>
</configuration>
```
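One way to check whether the client process actually sees these settings is to read them back through the conf API (a sketch, assuming hdfsConfGetStr is implemented in this libhdfspp build):

```c
#include <stdio.h>
#include "hdfs/hdfs.h"

/* Sketch: print the effective values of the keys that drive the SASL
 * handshake, to verify the client's XML config files are being loaded. */
static void dump_key(const char *key) {
    char *val = NULL;
    if (hdfsConfGetStr(key, &val) == 0 && val) {
        printf("%s = %s\n", key, val);
        hdfsConfStrFree(val);
    } else {
        printf("%s is not set on the client\n", key);
    }
}

int main(void) {
    dump_key("hadoop.security.authentication");
    dump_key("dfs.data.transfer.protection");
    return 0;
}
```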

Can anyone help, please?


