This is an automated email from the ASF dual-hosted git repository.
morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git
The following commit(s) were added to refs/heads/master by this push:
new 38661c4b3c4 [opt](lh) add dfs.client.use.datanode.hostname doc (#2397)
38661c4b3c4 is described below
commit 38661c4b3c47445d56c1cfcb1ba24629d0d14a1c
Author: Mingyu Chen (Rayner) <[email protected]>
AuthorDate: Mon May 19 15:54:27 2025 +0800
[opt](lh) add dfs.client.use.datanode.hostname doc (#2397)
## Versions
- [x] dev
- [ ] 3.0
- [ ] 2.1
- [ ] 2.0
## Languages
- [x] Chinese
- [x] English
## Docs Checklist
- [ ] Checked by AI
- [ ] Test Cases Built
---
docs/faq/lakehouse-faq.md | 28 +++++++++++++++++-----
.../current/faq/lakehouse-faq.md | 28 +++++++++++++++++-----
2 files changed, 44 insertions(+), 12 deletions(-)
diff --git a/docs/faq/lakehouse-faq.md b/docs/faq/lakehouse-faq.md
index cf7544c14c1..df27b39002f 100644
--- a/docs/faq/lakehouse-faq.md
+++ b/docs/faq/lakehouse-faq.md
@@ -296,16 +296,32 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
Possible solutions include:
- Use `hdfs fsck file -files -blocks -locations` to check if the file is healthy.
- Check connectivity with datanodes using `telnet`.
+
+ The following error may be printed in the error log:
+
+ ```
+ No live nodes contain current block Block locations: DatanodeInfoWithStorage[10.70.150.122:50010,DS-7bba8ffc-651c-4617-90e1-6f45f9a5f896,DISK]
+ ```
+
+ You can first check the connectivity between the Doris cluster and `10.70.150.122:50010`.
+
+ In addition, in some cases the HDFS cluster uses a dual network with internal and external IPs. In this case, domain names are required for communication, and the following needs to be added to the Catalog properties: `"dfs.client.use.datanode.hostname" = "true"`.
+
+ At the same time, please check whether this parameter is set to `true` in the `hdfs-site.xml` files placed under `fe/conf` and `be/conf`.
+
- Check datanode logs.
- If you encounter the following error:
+ If you encounter the following error:
+
+ ```
+ org.apache.hadoop.hdfs.server.datanode.DataNode: Failed to read expected SASL data transfer protection handshake from client at /XXX.XXX.XXX.XXX:XXXXX. Perhaps the client is running an older version of Hadoop which does not support SASL data transfer protection
+ ```
- `org.apache.hadoop.hdfs.server.datanode.DataNode: Failed to read expected SASL data transfer protection handshake from client at /XXX.XXX.XXX.XXX:XXXXX. Perhaps the client is running an older version of Hadoop which does not support SASL data transfer protection`
- it means that the current hdfs has enabled encrypted transmission, but the client has not, causing the error.
+ it means that the current hdfs has enabled encrypted transmission, but the client has not, causing the error.
- Use any of the following solutions:
- - Copy hdfs-site.xml and core-site.xml to be/conf and fe/conf directories. (Recommended)
- - In hdfs-site.xml, find the corresponding configuration `dfs.data.transfer.protection` and set this parameter in the catalog.
+ Use any of the following solutions:
+ - Copy `hdfs-site.xml` and `core-site.xml` to `fe/conf` and `be/conf`. (Recommended)
+ - In `hdfs-site.xml`, find the corresponding configuration `dfs.data.transfer.protection` and set this parameter in the catalog.
## DLF Catalog
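As a minimal sketch only (the catalog name and metastore URI below are placeholders, not part of this commit), the `dfs.client.use.datanode.hostname` property described in the hunk above could be attached to an HMS catalog like this:

```sql
-- Hypothetical catalog; only the last property is the one this doc change documents.
CREATE CATALOG hive_hdfs PROPERTIES (
    "type" = "hms",
    "hive.metastore.uris" = "thrift://127.0.0.1:9083",  -- placeholder metastore address
    "dfs.client.use.datanode.hostname" = "true"          -- resolve datanodes by hostname instead of internal IP
);
```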
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/faq/lakehouse-faq.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/faq/lakehouse-faq.md
index d867858a15e..d1531cfd4f4 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/faq/lakehouse-faq.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/faq/lakehouse-faq.md
@@ -329,16 +329,32 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
Possible solutions include:
- Use `hdfs fsck file -files -blocks -locations` to check whether the file is healthy.
- Use `telnet` to check connectivity with the datanodes.
+
+ The following error may be printed in the error log:
+
+ ```
+ No live nodes contain current block Block locations: DatanodeInfoWithStorage[10.70.150.122:50010,DS-7bba8ffc-651c-4617-90e1-6f45f9a5f896,DISK]
+ ```
+
+ You can first check the connectivity between the Doris cluster and `10.70.150.122:50010`.
+
+ In addition, in some cases the HDFS cluster uses dual NICs, with internal and external IPs. In this case, domain names must be used for communication, and the following needs to be added to the Catalog properties: `"dfs.client.use.datanode.hostname" = "true"`.
+
+ At the same time, please check whether this parameter is set to `true` in the `hdfs-site.xml` files placed under `fe/conf` and `be/conf`.
+
- Check datanode logs.
- If you encounter the following error:
+ If you encounter the following error:
+
+ ```
+ org.apache.hadoop.hdfs.server.datanode.DataNode: Failed to read expected SASL data transfer protection handshake from client at /XXX.XXX.XXX.XXX:XXXXX. Perhaps the client is running an older version of Hadoop which does not support SASL data transfer protection
+ ```
- `org.apache.hadoop.hdfs.server.datanode.DataNode: Failed to read expected SASL data transfer protection handshake from client at /XXX.XXX.XXX.XXX:XXXXX. Perhaps the client is running an older version of Hadoop which does not support SASL data transfer protection`
- it means that the current hdfs has enabled encrypted transmission, but the client has not, causing the error.
+ it means that the current hdfs has enabled encrypted transmission, but the client has not, causing the error.
- Use any of the following solutions:
- - Copy hdfs-site.xml and core-site.xml to the be/conf and fe/conf directories. (Recommended)
- - In hdfs-site.xml, find the corresponding configuration `dfs.data.transfer.protection` and set this parameter in the catalog.
+ Use any of the following solutions:
+ - Copy `hdfs-site.xml` and `core-site.xml` to the `fe/conf` and `be/conf` directories. (Recommended)
+ - In `hdfs-site.xml`, find the corresponding configuration `dfs.data.transfer.protection` and set this parameter in the catalog.
## DLF Catalog
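Likewise, for the SASL handshake error both language versions point at `dfs.data.transfer.protection`. A minimal sketch only, assuming the value is copied from the cluster's own `hdfs-site.xml`; the catalog name, metastore URI, and the value `privacy` are placeholders, not part of this commit:

```sql
-- Hypothetical catalog; the value must match the dfs.data.transfer.protection
-- setting on the HDFS cluster (typically authentication, integrity, or privacy).
CREATE CATALOG hive_secure PROPERTIES (
    "type" = "hms",
    "hive.metastore.uris" = "thrift://127.0.0.1:9083",  -- placeholder metastore address
    "dfs.data.transfer.protection" = "privacy"
);
```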
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]