This is an automated email from the ASF dual-hosted git repository.

luzhijing pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 2843dcbce1 [fix]Add FAQ solutions of doris-kafka-connector (#830)
2843dcbce1 is described below

commit 2843dcbce1a1bf8bf1f7289beec2a137204a21a9
Author: wudongliang <46414265+donglian...@users.noreply.github.com>
AuthorDate: Thu Jul 11 00:07:39 2024 +0800

    [fix]Add FAQ solutions of doris-kafka-connector (#830)
    
    Co-authored-by: Luzhijing <82810928+luzhij...@users.noreply.github.com>
---
 docs/ecosystem/doris-kafka-connector.md            | 24 ++++++++++++++++++--
 .../current/ecosystem/doris-kafka-connector.md     | 26 +++++++++++++++++++---
 .../version-2.0/ecosystem/doris-kafka-connector.md | 24 ++++++++++++++++++--
 .../version-2.1/ecosystem/doris-kafka-connector.md | 24 ++++++++++++++++++--
 .../version-2.0/ecosystem/doris-kafka-connector.md | 24 ++++++++++++++++++--
 .../version-2.1/ecosystem/doris-kafka-connector.md | 24 ++++++++++++++++++--
 6 files changed, 133 insertions(+), 13 deletions(-)

diff --git a/docs/ecosystem/doris-kafka-connector.md b/docs/ecosystem/doris-kafka-connector.md
index c3d254f797..0706335d7d 100644
--- a/docs/ecosystem/doris-kafka-connector.md
+++ b/docs/ecosystem/doris-kafka-connector.md
@@ -52,11 +52,12 @@ Configure config/connect-standalone.properties
 bootstrap.servers=127.0.0.1:9092
 
 # Modify to the created plugins directory
+# Note: Fill in the actual path to Kafka here. For example: plugin.path=/opt/kafka/plugins
 plugin.path=$KAFKA_HOME/plugins
 ```
 
 Configure doris-connector-sink.properties
-<br />
+
 Create doris-connector-sink.properties in the config directory and configure the following content:
 
 ```properties
@@ -99,6 +100,7 @@ bootstrap.servers=127.0.0.1:9092
 group.id=connect-cluster
 
 # Modify to the created plugins directory
+# Note: Fill in the actual path to Kafka here. For example: plugin.path=/opt/kafka/plugins
 plugin.path=$KAFKA_HOME/plugins
 ```
 
@@ -327,4 +329,22 @@ curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X
     "value.converter.schema.registry.url":"http://127.0.0.1:8081";
   } 
 }'
-```
\ No newline at end of file
+```
+
+## FAQ
+**1. The following error occurs when reading JSON data:**
+```
+Caused by: org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
+       at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:337)
+       at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:91)
+       at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$4(WorkerSinkTask.java:536)
+       at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:180)
+       at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:214)
+```
+**Reason:**
+This happens because the `org.apache.kafka.connect.json.JsonConverter` converter requires records to contain matching "schema" and "payload" fields.
+
+**There are two solutions; choose one (see the configuration sketch below):**
+  1. Replace `org.apache.kafka.connect.json.JsonConverter` with `org.apache.kafka.connect.storage.StringConverter`
+  2. If running in **Standalone** mode, set `value.converter.schemas.enable` or `key.converter.schemas.enable` to false in config/connect-standalone.properties; if running in **Distributed** mode, set the same properties to false in config/connect-distributed.properties
\ No newline at end of file
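
For solution 2, here is a minimal sketch of the relevant converter lines in config/connect-standalone.properties, assuming the stock Kafka Connect property names (only the two schemas.enable values change):

```properties
# Keep the JSON converters but disable the schema/payload envelope requirement,
# so plain JSON records are accepted as-is
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
```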
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-kafka-connector.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-kafka-connector.md
index 7eb53a5215..d55c2e0fc7 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-kafka-connector.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-kafka-connector.md
@@ -52,11 +52,12 @@ Maven dependency
 bootstrap.servers=127.0.0.1:9092
 
 # Modify to the created plugins directory
+# Note: Fill in the actual path to Kafka here. For example: plugin.path=/opt/kafka/plugins
 plugin.path=$KAFKA_HOME/plugins
 ```
 
 Configure doris-connector-sink.properties
-<br />
+
 Create doris-connector-sink.properties in the config directory and configure the following content:
 
 ```properties
@@ -100,6 +101,7 @@ bootstrap.servers=127.0.0.1:9092
 group.id=connect-cluster
 
 # Modify to the created plugins directory
+# Note: Fill in the actual path to Kafka here. For example: plugin.path=/opt/kafka/plugins
 plugin.path=$KAFKA_HOME/plugins
 ```
 
@@ -250,7 +252,7 @@ Doris-kafka-connector uses logical or primitive type mapping to resolve the column's data type
 
 
 ## Best Practices
-### Synchronizing Json-serialized data
+### Synchronizing JSON-serialized data
 ```
 curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X POST -d '{ 
   "name":"doris-json-test", 
@@ -327,4 +329,22 @@ curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X
     "value.converter.schema.registry.url":"http://127.0.0.1:8081";
   } 
 }'
-```
\ No newline at end of file
+```
+
+## FAQ
+**1. The following error occurs when reading JSON data:**
+```
+Caused by: org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
+       at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:337)
+       at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:91)
+       at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$4(WorkerSinkTask.java:536)
+       at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:180)
+       at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:214)
+```
+**Reason:**
+This happens because the `org.apache.kafka.connect.json.JsonConverter` converter requires records to contain matching "schema" and "payload" fields.
+
+**There are two solutions; choose one (see the sketch just below):**
+  1. Replace `org.apache.kafka.connect.json.JsonConverter` with `org.apache.kafka.connect.storage.StringConverter`
+  2. If running in **Standalone** mode, set `value.converter.schemas.enable` or `key.converter.schemas.enable` to false in config/connect-standalone.properties; if running in **Distributed** mode, set the same properties to false in config/connect-distributed.properties
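
For solution 1, a minimal sketch of the converter override in a sink configuration such as doris-connector-sink.properties (standard Kafka Connect property names; all other sink settings stay as shown earlier in the document):

```properties
# StringConverter hands each record to the sink as a raw string,
# so no "schema"/"payload" envelope is expected
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
```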
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/ecosystem/doris-kafka-connector.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/ecosystem/doris-kafka-connector.md
index 7eb53a5215..cb602b49a6 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/ecosystem/doris-kafka-connector.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/ecosystem/doris-kafka-connector.md
@@ -52,11 +52,12 @@ Maven dependency
 bootstrap.servers=127.0.0.1:9092
 
 # Modify to the created plugins directory
+# Note: Fill in the actual path to Kafka here. For example: plugin.path=/opt/kafka/plugins
 plugin.path=$KAFKA_HOME/plugins
 ```
 
 Configure doris-connector-sink.properties
-<br />
+
 Create doris-connector-sink.properties in the config directory and configure the following content:
 
 ```properties
@@ -100,6 +101,7 @@ bootstrap.servers=127.0.0.1:9092
 group.id=connect-cluster
 
 # Modify to the created plugins directory
+# Note: Fill in the actual path to Kafka here. For example: plugin.path=/opt/kafka/plugins
 plugin.path=$KAFKA_HOME/plugins
 ```
 
@@ -327,4 +329,22 @@ curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X
     "value.converter.schema.registry.url":"http://127.0.0.1:8081";
   } 
 }'
-```
\ No newline at end of file
+```
+
+## FAQ
+**1. The following error occurs when reading JSON data:**
+```
+Caused by: org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
+       at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:337)
+       at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:91)
+       at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$4(WorkerSinkTask.java:536)
+       at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:180)
+       at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:214)
+```
+**Reason:**
+This happens because the `org.apache.kafka.connect.json.JsonConverter` converter requires records to contain matching "schema" and "payload" fields.
+
+**There are two solutions; choose one:**
+  1. Replace `org.apache.kafka.connect.json.JsonConverter` with `org.apache.kafka.connect.storage.StringConverter`
+  2. If running in **Standalone** mode, set `value.converter.schemas.enable` or `key.converter.schemas.enable` to false in config/connect-standalone.properties; if running in **Distributed** mode, set the same properties to false in config/connect-distributed.properties
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/doris-kafka-connector.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/doris-kafka-connector.md
index 7eb53a5215..cb602b49a6 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/doris-kafka-connector.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/doris-kafka-connector.md
@@ -52,11 +52,12 @@ Maven dependency
 bootstrap.servers=127.0.0.1:9092
 
 # Modify to the created plugins directory
+# Note: Fill in the actual path to Kafka here. For example: plugin.path=/opt/kafka/plugins
 plugin.path=$KAFKA_HOME/plugins
 ```
 
 Configure doris-connector-sink.properties
-<br />
+
 Create doris-connector-sink.properties in the config directory and configure the following content:
 
 ```properties
@@ -100,6 +101,7 @@ bootstrap.servers=127.0.0.1:9092
 group.id=connect-cluster
 
 # Modify to the created plugins directory
+# Note: Fill in the actual path to Kafka here. For example: plugin.path=/opt/kafka/plugins
 plugin.path=$KAFKA_HOME/plugins
 ```
 
@@ -327,4 +329,22 @@ curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X
     "value.converter.schema.registry.url":"http://127.0.0.1:8081";
   } 
 }'
-```
\ No newline at end of file
+```
+
+## FAQ
+**1. The following error occurs when reading JSON data:**
+```
+Caused by: org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
+       at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:337)
+       at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:91)
+       at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$4(WorkerSinkTask.java:536)
+       at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:180)
+       at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:214)
+```
+**Reason:**
+This happens because the `org.apache.kafka.connect.json.JsonConverter` converter requires records to contain matching "schema" and "payload" fields.
+
+**There are two solutions; choose one:**
+  1. Replace `org.apache.kafka.connect.json.JsonConverter` with `org.apache.kafka.connect.storage.StringConverter`
+  2. If running in **Standalone** mode, set `value.converter.schemas.enable` or `key.converter.schemas.enable` to false in config/connect-standalone.properties; if running in **Distributed** mode, set the same properties to false in config/connect-distributed.properties
\ No newline at end of file
diff --git a/versioned_docs/version-2.0/ecosystem/doris-kafka-connector.md b/versioned_docs/version-2.0/ecosystem/doris-kafka-connector.md
index c3d254f797..0706335d7d 100644
--- a/versioned_docs/version-2.0/ecosystem/doris-kafka-connector.md
+++ b/versioned_docs/version-2.0/ecosystem/doris-kafka-connector.md
@@ -52,11 +52,12 @@ Configure config/connect-standalone.properties
 bootstrap.servers=127.0.0.1:9092
 
 # Modify to the created plugins directory
+# Note: Fill in the actual path to Kafka here. For example: plugin.path=/opt/kafka/plugins
 plugin.path=$KAFKA_HOME/plugins
 ```
 
 Configure doris-connector-sink.properties
-<br />
+
 Create doris-connector-sink.properties in the config directory and configure the following content:
 
 ```properties
@@ -99,6 +100,7 @@ bootstrap.servers=127.0.0.1:9092
 group.id=connect-cluster
 
 # Modify to the created plugins directory
+# Note: Fill in the actual path to Kafka here. For example: plugin.path=/opt/kafka/plugins
 plugin.path=$KAFKA_HOME/plugins
 ```
 
@@ -327,4 +329,22 @@ curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X
     "value.converter.schema.registry.url":"http://127.0.0.1:8081";
   } 
 }'
-```
\ No newline at end of file
+```
+
+## FAQ
+**1. The following error occurs when reading JSON data:**
+```
+Caused by: org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
+       at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:337)
+       at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:91)
+       at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$4(WorkerSinkTask.java:536)
+       at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:180)
+       at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:214)
+```
+**Reason:**
+This happens because the `org.apache.kafka.connect.json.JsonConverter` converter requires records to contain matching "schema" and "payload" fields.
+
+**There are two solutions; choose one:**
+  1. Replace `org.apache.kafka.connect.json.JsonConverter` with `org.apache.kafka.connect.storage.StringConverter`
+  2. If running in **Standalone** mode, set `value.converter.schemas.enable` or `key.converter.schemas.enable` to false in config/connect-standalone.properties; if running in **Distributed** mode, set the same properties to false in config/connect-distributed.properties
\ No newline at end of file
diff --git a/versioned_docs/version-2.1/ecosystem/doris-kafka-connector.md b/versioned_docs/version-2.1/ecosystem/doris-kafka-connector.md
index c3d254f797..0706335d7d 100644
--- a/versioned_docs/version-2.1/ecosystem/doris-kafka-connector.md
+++ b/versioned_docs/version-2.1/ecosystem/doris-kafka-connector.md
@@ -52,11 +52,12 @@ Configure config/connect-standalone.properties
 bootstrap.servers=127.0.0.1:9092
 
 # Modify to the created plugins directory
+# Note: Fill in the actual path to Kafka here. For example: plugin.path=/opt/kafka/plugins
 plugin.path=$KAFKA_HOME/plugins
 ```
 
 Configure doris-connector-sink.properties
-<br />
+
 Create doris-connector-sink.properties in the config directory and configure the following content:
 
 ```properties
@@ -99,6 +100,7 @@ bootstrap.servers=127.0.0.1:9092
 group.id=connect-cluster
 
 # Modify to the created plugins directory
+# Note: Fill in the actual path to Kafka here. For example: plugin.path=/opt/kafka/plugins
 plugin.path=$KAFKA_HOME/plugins
 ```
 
@@ -327,4 +329,22 @@ curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X
     "value.converter.schema.registry.url":"http://127.0.0.1:8081";
   } 
 }'
-```
\ No newline at end of file
+```
+
+## FAQ
+**1. The following error occurs when reading JSON data:**
+```
+Caused by: org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.
+       at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:337)
+       at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:91)
+       at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$4(WorkerSinkTask.java:536)
+       at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:180)
+       at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:214)
+```
+**Reason:**
+This happens because the `org.apache.kafka.connect.json.JsonConverter` converter requires records to contain matching "schema" and "payload" fields.
+
+**There are two solutions; choose one:**
+  1. Replace `org.apache.kafka.connect.json.JsonConverter` with `org.apache.kafka.connect.storage.StringConverter`
+  2. If running in **Standalone** mode, set `value.converter.schemas.enable` or `key.converter.schemas.enable` to false in config/connect-standalone.properties; if running in **Distributed** mode, set the same properties to false in config/connect-distributed.properties
\ No newline at end of file
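
After restarting Connect with either fix, the standard Kafka Connect REST API can confirm that the connector and its task are RUNNING; the name doris-json-test below is taken from the earlier example and should be replaced with your own connector name:

```
# Check connector and task state via the Connect REST API (port 8083 as configured above)
curl -s http://127.0.0.1:8083/connectors/doris-json-test/status
```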

