This is an automated email from the ASF dual-hosted git repository.

jiafengzheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 99b376a60d9 fix doc link
99b376a60d9 is described below

commit 99b376a60d9f615be90453c36073fe850c893415
Author: jiafeng.zhang <zhang...@gmail.com>
AuthorDate: Sat Oct 8 11:35:16 2022 +0800

    fix doc link
---
 docs/ecosystem/logstash.md                         |  2 +-
 docs/ecosystem/udf/contribute-udf.md               |  2 +-
 docs/faq/install-faq.md                            |  5 +----
 docs/get-starting/get-starting.md                  |  2 +-
 .../Alter/ALTER-TABLE-PARTITION.md                 |  2 +-
 .../Alter/ALTER-TABLE-ROLLUP.md                    |  4 ++--
 .../Create/CREATE-MATERIALIZED-VIEW.md             |  2 +-
 .../Drop/DROP-DATABASE.md                          |  2 +-
 .../Load/BROKER-LOAD.md                            | 22 +++++++++++-----------
 .../Load/CREATE-SYNC-JOB.md                        |  2 +-
 .../sql-reference/Show-Statements/SHOW-STATUS.md   |  2 +-
 .../Show-Statements/SHOW-STREAM-LOAD.md            |  2 +-
 12 files changed, 23 insertions(+), 26 deletions(-)

diff --git a/docs/ecosystem/logstash.md b/docs/ecosystem/logstash.md
index d5dfe17a41c..dbf7901625e 100644
--- a/docs/ecosystem/logstash.md
+++ b/docs/ecosystem/logstash.md
@@ -28,7 +28,7 @@ under the License.
 
 This plugin is used to output data to Doris for logstash, use the HTTP protocol to interact with the Doris FE Http interface, and import data through Doris's stream load.
 
-[Learn more about Doris Stream Load ](../data-operate/import/import-way/stream-load-manual.html)
+[Learn more about Doris Stream Load ](../../data-operate/import/import-way/stream-load-manual)
 
 [Learn more about Doris](../)
 
diff --git a/docs/ecosystem/udf/contribute-udf.md b/docs/ecosystem/udf/contribute-udf.md
index 075bd41a5fe..1db7a8b9642 100644
--- a/docs/ecosystem/udf/contribute-udf.md
+++ b/docs/ecosystem/udf/contribute-udf.md
@@ -121,4 +121,4 @@ The user manual needs to include: UDF function definition description, applicabl
 
 When you meet the conditions and prepare the code, you can contribute UDF to the Doris community after the document. Simply submit the request (PR) on [Github](https://github.com/apache/incubator-doris). See the specific submission method: [Pull Request (PR)](https://help.github.com/articles/about-pull-requests/).
 
-Finally, when the PR assessment is passed and merged. Congratulations, your UDF becomes a third-party UDF supported by Doris. You can check it out in the ecological expansion section of [Doris official website](/en)~.
+Finally, when the PR assessment is passed and merged. Congratulations, your UDF becomes a third-party UDF supported by Doris. You can check it out in the ecological expansion section of [Doris official website](/)~.
diff --git a/docs/faq/install-faq.md b/docs/faq/install-faq.md
index dd3fb581ef6..d877c6adc0e 100644
--- a/docs/faq/install-faq.md
+++ b/docs/faq/install-faq.md
@@ -83,7 +83,7 @@ Here we provide 3 ways to solve this problem:
 
 3. Manually migrate data using the API
 
-   Doris provides [HTTP API](../admin-manual/http-actions/tablet-migration-action.md), which can manually specify the migration of data shards on one disk to another disk.
+   Doris provides [HTTP API](../admin-manual/http-actions/tablet-migration-action), which can manually specify the migration of data shards on one disk to another disk.
 
 ### Q5. How to read FE/BE logs correctly?
 
@@ -155,9 +155,6 @@ In many cases, we need to troubleshoot problems through logs. The format and vie
 
       Logs starting with F are Fatal logs. For example, F0916 , indicating the Fatal log on September 16th. Fatal logs usually indicate a program assertion error, and an assertion error will directly cause the process to exit (indicating a bug in the program). Welcome to the WeChat group, github discussion or dev mail group for help.
 
-   4. Minidump(removed)
-
-      Mindump is a function added after Doris version 0.15. For details, please refer to [document](https://doris.apache.org/zh-CN/developer-guide/minidump.html).
 
 2. FE
 
diff --git a/docs/get-starting/get-starting.md b/docs/get-starting/get-starting.md
index 221f7b88fbe..1ae61dbea67 100644
--- a/docs/get-starting/get-starting.md
+++ b/docs/get-starting/get-starting.md
@@ -122,7 +122,7 @@ mysql -uroot -P9030 -h127.0.0.1
 
 >Note: 
 >
->1. The root user used here is the default user built into doris, and is also the super administrator user, see [Rights Management](...) /admin-manual/privilege-ldap/user-privilege)
+>1. The root user used here is the default user built into doris, and is also the super administrator user, see [Rights Management](../admin-manual/privilege-ldap/user-privilege)
 >2. -P: Here is our query port to connect to Doris, the default port is 9030, which corresponds to `query_port` in fe.conf
 >3. -h: Here is the IP address of the FE we are connecting to, if your client and FE are installed on the same node you can use 127.0.0.1, this is also provided by Doris if you forget the root password, you can connect directly to the login without the password in this way and reset the root password
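
A minimal sketch of the password reset mentioned in note 3, assuming a default deployment reachable on 127.0.0.1 (the new password value is a placeholder):

```sql
-- connect first without a password: mysql -uroot -P9030 -h127.0.0.1
-- then set a new root password
SET PASSWORD FOR 'root' = PASSWORD('MyNewPassword1!');
```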
 
diff --git a/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md b/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
index 8538352a062..69b81b5c18a 100644
--- a/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
+++ b/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
@@ -62,7 +62,7 @@ Notice:
 - The partition is left closed and right open. If the user only specifies the right boundary, the system will automatically determine the left boundary
 - If the bucketing method is not specified, the bucketing method and bucket number used for creating the table would be automatically used
 - If the bucketing method is specified, only the number of buckets can be modified, not the bucketing method or the bucketing column. If the bucketing method is specified but the number of buckets not be specified, the default value `10` will be used for bucket number instead of the number specified when the table is created. If the number of buckets modified, the bucketing method needs to be specified simultaneously.
-- The ["key"="value"] section can set some attributes of the partition, see [CREATE TABLE](./sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md)
+- The ["key"="value"] section can set some attributes of the partition, see [CREATE TABLE](../Create/CREATE-TABLE)
 - If the user does not explicitly create a partition when creating a table, adding a partition by ALTER is not supported
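
A minimal sketch of the bucketing rules above (database, table, partition, and column names are hypothetical):

```sql
-- add a partition by giving only the right bound; override the bucket
-- count while keeping the hash column used when the table was created
ALTER TABLE example_db.example_tbl
ADD PARTITION p20221008 VALUES LESS THAN ("2022-10-08")
DISTRIBUTED BY HASH(k1) BUCKETS 20;
```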
 
 2. Delete the partition
diff --git a/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md b/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
index 1cbe887736d..4d56d7d12ec 100644
--- a/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
+++ b/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-ROLLUP.md
@@ -32,7 +32,7 @@ ALTER TABLE ROLLUP
 
 ### Description
 
-This statement is used to perform a rollup modification operation on an existing table. The rollup is an asynchronous operation, and the task is returned when the task is submitted successfully. After that, you can use the [SHOW ALTER](../../Show-Statements/SHOW-ALTER.md) command to view the progress.
+This statement is used to perform a rollup modification operation on an existing table. The rollup is an asynchronous operation, and the task is returned when the task is submitted successfully. After that, you can use the [SHOW ALTER](../../Show-Statements/SHOW-ALTER) command to view the progress.
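
A minimal sketch of that submit-then-poll flow (database, table, and column names are hypothetical):

```sql
-- submit the asynchronous rollup job
ALTER TABLE example_db.example_tbl ADD ROLLUP r1 (k1, k2, v1);
-- poll until the job state shows FINISHED
SHOW ALTER TABLE ROLLUP FROM example_db;
```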
 
 grammar:
 
@@ -68,7 +68,7 @@ Notice:
 
 - If from_index_name is not specified, it will be created from base index by default
 - Columns in rollup table must be columns already in from_index
-- In properties, the storage format can be specified. For details, see [CREATE TABLE](../Create/CREATE-TABLE.html#create-table)
+- In properties, the storage format can be specified. For details, see [CREATE TABLE](../Create/CREATE-TABLE)
 
 3. Delete rollup index
 
diff --git a/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md b/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md
index 2c2a6036daf..8ad419fcc72 100644
--- a/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md
+++ b/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md
@@ -34,7 +34,7 @@ CREATE MATERIALIZED VIEW
 
 This statement is used to create a materialized view.
 
-This operation is an asynchronous operation. After the submission is successful, you need to view the job progress through [SHOW ALTER TABLE MATERIALIZED VIEW](../../Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW.md). After displaying FINISHED, you can use the `desc [table_name] all` command to view the schema of the materialized view.
+This operation is an asynchronous operation. After the submission is successful, you need to view the job progress through [SHOW ALTER TABLE MATERIALIZED VIEW](../../Show-Statements/SHOW-ALTER-TABLE-MATERIALIZED-VIEW). After displaying FINISHED, you can use the `desc [table_name] all` command to view the schema of the materialized view.
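
A minimal sketch of that workflow, assuming a hypothetical sales table in example_db:

```sql
-- submit the asynchronous materialized view job
CREATE MATERIALIZED VIEW mv_store_sum AS
SELECT store_id, SUM(sale_amt) FROM sales GROUP BY store_id;

-- poll until the job state shows FINISHED
SHOW ALTER TABLE MATERIALIZED VIEW FROM example_db;

-- then inspect the schema, materialized view included
DESC sales ALL;
```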
 
 grammar:
 
diff --git a/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md b/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md
index e0877d0b831..05fdaed73ae 100644
--- a/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md
+++ b/docs/sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-DATABASE.md
@@ -41,7 +41,7 @@ DROP DATABASE [IF EXISTS] db_name [FORCE];
 
 illustrate:
 
-- During the execution of DROP DATABASE, the deleted database can be recovered through the RECOVER statement. See the [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER.md) statement for details
+- During the execution of DROP DATABASE, the deleted database can be recovered through the RECOVER statement. See the [RECOVER](../../Data-Definition-Statements/Backup-and-Restore/RECOVER) statement for details
 - If you execute DROP DATABASE FORCE, the system will not check the database for unfinished transactions, the database will be deleted directly and cannot be recovered, this operation is generally not recommended
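
A minimal sketch of the drop-then-recover path (database name is hypothetical; adding FORCE would make the drop unrecoverable):

```sql
DROP DATABASE IF EXISTS example_db;
RECOVER DATABASE example_db;
```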
 
 ### Example
diff --git a/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md b/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
index b8e967c498f..881ff65a56e 100644
--- a/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
+++ b/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md
@@ -100,7 +100,7 @@ WITH BROKER broker_name
 
   - `column list`
 
-    Used to specify the column order in the original file. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](..../../../data-operate/import/import-scenes/load-data-convert.md) document.
+    Used to specify the column order in the original file. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../../data-operate/import/import-scenes/load-data-convert) document.
 
     `(k1, k2, tmpk1)`
 
@@ -110,7 +110,7 @@ WITH BROKER broker_name
 
   - `PRECEDING FILTER predicate`
 
-    Pre-filter conditions. The data is first concatenated into raw data rows in order according to `column list` and `COLUMNS FROM PATH AS`. Then filter according to the pre-filter conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.md) document.
+    Pre-filter conditions. The data is first concatenated into raw data rows in order according to `column list` and `COLUMNS FROM PATH AS`. Then filter according to the pre-filter conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../../data-operate/import/import-scenes/load-data-convert) document.
 
   - `SET (column_mapping)`
 
@@ -118,7 +118,7 @@ WITH BROKER broker_name
 
   - `WHERE predicate`
 
-    Filter imported data based on conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.md) document.
+    Filter imported data based on conditions. For a detailed introduction to this part, please refer to the [Column Mapping, Conversion and Filtering](../../../../data-operate/import/import-scenes/load-data-convert) document.
 
   - `DELETE ON expr`
 
@@ -134,7 +134,7 @@ WITH BROKER broker_name
 
 - `broker_properties`
 
-  Specifies the information required by the broker. This information is usually used by the broker to be able to access remote storage systems. Such as BOS or HDFS. See the [Broker](../../../advanced/broker.md) documentation for specific information.
+  Specifies the information required by the broker. This information is usually used by the broker to be able to access remote storage systems. Such as BOS or HDFS. See the [Broker](../../../../advanced/broker) documentation for specific information.
 
   ````text
   (
@@ -166,7 +166,7 @@ WITH BROKER broker_name
 
   - `timezone`
 
-    Specify the time zone for some functions that are affected by time zones, such as `strftime/alignment_timestamp/from_unixtime`, etc. Please refer to the [timezone](../../advanced/time-zone.md) documentation for details. If not specified, the "Asia/Shanghai" timezone is used
+    Specify the time zone for some functions that are affected by time zones, such as `strftime/alignment_timestamp/from_unixtime`, etc. Please refer to the [timezone](../../../../advanced/time-zone) documentation for details. If not specified, the "Asia/Shanghai" timezone is used
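
Pulling these clauses together, a minimal sketch of a broker load that uses the column list, PRECEDING FILTER, SET, and WHERE described above (path, table, broker name, and credentials are hypothetical):

```sql
LOAD LABEL example_db.label_20221008
(
    DATA INFILE("hdfs://host:port/input/file.txt")
    INTO TABLE example_tbl
    COLUMNS TERMINATED BY ","
    (k1, k2, tmpk1)
    PRECEDING FILTER k1 > 0
    SET (k3 = tmpk1 + 1)
    WHERE k2 != "abc"
)
WITH BROKER broker_name
(
    "username" = "user",
    "password" = "pass"
)
PROPERTIES
(
    "timeout" = "3600",
    "timezone" = "Asia/Shanghai"
);
```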
 
 ### Example
 
@@ -400,29 +400,29 @@ WITH BROKER broker_name
 
 1. Check the import task status
 
-   Broker Load is an asynchronous import process. The successful execution of the statement only means that the import task is submitted successfully, and does not mean that the data import is successful. The import status needs to be viewed through the [SHOW LOAD](../../Show-Statements/SHOW-LOAD.md) command.
+   Broker Load is an asynchronous import process. The successful execution of the statement only means that the import task is submitted successfully, and does not mean that the data import is successful. The import status needs to be viewed through the [SHOW LOAD](../../Show-Statements/SHOW-LOAD) command.
 
 2. Cancel the import task
 
-   Import tasks that have been submitted but not yet completed can be canceled by the [CANCEL LOAD](./CANCEL-LOAD.md) command. After cancellation, the written data will also be rolled back and will not take effect.
+   Import tasks that have been submitted but not yet completed can be canceled by the [CANCEL LOAD](./CANCEL-LOAD) command. After cancellation, the written data will also be rolled back and will not take effect.
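
A minimal sketch of checking and, if necessary, canceling the job (label and database are hypothetical):

```sql
-- check the state of a submitted job
SHOW LOAD FROM example_db WHERE LABEL = "label_20221008";
-- cancel it while it is still pending or loading
CANCEL LOAD FROM example_db WHERE LABEL = "label_20221008";
```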
 
 3. Label, import transaction, multi-table atomicity
 
-   All import tasks in Doris are atomic. And the import of multiple tables in the same import task can also guarantee atomicity. At the same time, Doris can also use the Label mechanism to ensure that the data imported is not lost or heavy. For details, see the [Import Transactions and Atomicity](../../../data-operate/import/import-scenes/load-atomicity.md) documentation.
+   All import tasks in Doris are atomic. And the import of multiple tables in the same import task can also guarantee atomicity. At the same time, Doris can also use the Label mechanism to ensure that the data imported is not lost or heavy. For details, see the [Import Transactions and Atomicity](../../../../data-operate/import/import-scenes/load-atomicity) documentation.
 
 4. Column mapping, derived columns and filtering
 
-   Doris can support very rich column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For how to use this function correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.md) document.
+   Doris can support very rich column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For how to use this function correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../../data-operate/import/import-scenes/load-data-convert) document.
 
 5. Error data filtering
 
    Doris' import tasks can tolerate a portion of malformed data. Tolerated via `max_filter_ratio` setting. The default is 0, which means that the entire import task will fail when there is an error data. If the user wants to ignore some problematic data rows, the secondary parameter can be set to a value between 0 and 1, and Doris will automatically skip the rows with incorrect data format.
 
-   For some calculation methods of the tolerance rate, please refer to the [Column Mapping, Conversion and Filtering](../../../data-operate/import/import-scenes/load-data-convert.md) document.
+   For some calculation methods of the tolerance rate, please refer to the [Column Mapping, Conversion and Filtering](../../../../data-operate/import/import-scenes/load-data-convert) document.
 
 6. Strict Mode
 
-   The `strict_mode` attribute is used to set whether the import task runs in strict mode. The format affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [strict mode](../../../data-operate/import/import-scenes/load-strict-mode.md) documentation.
+   The `strict_mode` attribute is used to set whether the import task runs in strict mode. The format affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [strict mode](../../../../data-operate/import/import-scenes/load-strict-mode) documentation.
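
A minimal sketch combining these two knobs, tolerating up to 10% filtered rows while running in strict mode (all names are placeholders):

```sql
LOAD LABEL example_db.label_strict
(
    DATA INFILE("hdfs://host:port/input/file.txt")
    INTO TABLE example_tbl
)
WITH BROKER broker_name ("username" = "user", "password" = "pass")
PROPERTIES
(
    "strict_mode" = "true",
    "max_filter_ratio" = "0.1"
);
```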
 
 7. Timeout
 
diff --git a/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md b/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
index 633c4509f07..d694dde49fc 100644
--- a/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
+++ b/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/CREATE-SYNC-JOB.md
@@ -36,7 +36,7 @@ The data synchronization (Sync Job) function supports users to submit a resident
 
 Currently, the data synchronization job only supports connecting to Canal, obtaining the parsed Binlog data from the Canal Server and importing it into Doris.
 
-Users can view the data synchronization job status through [SHOW SYNC JOB](../../../sql-manual/sql-reference/Show-Statements/SHOW-SYNC-JOB.md).
+Users can view the data synchronization job status through [SHOW SYNC JOB](../../sql-manual/sql-reference/Show-Statements/SHOW-SYNC-JOB).
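
A minimal sketch of a Canal-backed sync job and the status check (server address, destination, credentials, and table names are hypothetical):

```sql
CREATE SYNC example_db.job1
(
    FROM mysql_db.src_tbl INTO dest_tbl
)
FROM BINLOG
(
    "type" = "canal",
    "canal.server.ip" = "127.0.0.1",
    "canal.server.port" = "11111",
    "canal.destination" = "example",
    "canal.username" = "canal",
    "canal.password" = "canal"
);

SHOW SYNC JOB FROM example_db;
```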
 
 grammar:
 
diff --git a/docs/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md b/docs/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md
index 7e5324f5fe8..3cdbbb51b8e 100644
--- a/docs/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md
+++ b/docs/sql-manual/sql-reference/Show-Statements/SHOW-STATUS.md
@@ -32,7 +32,7 @@ SHOW ALTER TABLE MATERIALIZED VIEW
 
 ### Description
 
-This command is used to view the execution of the Create Materialized View job submitted through the [CREATE-MATERIALIZED-VIEW](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW.md) statement.
+This command is used to view the execution of the Create Materialized View job submitted through the [CREATE-MATERIALIZED-VIEW](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-MATERIALIZED-VIEW) statement.
 
 > This statement is equivalent to `SHOW ALTER TABLE ROLLUP`;
 
diff --git a/docs/sql-manual/sql-reference/Show-Statements/SHOW-STREAM-LOAD.md b/docs/sql-manual/sql-reference/Show-Statements/SHOW-STREAM-LOAD.md
index bb74e405d17..69f37d6a2af 100644
--- a/docs/sql-manual/sql-reference/Show-Statements/SHOW-STREAM-LOAD.md
+++ b/docs/sql-manual/sql-reference/Show-Statements/SHOW-STREAM-LOAD.md
@@ -50,7 +50,7 @@ SHOW STREAM LOAD
 
 illustrate:
 
-1. By default, BE does not record Stream Load records. If you want to view records that need to be enabled on BE, the configuration parameter is: `enable_stream_load_record=true`. For details, please refer to [BE Configuration Items](https://doris.apache. org/zh-CN/docs/admin-manual/config/be-config)
+1. By default, BE does not record Stream Load records. If you want to view records that need to be enabled on BE, the configuration parameter is: `enable_stream_load_record=true`. For details, please refer to [BE Configuration Items](../../../../admin-manual/config/be-config)
 1. If db_name is not specified, the current default db is used
 2. If LABEL LIKE is used, it will match the tasks whose label of the Stream Load task contains label_matcher
 3. If LABEL = is used, it will match the specified label exactly
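
A minimal sketch of the lookups described above, assuming `enable_stream_load_record=true` is already set in be.conf (label values are hypothetical):

```sql
-- exact label match in a specific database
SHOW STREAM LOAD FROM example_db WHERE LABEL = "label_20221008";
-- fuzzy match on the label
SHOW STREAM LOAD FROM example_db WHERE LABEL LIKE "label_2022";
```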

