This is an automated email from the ASF dual-hosted git repository.

dataroaring pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 43728b3ac80 [opt] improvement docs for tiered storage (#1567)
43728b3ac80 is described below

commit 43728b3ac80b0360036437ed65a47584d5f5b6f4
Author: Yongqiang YANG <yangyongqi...@selectdb.com>
AuthorDate: Tue Dec 24 17:18:45 2024 +0800

    [opt] improvement docs for tiered storage (#1567)
    
    ## Versions
    
    - [ ] dev
    - [ ] 3.0
    - [ ] 2.1
    - [ ] 2.0
    
    ## Languages
    
    - [ ] Chinese
    - [ ] English
    
    ## Docs Checklist
    
    - [ ] Checked by AI
    - [ ] Test Cases Built
    
    ---------
    
    Co-authored-by: Yongqiang YANG <yangyogqi...@selectdb.com>
---
 .../tiered-storage/diff-disk-medium-migration.md   | 107 ++++++++++----
 docs/table-design/tiered-storage/overview.md       |  35 +++++
 docs/table-design/tiered-storage/remote-storage.md | 156 +++++++++++----------
 .../docusaurus-plugin-content-docs/current.json    |   4 +
 .../tiered-storage/diff-disk-medium-migration.md   |  86 ++++++++++--
 .../table-design/tiered-storage/overview.md        |  35 +++++
 .../table-design/tiered-storage/remote-storage.md  | 123 ++++++++--------
 .../version-2.1.json                               |   4 +
 .../version-3.0.json                               |   4 +
 sidebars.json                                      |   1 +
 10 files changed, 381 insertions(+), 174 deletions(-)

diff --git a/docs/table-design/tiered-storage/diff-disk-medium-migration.md 
b/docs/table-design/tiered-storage/diff-disk-medium-migration.md
index ed24404e6d8..64f2ed1b896 100644
--- a/docs/table-design/tiered-storage/diff-disk-medium-migration.md
+++ b/docs/table-design/tiered-storage/diff-disk-medium-migration.md
@@ -1,7 +1,7 @@
 ---
 {
-"title": "SSD and HDD tiered storage",
-"language": "en"
+    "title": "Tiered Storage of SSD and HDD",
+    "language": "en-US"
 }
 ---
 
@@ -24,39 +24,94 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-You can set parameters for dynamic partitions across different disk types, 
facilitating data migration from SSDs to HDDs based on the parameters. This 
strategy improves read and write performance in Doris while lowering costs.
+Doris supports tiered storage across different disk types (SSD and HDD). Combined 
with dynamic partitioning, it migrates data from SSD to HDD according to the hot and 
cold characteristics of the data, reducing storage costs while keeping reads and 
writes on hot data fast.
 
-By configuring `dynamic_partition.hot_partition_num` and 
`dynamic_partition.storage_medium`, you can use SSD and HDD tiered storage. For 
specific usage, please refer to [Data Partitioning - Dynamic 
Partitioning](../../table-design/data-partitioning/dynamic-partitioning).
+## Dynamic Partitioning and Tiered Storage
 
-*`dynamic_partition.hot_partition_num`*
+By configuring a table's dynamic partitioning parameters, users can control which 
partitions are stored on SSD and have them automatically migrate to HDD after they 
cool down.
 
-:::tip
+- **Hot Partitions**: Recently active partitions, prioritized to be stored on 
SSD to ensure high performance.
+- **Cold Partitions**: Partitions that are accessed less frequently, which 
will gradually migrate to HDD to reduce storage costs.
 
-  If the storage path does not include an SSD disk path, configuring this 
parameter will result in the failure of dynamic partition creation.
+For more information on dynamic partitioning, please refer to: [Data 
Partitioning - Dynamic 
Partitioning](../../table-design/data-partitioning/dynamic-partitioning).
 
-  :::
+## Parameter Description
 
-  `hot_partition_num` indicates that the current partition and the previous 
hot_partition_num - 1 partitions, along with all future partitions, will be 
stored on SSD media.
+### `dynamic_partition.hot_partition_num`
 
-  Let us give an example. Suppose today is 2021-05-20, partition by day, and 
the properties of dynamic partition are set to: hot_partition_num=2, end=3, 
start=-3. Then the system will automatically create the following partitions, 
and set the `storage_medium` and `storage_cooldown_time` properties:
+- **Function**:
+  - Specifies how many of the most recent partitions are hot partitions, which 
are stored on SSD, while the remaining partitions are stored on HDD.
 
-  ```sql
-  p20210517: ["2021-05-17", "2021-05-18") storage_medium=HDD 
storage_cooldown_time=9999-12-31 23:59:59
-  p20210518: ["2021-05-18", "2021-05-19") storage_medium=HDD 
storage_cooldown_time=9999-12-31 23:59:59
-  p20210519: ["2021-05-19", "2021-05-20") storage_medium=SSD 
storage_cooldown_time=2021-05-21 00:00:00
-  p20210520: ["2021-05-20", "2021-05-21") storage_medium=SSD 
storage_cooldown_time=2021-05-22 00:00:00
-  p20210521: ["2021-05-21", "2021-05-22") storage_medium=SSD 
storage_cooldown_time=2021-05-23 00:00:00
-  p20210522: ["2021-05-22", "2021-05-23") storage_medium=SSD 
storage_cooldown_time=2021-05-24 00:00:00
-  p20210523: ["2021-05-23", "2021-05-24") storage_medium=SSD 
storage_cooldown_time=2021-05-25 00:00:00
-  ```
+- **Note**:
+  - `"dynamic_partition.storage_medium" = "HDD"` must be set simultaneously; 
otherwise, this parameter will not take effect.
+  - If there are no SSD devices in the storage path, this configuration will 
cause partition creation to fail.
+
+**Example Description**:
 
-*`dynamic_partition.storage_medium`*
+Assuming the current date is **2021-05-20**, with daily partitioning, the 
dynamic partitioning configuration is as follows:
+```sql
+    "dynamic_partition.time_unit" = "DAY",
+    "dynamic_partition.hot_partition_num" = 2
+    "dynamic_partition.start" = -3
+    "dynamic_partition.end" = 3
+```
 
-  
-:::info Note
-This parameteres is supported since Doris version 1.2.3
-:::
+The system will automatically create the following partitions and configure their 
storage medium and cooldown time:
 
-  Specifies the final storage medium for the newly created dynamic partition. 
HDD is the default, but SSD can be selected.
+  ```Plain
+  p20210517:["2021-05-17", "2021-05-18") storage_medium=HDD 
storage_cooldown_time=9999-12-31 23:59:59
+  p20210518:["2021-05-18", "2021-05-19") storage_medium=HDD 
storage_cooldown_time=9999-12-31 23:59:59
+  p20210519:["2021-05-19", "2021-05-20") storage_medium=SSD 
storage_cooldown_time=2021-05-21 00:00:00
+  p20210520:["2021-05-20", "2021-05-21") storage_medium=SSD 
storage_cooldown_time=2021-05-22 00:00:00
+  p20210521:["2021-05-21", "2021-05-22") storage_medium=SSD 
storage_cooldown_time=2021-05-23 00:00:00
+  p20210522:["2021-05-22", "2021-05-23") storage_medium=SSD 
storage_cooldown_time=2021-05-24 00:00:00
+  p20210523:["2021-05-23", "2021-05-24") storage_medium=SSD 
storage_cooldown_time=2021-05-25 00:00:00
+  ```
 
-  Note that when set to SSD, the `hot_partition_num` property will no longer 
take effect, all partitions will default to SSD storage media and the cooldown 
time will be 9999-12-31 23:59:59.
\ No newline at end of file
+### `dynamic_partition.storage_medium`
+
+- **Function**:
+  - Specifies the final storage medium for dynamic partitions. The default is 
HDD, but SSD can be selected.
+
+- **Note**:
+  - When set to SSD, the `hot_partition_num` attribute no longer takes effect; all 
partitions default to the SSD storage medium with a cooldown time of 
9999-12-31 23:59:59, as illustrated in the sketch below.
+
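+For illustration only, a minimal sketch of an SSD-only dynamic-partition table (the 
table name `ssd_only_table` is hypothetical, and BE storage paths that include SSD 
media are assumed):
+
+```sql
+-- Sketch only: with "dynamic_partition.storage_medium" = "SSD",
+-- hot_partition_num is ignored and every partition is created on SSD
+-- with a cooldown time of 9999-12-31 23:59:59.
+CREATE TABLE ssd_only_table (k DATE)
+PARTITION BY RANGE(k)()
+DISTRIBUTED BY HASH (k) BUCKETS 5
+PROPERTIES
+(
+    "dynamic_partition.enable" = "true",
+    "dynamic_partition.time_unit" = "DAY",
+    "dynamic_partition.prefix" = "p",
+    "dynamic_partition.buckets" = "5",
+    "dynamic_partition.start" = "-3",
+    "dynamic_partition.end" = "3",
+    "dynamic_partition.storage_medium" = "SSD"
+);
+```
+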
+## Example
+
+### 1. Create a table with dynamic_partition
+
+```sql
+    CREATE TABLE tiered_table (k DATE)
+    PARTITION BY RANGE(k)()
+    DISTRIBUTED BY HASH (k) BUCKETS 5
+    PROPERTIES
+    (
+        "dynamic_partition.storage_medium" = "hdd",
+        "dynamic_partition.enable" = "true",
+        "dynamic_partition.time_unit" = "DAY",
+        "dynamic_partition.hot_partition_num" = "2",
+        "dynamic_partition.end" = "3",
+        "dynamic_partition.prefix" = "p",
+        "dynamic_partition.buckets" = "5",
+        "dynamic_partition.create_history_partition"= "true",
+        "dynamic_partition.start" = "-3"
+    );
+```
+
+### 2. Check storage medium of partitions
+
+```sql
+    SHOW PARTITIONS FROM tiered_table;
+```
+
+You should have 7 partitions, 5 of which use SSD as the storage medium, while 
the other 2 use HDD.
+
+```Plain
+  p20210517:["2021-05-17", "2021-05-18") storage_medium=HDD 
storage_cooldown_time=9999-12-31 23:59:59
+  p20210518:["2021-05-18", "2021-05-19") storage_medium=HDD 
storage_cooldown_time=9999-12-31 23:59:59
+  p20210519:["2021-05-19", "2021-05-20") storage_medium=SSD 
storage_cooldown_time=2021-05-21 00:00:00
+  p20210520:["2021-05-20", "2021-05-21") storage_medium=SSD 
storage_cooldown_time=2021-05-22 00:00:00
+  p20210521:["2021-05-21", "2021-05-22") storage_medium=SSD 
storage_cooldown_time=2021-05-23 00:00:00
+  p20210522:["2021-05-22", "2021-05-23") storage_medium=SSD 
storage_cooldown_time=2021-05-24 00:00:00
+  p20210523:["2021-05-23", "2021-05-24") storage_medium=SSD 
storage_cooldown_time=2021-05-25 00:00:00
+```
diff --git a/docs/table-design/tiered-storage/overview.md 
b/docs/table-design/tiered-storage/overview.md
new file mode 100644
index 00000000000..6a7d3af05a3
--- /dev/null
+++ b/docs/table-design/tiered-storage/overview.md
@@ -0,0 +1,35 @@
+---
+{
+    "title": "Tiered Storage",
+    "language": "en-US"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements. See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership. The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied. See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+To help users reduce storage costs, Doris provides flexible options for cold 
data management.
+
+| **Cold Data Options**       | **Applicable Conditions**                      
                                    | **Features**                              
                                                                             |
+|-----------------------------|------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------|
+| **Compute-Storage Separation** | Users have the capability to deploy a 
compute-storage separation setup             | - Data is stored as a single 
replica in object storage<br>- Local caching accelerates hot data access<br>- 
Independent scaling of storage and compute resources significantly reduces 
costs |
+| **Local Tiering**           | In the compute-storage integrated mode, users 
want to further optimize local storage resources | - Supports cooling cold data 
from SSD to HDD<br>- Fully utilizes the tiered characteristics of local storage 
to save high-performance storage costs         |
+| **Remote Tiering**          | In the compute-storage integrated mode, users 
want to reduce costs using affordable object storage or HDFS | - Cold data is 
stored as a single replica in object storage or HDFS<br>- Hot data continues to 
use local storage<br>- Cannot be combined with local tiering for the same table 
|
+
+With the above options, Doris can flexibly adapt to different deployment 
scenarios, achieving a balance between query efficiency and storage cost.
diff --git a/docs/table-design/tiered-storage/remote-storage.md 
b/docs/table-design/tiered-storage/remote-storage.md
index 1380d1dcffc..a25ceef5429 100644
--- a/docs/table-design/tiered-storage/remote-storage.md
+++ b/docs/table-design/tiered-storage/remote-storage.md
@@ -1,7 +1,7 @@
 ---
 {
-"title": "Remote Storage",
-"language": "en"
+    "title": "Remote Storage",
+    "language": "en-US"
 }
 ---
 
@@ -24,17 +24,19 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-### Feature Overview
+## Overview
 
-Remote storage supports placing some data in external storage (such as object 
storage or HDFS), which saves costs without sacrificing functionality.
+Remote storage supports placing cold data in external storage (such as object 
storage or HDFS) to reduce storage costs.
 
 :::warning Note
-Data in remote storage only has one replica. The reliability of the data 
depends on the reliability of the remote storage. You need to ensure that the 
remote storage employs EC (Erasure Coding) or multi-replica technology to 
guarantee data reliability.
+Data in remote storage has only one replica, so its reliability depends on the 
reliability of the remote storage itself. Make sure the remote storage uses erasure 
coding (EC) or multi-replica technology to guarantee data reliability.
 :::
 
-### Usage Guide
+## Usage
 
-Using S3 object storage as an example, start by creating an S3 RESOURCE:
+### Saving Cold Data to S3 Compatible Storage
+
+*Step 1:* Create S3 Resource.
 
 ```sql
 CREATE RESOURCE "remote_s3"
@@ -54,10 +56,12 @@ PROPERTIES
 ```
 
 :::tip
-When creating the S3 RESOURCE, a remote connection check will be performed to 
ensure the resource is created correctly.
+When creating the S3 RESOURCE, Doris verifies connectivity to the remote S3 
endpoint to ensure the RESOURCE is created correctly.
 :::
 
-Next, create a STORAGE POLICY and associate it with the previously created 
RESOURCE:
+*Step 2:* Create STORAGE POLICY.
+
+Create a STORAGE POLICY and associate it with the RESOURCE created above:
 
 ```sql
 CREATE STORAGE POLICY test_policy
@@ -67,7 +71,7 @@ PROPERTIES(
 );
 ```
 
-Finally, specify the STORAGE POLICY when creating a table:
+*Step 3:* Use STORAGE POLICY when creating a table.
 
 ```sql
 CREATE TABLE IF NOT EXISTS create_table_use_created_policy 
@@ -84,11 +88,13 @@ PROPERTIES(
 );
 ```
 
-:::warning
-If the UNIQUE table has `"enable_unique_key_merge_on_write" = "true"`, this 
feature cannot be used.
+:::warning Note
+A UNIQUE table with `"enable_unique_key_merge_on_write" = "true"` cannot use this 
feature.
 :::
 
-Create an HDFS RESOURCE:
+### Saving Cold Data to HDFS
+
+*Step 1:* Create HDFS RESOURCE:
 
 ```sql
 CREATE RESOURCE "remote_hdfs" PROPERTIES (
@@ -102,12 +108,20 @@ CREATE RESOURCE "remote_hdfs" PROPERTIES (
         "dfs.namenode.rpc-address.my_ha.my_namenode2" = "nn2_host:rpc_port",
         "dfs.client.failover.proxy.provider.my_ha" = 
"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
     );
+```
 
+*Step 2:* Create STORAGE POLICY.
+
+```sql
 CREATE STORAGE POLICY test_policy PROPERTIES (
     "storage_resource" = "remote_hdfs",
     "cooldown_ttl" = "300"
-);
+);
+```
+
+*Step 3:* Use STORAGE POLICY to create a table.
 
+```sql
 CREATE TABLE IF NOT EXISTS create_table_use_created_policy (
     k1 BIGINT,
     k2 LARGEINT,
@@ -116,105 +130,101 @@ CREATE TABLE IF NOT EXISTS 
create_table_use_created_policy (
 UNIQUE KEY(k1)
 DISTRIBUTED BY HASH (k1) BUCKETS 3
 PROPERTIES(
-    "enable_unique_key_merge_on_write" = "false",
-    "storage_policy" = "test_policy"
+"enable_unique_key_merge_on_write" = "false",
+"storage_policy" = "test_policy"
 );
 ```
 
-:::warning
-If the UNIQUE table has `"enable_unique_key_merge_on_write" = "true"`, this 
feature cannot be used.
+:::warning Note
+A UNIQUE table with `"enable_unique_key_merge_on_write" = "true"` cannot use this 
feature.
 :::
 
-In addition to creating tables with remote storage, Doris also supports 
setting remote storage for existing tables or partitions.
+### Cooling Existing Tables to Remote Storage
 
-For an existing table, associate a remote storage policy by running:
+In addition to setting remote storage when creating a table, Doris also supports 
setting remote storage for an existing table or PARTITION.
+
+For an existing table, associate the created STORAGE POLICY with the table to 
enable remote storage:
 
 ```sql
 ALTER TABLE create_table_not_have_policy set ("storage_policy" = 
"test_policy");
 ```
 
-For an existing PARTITION, associate a remote storage policy by running:
+For an existing PARTITION, associate the created STORAGE POLICY with the 
PARTITION:
 
 ```sql
 ALTER TABLE create_table_partition MODIFY PARTITION (*) 
SET("storage_policy"="test_policy");
 ```
 
 :::tip
-Note that if you specify different storage policies for the entire table and 
certain partitions, the storage policy of the table will take precedence for 
all partitions. If you need a partition to use a different storage policy, you 
can modify it using the method above for existing partitions.
+Note that if different Storage Policies are specified for the whole Table and for 
some Partitions at table creation time, the Partition-level Storage Policy is 
ignored and all Partitions of the table use the table's Policy. If a Partition needs 
a different Policy, modify it using the method above for associating a Storage 
Policy with an existing Partition.
+
+For more details, please refer to 
[RESOURCE](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE), 
[POLICY](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-POLICY), 
[CREATE TABLE](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE), 
[ALTER TABLE](../../sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN), 
etc.
 :::
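+
+For example, a minimal sketch of giving a single PARTITION its own policy (the 
partition name `p20240101` and the policy name `test_policy_2` are hypothetical 
placeholders):
+
+```sql
+-- Placeholder partition and policy names; adjust them to your own table.
+ALTER TABLE create_table_partition MODIFY PARTITION (p20240101)
+SET ("storage_policy" = "test_policy_2");
+```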
 
-For more details, please refer to the documentation in the **Docs** directory, 
such as 
[RESOURCE](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE),
 
[POLICY](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-POLICY),
 [CREATE 
TABLE](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE),
 and [ALTER 
TABLE](../../sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN),
 which provide detai [...]
+### Configuring Compaction
 
-### Limitations
+-   The BE parameter `cold_data_compaction_thread_num` sets the concurrency for 
remote storage Compaction. The default is 2.
 
-- A single table or partition can only be associated with one storage policy. 
Once associated, the storage policy cannot be dropped until the association is 
removed.
+-   The BE parameter `cold_data_compaction_interval_sec` sets the interval between 
remote storage Compaction runs. The default is 1800 seconds, i.e. half an hour (see 
the sketch below).
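+
+As a sketch, both parameters can be adjusted in `be.conf`; the values below are 
illustrative only, not recommendations:
+
+```Plain
+cold_data_compaction_thread_num = 4
+cold_data_compaction_interval_sec = 3600
+```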
 
-- The storage path information associated with a storage policy (e.g., bucket, 
endpoint, root_path) cannot be modified after the policy is created.
+## Limitations
 
-- Storage policies support creation, modification, and deletion. However, 
before deleting a policy, you need to ensure that no tables are referencing 
this storage policy.
+-   Tables using remote storage do not support backup.
 
-- The Unique model with Merge-on-write enabled may face restrictions... 
+-   Modifying the location information of remote storage, such as endpoint, 
bucket, and path, is not supported.
 
-## Viewing Remote Storage Usage
+-   Unique model tables do not support setting remote storage when the 
Merge-on-Write feature is enabled.
 
-Method 1: You can view the size uploaded to the object storage by each BE by 
using `show proc '/backends'`, specifically the `RemoteUsedCapacity` item. Note 
that this method may have some delay.
+## Cold Data Space
 
-Method 2: You can view the object size used by each tablet of a table by using 
`show tablets from tableName`, specifically the `RemoteDataSize` item.
+### Viewing
 
-## Remote Storage Cache
+Method 1: Use `show proc '/backends'` to view the size each BE has uploaded to 
remote storage, shown in the `RemoteUsedCapacity` item. This method has a slight 
delay.
 
-To optimize query performance and save object storage resources, the concept 
of cache is introduced. When querying data from remote storage for the first 
time, Doris will load the data from remote storage to the BE's local disk as a 
cache. The cache has the following characteristics:
+Method 2: Use `show tablets from tableName` to view the remote storage size 
occupied by each tablet of the table, shown in the `RemoteDataSize` item.
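+
+As a quick sketch of both checks (`tableName` is a placeholder):
+
+```sql
+show proc '/backends';         -- check the RemoteUsedCapacity item
+show tablets from tableName;   -- check the RemoteDataSize item
+```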
 
-- The cache is stored on the BE's disk and does not occupy memory space.
-- The cache can be limited in size, with data cleanup performed using an LRU 
(Least Recently Used) policy.
-- The implementation of the cache is the same as the federated query catalog 
cache. For more information, refer to the 
[documentation](../../lakehouse/filecache).
+### Garbage Collection
 
-## Remote Storage Compaction
+The following situations may generate garbage data on remote storage:
 
-The data in remote storage is considered to be "ingested" at the moment the 
rowset file is written to the local disk, plus the cooldown time. Since data is 
not written and cooled all at once, to avoid the small file problem in object 
storage, Doris will perform compaction on remote storage data. However, the 
frequency and priority of remote storage compaction are not very high. It is 
recommended to perform compaction on local hot data before executing cooldown. 
The following BE parameter [...]
+1.  Rowset upload fails but some segments are successfully uploaded.
 
-- The BE parameter `cold_data_compaction_thread_num` sets the concurrency for 
performing compaction on remote storage. The default value is 2.
-- The BE parameter `cold_data_compaction_interval_sec` sets the time interval 
for executing remote storage compaction. The default value is 1800 seconds (30 
minutes).
+2.  The uploaded rowset did not reach agreement across multiple replicas.
 
-## Remote Storage Schema Change
+3.  Rowsets that participated in a compaction, after the compaction has completed.
 
-Remote storage schema changes are supported. These include:
+Garbage data is not cleaned up immediately. The BE parameter 
`remove_unused_remote_files_interval_sec` sets the interval for remote storage 
garbage collection. The default is 21600 seconds, i.e. 6 hours.
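+
+A sketch of adjusting this interval in `be.conf` (the value is illustrative only):
+
+```Plain
+remove_unused_remote_files_interval_sec = 10800
+```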
 
-- Adding or removing columns
-- Modifying column types
-- Adjusting column order
-- Adding or modifying indexes
+## Query and Performance Optimization
 
-## Remote Storage Garbage Collection
+To optimize query performance and save object storage resources, a local Cache is 
introduced. The first time data in remote storage is queried, Doris loads it from 
remote storage onto the BE's local disk as a cache. The Cache has the following 
characteristics:
 
-Remote storage garbage data refers to data that is not being used by any 
replica. Garbage data may occur on object storage in the following cases:
+-   The Cache is stored on the BE's local disk and does not occupy memory space.
 
-1. Rowsets upload fails but some segments are successfully uploaded.
-2. The FE re-selects a CooldownReplica, causing an inconsistency between the 
rowset versions of the old and new CooldownReplica. FollowerReplicas 
synchronize the CooldownMeta of the new CooldownReplica, and the rowsets with 
version mismatches in the old CooldownReplica become garbage data.
-3. After a remote storage compaction, the rowsets before merging cannot be 
immediately deleted because they may still be used by other replicas. 
Eventually, once all FollowerReplicas use the latest merged rowset, the 
pre-merge rowsets become garbage data.
+-   The Cache is managed through LRU and does not support TTL.
 
-Additionally, garbage data on objects will not be cleaned up immediately. The 
BE parameter `remove_unused_remote_files_interval_sec` sets the time interval 
for remote storage garbage collection, with a default value of 21600 seconds (6 
hours).
+For specific configuration, please refer to the 
[documentation](../../lakehouse/filecache).
 
 ## Common Issues
 
-1. `ERROR 1105 (HY000): errCode = 2, detailMessage = Failed to create 
repository: connect to s3 failed: Unable to marshall request to JSON: host must 
not be null.`
-
-   The S3 SDK uses the virtual-hosted style access method by default. However, 
some object storage systems (such as MinIO) may not have virtual-hosted style 
access enabled or supported. In this case, you can add the `use_path_style` 
parameter to force path-style access:
-
-   ```sql
-   CREATE RESOURCE "remote_s3"
-   PROPERTIES
-   (
-       "type" = "s3",
-       "s3.endpoint" = "bj.s3.com",
-       "s3.region" = "bj",
-       "s3.bucket" = "test-bucket",
-       "s3.root.path" = "path/to/root",
-       "s3.access_key" = "bbb",
-       "s3.secret_key" = "aaaa",
-       "s3.connection.maximum" = "50",
-       "s3.connection.request.timeout" = "3000",
-       "s3.connection.timeout" = "1000",
-       "use_path_style" = "true"
-   );
-   ```
\ No newline at end of file
+1.  `ERROR 1105 (HY000): errCode = 2, detailMessage = Failed to create 
repository: connect to s3 failed: Unable to marshall request to JSON: host must 
not be null.`
+
+The S3 SDK uses the virtual-hosted style access method by default. However, some 
object storage systems (such as MinIO) may not have virtual-hosted style access 
enabled or supported. In this case, add the `use_path_style` parameter to force 
path-style access:
+
+```sql
+CREATE RESOURCE "remote_s3"
+PROPERTIES
+(
+    "type" = "s3",
+    "s3.endpoint" = "bj.s3.com",
+    "s3.region" = "bj",
+    "s3.bucket" = "test-bucket",
+    "s3.root.path" = "path/to/root",
+    "s3.access_key" = "bbb",
+    "s3.secret_key" = "aaaa",
+    "s3.connection.maximum" = "50",
+    "s3.connection.request.timeout" = "3000",
+    "s3.connection.timeout" = "1000",
+    "use_path_style" = "true"
+);
+```
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current.json 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current.json
index d2503b3ac6b..7ea628614d7 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current.json
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current.json
@@ -494,5 +494,9 @@
   "sidebar.docs.category.Cross Cluster Replication": {
     "message": "跨集群复制",
     "description": "The label for category Cross Cluster Replication in 
sidebar docs"
+  },
+  "sidebar.docs.category.Tiered Storage": {
+    "message": "分层存储",
+    "description": "The label for category Tiered Storage in sidebar docs"
   }
 }
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/table-design/tiered-storage/diff-disk-medium-migration.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/table-design/tiered-storage/diff-disk-medium-migration.md
index 0b9472e2ebf..35106238adb 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/table-design/tiered-storage/diff-disk-medium-migration.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/table-design/tiered-storage/diff-disk-medium-migration.md
@@ -24,23 +24,39 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-可以配置动态分区参数,在不同的磁盘类型上创建动态分区,Doris 
会根据配置参数将冷数据从SSD迁移到HDD。这样的做法在降低成本的同时,也提升了Doris的读写性能。
+Doris 支持在不同磁盘类型(SSD 和 HDD)之间进行分层存储,结合动态分区功能,根据冷热数据的特性将数据从 SSD 动态迁移到 
HDD。这种方式既降低了存储成本,又在热数据的读写上保持了高性能。
 
-动态分区参数可以参考[数据划分-动态分区](../../table-design/data-partitioning/dynamic-partitioning)
+## 动态分区与层级存储
 
-`dynamic_partition.hot_partition_num`
+通过配置动态分区参数,用户可以设置哪些分区存储在 SSD 上,以及冷却后自动迁移到 HDD 上。
 
-:::caution
-  注意,dynamic_partition.storage_medium 必须设置为HDD,否则 hot_partition_num 将不会生效
-:::
+- **热分区**:最近活跃的分区,优先存储在 SSD 上,保证高性能。
+- **冷分区**:较少访问的分区,会逐步迁移到 HDD,以降低存储开销。
 
-  指定最新的多少个分区为热分区。对于热分区,系统会自动设置其 `storage_medium` 参数为 SSD,并且设置 
`storage_cooldown_time`。
+有关动态分区的更多信息,请参考:[数据划分 - 
动态分区](../../table-design/data-partitioning/dynamic-partitioning)。
 
-  注意:若存储路径下没有 SSD 磁盘路径,配置该参数会导致动态分区创建失败。
 
-  `hot_partition_num` 表示当前时间所在分区及之前的 hot_partition_num - 1 个分区,以及所有未来的分区,将被存储在 
SSD 介质上。
+## 参数说明
 
-  我们举例说明。假设今天是 2021-05-20,按天分区,动态分区的属性设置为:hot_partition_num=2, end=3, 
start=-3。则系统会自动创建以下分区,并且设置 `storage_medium` 和 `storage_cooldown_time` 参数:
+### `dynamic_partition.hot_partition_num`
+
+- **功能**:
+  - 指定最近的多少个分区为热分区,这些分区存储在 SSD 上,其余分区存储在 HDD 上。
+
+- **注意**:
+  - 必须同时设置 `dynamic_partition.storage_medium = HDD`,否则此参数不会生效。
+  - 如果存储路径下没有 SSD 设备,则该配置会导致分区创建失败。
+
+**示例说明**:
+
+假设当前日期为 **2021-05-20**,按天分区,动态分区配置如下:
+```sql
+"dynamic_partition.hot_partition_num" = "2",
+"dynamic_partition.start" = "-3",
+"dynamic_partition.end" = "3"
+```
+
+系统会自动创建以下分区,并配置其存储介质和冷却时间:
 
   ```Plain
   p20210517:["2021-05-17", "2021-05-18") storage_medium=HDD 
storage_cooldown_time=9999-12-31 23:59:59
@@ -52,10 +68,50 @@ under the License.
   p20210523:["2021-05-23", "2021-05-24") storage_medium=SSD 
storage_cooldown_time=2021-05-25 00:00:00
   ```
 
--   `dynamic_partition.storage_medium`
+### `dynamic_partition.storage_medium`
+
+- **功能**:
+  - 指定动态分区的最终存储介质。默认是 HDD,可选择 SSD。
+
+- **注意**:
+  - 当设置为 SSD 时,`hot_partition_num` 属性将不再生效,所有分区将默认为 SSD 存储介质并且冷却时间为 9999-12-31 
23:59:59。
+
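+如下为一个示意写法(表名 `ssd_only_table` 为假设示例,并假设 BE 存储路径中包含 SSD 介质):当 
`dynamic_partition.storage_medium` 设置为 SSD 时,`hot_partition_num` 不再生效,所有分区都会创建在 
SSD 上,冷却时间为 9999-12-31 23:59:59。
+
+```sql
+-- 示意:表名为假设示例
+CREATE TABLE ssd_only_table (k DATE)
+PARTITION BY RANGE(k)()
+DISTRIBUTED BY HASH (k) BUCKETS 5
+PROPERTIES
+(
+    "dynamic_partition.enable" = "true",
+    "dynamic_partition.time_unit" = "DAY",
+    "dynamic_partition.prefix" = "p",
+    "dynamic_partition.buckets" = "5",
+    "dynamic_partition.start" = "-3",
+    "dynamic_partition.end" = "3",
+    "dynamic_partition.storage_medium" = "SSD"
+);
+```
+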
+## 示例
+
+### 1. 创建一个分层存储表
+
+```sql
+    CREATE TABLE tiered_table (k DATE)
+    PARTITION BY RANGE(k)()
+    DISTRIBUTED BY HASH (k) BUCKETS 5
+    PROPERTIES
+    (
+        "dynamic_partition.storage_medium" = "hdd",
+        "dynamic_partition.enable" = "true",
+        "dynamic_partition.time_unit" = "DAY",
+        "dynamic_partition.hot_partition_num" = "2",
+        "dynamic_partition.end" = "3",
+        "dynamic_partition.prefix" = "p",
+        "dynamic_partition.buckets" = "5",
+        "dynamic_partition.create_history_partition"= "true",
+        "dynamic_partition.start" = "-3"
+    );
+```
 
-指定创建的动态分区的默认存储介质。默认是 HDD,可选择 SSD。
+### 2. 检查分区存储介质
 
-:::caution
-  注意,当设置为 SSD 时,`hot_partition_num` 属性将不再生效,所有分区将默认为 SSD 存储介质并且冷却时间为 
9999-12-31 23:59:59。
-:::
\ No newline at end of file
+```sql
+    SHOW PARTITIONS FROM tiered_table;
+```
+
+可以看见 7 个分区,其中 5 个使用 SSD,其它 2 个使用 HDD。
+
+```Plain
+  p20210517:["2021-05-17", "2021-05-18") storage_medium=HDD 
storage_cooldown_time=9999-12-31 23:59:59
+  p20210518:["2021-05-18", "2021-05-19") storage_medium=HDD 
storage_cooldown_time=9999-12-31 23:59:59
+  p20210519:["2021-05-19", "2021-05-20") storage_medium=SSD 
storage_cooldown_time=2021-05-21 00:00:00
+  p20210520:["2021-05-20", "2021-05-21") storage_medium=SSD 
storage_cooldown_time=2021-05-22 00:00:00
+  p20210521:["2021-05-21", "2021-05-22") storage_medium=SSD 
storage_cooldown_time=2021-05-23 00:00:00
+  p20210522:["2021-05-22", "2021-05-23") storage_medium=SSD 
storage_cooldown_time=2021-05-24 00:00:00
+  p20210523:["2021-05-23", "2021-05-24") storage_medium=SSD 
storage_cooldown_time=2021-05-25 00:00:00
+```
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/table-design/tiered-storage/overview.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/table-design/tiered-storage/overview.md
new file mode 100644
index 00000000000..a0df890036e
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/table-design/tiered-storage/overview.md
@@ -0,0 +1,35 @@
+---
+{
+    "title": "分层存储",
+    "language": "zh-CN"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+为了帮助用户节省存储成本,Doris 针对冷数据提供了灵活的选择。
+
+| **冷数据选择**          | **适用条件**                                                
                 | **特性**                                                       
                                                    |
+|--------------------|------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------|
+| **存算分离**   | 用户具备部署存算分离的条件                                                   
| - 数据以单副本完全存储在对象存储中<br />- 通过本地缓存加速热数据访问<br />- 存储与计算资源独立扩展,显著降低存储成本        |
+| **本地分层**   | 存算一体模式下,用户希望进一步优化本地存储资源                               | - 
支持将冷数据从 SSD 冷却到 HDD<br />- 充分利用本地存储层级特性,节省高性能存储成本                               
        |
+| **远程分层**   | 存算一体模式下,使用廉价的对象存储或者 HDFS 进一步降低成本                           | - 
冷数据以单副本形式保存到对象存储或者 HDFS中<br />- 热数据继续使用本地存储<br />- 不能对一个表和本地分层混合使用            |
+
+通过上述模式,Doris 能够灵活适配用户的部署条件,实现查询效率与存储成本的平衡。
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/table-design/tiered-storage/remote-storage.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/table-design/tiered-storage/remote-storage.md
index b8aa85b94b4..895c2413cd3 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/table-design/tiered-storage/remote-storage.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/table-design/tiered-storage/remote-storage.md
@@ -24,9 +24,9 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-## 功能简介
+## 概述
 
-远程存储支持把部分数据放到外部存储(例如对象存储,HDFS)上,节省成本,不牺牲功能。
+远程存储支持将冷数据放到外部存储(例如对象存储,HDFS)上。
 
 :::warning 注意
 远程存储的数据只有一个副本,数据可靠性依赖远程存储的数据可靠性,您需要保证远程存储有ec(擦除码)或者多副本技术确保数据可靠性。
@@ -34,7 +34,9 @@ under the License.
 
 ## 使用方法
 
-以S3对象存储为例,首先创建S3 RESOURCE:
+### 冷数据保存到 S3 兼容存储
+
+*第一步:* 创建 S3 Resource。
 
 ```sql
 CREATE RESOURCE "remote_s3"
@@ -57,7 +59,9 @@ PROPERTIES
 创建 S3 RESOURCE 的时候,会进行 S3 远端的链接校验,以保证 RESOURCE 创建的正确。
 :::
 
-之后创建STORAGE POLICY,关联上文创建的RESOURCE:
+*第二步:* 创建 STORAGE POLICY。
+
+之后创建 STORAGE POLICY,关联上文创建的 RESOURCE:
 
 ```sql
 CREATE STORAGE POLICY test_policy
@@ -67,7 +71,7 @@ PROPERTIES(
 );
 ```
 
-最后建表的时候指定STORAGE POLICY:
+*第三步:* 建表时使用 STORAGE POLICY。
 
 ```sql
 CREATE TABLE IF NOT EXISTS create_table_use_created_policy 
@@ -88,7 +92,9 @@ PROPERTIES(
 UNIQUE 表如果设置了 `"enable_unique_key_merge_on_write" = "true"` 的话,无法使用此功能。
 :::
 
-创建 HDFS RESOURCE:
+### 冷数据保存到 HDFS
+
+*第一步:* 创建 HDFS RESOURCE:
 
 ```sql
 CREATE RESOURCE "remote_hdfs" PROPERTIES (
@@ -102,30 +108,40 @@ CREATE RESOURCE "remote_hdfs" PROPERTIES (
         "dfs.namenode.rpc-address.my_ha.my_namenode2" = "nn2_host:rpc_port",
         "dfs.client.failover.proxy.provider.my_ha" = 
"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
     );
+```
 
-    CREATE STORAGE POLICY test_policy PROPERTIES (
-        "storage_resource" = "remote_hdfs",
-        "cooldown_ttl" = "300"
-    )
-
-    CREATE TABLE IF NOT EXISTS create_table_use_created_policy (
-        k1 BIGINT,
-        k2 LARGEINT,
-        v1 VARCHAR(2048)
-    )
-    UNIQUE KEY(k1)
-    DISTRIBUTED BY HASH (k1) BUCKETS 3
-    PROPERTIES(
-    "enable_unique_key_merge_on_write" = "false",
-    "storage_policy" = "test_policy"
-    );
+*第二步:* 创建 STORAGE POLICY。
+
+```sql
+CREATE STORAGE POLICY test_policy PROPERTIES (
+    "storage_resource" = "remote_hdfs",
+    "cooldown_ttl" = "300"
+);
+```
+
+*第三步:* 使用 STORAGE POLICY 创建表。
+
+```sql
+CREATE TABLE IF NOT EXISTS create_table_use_created_policy (
+    k1 BIGINT,
+    k2 LARGEINT,
+    v1 VARCHAR(2048)
+)
+UNIQUE KEY(k1)
+DISTRIBUTED BY HASH (k1) BUCKETS 3
+PROPERTIES(
+"enable_unique_key_merge_on_write" = "false",
+"storage_policy" = "test_policy"
+);
 ```
 
 :::warning 注意
 UNIQUE 表如果设置了 `"enable_unique_key_merge_on_write" = "true"` 的话,无法使用此功能。
 :::
 
-除了新建表支持设置远程存储外,Doris还支持对一个已存在的表或者PARTITION,设置远程存储。
+### 存量表冷却到远程存储
+
+除了新建表支持设置远程存储外,Doris还支持对一个已存在的表或者 PARTITION,设置远程存储。
 
 对一个已存在的表,设置远程存储,将创建好的STORAGE POLICY与表关联:
 
@@ -142,66 +158,54 @@ ALTER TABLE create_table_partition MODIFY PARTITION (*) 
SET("storage_policy"="te
 :::tip
 注意,如果用户在建表时给整张 Table 和部分 Partition 指定了不同的 Storage Policy,Partition 设置的 Storage 
policy 会被无视,整张表的所有 Partition 都会使用 table 的 Policy. 如果您需要让某个 Partition 的 Policy 
和别的不同,则可以使用上文中对一个已存在的 Partition,关联 Storage policy 的方式修改。
 
-具体可以参考 Docs 
目录下[RESOURCE](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE)、
 
[POLICY](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-POLICY)、
 [CREATE 
TABLE](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE)、
 [ALTER 
TABLE](../../sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN)等文档,里面有详细介绍。
+具体可以参考 Docs 
目录下[RESOURCE](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE)、
 
[POLICY](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-POLICY)、
 [CREATE 
TABLE](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE)、
 [ALTER 
TABLE](../../sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN)等文档。
 :::
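+
+例如,可以按如下示意为某个 Partition 单独关联 Storage Policy(分区名 `p20240101` 与策略名 
`test_policy_2` 均为假设示例):
+
+```sql
+ALTER TABLE create_table_partition MODIFY PARTITION (p20240101)
+SET ("storage_policy" = "test_policy_2");
+```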
 
-### 一些限制
-
--   单表或单 Partition 只能关联一个 Storage policy,关联后不能 Drop 掉 Storage 
policy,需要先解除二者的关联。
-
--   Storage policy 关联的对象信息不支持修改数据存储 path 的信息,比如 bucket、endpoint、root_path 等信息
-
--   Storage policy 支持创建、修改和删除,删除前需要先保证没有表引用此 Storage policy。
-
--   Unique 模型在开启 Merge-on-Write 特性时,不支持设置 Storage policy。
-
-## 查看远程存储占用大小
-
-方式一:通过 show proc '/backends'可以查看到每个 BE 上传到对象的大小,RemoteUsedCapacity 项,此方式略有延迟。
+### 配置 compaction
 
-方式二:通过 show tablets from tableName 可以查看到表的每个 tablet 占用的对象大小,RemoteDataSize 项。
+-   BE 参数`cold_data_compaction_thread_num`可以设置执行远程存储的 Compaction 的并发,默认是 2。
 
-## 远程存储的 cache
+-   BE 参数`cold_data_compaction_interval_sec`可以设置执行远程存储的 Compaction 的时间间隔,默认是 
1800,单位:秒,即半个小时。
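+
+示意:可以在 `be.conf` 中调整这两个参数,以下取值仅为示例:
+
+```Plain
+cold_data_compaction_thread_num = 4
+cold_data_compaction_interval_sec = 3600
+```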
 
-为了优化查询的性能和对象存储资源节省,引入了 cache 的概念。在第一次查询远程存储的数据时,Doris 会将远程存储的数据加载到 BE 
的本地磁盘做缓存,cache 有以下特性:
+## 限制
 
--   cache 实际存储于 BE 磁盘,不占用内存空间。
+-   使用了远程存储的表不支持备份。
 
--   cache 可以限制膨胀,通过 LRU 进行数据的清理
+-   不支持修改远程存储的位置信息,比如 endpoint、bucket、path。
 
--   cache 的实现和联邦查询 Catalog 的 cache 是同一套实现,文档参考[此处](../../lakehouse/filecache)
+-   Unique 模型表在开启 Merge-on-Write 特性时,不支持设置远程存储。
 
-## 远程存储的 Compaction
+-   Storage policy 支持创建、修改和删除,删除前需要先保证没有表引用此 Storage policy。
 
-远程存储数据传入的时间是 rowset 文件写入本地磁盘时刻起,加上冷却时间。由于数据并不是一次性写入和冷却的,因此避免在对象存储内的小文件问题,Doris 
也会进行远程存储数据的 Compaction。但是,远程存储数据的 Compaction 的频次和资源占用的优先级并不是很高,也推荐本地热数据 
compaction 后再执行冷却。具体可以通过以下 BE 参数调整:
+## 冷数据空间
 
--   BE 参数`cold_data_compaction_thread_num`可以设置执行远程存储的 Compaction 的并发,默认是 2。
+### 查看
 
--   BE 参数`cold_data_compaction_interval_sec`可以设置执行远程存储的 Compaction 的时间间隔,默认是 
1800,单位:秒,即半个小时。。
+方式一:通过 show proc '/backends'可以查看到每个 BE 上传到对象的大小,RemoteUsedCapacity 项,此方式略有延迟。
 
-## 远程存储的 Schema Change
+方式二:通过 show tablets from tableName 可以查看到表的每个 tablet 占用的对象大小,RemoteDataSize 项。
 
-远程存储支持 Schema Change 类型如下:
+### 垃圾回收
 
--   增加、删除列
+远程存储上可能会有如下情况产生垃圾数据:
 
--   修改列类型
+1.  上传 rowset 失败但是有部分 segment 上传成功。
 
--   调整列顺序
+2.  上传的 rowset 没有在多副本达成一致。
 
--   增加、修改索引
+3.  Compaction 完成后,参与 compaction 的 rowset。
 
-## 远程存储的垃圾回收
+垃圾数据并不会立即清理掉。BE 
参数`remove_unused_remote_files_interval_sec`可以设置远程存储的垃圾回收的时间间隔,默认是 21600,单位:秒,即 
6 个小时。
 
-远程存储的垃圾数据是指没有被任何 Replica 使用的数据,对象存储上可能会有如下情况产生的垃圾数据:
+## 查询与性能优化
 
-1.  上传 rowset 失败但是有部分 segment 上传成功。
+为了优化查询的性能和对象存储资源节省,引入了本地 Cache。在第一次查询远程存储的数据时,Doris 会将远程存储的数据加载到 BE 
的本地磁盘做缓存,Cache 有以下特性:
 
-2.  FE 重新选 CooldownReplica 后,新旧 CooldownReplica 的 rowset version 
不一致,FollowerReplica 都去同步新 CooldownReplica 的 CooldownMeta,旧 CooldownReplica 中 
version 不一致的 rowset 没有 Replica 使用成为垃圾数据。
+-   Cache 实际存储于 BE 本地磁盘,不占用内存空间。
 
-3.  远程存储数据 Compaction 后,合并前的 rowset 因为还可能被其他 Replica 使用不能立即删除,但是最终 
FollowerReplica 都使用了最新的合并后的 rowset,合并前的 rowset 成为垃圾数据。
+-   Cache 是通过 LRU 管理的,不支持 TTL。
 
-另外,对象上的垃圾数据并不会立即清理掉。BE 
参数`remove_unused_remote_files_interval_sec`可以设置远程存储的垃圾回收的时间间隔,默认是 21600,单位:秒,即 
6 个小时。
+具体配置请参考[此处](../../lakehouse/filecache)。
 
 ## 常见问题
 
@@ -226,4 +230,3 @@ PROPERTIES
     "use_path_style" = "true"
 );
 ```
-
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1.json 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1.json
index 6015f5aaa94..2a0c0cf8a05 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1.json
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1.json
@@ -602,5 +602,9 @@
   "sidebar.get-starting.category.Building lakehouse": {
     "message": "构建 Lakehouse",
     "description": "The label for category BI and Database IDE in sidebar docs"
+  },
+  "sidebar.docs.category.Tiered Storage": {
+    "message": "分层存储",
+    "description": "The label for category Tiered Storage in sidebar docs"
   }
 }
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0.json 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0.json
index 9ce1ea8a025..73c70b1129c 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0.json
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0.json
@@ -614,5 +614,9 @@
   "sidebar.get-starting.category.Building lakehouse": {
     "message": "构建 Lakehouse",
     "description": "The label for category BI and Database IDE in sidebar docs"
+  },
+  "sidebar.docs.category.Tiered Storage": {
+    "message": "分层存储",
+    "description": "The label for category Tiered Storage in sidebar docs"
   }
 }
\ No newline at end of file
diff --git a/sidebars.json b/sidebars.json
index c745eb0b32e..f44958eb8fb 100644
--- a/sidebars.json
+++ b/sidebars.json
@@ -144,6 +144,7 @@
                             "type": "category",
                             "label": "Tiered Storage",
                             "items": [
+                                "table-design/tiered-storage/overview",
                                 
"table-design/tiered-storage/diff-disk-medium-migration",
                                 "table-design/tiered-storage/remote-storage"
                             ]


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org

