This is an automated email from the ASF dual-hosted git repository.

panjuan pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/shardingsphere-on-cloud.git


The following commit(s) were added to refs/heads/main by this push:
     new d152256  chore: update english docs
     new 400b576  Merge pull request #427 from mlycore/update-docs-en
d152256 is described below

commit d15225687b5dc74cbe3fe6ed35d607b8306fd822
Author: mlycore <[email protected]>
AuthorDate: Wed Jun 28 17:14:56 2023 +0800

    chore: update english docs
    
    Signed-off-by: mlycore <[email protected]>
---
 docs/content/features/DBRE/_index.en.md            |  29 +
 .../features/EcosystemExtensions/_index.en.md      | 117 ++++
 docs/content/features/QuickDeployment/_index.en.md |  32 ++
 .../features/ShardingSphereChaos/_index.en.md      |  51 ++
 .../user-manual/cn-sn-operator/_index.cn.md        |   6 +-
 .../user-manual/cn-sn-operator/_index.en.md        | 631 ++++++++++++---------
 6 files changed, 593 insertions(+), 273 deletions(-)

diff --git a/docs/content/features/DBRE/_index.en.md 
b/docs/content/features/DBRE/_index.en.md
index bb5f7e0..99776f0 100644
--- a/docs/content/features/DBRE/_index.en.md
+++ b/docs/content/features/DBRE/_index.en.md
@@ -4,3 +4,32 @@ title = "Database Reliability Engineering"
 weight = 2
 chapter = true
 +++
+
+## Overview
+
+Database Reliability Engineering (DBRE) aims to improve the stability of 
database-related services using different technical methods, similar to Site 
Reliability Engineering (SRE). In cases where ShardingSphere is deployed on 
Kubernetes, DBRE can be further implemented with the help of Operator.
+
+## High Availability Deployment
+
+ShardingSphere Proxy is stateless: it serves as a computing node that processes the SQL sent by clients and performs the relevant data computation. The Operator abstracts and describes ShardingSphere Proxy through ComputeNode.
+
+Because it is stateless, ShardingSphere Proxy is currently suited to the Deployment mode. Deployment is a basic workload provided by Kubernetes that treats the Pods it manages as interchangeable. Deploying ShardingSphere Proxy through Deployment offers essential capabilities such as health checks, readiness checks, rolling upgrades, and version rollbacks.
+
+ComputeNode encompasses the attributes essential for deploying ShardingSphere Proxy, including the number of replicas, image repository and version information, database driver information, health check and readiness check probes, port mapping rules, service startup configurations such as server.yaml and logback.xml, and Agent-related configuration. During the Operator's reconciliation process, these pieces of information will be rendered through Kubernetes Depl [...]
+
+Deployment’s capabilities make multi-replica deployment easy and provide advanced scheduling features such as affinity and taint toleration, which in turn give ShardingSphere Proxy basic high availability.
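+
+As a sketch, the Deployment fragment below shows how affinity and tolerations might be declared for ShardingSphere Proxy Pods; the `app` label value and the taint key are illustrative assumptions, not fixed names.
+
+```yaml
+# Illustrative scheduling constraints for a ShardingSphere Proxy Deployment.
+# The label "app: shardingsphere-proxy" and the taint key are assumed examples.
+spec:
+  template:
+    spec:
+      affinity:
+        podAntiAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+          - weight: 100
+            podAffinityTerm:
+              labelSelector:
+                matchLabels:
+                  app: shardingsphere-proxy
+              topologyKey: kubernetes.io/hostname  # spread replicas across nodes
+      tolerations:
+      - key: dedicated            # assumed taint key on dedicated nodes
+        operator: Equal
+        value: shardingsphere
+        effect: NoSchedule
+```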
+
+StorageNode includes configurations related to deploying RDS database 
instances on the public cloud. It specifies the corresponding public cloud 
resource provider through StorageProvider and enables the ability to create, 
automatically register and unregister, and delete database instances on the 
cloud, with automatic elastic expansion now supported. 
+
+## Automatic Elastic Expansion
+
+The Kubernetes community provides the Horizontal Pod Autoscaler (HPA), which scales automatically based on CPU and memory and can be paired with the Prometheus Adapter to scale on custom metrics. For AWS EC2 virtual machine deployment scenarios, the community also offers scaling through the AutoScalingGroup; with the TargetGroup detection mechanism, only Ready instances receive business traffic.
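+
+For illustration, a minimal HPA manifest targeting a ShardingSphere Proxy Deployment might look like the sketch below; the Deployment name and the thresholds are assumptions.
+
+```yaml
+# Minimal HPA sketch: scales an assumed "shardingsphere-proxy" Deployment
+# between 1 and 4 replicas based on average CPU utilization.
+apiVersion: autoscaling/v2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: shardingsphere-proxy-hpa
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: shardingsphere-proxy
+  minReplicas: 1
+  maxReplicas: 4
+  metrics:
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 20
+```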
+
+## Observability
+
+ShardingSphere Proxy, with the assistance of ShardingSphere Agent, can effectively collect and expose the necessary operational information. You can get more details on ShardingSphere Agent by clicking [here](https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/observability/). Additionally, ShardingSphere on Cloud features Grafana templates that provide valuable insights into basic resource monitoring, JVM monitoring, and ShardingSphere runtime indicators. [...]
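+
+As a hedged sketch, enabling the Prometheus metrics plugin in the Agent's `agent.yaml` could look like the following; the port and property shown are assumptions and should be checked against the Agent documentation linked above.
+
+```yaml
+# Sketch of a ShardingSphere Agent metrics section (agent.yaml).
+# Plugin key, port, and props are assumptions; consult the Agent docs.
+plugins:
+  metrics:
+    Prometheus:
+      host: "localhost"
+      port: 9090
+      props:
+        jvm-information-collector-enabled: "true"
+```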
+
+## Chaos Engineering
+
+Chaos engineering allows us to verify the robustness of our system and uncover previously unknown issues. ShardingSphere Operator supports a Chaos CRD that injects different types of faults, such as Pod exceptions, CPU pressure, memory pressure, and network exceptions, directly into ShardingSphere Proxy. See the Chaos Engineering section for more details.
+
diff --git a/docs/content/features/EcosystemExtensions/_index.en.md 
b/docs/content/features/EcosystemExtensions/_index.en.md
index 6a12e40..738214a 100644
--- a/docs/content/features/EcosystemExtensions/_index.en.md
+++ b/docs/content/features/EcosystemExtensions/_index.en.md
@@ -5,3 +5,120 @@ weight = 4
 chapter = true
 +++
 
+
+## WebAssembly (Wasm) Extensions
+
+WebAssembly (abbreviated as Wasm) has now expanded its application beyond web 
browsers, despite its initial intention of improving JavaScript performance on 
webpages. 
+
+With the WebAssembly System Interface (WASI), Wasm can now run in various scenarios including trusted computing and edge computing. Most popular programming languages can be compiled to Wasm, while ShardingSphere plugins (SPIs) currently only support the Java ecosystem. Introducing Wasm into ShardingSphere can significantly enhance ShardingSphere's pluggable ecosystem with better flexibility and attract more developers to the community.
+
+### Using Wasm for Custom Sharding
+
+Apache ShardingSphere currently uses Service Provider Interfaces (SPIs) to extend its pluggable architecture. For more information, please refer to the [ShardingSphere Developer Manual](https://shardingsphere.apache.org/document/current/en/dev-manual/).
+
+We have implemented a custom sharding demo using Wasm for sharding scenarios. 
The demo below shows the custom sharding logic when `sharding_count` is `3`:
+
+1. Extract the sharding SPI logic from Apache ShardingSphere, for example, the 
auto-create sharding algorithm `MOD` from the 
[document](https://shardingsphere.apache.org/document/current/en/dev-manual/sharding/).
 Organize it into a separate 
[directory](https://github.com/apache/shardingsphere-on-cloud/tree/main/wasm/wasm-sharding-java/src/main/java/org/apache/shardingsphere):
+
+```shell
+├── pom.xml
+├── src
+│   └── main
+│       └── java
+│           └── org
+│               └── apache
+│                   └── shardingsphere
+│                       ├── infra 
+│                       ├── sharding 
+```
+
+2. Add 
[demo.java](https://github.com/apache/shardingsphere-on-cloud/blob/main/wasm/wasm-sharding-java/src/main/java/org/apache/shardingsphere/demo.java)
 to the above directory. Instantiate `StandardShardingAlgorithm` using 
`WasmShardingAlgorithm` provided by Wasm for sharding. Run the custom sharding 
logic and view the output. 
+
+```java
+// ...
+        StandardShardingAlgorithm<?> shardingAlgorithm = new WasmShardingAlgorithm();
+// ...
+```
+
+3. Write [custom sharding 
logic](https://github.com/apache/shardingsphere-on-cloud/tree/main/wasm/wasm-sharding-java/wasm-sharding)
 in Rust, and compile to Wasm module.
+
+```rust
+#[link(wasm_import_module = "sharding")]
+extern "C" {
+    fn poll_table(addr: i64, len: i32) -> i32;
+}
+
+// The value of SHARDING_COUNT must be consistent with the number of available target names
+const SHARDING_COUNT: u8 = 3;
+
+#[no_mangle]
+pub unsafe extern "C" fn do_work() -> i64 {
+// ...
+    let sharding = column_value % SHARDING_COUNT;
+// ...
+    std::ptr::copy_nonoverlapping(table_name.as_mut_ptr() as *const _, buf.as_mut_ptr().add(len as usize), table_name.len());
+    buf_ptr
+}
+```
+
+4. Create 
[WasmShardingAlgorithm.java](https://github.com/apache/shardingsphere-on-cloud/blob/main/wasm/wasm-sharding-java/src/main/java/org/apache/shardingsphere/sharding/WasmShardingAlgorithm.java)
 under `src/main/java/org/apache/shardingsphere/sharding/`, to communicate with 
the custom sharding logic in Wasm:
+
+```java
+//...
+public final class WasmShardingAlgorithm implements StandardShardingAlgorithm<Comparable<?>> {
+// ...
+    private static final String WASM_PATH = "./wasm-sharding/target/wasm32-wasi/debug/wasm_sharding.wasm";
+
+    private String wasmDoSharding(final Collection<String> availableTargetNames, final PreciseShardingValue<Comparable<?>> shardingValue) {
+// ...
+    }
+
+    @Override
+    public String getType() {
+        return "WASM";
+    }
+}
+```
+
+### Extend Custom Sharding Expressions with Wasm
+
+ShardingSphere only supports Groovy for defining sharding rules within the Java ecosystem. With Wasm, you can now define sharding logic in your preferred language, which makes extending sharding algorithms even more effortless.
+
+[wasm-sharding-js](https://github.com/apache/shardingsphere-on-cloud/tree/main/wasm/wasm-sharding-js) demonstrates how to define the CRC32MOD sharding algorithm in JavaScript and compile it into a Wasm extension.
+
+The directory structure is as follows:
+```shell
+├── Cargo.lock
+├── Cargo.toml
+├── README.md
+├── build.rs
+├── lib
+│   └── binding.rs
+├── package-lock.json
+├── package.json
+├── sharding
+│   ├── config.js
+│   ├── crc32.js
+│   ├── sharding.js
+│   └── strgen.js
+└── src
+```
+In the file `sharding/config.js`, two sharding expressions are defined: `t_order_00${0..2}` and `ms_ds00${crc32(field_id)}`. The expression `t_order_00${0..2}` is expected to generate three sharded tables after parsing: `t_order_000`, `t_order_001`, and `t_order_002`. For `ms_ds00${crc32(field_id)}`, the `field_id` is expected to be hashed with `crc32` before sharding:
+
+```javascript
+export let cc = "t_order_00${0..2}"
+export let cc_crc32 = "ms_ds00${crc32(field_id)}"
+```
+
+Furthermore, the `pisa_crc32` function declared in the file 
`sharding/sharding.js` shows the parsing of the above two expressions using 
JavaScript:
+
+```javascript
+//...
+function pisa_crc32(str, mod) {
+    let c2 = crc32_str(str)
+    let m = c2 % mod
+    return m < 256 ? 0 : m < 512 ? 1 : m < 768 ? 2 : 3
+}
+//...
+```
+Thanks to Wasm, you can not only enhance the functionality of ShardingSphere but also extend its capabilities to a wider range of technology stacks.
diff --git a/docs/content/features/QuickDeployment/_index.en.md 
b/docs/content/features/QuickDeployment/_index.en.md
index f71f13a..0bae7d8 100644
--- a/docs/content/features/QuickDeployment/_index.en.md
+++ b/docs/content/features/QuickDeployment/_index.en.md
@@ -4,3 +4,35 @@ title = "Deployment on Cloud"
 weight = 1
 chapter = true
 +++
+
+## Overview
+
+Cloud computing has evolved over the years from IaaS to PaaS, and then to 
SaaS. It not only changed infrastructure compositions but also upgraded 
software development concepts. 
+With Kubernetes leading the cloud-native wave, an increasing number of 
applications, including ShardingSphere, are being deployed using cloud-native 
technology stacks. To deploy ShardingSphere in a cloud environment, we 
recommend adopting Infrastructure as Code (IaC).
+
+### AWS One-Click Deployment
+
+To deploy ShardingSphere on AWS, you should first familiarize yourself with 
various AWS resources and services such as VPC, subnet, security group, elastic 
load balancer, domain name, EC2, RDS, and CloudWatch. You can adopt IaC, like 
AWS's CloudFormation, to quickly describe and deploy a complete set of 
ShardingSphere structures. 
+
+CloudFormation uses JSON or YAML templates to describe and combine the various resources required for a deployment; the templates are interpreted and executed by the related services. You only need to write the relevant descriptions and can use version control tools, such as Git, to manage and maintain the deployment code.
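+
+The fragment below is a minimal, illustrative CloudFormation skeleton in YAML; the resource and parameter names are hypothetical and are not taken from the hosted templates.
+
+```yaml
+# Hypothetical CloudFormation skeleton: a single parameterized EC2 instance
+# for ShardingSphere Proxy. All names and values are placeholders.
+AWSTemplateFormatVersion: "2010-09-09"
+Description: Minimal ShardingSphere Proxy deployment sketch
+Parameters:
+  ImageId:
+    Type: AWS::EC2::Image::Id
+    Description: AMI of the ShardingSphere Proxy image
+Resources:
+  ProxyInstance:
+    Type: AWS::EC2::Instance
+    Properties:
+      ImageId: !Ref ImageId
+      InstanceType: t3.medium
+```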
+
+Currently, Apache ShardingSphere's CloudFormation templates are hosted in the ShardingSphere on Cloud repo. You can get the corresponding AMI information on the AWS Marketplace by clicking [HERE](https://us-east-1.console.aws.amazon.com/marketplace/home?region=ap-southeast-1#/subscriptions/ef146e06-20ca-4da4-8954-78a7c51b3c5a).
+
+See [Quick 
Start](https://shardingsphere.apache.org/document/current/en/quick-start/) to 
learn how to start a ShardingSphere Proxy cluster on AWS with CloudFormation's 
minimal configuration. If you want to learn more about CloudFormation 
parameters or are familiar with Terraform, please refer to [User 
Manual](https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/).
+
+### Kubernetes One-Click Deployment
+
+Deploying ShardingSphere on Kubernetes has never been easier, thanks to our one-click deployment feature that utilizes the Helm package manager. Helm enables users to describe the deployment structure using a set of templates and variable declarations packaged as Charts.
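+
+As an illustration grounded in the chart parameters documented in this repository, a small `values.yaml` override might look like this sketch (the version value is an assumption):
+
+```yaml
+# Sketch of a values.yaml override for the ShardingSphere Proxy chart;
+# keys follow the chart parameters documented in this repo.
+proxyCluster:
+  replicaCount: "3"
+  proxyVersion: "5.4.0"   # assumed target version
+  service:
+    type: ClusterIP
+    port: 3307
+zookeeper:
+  enabled: true
+```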
+
+Resource objects involved in the deployment include Kubernetes workloads such as Deployment, Service, and ConfigMap. Chart packages can be produced for each version release and submitted to public artifact repositories such as ArtifactHub.
+
+Currently, we offer this feature with the relevant source code hosted on the 
ShardingSphere on Cloud repo.
+
+See [Quick Start](https://shardingsphere.apache.org/document/current/en/quick-start/) to learn how to start a ShardingSphere Proxy cluster on Kubernetes with Helm Charts' minimal configuration. If you want to know more about the Charts parameters or are familiar with Operator, please refer to the [User Manual](https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/).
+
+## Applicable Scenarios
+
+You can use the one-click deployment mode for testing purposes. If you plan to use ShardingSphere Proxy in a production environment, please refer to the [User Manual](https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/). It is crucial to understand the relevant parameters before configuring and deploying.
+
+
+
diff --git a/docs/content/features/ShardingSphereChaos/_index.en.md 
b/docs/content/features/ShardingSphereChaos/_index.en.md
index 32b12a5..d9b7398 100644
--- a/docs/content/features/ShardingSphereChaos/_index.en.md
+++ b/docs/content/features/ShardingSphereChaos/_index.en.md
@@ -4,3 +4,54 @@ title = "ShardingSphere Chaos"
 weight = 3
 chapter = true
 +++
+
+
+## Overview
+
+System availability is a critical metric for evaluating service reliability. There are numerous techniques to ensure availability, such as resilience engineering, anti-fragility, and others.
+
+However, disruptions in hardware and software can still occur, resulting in 
potential damage to the availability and robustness of the system. 
+
+Chaos Engineering is a practice that aims to enhance system robustness by 
detecting the weaknesses in software systems, ultimately optimizing the ability 
to react to stresses and failures. According to the definition from 
[principleofchaos.org](https://principleofchaos.org/): 
+> *Chaos Engineering is the discipline of experimenting on a system in order 
to build confidence in the system’s capability to withstand turbulent 
conditions in production.*
+
+## General Principle
+
+Chaos engineering generally involves five steps, which can be repeated if 
necessary: 
+- defining a steady-state
+- formulating hypotheses about the steady-state
+- running chaos experiments
+- verifying the results
+- fixing the issue if necessary
+
+To save time and increase teams' productivity, we suggest using Continuous 
Verification (CV) in chaos experiments, similar to Continuous Integration (CI). 
+
+We also recommend introducing a diverse range of real-world events into the chaos experiments. While conducting experiments, minimize the blast radius to avoid negatively impacting a larger group of customers.
+
+## CustomResourceDefinitions (CRD) Chaos
+
+ShardingSphere Operator supports `CustomResourceDefinitions` (CRD) chaos. The Operator supports multiple types of fault injection: for example, PodChaos, with experiment actions such as Pod Kill, Pod Failure, CPU Stress, and Memory Stress; and NetworkChaos, with network delay and loss. Once the basic parameters have been defined, the Operator converts them into the corresponding chaos experiments. For example:
+
+```yaml
+apiVersion: shardingsphere.apache.org/v1alpha1
+kind: Chaos
+metadata:
+  name: cpu-chaos
+  annotations:
+    selector.chaos-mesh.org/mode: one
+spec:
+  podChaos:
+    selector:
+      labelSelectors:
+        app: foo
+      namespaces: 
+      - foo-chaos
+    params:
+      cpuStress:
+        duration: 1m
+        cores: 2
+        load: 50
+    action: "CPUStress"
+```
+
+If you are using Chaos Mesh as the Chaos Engineering platform, you will need 
to deploy it in Kubernetes as the test environment prior to creating and 
submitting ShardingSphere Chaos configuration files. For further information, 
please refer to the user manual.
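+
+For a network fault, a configuration might mirror the CPU example above. Note that the `networkChaos` field names in the sketch below are hypothetical, written by analogy with the `podChaos` stanza, and should be verified against the CRD before use.
+
+```yaml
+# Hypothetical NetworkChaos sketch mirroring the podChaos example;
+# the field names under networkChaos are assumptions, not verified.
+apiVersion: shardingsphere.apache.org/v1alpha1
+kind: Chaos
+metadata:
+  name: network-delay-chaos
+spec:
+  networkChaos:
+    selector:
+      labelSelectors:
+        app: foo
+      namespaces:
+      - foo-chaos
+    params:
+      delay:
+        latency: 100ms
+        duration: 1m
+    action: "Delay"
+```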
diff --git a/docs/content/user-manual/cn-sn-operator/_index.cn.md 
b/docs/content/user-manual/cn-sn-operator/_index.cn.md
index 1999ffa..de131e8 100644
--- a/docs/content/user-manual/cn-sn-operator/_index.cn.md
+++ b/docs/content/user-manual/cn-sn-operator/_index.cn.md
@@ -147,7 +147,7 @@ metadata:
   name: shardingsphere-cluster-shardingsphere-proxy
   namespace: shardingsphere-operator
 spec:
-  version: 5.3.1
+  version: 5.4.0
   serviceType:
     type: ClusterIP
   replicas: 3
@@ -270,7 +270,7 @@ spec:
   storageNodeConnector:
     type: mysql
     version: 5.1.47
-  serverVersion: 5.3.1
+  serverVersion: 5.4.0
   replicas: 3
   selector:
     matchLabels:
@@ -318,7 +318,7 @@ StorageNode is the Operator's description of a data source, providing lifecycle
 Currently, to use StorageNode, the Operator needs to enable the corresponding FeatureGate:
 
 ```shell
-helm install [RELEASE_NAME] 
shardingsphere/apache-shardingsphere-operator-charts --set 
operator.featureGates.storageNode=true
+helm install [RELEASE_NAME] 
shardingsphere/apache-shardingsphere-operator-charts --set 
operator.featureGates.storageNode=true --set 
operator.storageNodeProviders.aws.region='' --set 
operator.storageNodeProviders.aws.accessKeyId='' --set 
operator.storageNodeProviders.aws.secretAccessKey='' --set 
operator.storageNodeProviders.aws.enabled=true
 ```
 
 #### Field Description
diff --git a/docs/content/user-manual/cn-sn-operator/_index.en.md 
b/docs/content/user-manual/cn-sn-operator/_index.en.md
index 50a5643..45b3f1f 100644
--- a/docs/content/user-manual/cn-sn-operator/_index.en.md
+++ b/docs/content/user-manual/cn-sn-operator/_index.en.md
@@ -1,13 +1,21 @@
 +++
 pre = "<b>4.2 </b>"
-title = "ShardingSphere-Cluster Operator User Manual"
+title = "ShardingSphere Operator User Manual"
 weight = 2
 chapter = true
 +++
 
-## ShardingSphere-Cluster Operator Installation
+## Overview 
 
-The following configuration content and configuration file directory are: 
apache-shardingsphere-cluster-operator-charts/values.yaml.
+ShardingSphere Operator is a practical implementation of the Kubernetes Operator pattern. It encodes the maintenance experience of ShardingSphere Proxy as an executable program and leverages Kubernetes' declarative API and reconciliation features.
+
+ShardingSphere Operator abstracts computing nodes, storage nodes, and even chaos faults as Kubernetes Custom Resource Definitions (CRDs). Users are responsible for writing the corresponding CRD configurations, while the Operator executes them and ensures the desired state.
+
+Please refer to the 'Operator Installation' section to install the Operator and try it out, and to the 'CRD Introduction' section for a better understanding of the CRD configuration.
+
+## Operator Installation
+
+The Operator currently supports rapid deployment with Helm Charts; the configuration content and configuration file directory is apache-shardingsphere-operator-charts. Users can choose online installation or source-code installation depending on their needs.
 
 ### Online Installation
 
@@ -28,284 +36,367 @@ cd ../
 helm install shardingsphere-cluster apache-shardingsphere-operator-charts -n 
shardingsphere-operator
 ```
 
-## Parameters
-
-### Common parameters
-| Name              | Description                                              
                                                 | Value                  |
-|-------------------|-----------------------------------------------------------------------------------------------------------|------------------------|
-| `nameOverride`    | nameOverride String to partially override 
common.names.fullname template (will maintain the release name) | 
`shardingsphere-proxy` |
-
-### ShardingSphere Operator Parameters
-
-| Name                              | Description                              
                                                                 | Value        
             |
-|-----------------------------------|-----------------------------------------------------------------------------------------------------------|---------------------------|
-| `operator.replicaCount`           | operator replica count                   
                                                                 | `2`          
             |
-| `operator.image.repository`       | operator image name                      
                                                                  | 
`apache/shardingsphere-operator` |
-| `operator.image.pullPolicy`       | image pull policy                        
                                                                 | 
`IfNotPresent`            |
-| `operator.image.tag`              | image tag                                
                                                                 | `0.2.0`      
             |
-| `operator.imagePullSecrets`       | image pull secret of private repository  
                                                                 | `[]`         
             |
-| `operator.resources`              | operator Resources required by the 
operator                                                               | `{}`   
                   |
-| `operator.health.healthProbePort` | operator health check port               
                                                                 | `8081`       
             |
-
-### ShardingSphere-Proxy Cluster Parameters
-
-| Name                                             | Description               
                                                                                
                                                                                
         | Value       |
-|--------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------|
-| `proxyCluster.replicaCount`                      | ShardingSphere-Proxy 
cluster starts the number of replicas, Note: After you enable automaticScaling, 
this parameter will no longer take effect                                       
              | `3`         |
-| `proxyCluster.proxyVersion`                      | ShardingSphere-Proxy 
cluster version                                                                 
                                                                                
              | `5.3.1`     |
-| `proxyCluster.automaticScaling.enable`           | ShardingSphere-Proxy 
Whether the ShardingSphere-Proxy cluster has auto-scaling enabled               
                                                                                
              | `false`     |
-| `proxyCluster.automaticScaling.scaleUpWindows`   | ShardingSphere-Proxy 
automatically scales the stable window                                          
                                                                                
              | `30`        |
-| `proxyCluster.automaticScaling.scaleDownWindows` | ShardingSphere-Proxy 
automatically shrinks the stabilized window                                     
                                                                                
              | `30`        |
-| `proxyCluster.automaticScaling.target`           | ShardingSphere-Proxy 
auto-scaling threshold, the value is a percentage, note: at this stage, only 
cpu is supported as a metric for scaling                                        
                 | `20`        |
-| `proxyCluster.automaticScaling.maxInstance`      | ShardingSphere-Proxy 
maximum number of scaled-out replicas                                           
                                                                                
              | `4`         |
-| `proxyCluster.automaticScaling.minInstance`      | ShardingSphere-Proxy has 
a minimum number of boot replicas, and the shrinkage will not be less than this 
number of replicas                                                              
          | `1`         |
-| `proxyCluster.resources`                         | ShardingSphere-Proxy 
starts the requirement resource, and after opening automaticScaling, the 
resource of the request multiplied by the percentage of target is used to 
trigger the scaling action | `{}`        |
-| `proxyCluster.service.type`                      | ShardingSphere-Proxy 
external exposure mode                                                          
                                                                                
              | `ClusterIP` |
-| `proxyCluster.service.port`                      | ShardingSphere-Proxy 
exposes  port                                                                   
                                                                                
              | `3307`      |
-| `proxyCluster.startPort`                         | ShardingSphere-Proxy boot 
port                                                                            
                                                                                
         | `3307`      |
-| `proxyCluster.mySQLDriver.version`               | ShardingSphere-Proxy The 
ShardingSphere-Proxy mysql driver version will not be downloaded if it is empty 
                                                                                
          | `5.1.47`    |
-
-### ShardingSphere-Proxy ServerConfig Authority Related Parameters of Compute 
Node
-
-| Name                                                    | Description        
                                                                                
                                            | Value           |
-|---------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|-----------------|
-| `proxyCluster.serverConfig.authority.privilege.type`    | authority provider 
for storage node, the default value is ALL_PERMITTED                            
                                            | `ALL_PERMITTED` |
-| `proxyCluster.serverConfig.authority.users[0].password` | Password for 
compute node.                                                                   
                                                  | `root`          |
-| `proxyCluster.serverConfig.authority.users[0].user`     | 
Username,authorized host for compute node. Format: <username>@<hostname> 
hostname is % or empty string means do not care about authorized host | 
`root@%`        |
-
-### ShardingSphere-Proxy ServerConfig Mode Related Paraters of Compute Node
-
-| Name                                                                         
  | Description                                                         | Value 
                                                                 |
-|--------------------------------------------------------------------------------|---------------------------------------------------------------------|------------------------------------------------------------------------|
-| `proxyCluster.serverConfig.mode.type`                                        
  | Type of mode configuration. Now only support Cluster mode           | 
`Cluster`                                                              |
-| `proxyCluster.serverConfig.mode.repository.props.namespace`                  
  | Namespace of registry center                                        | 
`governance_ds`                                                        |
-| `proxyCluster.serverConfig.mode.repository.props.server-lists`               
  | Server lists of registry center                                     | `{{ 
printf "%s-zookeeper.%s:2181" .Release.Name .Release.Namespace }}` |
-| `proxyCluster.serverConfig.mode.repository.props.maxRetries`                 
  | Max retries of client connection                                    | `3`   
                                                                 |
-| 
`proxyCluster.serverConfig.mode.repository.props.operationTimeoutMilliseconds` 
| Milliseconds of operation timeout                                   | `5000`  
                                                               |
-| `proxyCluster.serverConfig.mode.repository.props.retryIntervalMilliseconds`  
  | Milliseconds of retry interval                                      | `500` 
                                                                 |
-| `proxyCluster.serverConfig.mode.repository.props.timeToLiveSeconds`          
  | Seconds of ephemeral data live                                      | `600` 
                                                                 |
-| `proxyCluster.serverConfig.mode.repository.type`                             
  | Type of persist repository. Now only support ZooKeeper              | 
`ZooKeeper`                                                            |
-| `proxyCluster.serverConfig.mode.overwrite`                                   
  | Whether overwrite persistent configuration with local configuration | 
`true`                                                                 |
-| `proxyCluster.serverConfig.props.proxy-frontend-database-protocol-type`      
  | Default startup protocol                                            | 
`MySQL`                                                                |
-
-### ZooKeeper Chart Parameters
-
-| Name                                 | Description                           
               | Value               |
-|--------------------------------------|------------------------------------------------------|---------------------|
-| `zookeeper.enabled`                  | Switch to enable or disable the 
ZooKeeper helm chart | `true`              |
-| `zookeeper.replicaCount`             | Number of ZooKeeper nodes             
               | `1`                 |
-| `zookeeper.persistence.enabled`      | Enable persistence on ZooKeeper using 
PVC(s)         | `false`             |
-| `zookeeper.persistence.storageClass` | Persistent Volume storage class       
               | `""`                |
-| `zookeeper.persistence.accessModes`  | Persistent Volume access modes        
               | `["ReadWriteOnce"]` |
-| `zookeeper.persistence.size`         | Persistent Volume size                
               | `8Gi`               |
-
-### ShardingSphere ComputeNode Parameters 
-
-| Name                                        | Description                    
                                                                        | Value 
              |
-| --------------------------------------------| 
------------------------------------------------------------------------------------------------------
 | ------------------- |
-| `computeNode.storageNodeConnector.type`     | ShardingSphere-Proxy driver 
type                                                                       | 
`mysql`             |
-| `computeNode.storageNodeConnector.version`  | ShardingSphere-Proxy driver 
version. The MySQL driver need to be downloaded according to this version  | 
`5.1.47`            |
-| `computeNode.serverVersion`                 | ShardingSphere-Proxy cluster 
version                                                                   | 
`5.3.1`             |
-| `computeNode.portBindings[0].name`          | ShardingSphere-Proxy port name 
                                                                        | 
`3307`              |
-| `computeNode.portBindings[0].containerPort` | ShardingSphere-Proxy port for 
container                                                                | 
`3307`              |
-| `computeNode.portBindings[0].servicePort`   | ShardingSphere-Proxy port for 
service                                                                  | 
`3307`              |
-| `computeNode.portBindings[0].procotol`      | ShardingSphere-Proxy port 
protocol                                                                     | 
`TCP`               |
-| `computeNode.serviceType`                   | ShardingSphere-Proxy service 
type                                                                      | 
`ClusterIP`         |
-
-
-### ShardingSphere ComputeNode Bootstrap Parameters
-
-| Name                                                                         
  | Description                                                         | Value 
                                                                 |
-|--------------------------------------------------------------------------------|
 ------------------------------------------------------------------- | 
---------------------------------------------------------------------- |
-| `computeNode.bootstrap.serverConfig.authority.privilege.type`    | authority 
provider for storage node, the default value is ALL_PERMITTED                   
                                                     | 
`ALL_PRIVILEGES_PERMITTED` |
-| `computeNode.bootstrap.serverConfig.authority.users[0].user`     | 
Username,authorized host for compute node. Format: <username>@<hostname> 
hostname is % or empty string means do not care about authorized host | 
`root@%`                   |
-| `computeNode.bootstrap.serverConfig.authority.users[0].password` | Password 
for compute node.                                                               
                                                      | `root`                  
   |
-| `computeNode.bootstrap.serverConfig.mode.type`                               
           | Type of mode configuration. Now only support Cluster mode          
 | `Cluster`                                                              |
-| `computeNode.bootstrap.serverConfig.mode.repository.type`                    
           | Type of persist repository. Now only support ZooKeeper             
 | `ZooKeeper`                                                            |
-| `computeNode.bootstrap.mode.repository.props.timeToLiveSeconds`            | 
Seconds of ephemeral data live                                      | `600`     
                                                             |
-| `computeNode.bootstrap.serverConfig.mode.repository.props.serverlists`       
          | Server lists of registry center                                     
| `{{ printf "%s-zookeeper.%s:2181" .Release.Name .Release.Namespace }}` |
-| 
`computeNode.bootstrap.serverConfig.mode.repository.props.retryIntervalMilliseconds`
    | Milliseconds of retry interval                                      | 
`500`                                                                  |
-| 
`computeNode.bootstrap.serverConfig.mode.repository.props.operationTimeoutMilliseconds`
 | Milliseconds of operation timeout                                   | `5000` 
                                                                |
-| `computeNode.bootstrap.serverConfig.mode.repository.props.namespace`         
           | Namespace of registry center                                       
 | `governance_ds`                                                        |
-| `computeNode.bootstrap.serverConfig.mode.repository.props.maxRetries`        
           | Max retries of client connection                                   
 | `3`                                                                    |
-| `computeNode.bootstrap.serverConfig.mode.overwrite`                          
           | Whether overwrite persistent configuration with local 
configuration | `true`                                                          
       |
-| 
`computeNode.bootstrap.serverConfig.props.proxy-frontend-database-protocol-type`
        | Default startup protocol                                            | 
`MySQL`                                                                |
-
-## Examples
-
-apache-shardingsphere-clutser-operator-charts/values.yaml
+### Charts Parameters Description
+
+#### Common Parameters
+
+| Name | Description | Value |
+|------|-------------|-------|
+| `nameOverride` | String to partially override common.names.fullname template (will maintain the release name) | `shardingsphere-proxy` |
+
+#### ShardingSphere Operator Parameters
+
+| Name | Description | Value |
+|------|-------------|-------|
+| `operator.replicaCount` | Operator replica count | `1` |
+| `operator.image.repository` | Operator image name | `apache/shardingsphere-operator` |
+| `operator.image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `operator.image.tag` | Image tag | `0.3.0` |
+| `operator.imagePullSecrets` | Image pull secret of private repository | `[]` |
+| `operator.resources` | Resources required by the operator | `{}` |
+| `operator.health.healthProbePort` | Operator health check port | `8080` |
+
+Users can choose whether to install the supporting governance center when using the Operator Charts for installation. The relevant parameters are as follows:
+
+| Name | Description | Value |
+|------|-------------|-------|
+| `zookeeper.enabled` | Switch to enable or disable the ZooKeeper helm chart | `true` |
+| `zookeeper.replicaCount` | ZooKeeper replica count | `1` |
+| `zookeeper.persistence.enabled` | Enable persistence on ZooKeeper using PVC(s) | `false` |
+| `zookeeper.persistence.storageClass` | Persistent Volume storage class | `""` |
+| `zookeeper.persistence.accessModes` | Persistent Volume access modes | `["ReadWriteOnce"]` |
+| `zookeeper.persistence.size` | Persistent Volume size | `8Gi` |
+
+Note: currently, the persist repository installed by the Charts supports the Bitnami ZooKeeper chart only.
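+
+For instance, if an external registry center is already available, the bundled ZooKeeper sub-chart can be skipped with a values override such as the sketch below:
+
+```yaml
+# Disable the bundled Bitnami ZooKeeper sub-chart when an external
+# registry center is already available (key from the table above).
+zookeeper:
+  enabled: false
+```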
+
+## CRD Introduction
+
+### ShardingSphereProxy 
+
+ShardingSphereProxy and ShardingSphereProxyServerConfig provide a basic description of the deployment and configuration of ShardingSphere Proxy. The Operator converts the configuration provided by these CRDs into Kubernetes workloads and submits it. ShardingSphereProxy governs the configuration of the basic resources, and ShardingSphereProxyServerConfig governs `server.yaml`.
+
+Note: support for ShardingSphereProxy and ShardingSphereProxyServerConfig is planned to end as of version 0.4.0.
+
+ 
+#### Field Description
+
+
+##### Required Configuration
+
+ShardingSphereProxy
+
+Configuration item | Description | Type | Examples
+------------------ | ----------- | ---- | --------
+`.spec.version` | ShardingSphere-Proxy version | string | `5.4.0`
+`.spec.serviceType.type` | Service type | string | `ClusterIP`
+`.spec.serviceType.nodePort` | Node port of the Service | number | `33307`
+`.spec.replicas` | ShardingSphere-Proxy replica count | number | `3`
+`.spec.proxyConfigName` | Mounted configuration | string |
+`.spec.port` | Exposed port | number |
+
+##### Optional Configuration
+
+Configuration item | Description | Type | Examples
+------------------ | ----------- | ---- | --------
+`.spec.automaticScaling.enable` | Whether automatic scaling is enabled | bool | `false`
+`.spec.automaticScaling.scaleUpWindows` | Scale-up stabilization window | number |
+`.spec.automaticScaling.scaleDownWindows` | Scale-down stabilization window | number |
+`.spec.automaticScaling.target` | Automatic scaling target, as a percentage | number |
+`.spec.automaticScaling.maxInstance` | Maximum number of automatically scaled instances | number |
+`.spec.automaticScaling.minInstance` | Minimum number of automatically scaled instances | number |
+`.spec.customMetrics` | Custom metrics | []autoscalingv2beta2.MetricSpec |
+`.spec.imagePullSecrets` | Image pull secrets | v1.LocalObjectReference |
+`.spec.mySQLDriver.version` | MySQL driver version | string |
+`.spec.resources` | Resources configuration | v1.ResourceRequirements |
+`.spec.livenessProbe` | Liveness probe | v1.Probe |
+`.spec.readinessProbe` | Readiness probe | v1.Probe |
+`.spec.startupProbe` | Startup probe | v1.Probe |
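+
+For example, the optional auto-scaling fields above can be combined as in the following sketch; the numeric values mirror the former chart defaults and are only illustrative.
+
+```yaml
+# Sketch of the optional automaticScaling block on a ShardingSphereProxy;
+# values mirror the former chart defaults and are illustrative.
+spec:
+  automaticScaling:
+    enable: true
+    scaleUpWindows: 30
+    scaleDownWindows: 30
+    target: 20
+    maxInstance: 4
+    minInstance: 1
+```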
+
+
+ShardingSphereProxyServerConfig
+
+Configuration item | Description | Type | Examples
+------------------ | ----------- | ---- | --------
+`.spec.mode.type` | Type of mode configuration, supports Standalone and Cluster | string | `Cluster`
+`.spec.mode.repository.type` | Type of persist repository, supports ZooKeeper and Etcd | string | `ZooKeeper`
+`.spec.mode.repository.props.namespace` | Namespace of registry center (not a Kubernetes namespace) | string | `governance_ds`
+`.spec.mode.repository.props.server-lists` | Server lists of registry center | string | `zookeeper.default:2181`
+`.spec.mode.repository.props.retryIntervalMilliseconds` | Milliseconds of retry interval | number | `500`
+`.spec.mode.repository.props.maxRetries` | Max retries of client connection | number | `3`
+`.spec.mode.repository.props.timeToLiveSeconds` | Seconds of ephemeral data live (TTL) | number | `600`
+`.spec.mode.repository.props.operationTimeoutMilliseconds` | Milliseconds of operation timeout | number | `5000`
+`.spec.mode.repository.props.digest` | Digest of registry center | string |
+`.spec.authority.users[0].user` | Username and authorized host for compute node. Format: <username>@<hostname>; a hostname of % or an empty string means do not care about the authorized host | string | `root@%`
+`.spec.authority.users[0].password` | Password for compute node | string | `root`
+`.spec.authority.privilege.type` | Authority privilege provider for compute node, the default value is ALL_PRIVILEGES_PERMITTED | string | `ALL_PRIVILEGES_PERMITTED`
+`.spec.props.kernel-executor-size` | Kernel executor size | number |
+`.spec.props.check-table-metadata-enabled` | Whether to check table metadata | bool |
+`.spec.props.proxy-backend-query-fetch-size` | Backend query fetch size | number |
+`.spec.props.check-duplicate-table-enabled` | Whether to check duplicate tables | bool |
+`.spec.props.proxy-frontend-executor-size` | Frontend executor size | number |
+`.spec.props.proxy-backend-executor-suitable` | Backend executor suitable | string |
+`.spec.props.proxy-backend-driver-type` | Backend driver type | string |
+`.spec.props.proxy-frontend-database-protocol-type` | Frontend database protocol type | string |
+
+#### Examples
+
+ShardingSphereProxy example:
 
 ```yaml
-## @section Name parameters
-## @param nameOverride String to partially override common.names.fullname 
template (will maintain the release name)
-##
-nameOverride: apache-shardingsphere-proxy-cluster
-
-## @section ShardingSphere operator parameters
-operator:
-  ## @param replicaCount operator replica count
-  ##
-  replicaCount: 2
-  image:
-    ## @param image.repository operator image name
-    ##
-    repository: "apache/shardingsphere-operator"
-    ## @param image.pullPolicy image pull policy
-    ##
-    pullPolicy: IfNotPresent
-    ## @param image.tag image tag
-    ##
-    tag: "0.2.0"
-  ## @param imagePullSecrets image pull secret of private repository
-  ## e.g:
-  ## imagePullSecrets:
-  ##   - name: mysecret
-  ##
-  imagePullSecrets: {}
-  ## @param resources operator Resources required by the operator
-  ## e.g:
-  ## resources:
-  ##   limits:
-  ##     cpu: 2
-  ##   limits:
-  ##     cpu: 2
-  ##
-  resources: {}
-  ## @param health.healthProbePort operator health check port
-  ##
-  health:
-    healthProbePort: 8081
-
-
-## @section ShardingSphere-Proxy cluster parameters
-proxyCluster:
-  enabled: true
-  ## @param replicaCount ShardingSphere-Proxy cluster starts the number of 
replicas, Note: After you enable automaticScaling, this parameter will no 
longer take effect
-  ## @param proxyVersion ShardingSphere-Proxy cluster version
-  ##
-  replicaCount: "3"
-  proxyVersion: "5.3.1"
-  ## @param automaticScaling.enable ShardingSphere-Proxy Whether the 
ShardingSphere-Proxy cluster has auto-scaling enabled
-  ## @param automaticScaling.scaleUpWindows ShardingSphere-Proxy automatically 
scales the stable window
-  ## @param automaticScaling.scaleDownWindows ShardingSphere-Proxy 
automatically shrinks the stabilized window
-  ## @param automaticScaling.target ShardingSphere-Proxy auto-scaling 
threshold, the value is a percentage, note: at this stage, only cpu is 
supported as a metric for scaling
-  ## @param automaticScaling.maxInstance ShardingSphere-Proxy maximum number 
of scaled-out replicas
-  ## @param automaticScaling.minInstance ShardingSphere-Proxy has a minimum 
number of boot replicas, and the shrinkage will not be less than this number of 
replicas
-  ##
-  automaticScaling:
-    enable: false
-    scaleUpWindows: 30
-    scaleDownWindows: 30
-    target: 20
-    maxInstance: 4
-    minInstance: 1
-  ## @param resources ShardingSphere-Proxy starts the requirement resource, 
and after opening automaticScaling, the resource of the request multiplied by 
the percentage of target is used to trigger the scaling action
-  ## e.g:
-  ## resources:
-  ##   limits:
-  ##     cpu: 2
-  ##     memory: 2Gi
-  ##   requests:
-  ##     cpu: 2
-  ##     memory: 2Gi
-  ##
-  resources: {}
-  ## @param service.type ShardingSphere-Proxy external exposure mode
-  ## @param service.port ShardingSphere-Proxy exposes  port
-  ##
-  service:
+apiVersion: shardingsphere.apache.org/v1alpha1
+kind: ShardingSphereProxy
+metadata:
+  name: shardingsphere-cluster-shardingsphere-proxy
+  namespace: shardingsphere-operator
+spec:
+  version: 5.4.0
+  serviceType:
     type: ClusterIP
-    port: 3307
-  ## @param startPort ShardingSphere-Proxy boot port
-  ##
-  startPort: 3307
-  ## @param mySQLDriver.version ShardingSphere-Proxy The ShardingSphere-Proxy 
mysql driver version will not be downloaded if it is empty
-  ##
+  replicas: 3
+  proxyConfigName: "shardingsphere-cluster-shardingsphere-proxy-configuration"
+  port: 3307
   mySQLDriver:
     version: "5.1.47"
-  ## @param imagePullSecrets ShardingSphere-Proxy pull private image 
repository key
-  ## e.g:
-  ## imagePullSecrets:
-  ##   - name: mysecret
-  ##
-  imagePullSecrets: []
-  ## @section  ShardingSphere-Proxy ServerConfiguration parameters
-  ## NOTE: If you use the sub-charts to deploy Zookeeper, the server-lists 
field must be "{{ printf \"%s-zookeeper.%s:2181\" .Release.Name 
.Release.Namespace }}",
-  ## otherwise please fill in the correct zookeeper address
-  ## The server.yaml is auto-generated based on this parameter.
-  ## If it is empty, the server.yaml is also empty.
-  ## ref: 
https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/yaml-config/mode/
-  ## ref: 
https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-jdbc/builtin-algorithm/metadata-repository/
-  ##
-  serverConfig:
-    ## @section Compute-Node ShardingSphere-Proxy ServerConfiguration 
authority parameters
-    ## NOTE: It is used to set up initial user to login compute node, and 
authority data of storage node.
-    ## @param serverConfig.authority.privilege.type authority provider for 
storage node, the default value is ALL_PERMITTED
-    ## @param serverConfig.authority.users[0].password Password for compute 
node.
-    ## @param serverConfig.authority.users[0].user Username,authorized host 
for compute node. Format: <username>@<hostname> hostname is % or empty string 
means do not care about authorized host
-    ##
-    authority:
-      privilege:
-        type: ALL_PERMITTED
-      users:
-        - password: root
-          user: root@%
-    ## @section Compute-Node ShardingSphere-Proxy ServerConfiguration mode 
Configuration parameters
-    ## @param serverConfig.mode.type Type of mode configuration. Now only 
support Cluster mode
-    ## @param serverConfig.mode.repository.props.namespace Namespace of 
registry center
-    ## @param serverConfig.mode.repository.props.server-lists Server lists of 
registry center
-    ## @param serverConfig.mode.repository.props.maxRetries Max retries of 
client connection
-    ## @param serverConfig.mode.repository.props.operationTimeoutMilliseconds 
Milliseconds of operation timeout
-    ## @param serverConfig.mode.repository.props.retryIntervalMilliseconds 
Milliseconds of retry interval
-    ## @param serverConfig.mode.repository.props.timeToLiveSeconds Seconds of 
ephemeral data live
-    ## @param serverConfig.mode.repository.type Type of persist repository. 
Now only support ZooKeeper
-    ## @param serverConfig.props.proxy-frontend-database-protocol-type Default 
startup protocol
-    mode:
-      repository:
-        props:
-          maxRetries: 3
-          namespace: governance_ds
-          operationTimeoutMilliseconds: 5000
-          retryIntervalMilliseconds: 500
-          server-lists: "{{ printf \"%s-zookeeper.%s:2181\" .Release.Name 
.Release.Namespace }}"
-          timeToLiveSeconds: 600
-        type: ZooKeeper
-      type: Cluster
-    props:
-      proxy-frontend-database-protocol-type: MySQL
-  ## @section ZooKeeper chart parameters
-
-## ZooKeeper chart configuration
-## https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
-##
-zookeeper:
-  ## @param zookeeper.enabled Switch to enable or disable the ZooKeeper helm 
chart
-  ##
-  enabled: true
-  ## @param zookeeper.replicaCount Number of ZooKeeper nodes
-  ##
-  replicaCount: 2
-  ## ZooKeeper Persistence parameters
-  ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
-  ## @param zookeeper.persistence.enabled Enable persistence on ZooKeeper 
using PVC(s)
-  ## @param zookeeper.persistence.storageClass Persistent Volume storage class
-  ## @param zookeeper.persistence.accessModes Persistent Volume access modes
-  ## @param zookeeper.persistence.size Persistent Volume size
-  ##
-  persistence:
-    enabled: false
-    storageClass: ""
-    accessModes:
-      - ReadWriteOnce
-    size: 8Gi
+```
+
+ShardingSphereProxyServerConfig example:
 
+```yaml
+apiVersion: shardingsphere.apache.org/v1alpha1
+kind: ShardingSphereProxyServerConfig
+metadata:
+  name: shardingsphere-cluster-shardingsphere-proxy-configuration
+  namespace: shardingsphere-operator
+spec:
+  authority:
+    privilege:
+      type: ALL_PERMITTED
+    users:
+    - password: root
+      user: root@%
+  mode:
+    repository:
+      props:
+        maxRetries: 3
+        namespace: governance_ds
+        operationTimeoutMilliseconds: 5000
+        retryIntervalMilliseconds: 500
+        server-lists: 
'shardingsphere-cluster-zookeeper.shardingsphere-operator:2181'
+        timeToLiveSeconds: 600
+      type: ZooKeeper
+    type: Cluster
+  props:
+    proxy-frontend-database-protocol-type: MySQL
+```
+
+
+### ComputeNode 
+
+ComputeNode describes the computing nodes in the ShardingSphere cluster, usually referred to as the Proxy. Since ShardingSphere Proxy is a stateless application, it can be managed using Kubernetes' native Deployment workload, while ConfigMap and Service are used for the startup configuration and service discovery. Using ComputeNode not only unifies the key configurations in Deployment, ConfigMap, and Service, but also matches the semantics of ShardingSphere, helping the Operator quickly lock [...]
+
+![](../../../img/user-manual/cn-concepts-1.png)
+
+#### Operator Configuration
+
+Currently, to use ComputeNode, the Operator needs to enable the relevant feature gate:
+
+```shell
+helm install [RELEASE_NAME] 
shardingsphere/apache-shardingsphere-operator-charts --set 
operator.featureGates.computeNode=true --set proxyCluster.enabled=false
+```
+
+#### Field Description
+
+##### Required Configuration
+
+Configuration item | Description | Type | Examples
+------------------ | ----------- | ---- | --------
+`metadata.name` | Name of the deployment plan | string | `foo`
+`metadata.namespace` | Default namespace of the deployment plan | string | `shardingsphere-system`
+`spec.storageNodeConnector.type` | Backend driver type | string | `mysql`
+`spec.storageNodeConnector.version` | Backend driver version | string | `5.1.47`
+`spec.serverVersion` | ShardingSphere-Proxy version | string | `5.4.0`
+`spec.replicas` | Number of deployment plan instances | number | `3`
+`spec.selectors` | Instance selector, same as Deployment.Spec.Selector | metav1.LabelSelector |
+`spec.portBindings[0].name` | Name of the exposed port | string | `3307`
+`spec.portBindings[0].containerPort` | Exposed container port | number | `3307`
+`spec.portBindings[0].servicePort` | Exposed service port | number | `3307`
+`spec.portBindings[0].procotol` | Exposed port protocol | string | `TCP`
+`spec.serviceType` | Exposed service type | string | `ClusterIP`
+`spec.bootstrap.serverConfig.authority.privilege.type` | Authority privilege provider for compute node, the default value is ALL_PRIVILEGES_PERMITTED | string | `ALL_PRIVILEGES_PERMITTED`
+`spec.bootstrap.serverConfig.authority.users[0].user` | Username and authorized host for compute node. Format: <username>@<hostname>; a hostname of % or an empty string means do not care about the authorized host | string | `root@%`
+`spec.bootstrap.serverConfig.authority.users[0].password` | Password of compute node | string | `root`
+`spec.bootstrap.serverConfig.mode.type` | Type of mode configuration, supports Standalone and Cluster | string | `Cluster`
+`spec.bootstrap.serverConfig.mode.repository.type` | Type of persist repository, supports ZooKeeper and Etcd | string | `ZooKeeper`
+`spec.bootstrap.serverConfig.mode.repository.props` | Registry center properties configuration, refer to [Common ServerConfig Repository Props Configuration](#common-serverconfig-repository-props-configuration) | map[string]string |
+
+##### Common ServerConfig Repository Props Configuration
+
+Configuration item | Description | Examples
+------------------ | ----------- | --------
+`spec.bootstrap.serverConfig.mode.repository.props.timeToLiveSeconds` | TTL in seconds | `600`
+`spec.bootstrap.serverConfig.mode.repository.props.server-lists` | Server list of the registry center | `zookeeper.default:2181`
+`spec.bootstrap.serverConfig.mode.repository.props.retryIntervalMilliseconds` | Retry interval in milliseconds | `500`
+`spec.bootstrap.serverConfig.mode.repository.props.operationTimeoutMilliseconds` | Operation timeout in milliseconds | `5000`
+`spec.bootstrap.serverConfig.mode.repository.props.namespace` | Namespace of the registry center (not a Kubernetes namespace) | `governance_ds`
+`spec.bootstrap.serverConfig.mode.repository.props.maxRetries` | Maximum number of client connection retries | `3`
+
+
+##### Optional Configuration
+
+Configuration item | Description | Type | Examples
+------------------ | ----------- | ---- | --------
+`spec.probes.livenessProbe` | Liveness probe | corev1.Probe |
+`spec.probes.readinessProbe` | Readiness probe | corev1.Probe |
+`spec.probes.startupProbe` | Startup probe | corev1.Probe |
+`spec.imagePullSecrets` | Image pull secrets | corev1.LocalObjectReference |
+`spec.env` | Environment variables | corev1.EnvVar |
+`spec.resources` | Resource requests and limits | corev1.ResourceRequirements |
+`spec.bootstrap.agentConfig.plugins.logging.file.props` | Properties of the agent logging file plugin | map[string]string |
+`spec.bootstrap.agentConfig.plugins.metrics.prometheus.host` | Listen host of the agent Prometheus metrics plugin | string |
+`spec.bootstrap.agentConfig.plugins.metrics.prometheus.port` | Listen port of the agent Prometheus metrics plugin | number |
+`spec.bootstrap.agentConfig.plugins.metrics.prometheus.props` | Properties of the agent Prometheus metrics plugin | map[string]string |
+`spec.bootstrap.agentConfig.plugins.tracing.openTracing.props` | Properties of the agent OpenTracing plugin | map[string]string |
+`spec.bootstrap.agentConfig.plugins.tracing.openTelemetry.props` | Properties of the agent OpenTelemetry plugin | map[string]string |
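+
+For illustration, the fragment below shows how a few of these optional fields might be set on a ComputeNode. This is a sketch: the field layout follows the table above, the probe and resources bodies use the standard Kubernetes `corev1` schemas, and all values are illustrative.
+
+```yaml
+# A minimal sketch of optional ComputeNode fields; values are examples only.
+spec:
+  probes:
+    readinessProbe:
+      tcpSocket:
+        port: 3307        # matches the exposed proxy port
+      initialDelaySeconds: 10
+      periodSeconds: 5
+  resources:
+    limits:
+      cpu: "2"
+      memory: 2Gi
+  env:
+  - name: TZ
+    value: UTC
+```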
+
+#### Instance Configuration
+
+The following is a basic instance configuration of the ComputeNode CRD, which brings up a three-node ShardingSphere Proxy cluster.
+
+```yaml
+apiVersion: shardingsphere.apache.org/v1alpha1
+kind: ComputeNode
+metadata:
+  labels:
+    app: foo
+  name: foo
+spec:
+  storageNodeConnector:
+    type: mysql
+    version: 5.1.47
+  serverVersion: 5.4.0
+  replicas: 3
+  selector:
+    matchLabels:
+      app: foo
+  portBindings:
+  - name: server
+    containerPort: 3307
+    servicePort: 3307
+    protocol: TCP
+  serviceType: ClusterIP
+  bootstrap:
+    serverConfig:
+      authority:
+        privilege:
+          type: ALL_PERMITTED
+        users:
+        - user: root@%
+          password: root
+      mode:
+        type: Cluster
+        repository:
+          type: ZooKeeper
+          props:
+            timeToLiveSeconds: "600"
+            server-lists: ${PLEASE_REPLACE_THIS_WITH_YOUR_ZOOKEEPER_SERVICE}
+            retryIntervalMilliseconds: "500"
+            operationTimeoutMilliseconds: "5000"
+            namespace: governance_ds
+            maxRetries: "3"
+      props:
+        proxy-frontend-database-protocol-type: MySQL
+```
+Note: a running ZooKeeper cluster is a prerequisite.
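+
+The `server-lists` placeholder in the example must point to a reachable ZooKeeper ensemble. As a sketch (the Service name and namespace below are assumptions, not defaults), a ZooKeeper Service named `zookeeper` in the `default` namespace would be referenced like this:
+
+```yaml
+      mode:
+        type: Cluster
+        repository:
+          type: ZooKeeper
+          props:
+            # Assumes a ZooKeeper Service "zookeeper" in namespace "default";
+            # replace with the address of your own ensemble.
+            server-lists: zookeeper.default:2181
+```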
+
+### StorageNode
+
+StorageNode is the Operator's description of a data source and provides data source lifecycle management. It must be used together with StorageProvider and currently supports AWS RDS and CloudNative PG, as shown below:
+
+![](../../../img/user-manual/sn-concepts-1.png)
+
+Note: StorageNode is an optional CRD; users can decide whether to manage data sources through StorageNode according to their actual needs.
+
+#### Operator Configuration
+
+Currently, using StorageNode requires the corresponding feature gate to be enabled in the Operator:
+
+```shell
+helm install [RELEASE_NAME] 
shardingsphere/apache-shardingsphere-operator-charts --set 
operator.featureGates.storageNode=true --set 
operator.storageNodeProviders.aws.region='' --set 
operator.storageNodeProviders.aws.accessKeyId='' --set 
operator.storageNodeProviders.aws.secretAccessKey='' --set 
operator.storageNodeProviders.aws.enabled=true
+```
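+
+Equivalently, via a Helm values file (a sketch: the key paths come from the `--set` flags above, and real AWS credentials should be injected at install time rather than committed to version control):
+
+```yaml
+# values.yaml: mirrors the --set flags above. Fill in the region and
+# credentials for your AWS account.
+operator:
+  featureGates:
+    storageNode: true
+  storageNodeProviders:
+    aws:
+      region: ""
+      accessKeyId: ""
+      secretAccessKey: ""
+      enabled: true
+```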
+
+#### Field Description
+
+##### Required Configuration
+
+Configuration item | Description | Type | Examples
+------------------ | ----------- | ---- | --------
+`metadata.name` | Name of the deployment plan | string | `foo`
+`metadata.namespace` | Namespace of the deployment plan | string | `shardingsphere-system`
+`spec.storageProviderName` | Name of the provisioner | string | `aws-rds-instance`
+
+##### Optional Configuration
+
+Configuration item | Description | Type | Examples
+------------------ | ----------- | ---- | --------
+`spec.storageProviderSchema` | Schema to initialize | string | `sharding_db`
+`spec.replicas` | Aurora cluster size | number | `2`
+
+#### Examples
+
+The following StorageNode configuration for AWS Aurora brings up an Aurora cluster:
+
+```yaml
+apiVersion: shardingsphere.apache.org/v1alpha1
+kind: StorageNode
+metadata:
+  name: storage-node-with-aurora-example
+  annotations:
+    "storageproviders.shardingsphere.apache.org/cluster-identifier": 
"storage-node-with-aurora-example"
+    "storageproviders.shardingsphere.apache.org/instance-db-name": "test_db"
+    # The following annotations are required for auto registration.
+    "shardingsphere.apache.org/register-storage-unit-enabled": "false" # If it 
needs auto registration, please set up 'true'.
+    "shardingsphere.apache.org/logic-database-name": "sharding_db"
+    "shardingsphere.apache.org/compute-node-name": 
"shardingsphere-operator-shardingsphere-proxy"
+spec:
+  schema: "test_db"
+  storageProviderName: aws-aurora-cluster-mysql-5.7
+  replicas: 2 # Currently only effective for AWS Aurora.
+```
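+
+To have the Operator register the provisioned instance as a storage unit automatically, the registration annotations above would be set as follows (a sketch based on the example; the compute node name must match your actual ComputeNode deployment):
+
+```yaml
+metadata:
+  annotations:
+    # Enables auto registration of the new storage unit.
+    "shardingsphere.apache.org/register-storage-unit-enabled": "true"
+    # Logic database into which the storage unit is registered.
+    "shardingsphere.apache.org/logic-database-name": "sharding_db"
+    # Name of the ComputeNode (Proxy) that receives the registration.
+    "shardingsphere.apache.org/compute-node-name": "shardingsphere-operator-shardingsphere-proxy"
+```
+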
+### StorageProvider
+
+StorageProvider declares the available suppliers for StorageNode, such as AWS RDS and CloudNative PG.
+
+#### Field Description
+
+##### Required Configuration
+
+Configuration item | Description | Type | Examples
+------------------ | ----------- | ---- | --------
+`metadata.name` | Name of the storage provider | string | `foo`
+`spec.provisioner` | Provisioner backing this provider | string | `storageproviders.shardingsphere.apache.org/aws-aurora`
+`spec.reclaimPolicy` | Reclaim policy of the provisioned resources | string | `Delete`
+`spec.parameters` | Provisioner-specific parameters | map[string]string |
+
+#### Examples
+
+The following declares a StorageProvider for AWS Aurora, including the relevant parameter settings:
+
+```yaml
+apiVersion: shardingsphere.apache.org/v1alpha1
+kind: StorageProvider
+metadata:
+  name: aws-aurora-cluster-mysql-5.7
+spec:
+  provisioner: storageproviders.shardingsphere.apache.org/aws-aurora
+  reclaimPolicy: Delete
+  parameters:
+    masterUsername: "root"
+    masterUserPassword: "root123456"
+    instanceClass: "db.t3.small"
+    engine: "aurora-mysql"
+    engineVersion: "5.7"
 ```
 
 ## Clean
 
 ```shell
 helm uninstall shardingsphere-cluster -n shardingsphere-operator
-kubectl delete crd shardingsphereproxies.shardingsphere.apache.org 
shardingsphereproxyserverconfigs.shardingsphere.apache.org
 ```
+
 ## Next
-In order to use the created shardingsphere-proxy cluster, you need to use 
[DistSQL](https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/distsql/usage/)
 to configure corresponding resources and rules, such as database resources, 
sharding rules, and so on.
+To use the created ShardingSphere-Proxy cluster, you need to use [DistSQL](https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/distsql/usage/) to configure the corresponding resources and rules, such as database resources and sharding rules.
