Zakelly commented on code in PR #26187:
URL: https://github.com/apache/flink/pull/26187#discussion_r1967121499


##########
docs/content/docs/libs/state_processor_api.md:
##########
@@ -585,13 +585,13 @@ CREATE TABLE state_table (
 ### Connector options
 
 #### General options
-| Option             | Required | Default | Type                                    | Description |
-|--------------------|----------|---------|----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| connector          | required | (none)  | String                                  | Specify what connector to use, here should be 'savepoint'. |
-| state.backend.type | required | (none)  | Enum Possible values: hashmap, rocksdb | Defines the state backend which must be used for state reading. This must match with the value which was defined in Flink job which created the savepoint or checkpoint. |
-| state.path         | required | (none)  | String                                  | Defines the state path which must be used for state reading. All file system that are supported by Flink can be used here. |
-| operator.uid       | optional | (none)  | String                                  | Defines the operator UID which must be used for state reading (can't be used together with `operator.uid.hash`). Either `operator.uid` or `operator.uid.hash` must be specified. |
-| operator.uid.hash  | optional | (none)  | String                                  | Defines the operator UID hash which must be used for state reading (can't be used together with `operator.uid`). Either `operator.uid` or `operator.uid.hash` must be specified. |
+| Option             | Required | Default | Type                                    | Description |
+|--------------------|----------|---------|----------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| connector          | required | (none)  | String                                  | Specify what connector to use, here should be 'savepoint'. |
+| state.backend.type | optional | (none)  | Enum Possible values: hashmap, rocksdb | Defines the state backend which must be used for state reading. This must match with the value which was defined in Flink job which created the savepoint or checkpoint. If not provided then it falls back to `state.backend.type`. |

Review Comment:
   ```suggestion
   | state.backend.type | optional | (none)  | Enum Possible values: hashmap, rocksdb | Defines the state backend which must be used for state reading. This must match with the value which was defined in Flink job which created the savepoint or checkpoint. If not provided then it falls back to `state.backend.type` in the Flink configuration. |
   ```
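
For context, a minimal sketch of how these connector options appear in a `CREATE TABLE` statement for the savepoint connector (the surrounding docs page defines a `state_table`); the column names, types, and state path below are placeholders and not taken from the PR:

```sql
-- Minimal sketch (placeholder schema and path): reading state via the 'savepoint' connector.
CREATE TABLE state_table (
  k BIGINT,
  v STRING
) WITH (
  'connector' = 'savepoint',            -- required
  'state.backend.type' = 'rocksdb',     -- optional after this change; falls back to the Flink configuration
  'state.path' = '/path/to/savepoint',  -- required; any Flink-supported file system
  'operator.uid' = 'my-operator-uid'    -- either operator.uid or operator.uid.hash must be set
);
```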


