Hi Robin,

There’s a “Developer Guide” section in the Flink CDC documentation[1] where 
you can find more information about how the pipeline framework is implemented.

Flink CDC itself does not maintain any runtime information about pipeline 
jobs; a pipeline job is essentially a normal Java-written DataStream job 
submitted to the Flink cluster, with some extra runtime operators. So there is 
no dedicated “Flink CDC UI” for supervising pipeline jobs. The Flink Web UI is 
all that’s available, aside from third-party platforms such as Ververica 
Platform[2].

As for pipeline job lifecycle management, such as stopping and restarting, all 
standard Flink CLI commands (`./bin/flink [stop | cancel | run]`)[3] should 
work as expected. The Flink Job ID is logged each time you submit a new 
pipeline job.
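
For example, something along these lines (the job ID, savepoint directory and 
jar path below are placeholders to adapt to your own environment, not values 
from an actual setup):

  # Find the Job ID of the running pipeline job
  ./bin/flink list

  # Stop the job gracefully, taking a savepoint first
  ./bin/flink stop --savepointPath /tmp/flink-savepoints <job-id>

  # Or cancel it immediately without a savepoint
  ./bin/flink cancel <job-id>

  # Restarting from the savepoint depends on how the job was originally
  # submitted; with a plain jar submission it would look like:
  ./bin/flink run -s /tmp/flink-savepoints/<savepoint-dir> <your-job-jar>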

[1] 
https://nightlies.apache.org/flink/flink-cdc-docs-release-3.2/docs/developer-guide/understand-flink-cdc-api/
[2] https://www.ververica.com/platform
[3] https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/cli/

Regards,
Xiqian

From: Robin Moffatt via user <user@flink.apache.org>
Date: Friday, December 6, 2024 at 02:42
To: user@flink.apache.org <user@flink.apache.org>
Subject: Flink CDC operational questions
Are there any resources I can look at regarding the operational side of Flink 
CDC?

e.g. is the Flink web UI the only place to monitor what's going on, or do you 
have to look through the log files? Can you stop and restart a job, or will 
that trigger a fresh snapshot? etc

thanks!
