Hi Igal,
thanks for these pointers!
I currently deploy a Flink jar by copying it into the Docker container,
but this is only a spike setup anyway. I will now discard it and switch
directly to working in Kubernetes.
So, just so I understand this right, the recommended production setup
would be:
* Build a Docker image
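For that first step, a minimal sketch of what such a Dockerfile could look
like for an embedded module, assuming the official apache/flink-statefun
base image (the image tag, module name and jar path below are placeholders
I made up, not the real ones):

    # Placeholder tag; use the StateFun release you actually build against.
    FROM apache/flink-statefun:2.2.0

    # The base image picks up embedded modules (ingresses, routers, embedded
    # functions) from /opt/statefun/modules/<module-name>/.
    RUN mkdir -p /opt/statefun/modules/my-module
    COPY target/my-module.jar /opt/statefun/modules/my-module/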
How do you deploy the job currently?
Are you using the data stream integration, or is it deployed as a Flink Jar [1]?
(Also, please note that the directories might be created, but without a
checkpoint interval set they will remain empty.)
Regarding your two questions:
It is true that you can theoretically share the
Hi Igal,
thanks for your quick and detailed reply! For me, this is the really
great defining feature of Stateful Functions: separating the
stream-processing "infrastructure" from the business logic code, possibly
maintained by a different team.
Regarding your points: I did add the checkpoint interval
Hi Jan,
The architecture you outlined sounds good, and we've successfully run
mixed architectures like this.
Let me try to address your questions:
1)
To enable checkpointing, you need to set the relevant values in your
flink-conf.yaml file:
execution.checkpointing.interval: (see [1])
state.che
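To make that concrete, a minimal flink-conf.yaml sketch; apart from
execution.checkpointing.interval named above, the keys and all values here
are placeholder assumptions and need to be adapted to your setup:

    # Trigger a checkpoint every 30 seconds (placeholder value).
    execution.checkpointing.interval: 30s
    # Durable location where checkpoint data is written (placeholder path).
    state.checkpoints.dir: s3://my-bucket/flink-checkpoints
    # State backend; rocksdb is a common choice for larger state.
    state.backend: rocksdb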
Hi,
I'm currently trying to set up a Flink Stateful Functions Job with the
following architecture:
* Kinesis Ingress (embedded)
* Stateful Function (embedded) that calls to and takes responses from an
external business logic function (a python worker similar to the one in
the python greeter e
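For illustration, a minimal sketch of the shape of such a worker, assuming
it follows the python greeter example (a remote function served behind a
Flask endpoint); the typename, route and port below are placeholders, not
the real names from my setup:

    from statefun import StatefulFunctions, RequestReplyHandler
    from flask import Flask, request, make_response

    functions = StatefulFunctions()

    # Placeholder typename for the business logic the embedded function calls.
    @functions.bind("example/business-logic")
    def business_logic(context, message):
        # Apply the business rules to the incoming payload here and send the
        # result back to the calling function or on to an egress.
        pass

    handler = RequestReplyHandler(functions)

    app = Flask(__name__)

    # HTTP endpoint that the StateFun runtime POSTs invocations to.
    @app.route('/statefun', methods=['POST'])
    def handle():
        response_data = handler(request.data)
        response = make_response(response_data)
        response.headers.set('Content-Type', 'application/octet-stream')
        return response

    if __name__ == "__main__":
        app.run(port=8000)  # placeholder port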