Hi Jacob,
It’s a little bit of guesswork …
The disappearing records remind me a bit of a peculiarity of Oracle: each
statement (e.g. an INSERT) runs in an implicit transaction and hence needs to be
committed.
In Flink, committing transactions happens together with the checkpoint cycle, i.e.
this means the records only become visible in Oracle once a checkpoint completes,
so checkpointing has to be enabled for the JDBC sink to commit anything.
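A minimal sketch of what I mean, assuming a Java Table program (in the SQL client
the equivalent is setting 'execution.checkpointing.interval'); the 10 second
interval below is just an example value:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

    public class CheckpointedJdbcJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // As described above, the JDBC sink commits its writes as part of the
            // checkpoint cycle, so a checkpoint interval has to be configured.
            env.enableCheckpointing(10_000L); // example: checkpoint every 10 seconds
            StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
            // ... register the Oracle sink table and run the INSERT INTO from here
        }
    }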
Hi,
I want to make the tables created by Flink Table API/SQL durable and
permanent. To achieve this, I am trying the following basic example using
the JDBC Oracle connector. I have added both the Flink JDBC connector and the
Oracle JDBC driver jars to the Flink lib directory. I am using the Flink SQL
client to run the statements.
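Something along these lines (a sketch only: the connection URL, credentials and
columns are placeholders, and the same CREATE TABLE / INSERT statements can also
be pasted into the SQL client):

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class OracleJdbcExample {
        public static void main(String[] args) throws Exception {
            TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

            // Registers a Flink table backed by an existing Oracle table. The DDL
            // only stores metadata in the catalog; the physical ORDERS table must
            // already exist in Oracle.
            tEnv.executeSql(
                "CREATE TABLE orders_sink (" +
                "  id INT," +
                "  amount DOUBLE," +
                "  PRIMARY KEY (id) NOT ENFORCED" +
                ") WITH (" +
                "  'connector' = 'jdbc'," +
                "  'url' = 'jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1'," + // placeholder
                "  'table-name' = 'ORDERS'," +
                "  'driver' = 'oracle.jdbc.OracleDriver'," +
                "  'username' = 'flink'," +   // placeholder
                "  'password' = 'secret'" +   // placeholder
                ")");

            // Writes one row; with checkpointing enabled the JDBC sink commits it.
            tEnv.executeSql("INSERT INTO orders_sink VALUES (1, 9.99)").await();
        }
    }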
Hi,
We are using Flink 1.18 and we do a lot of stateful processing in our jobs
and persist large states. We are using RocksDB as the state backend and
write state to a filesystem or HDFS.
For now we are using POJO serialization. I find this simple to set up and
easy to use, as we would have lots of POJO classes.
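For context, a minimal sketch of the kind of class we mean (the field names are
made up): Flink uses its PojoSerializer when the class is public, has a public
no-argument constructor, and all fields are public or reachable via getters and
setters.

    // Illustrative only; the fields are placeholders.
    public class SensorReading {
        public String sensorId;
        public long timestamp;
        public double value;

        // Flink's PojoSerializer requires a public no-argument constructor.
        public SensorReading() {}

        public SensorReading(String sensorId, long timestamp, double value) {
            this.sensorId = sensorId;
            this.timestamp = timestamp;
            this.value = value;
        }
    }

One property that matters for long-lived RocksDB state is that the PojoSerializer
supports state schema evolution, so fields can be added or removed between
savepoints.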