Hi Mike,
Which version of Flink did you use? Could you try Flink 1.14, which enables
logging of RocksDB [1][2], to see what is reported in the RocksDB log? From my
experience, this is caused by waiting for a resource (maybe a column family) to
close when closing the DB, and you should not meet this problem
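For reference, RocksDB's native log can be switched on through flink-conf.yaml in 1.14. The option name below is from memory of the 1.14 configuration reference, so double-check it against the release docs before relying on it:

```yaml
# Before this option is set, Flink suppresses RocksDB's own LOG output;
# raising the level to INFO_LEVEL makes RocksDB write its LOG file,
# which is what you would inspect for the close/shutdown behaviour.
state.backend.rocksdb.log.level: INFO_LEVEL
```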
Hi Yang, Roman,
Thanks for the information and sorry for the late reply. Seems like the
Kubernetes node restarted during the Flink finalization stage.
I think that is the root cause.
Regards,
Oscar
On Wed, Oct 27, 2021 at 4:20 PM Yang Wang wrote:
> Hi,
>
> I think Roman is right. It seems that
Hi,
I am using the state processing API to examine a savepoint. My code works fine
when I use a HashMapStateBackend, but for larger savepoints I don't have enough
memory, so I need to use an EmbeddedRocksDBStateBackend. Even then, I am able to
process some smaller states, but this one:
operatorID,p
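For context, the switch between the two backends in the State Processor API is a one-argument change when loading the savepoint. This is only a sketch against the Flink 1.14 DataSet-based API; the savepoint path, operator uid, and reader function are placeholders, not values from the thread:

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.state.api.ExistingSavepoint;
import org.apache.flink.state.api.Savepoint;

public class SavepointInspector {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // HashMapStateBackend holds all state on the heap; for savepoints that
        // don't fit in memory, EmbeddedRocksDBStateBackend spills to local disk.
        ExistingSavepoint savepoint = Savepoint.load(
                env,
                "s3://bucket/path/to/savepoint",      // placeholder path
                new EmbeddedRocksDBStateBackend());

        // The reader function (extending KeyedStateReaderFunction) and the
        // operator uid are placeholders for whatever the job actually uses.
        DataSet<String> rows = savepoint.readKeyedState(
                "my-operator-uid",
                new MyKeyedStateReader());

        rows.print();
    }
}
```

The point of the sketch: only the third argument to `Savepoint.load` changes when moving off the heap backend; the reading code itself stays the same.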
Thanks for the info, Yang! I'm using Finalizer and the labels to handle the
deletion.
On Mon, Oct 25, 2021 at 3:56 AM Yang Wang wrote:
> Hi Weiqing,
>
> > Why does Flink not set the owner reference of HA-related ConfigMaps to the
> JobManager deployment? It would be easier for users to clean up.
> The maj
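Since the thread settles on using labels for deletion: the Flink Kubernetes HA ConfigMaps carry labels that make a bulk cleanup a one-liner. The label keys below follow the Flink Kubernetes HA documentation, but verify them against your cluster's ConfigMaps first; cluster-id and namespace are placeholders:

```shell
# Delete the leftover HA ConfigMaps for a cluster once it is fully gone.
# Run `kubectl get configmap -n <namespace> --show-labels` first to confirm
# the label keys on your ConfigMaps match these.
kubectl delete configmap \
  -l app=<cluster-id>,configmap-type=high-availability \
  -n <namespace>
```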
You can fork the repo into your GitHub account (or download it and import it
into your own Git hosting solution). Then change
https://github.com/klarna-incubator/flink-connector-dynamodb/blob/master/pom.xml#L54
to 8 and try to build it. Fix whatever pops up - it should be minor things;
stackoverflow usually he
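The linked pom line pins the Java release; a fork would lower it to 8 along the lines below. This is only illustrative: the actual property name and structure are whatever line 54 of the linked pom.xml defines, not necessarily these:

```xml
<!-- Hypothetical sketch: lower the compile target from 11 to 8.
     Use the property actually defined at the linked pom.xml line. -->
<properties>
  <maven.compiler.source>8</maven.compiler.source>
  <maven.compiler.target>8</maven.compiler.target>
</properties>
```

After the change, `mvn clean install` will surface any language-level or API incompatibilities to fix one by one.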