Hi Alan!
I think it should be possible to address this gap for most cases. We don't
have as robust a way of getting the last-state information for session
jobs as we do for applications, so it will be slightly less reliable
overall.
For session jobs the last checkpoint info has to be queried f
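(The message is cut off above, so the exact query mechanism isn't stated.) As a hedged illustration only: if the last checkpoint info is fetched from the session cluster's REST API, the lookup could look roughly like the sketch below. GET /jobs/:jobid/checkpoints is Flink's standard endpoint; the class and method names here are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical lookup of the last completed checkpoint for a session job.
// The response JSON carries latest.completed.external_path, which is what
// a last-state restore would need. JSON parsing is left out for brevity.
public class LastCheckpointLookup {
    public static String fetchCheckpointStats(String restBaseUrl, String jobId)
            throws java.io.IOException, InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(restBaseUrl + "/jobs/" + jobId + "/checkpoints"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // parse latest.completed.external_path from this JSON
    }
}
```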
Hi,
We want to use the Apache Flink Kubernetes operator to manage the
lifecycle of our Flink jobs in Flink session clusters, and we would like
the "last-state" upgrade feature for our use cases.
However, the latest official docs state that the "last-state" upgrade mode
is not supported in the session mode.
Hi Ahmed, hi Hong,
Thanks for your responses.
It sounds like the most promising approach would be to focus initially on
the Global Window with a custom trigger (a rough sketch follows below).
We don't need to be compatible with the aggregation used by the KPL
(actually, we would likely combine records in protobuf, and my impression i
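A minimal sketch of that Global Window plus custom trigger shape, not a definitive implementation: plain byte concatenation stands in for the protobuf combining, and the key selector and batch size of 100 are illustrative.

```java
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows;
import org.apache.flink.streaming.api.windowing.triggers.CountTrigger;
import org.apache.flink.streaming.api.windowing.triggers.PurgingTrigger;

public class BatchBeforeSink {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Stand-in source; in practice this is the real record stream.
        DataStream<byte[]> records = env.fromElements(new byte[]{1}, new byte[]{2});

        DataStream<byte[]> batched = records
                // Hypothetical partition key; records sharing a key are batched together.
                .keyBy((KeySelector<byte[], String>) r -> "partition-key")
                .window(GlobalWindows.create())
                // Fire after 100 elements, then purge so each element lands in
                // exactly one batch. Production use would typically also fire on
                // a timeout so a slow tail of records is not held back forever.
                .trigger(PurgingTrigger.of(CountTrigger.of(100)))
                // Stand-in for the protobuf combining: plain byte concatenation.
                .reduce((ReduceFunction<byte[]>) (a, b) -> {
                    byte[] out = new byte[a.length + b.length];
                    System.arraycopy(a, 0, out, 0, a.length);
                    System.arraycopy(b, 0, out, a.length, b.length);
                    return out;
                });

        batched.print(); // replace with the Kinesis sink
        env.execute("batch-before-sink");
    }
}
```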
Hi Michael,
Unfortunately, the new `KinesisDataStreamsSink` doesn't support aggregation
yet.
If you want to use native Kinesis aggregation, my suggestion is to use the
latest connector version that supports the KPL as a Table API sink, which
would be 1.14.x (see the sketch below). You could package the connector of
that versi
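A rough sketch of that suggestion, assuming the Flink 1.14.x Kinesis table connector (the last Table API version backed by the KPL). The stream name and region are illustrative, and the `sink.producer.*` option names should be verified against the 1.14 connector docs.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KplSinkTable {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        // sink.producer.* options are forwarded to the KPL's
        // KinesisProducerConfiguration; check exact names in the 1.14 docs.
        tEnv.executeSql(
                "CREATE TABLE kinesis_sink (payload STRING) WITH ("
                        + " 'connector' = 'kinesis',"
                        + " 'stream' = 'my-output-stream',"   // illustrative stream name
                        + " 'aws.region' = 'us-east-1',"
                        + " 'format' = 'raw',"
                        + " 'sink.producer.aggregation-enabled' = 'true'"
                        + ")");
    }
}
```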
Hi there,
Would you mind sharing the whole JM/TM log? It looks like the error log in
the previous email is not the root cause.
Best,
Biao Geng
ou...@139.com wrote on Mon, Apr 29, 2024 at 16:07:
> Hi all:
> When I ran a Flink SQL datagen source and wrote to JDBC, checkpoints kept
> failing with the following
Thanks, Biao Geng, for your response. Indeed, the 1.19 documentation uses
execution.savepoint.path, and restoration works with that configuration
name (a minimal example is sketched below).
https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/dev/table/sqlclient/#execute-sql-files
Regards
Keith
From: Biao Geng
Date: Friday, 26 A
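A minimal Table API equivalent of the SQL-file `SET 'execution.savepoint.path' = ...;` statement, with a placeholder savepoint path:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class RestoreFromSavepoint {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        // Equivalent of `SET 'execution.savepoint.path' = ...;` in a SQL file;
        // the path below is a placeholder.
        tEnv.getConfig().getConfiguration().setString(
                "execution.savepoint.path", "file:///tmp/savepoints/savepoint-xxxx");
        // Statements submitted from this environment now restore from that savepoint.
    }
}
```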
Hi all:
When I ran a Flink SQL datagen source and wrote to JDBC, checkpoints kept
failing with the following error log.
2024-04-29 15:46:25,270 ERROR
org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler
[] - Unhandled exception.
org.apache.flink.runtime.disp
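(The stack trace is truncated above.) For context, a hedged sketch of the setup being described: a datagen source inserted into a JDBC sink with checkpointing enabled. The JDBC URL, table names, and checkpoint interval are illustrative.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class DatagenToJdbc {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(10_000L); // checkpoint every 10s
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        tEnv.executeSql(
                "CREATE TABLE src (id BIGINT, name STRING) WITH ("
                        + " 'connector' = 'datagen', 'rows-per-second' = '100')");
        // Illustrative JDBC endpoint; the driver must be on the classpath.
        tEnv.executeSql(
                "CREATE TABLE dst (id BIGINT, name STRING) WITH ("
                        + " 'connector' = 'jdbc',"
                        + " 'url' = 'jdbc:mysql://localhost:3306/test',"
                        + " 'table-name' = 'dst')");
        tEnv.executeSql("INSERT INTO dst SELECT id, name FROM src");
    }
}
```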
Hi all,
We are currently using Flink 1.18.1 (AWS Managed Flink) and are writing to
Kinesis streams in several of our applications using the Table API.
In our use case, we would like to be able to aggregate multiple records
(rows) together and emit them in a single Kinesis record.
As far as I und
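(The message is cut off here.) As the earlier replies note, the sink itself doesn't aggregate, but one possible workaround at the SQL level is to batch several rows into one output row, and hence one Kinesis record, per window, for example with LISTAGG over a tumbling window. A sketch under assumed table definitions: `src` with a STRING `payload` column and a watermarked event-time column `ts`, and `kinesis_sink` with a single STRING `payload` column.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class BatchRowsPerWindow {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        // Assumes src (payload STRING, ts TIMESTAMP(3) with a watermark) and
        // kinesis_sink (payload STRING) have been created already.
        tEnv.executeSql(
                "INSERT INTO kinesis_sink "
                        + "SELECT LISTAGG(payload) AS payload " // built-in aggregate, ',' separator by default
                        + "FROM TABLE(TUMBLE(TABLE src, DESCRIPTOR(ts), INTERVAL '1' SECOND)) "
                        + "GROUP BY window_start, window_end");
    }
}
```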