I am trying to stop the job by triggering a savepoint, but it's
failing with the error below.
./bin/flink stop --savepointPath gs://staging-data-flink/flink-1-16-2/savepoints/ 3a912091b13c446c0d359074414db1db
It works if I just trigger the savepoint without stopping the job:
./bin/flink
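For reference, a rough sketch of the two CLI forms involved (the bucket path and job ID below are placeholders, not the real ones):

# Stop the job and take a savepoint in one step; the target directory follows
# --savepointPath and the job ID is a separate positional argument.
./bin/flink stop --savepointPath gs://<bucket>/savepoints <jobId>

# Trigger a savepoint only, leaving the job running.
./bin/flink savepoint <jobId> gs://<bucket>/savepoints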
On Fri, Jun 23, 2023 at 11:54 PM Shrihari R wrote:
> Hi All,
>
> I am trying to stop the job by triggering a savepoint, but it's
> failing with the error attached below.
>
> *Command Used*
> ./bin/flink stop --savepointPath
> gs://staging-data-flink/flink-1-16-2/savepoints/
> 3a912091b13c446c0d
https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/internals/filesystems/
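As a sketch, assuming the flink-gs-fs-hadoop plugin that ships in opt/ of the 1.16.2 distribution: the gs:// scheme is only available if that plugin is on the plugin path and credentials can be found, e.g.:

# Enable the GCS filesystem plugin (paths and version are illustrative).
mkdir -p ./plugins/gs-fs-hadoop
cp ./opt/flink-gs-fs-hadoop-1.16.2.jar ./plugins/gs-fs-hadoop/

# Point Flink at a service-account key via the standard Google credentials variable.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json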
On Fri, 23 Jun 2023 at 11:20, 李 琳 wrote:
>
> Hi all,
>
> Recently, I have been testing the Flink Kubernetes Operator. In the
> official example, the checkpoint/savepoint path is configured with a file
>
Hi team,
I have a Kafka topic named employee which uses a Confluent Avro schema and
emits payloads like the one below:
{
  "employee": {
    "id": "123456",
    "name": "sampleName"
  }
}
I am using the upsert-kafka connector to consume the events from the above
Kafka topic using a Flink SQL DDL statement.
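A minimal sketch of such a DDL, assuming a local broker and Confluent Schema Registry (the addresses, the key column, and its format are assumptions):

CREATE TABLE employee (
  employee_key STRING,
  -- nested value payload, matching {"employee": {"id": ..., "name": ...}}
  employee ROW<id STRING, name STRING>,
  -- upsert-kafka requires a primary key (NOT ENFORCED)
  PRIMARY KEY (employee_key) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'employee',
  'properties.bootstrap.servers' = 'localhost:9092',
  'key.format' = 'raw',
  'value.format' = 'avro-confluent',
  'value.avro-confluent.url' = 'http://localhost:8081'
);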
Hi all,
Recently, I have been testing the Flink Kubernetes Operator. In the official
example, the checkpoint/savepoint path is configured with a file system:
state.savepoints.dir: file:///flink-data/savepoints
state.checkpoints.dir: file:///flink-data/checkpoints
high-availability: org.apache
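For illustration (bucket name hypothetical), the same keys can also point at an object store instead of a local path; in a FlinkDeployment they go under spec.flinkConfiguration, and the matching filesystem plugin must be available in the image:

flinkConfiguration:
  state.savepoints.dir: gs://<bucket>/savepoints
  state.checkpoints.dir: gs://<bucket>/checkpoints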
Hi Lu,
I would say that if your application is stable and checkpoints do not
time out, there is no immediate need to do anything. The fact that the
consumer lag stays low means that you are able to keep up with the incoming
data. That said, the fact that you observe "constant backpressure" with
Hi,
I’m currently using the Opensearch Connector for the Table API. For testing,
I need to disable hostname verification. Is there a way to do this?
Thanks
Eugenio