Hi Ankur,
Where do you see Flink using/bundling Curl?
Best regards,
Martijn
On Wed, Oct 11, 2023 at 9:08 AM Singhal, Ankur wrote:
>
> Hi Team,
>
>
>
> Do we have any plans to update Flink to support curl 8.4.0, since earlier
> versions have severe vulnerabilities?
>
>
>
> Thanks & Regards,
>
Hi Martijn,
This is just one reference, but we are using it in multiple places.
https://github.com/apache/flink/blob/master/tools/ci/maven-utils.sh#L59
Although the hostname we are referring to here is hardcoded, so the risk can be mitigated.
Thanks and Regards,
Ankur Singhal
Hello everyone,
A little background on the job: relatively simple topology with a single
Kinesis ingress stream that utilizes EFO, a few transform operations, and then
sinking to DynamoDb.
Version: Flink 1.15
I’m running a Flink job on the AWS Managed Service, and am not sure if what I’m
runni
Hey Team,
Looking for some thoughts here:
* We have a Kinesis Producer that produces to a topic in another AWS account
* The producer allows for configurations to set credentials for that
account:
https://github.com/apache/flink-connector-aws/blob/main/flink-connector-aws/flink-connecto
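For cross-account setups like this, the connector's credentials are usually supplied as properties. The sketch below builds such a configuration, assuming the `aws.credentials.provider.*` key names used by flink-connector-aws's `AWSConfigConstants`; the role ARN and session name are placeholders, and the exact keys should be checked against the connector version in use.

```java
import java.util.Properties;

public class CrossAccountKinesisConfig {

    // Builds producer properties that assume an IAM role in another account.
    // Key names follow flink-connector-aws's AWSConfigConstants naming
    // (an assumption); the ARN and session name are placeholder values.
    public static Properties build() {
        Properties config = new Properties();
        config.setProperty("aws.region", "us-east-1");
        config.setProperty("aws.credentials.provider", "ASSUME_ROLE");
        config.setProperty("aws.credentials.provider.role.arn",
                "arn:aws:iam::123456789012:role/cross-account-producer");
        config.setProperty("aws.credentials.provider.role.sessionName",
                "flink-kinesis-producer");
        return config;
    }

    public static void main(String[] args) {
        // The resulting Properties object would be passed to the Kinesis sink builder.
        build().forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```

These properties would then be handed to the sink builder, so the base credential chain of the Flink cluster's own account is bypassed for this connector only.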
Hi,
What SQL Runtime are you referring to? Why do you need to get it?
Best,
Ron
Enric Ott <243816...@qq.com> wrote on Thu, Oct 12, 2023 at 14:26:
> Hi Team,
> Is there any approach to get flink sql runtime via api ?
> Any help would be appreciated.
>
For example, the operator state and the checkpoint listener of the Flink SQL
runtime. I'm trying to modify Flink SQL's compiled behavior programmatically
and get the corresponding Flink SQL runtime.
Hi,
You are using a special Kafka connector. From the definition on the website:
"as a sink, the upsert-kafka connector can consume a changelog stream. It will
write INSERT/UPDATE_AFTER data as normal Kafka messages value, and write DELETE
data as Kafka messages with null values (indicate tombstone
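That tombstone behavior can be illustrated with a small sketch of how a compacting consumer would materialize the sink's output: a non-null value upserts the key, and a null value deletes it. This is only an illustration of the semantics, not the connector's actual code.

```java
import java.util.HashMap;
import java.util.Map;

public class UpsertKafkaSketch {

    // Applies one changelog record to a keyed view, mirroring upsert-kafka
    // sink semantics: INSERT/UPDATE_AFTER arrive as normal messages, and
    // DELETE arrives as a message with a null value (a tombstone).
    public static void apply(Map<String, String> view, String key, String value) {
        if (value == null) {
            view.remove(key);   // tombstone: the key is deleted
        } else {
            view.put(key, value);   // upsert: last value per key wins
        }
    }

    public static void main(String[] args) {
        Map<String, String> view = new HashMap<>();
        apply(view, "user-1", "active");
        apply(view, "user-1", "inactive"); // UPDATE_AFTER overwrites
        apply(view, "user-1", null);       // DELETE removes the key
        System.out.println(view);          // prints "{}"
    }
}
```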
I have implemented a custom File Source (a wrapper around the native Flink File
Source) that signals NoSplits using a custom strategy, which terminates the
Flink app automatically.
Due to external restrictions, I am bound to using Flink 1.13.3 in Streaming
mode with Unbounded Input and cannot use the Batch