ElasticsearchConnector for ES 8.1x

2023-05-19 Thread patricia lee
Flink 1.17, Elasticsearch 8.1.1. Description: Upgrading from Flink 1.8 to 1.17. ElasticsearchSink is already deprecated, so I am using flink-connector-elasticsearch7. This throws an IOException: unable to parse response body. But when I downgraded to Elasticsearch 7.17 it worked. Question: I
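
For reference, a minimal sketch of the non-deprecated sink API in flink-connector-elasticsearch7 (host, index name and the MyEvent POJO are placeholders); note the connector is built against the Elasticsearch 7.x client, so behavior against an 8.x cluster is not guaranteed, which matches the observation that ES 7.17 works:

    import org.apache.flink.connector.elasticsearch.sink.Elasticsearch7SinkBuilder;
    import org.apache.flink.connector.elasticsearch.sink.ElasticsearchSink;
    import org.apache.http.HttpHost;
    import org.elasticsearch.client.Requests;
    import java.util.Map;

    // Sketch: build the new-style Elasticsearch 7 sink and attach it with sinkTo().
    ElasticsearchSink<MyEvent> esSink = new Elasticsearch7SinkBuilder<MyEvent>()
            .setHosts(new HttpHost("localhost", 9200, "http"))   // placeholder host
            .setBulkFlushMaxActions(100)
            .setEmitter((element, context, indexer) ->
                    indexer.add(Requests.indexRequest()
                            .index("my-index")                    // placeholder index
                            .source(Map.of("payload", element.toString()))))
            .build();

    stream.sinkTo(esSink);   // assumes a DataStream<MyEvent> named "stream"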

Migrating to Flink 1.17

2023-06-14 Thread patricia lee
Hi, I am migrating our old Flink from 1.8 to 1.17. So far I am just adjusting for the classes that were removed, such as SplitStream and OutputSelector. I just wanted to ask if there is a recommended path of intermediate versions to upgrade through gradually that I am not aware of yet? Thanks Regards, Patricia
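
For what it's worth, the usual replacement for the removed SplitStream/OutputSelector API is side outputs; a minimal sketch (tag name and the routing predicate are illustrative):

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
    import org.apache.flink.streaming.api.functions.ProcessFunction;
    import org.apache.flink.util.Collector;
    import org.apache.flink.util.OutputTag;

    // Side-output replacement for split()/OutputSelector.
    final OutputTag<String> rejectedTag = new OutputTag<String>("rejected") {};

    SingleOutputStreamOperator<String> mainStream = input.process(
            new ProcessFunction<String, String>() {
                @Override
                public void processElement(String value, Context ctx, Collector<String> out) {
                    if (value.startsWith("OK")) {
                        out.collect(value);              // main output
                    } else {
                        ctx.output(rejectedTag, value);  // side output
                    }
                }
            });

    DataStream<String> rejected = mainStream.getSideOutput(rejectedTag);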

CleanUpInRocksDbCompactFilter

2023-06-15 Thread patricia lee
Hi, I am currently migrating our Flink project from 1.8 to 1.17. cleanupInRocksdbCompactFilter() now takes a long queryTimeAfterNumEntries parameter. The question is how we would know the right value. We set it to 1000 temporarily; is there a default value to set? Regards, Patricia
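
A minimal sketch of how that parameter is typically passed (the TTL duration is a placeholder). As far as the 1.17 docs describe it, 1000 is also the value the documentation examples use: it means the compaction filter re-reads the current timestamp after every 1000 processed state entries, trading cleanup accuracy against overhead:

    import org.apache.flink.api.common.state.StateTtlConfig;
    import org.apache.flink.api.common.time.Time;

    StateTtlConfig ttlConfig = StateTtlConfig
            .newBuilder(Time.days(7))                 // placeholder TTL
            .cleanupInRocksdbCompactFilter(1000L)     // queryTimeAfterNumEntries
            .build();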

RocksdbStateBackend.enableTtlCompactionFilter

2023-06-20 Thread patricia lee
Hi, from Flink 1.8 to 1.17, enableTtlCompactionFilter() has been removed. I have seen some examples that build a factory of options to pass as arguments for the settings; is this the right approach? If not, what is the best way to enable the compaction filter in the RocksDB state backend? Thanks Regards
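
For context, if I remember the 1.10 release notes correctly, the RocksDB TTL compaction filter is always available in newer versions, so the backend-level enable flag went away; cleanup is configured per state descriptor through StateTtlConfig instead. A minimal sketch (state name, type and TTL are placeholders):

    import org.apache.flink.api.common.state.StateTtlConfig;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.time.Time;

    StateTtlConfig ttlConfig = StateTtlConfig
            .newBuilder(Time.hours(1))                // placeholder TTL
            .cleanupInRocksdbCompactFilter(1000L)     // compaction-filter cleanup
            .build();

    ValueStateDescriptor<String> descriptor =
            new ValueStateDescriptor<>("myState", String.class);
    descriptor.enableTimeToLive(ttlConfig);           // no backend-level enable call needed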

Role Based Access on Flink (Admin / Non Admin)

2023-07-09 Thread patricia lee
Hi, I just wanted to confirm whether there really is role-based access in Flink? We have linked it to our LDAP, but the requirement is that only administrators should be able to upload a jar file. I am reading the documentation but I couldn't find it, or maybe I missed it. Regards, Patri

Re: Role Based Access on Flink (Admin / Non Admin)

2023-07-10 Thread patricia lee
does not authenticate the client, and the recommendation is to use a proxy. Thanks! Regards, Patricia

Java 17 for Flink 1.17 supported?

2023-07-31 Thread patricia lee
Hi, I was advised to upgrade the JDK of our Flink 1.17 setup to 17. However, the documentation only says, in bold, "Java 11". Will Java 17 support start with the Flink 1.18 release? Thanks

404 Jar File Not Found w/ Web Submit Disabled

2023-08-14 Thread patricia lee
Hi, I disabled web submission (web.ui.submit=false); after that, uploading jar files via the REST endpoint now throws 404. The documentation says: "Even if it is disabled, session clusters still accept jobs through REST requests (HTTP calls). This flag only guards the feature to upload jobs in the UI." I a

Re: 404 Jar File Not Found w/ Web Submit Disabled

2023-08-14 Thread patricia lee
…endpoint. Is this the expected behavior? Regards, Patricia

Re: 404 Jar File Not Found w/ Web Submit Disabled

2023-08-18 Thread patricia lee
…ng my Flink application by RestClusterClient for a long time. Or you can use the CLI by starting a subprocess. Best, Jiadong Lu. On 2023/8/17 23:07, jiadong.lu wrote: Hi Patricia, have you tried the url path of '/v1
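
A rough sketch of the RestClusterClient route mentioned in this reply (addresses, jar path and parallelism are placeholders; exception handling omitted), under the assumption that the web-submit flag only disables the jar upload/run REST endpoints while JobGraph submission over REST stays available, as the documentation quote in the original message suggests:

    import java.io.File;
    import org.apache.flink.api.common.JobID;
    import org.apache.flink.client.deployment.StandaloneClusterId;
    import org.apache.flink.client.program.PackagedProgram;
    import org.apache.flink.client.program.PackagedProgramUtils;
    import org.apache.flink.client.program.rest.RestClusterClient;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.configuration.RestOptions;
    import org.apache.flink.runtime.jobgraph.JobGraph;

    Configuration conf = new Configuration();
    conf.setString(RestOptions.ADDRESS, "jobmanager-host");   // placeholder
    conf.setInteger(RestOptions.PORT, 8081);

    // Build the JobGraph locally from the jar instead of uploading the jar itself.
    PackagedProgram program = PackagedProgram.newBuilder()
            .setJarFile(new File("/path/to/job.jar"))          // placeholder jar
            .build();
    JobGraph jobGraph = PackagedProgramUtils.createJobGraph(program, conf, 1, false);

    try (RestClusterClient<StandaloneClusterId> client =
                 new RestClusterClient<>(conf, StandaloneClusterId.getInstance())) {
        JobID jobId = client.submitJob(jobGraph).get();        // submits via the /jobs endpoint
    }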

Rate Limit / Throttle Data to Send

2023-08-29 Thread patricia lee
Hi, I have a requirement to send data to a third party with a limited number of elements, with the flow below: kafkaSource -> mapToVendorPojo -> processFunction -> sinkToVendor. My implementation is that I continuously add the elements to a ListState in my ProcessFunction, and once it reaches 100

Re: Rate Limit / Throttle Data to Send

2023-08-31 Thread patricia lee
[1] https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/datastream/operators/windows/#keyed-vs-non-keyed-windows [2] https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/datastream/operators/windows/#processwindowfunction [3] https://nightlies.apache.org/flink/flink-docs-release-1
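
A sketch of the count-window approach those documentation links point to (VendorPojo, the key selector and VendorSink are placeholders taken from the flow described in the original question):

    import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
    import org.apache.flink.streaming.api.windowing.windows.GlobalWindow;
    import org.apache.flink.util.Collector;
    import java.util.ArrayList;
    import java.util.List;

    stream
        .keyBy(VendorPojo::getPartitionKey)          // placeholder key selector
        .countWindow(100)                            // emit a batch every 100 elements per key
        .process(new ProcessWindowFunction<VendorPojo, List<VendorPojo>, String, GlobalWindow>() {
            @Override
            public void process(String key, Context ctx,
                                Iterable<VendorPojo> elements, Collector<List<VendorPojo>> out) {
                List<VendorPojo> batch = new ArrayList<>();
                elements.forEach(batch::add);
                out.collect(batch);                  // hand the whole batch to the sink in one call
            }
        })
        .addSink(new VendorSink());                  // placeholder sink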

Send data asynchronously to a 3rd party via SinkFunction

2023-09-01 Thread patricia lee
I'd like to ask if there is a way to send data to a vendor (an SDK plugin, which is also an HTTP request) asynchronously in Flink 1.17? After transformation of the data, I usually collate it as a List into my custom SinkFunction. I initialized a CompletableFuture inside the invoke() method. However
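
One option in 1.17 is the built-in Async I/O operator instead of a future inside invoke(); a minimal sketch, where VendorClient and its sendAsync() stand in for the vendor SDK:

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.datastream.AsyncDataStream;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.async.ResultFuture;
    import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;
    import java.util.Collections;
    import java.util.concurrent.TimeUnit;

    DataStream<Response> responses = AsyncDataStream.unorderedWait(
            requests,                                    // upstream DataStream<Request>
            new RichAsyncFunction<Request, Response>() {
                private transient VendorClient client;   // placeholder vendor SDK client

                @Override
                public void open(Configuration parameters) {
                    client = new VendorClient();
                }

                @Override
                public void asyncInvoke(Request req, ResultFuture<Response> resultFuture) {
                    client.sendAsync(req).whenComplete((resp, err) -> {
                        if (err != null) {
                            resultFuture.completeExceptionally(err);
                        } else {
                            resultFuture.complete(Collections.singleton(resp));
                        }
                    });
                }
            },
            30, TimeUnit.SECONDS,   // timeout per request
            100);                   // max in-flight requests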

Async IO metrics for tps

2023-09-06 Thread patricia lee
Hi flink users, I used Async I/O (RichAsyncFunction) for sending 100 txns to a 3rd party. I checked the runtime context and it has a numRecordsSent metric; we want to expose this metric to our Prometheus server so that we can monitor how many records we are sending per second. The reason why we ne

Re: Async IO metrics for tps

2023-09-07 Thread patricia lee
…tried to check this metric in our code, but it doesn't increment. Regards, Pat. On Thu, Sep 7, 2023, 7:08 PM liu ron wrote: Hi, what's your question? Best, Ron

Re: Async IO metrics for tps

2023-09-10 Thread patricia lee
…external 3rd party. Regards, P. On Thu, Sep 7, 2023, 9:38 PM patricia lee wrote: Apology. The question is, from our understanding, we do not need to implement the counter for the numRecordsOutPerSecond metric explicitly in our code? As this metric is automatically exposed
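
If a custom per-second rate is still wanted in addition to the built-in numRecordsOutPerSecond, a meter can be registered explicitly; a small sketch inside the RichAsyncFunction (the metric name is illustrative):

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.metrics.Meter;
    import org.apache.flink.metrics.MeterView;

    private transient Meter vendorTxnRate;

    @Override
    public void open(Configuration parameters) {
        // MeterView(60): rate averaged over the last 60 seconds.
        vendorTxnRate = getRuntimeContext()
                .getMetricGroup()
                .meter("vendorTxnPerSecond", new MeterView(60));
    }

    // In asyncInvoke(), after a successful send:
    // vendorTxnRate.markEvent();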

Custom Metrics not showing in prometheus

2023-09-18 Thread patricia lee
Hi, I have created a counter of records in my RichSinkFunction with myCounter.inc(). I can see the value in the job manager web UI > running jobs > sink function > task > metrics. However, I am not able to see it in my Prometheus web UI. I am running Flink in Docker locally as well as Pr
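
For reference, a sketch of how the counter is usually wired so that it shows up under the operator's metric group (class, record type and metric name are placeholders); if it is visible in the web UI but not in Prometheus, the reporter configuration and the scraped port are the usual suspects:

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.metrics.Counter;
    import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

    public class VendorSink extends RichSinkFunction<MyRecord> {

        private transient Counter recordsSent;

        @Override
        public void open(Configuration parameters) {
            // Registered on the operator's metric group, so the Prometheus reporter
            // exposes it as a user-scope metric on the scraped endpoint.
            recordsSent = getRuntimeContext().getMetricGroup().counter("vendorRecordsSent");
        }

        @Override
        public void invoke(MyRecord value, Context context) {
            // ... send to vendor ...
            recordsSent.inc();
        }
    }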

Completable Future in RichSinkFunction with Retry

2023-09-21 Thread patricia lee
I initially used the generic base sink / RichAsyncFunction, but these two were not applicable to my use case. I implemented a CompletableFuture that sends data via sendBatch() to the vendor API. Is there a built-in API that supports retry with a custom method in a RichSinkFunction? Regards, Pat
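
As far as I know, the built-in retry support (AsyncRetryStrategies, used with the ...WithRetry variants of AsyncDataStream) only covers the Async I/O operator since 1.16; inside a plain RichSinkFunction, a bounded manual retry around the future is a common workaround. A sketch, with sendBatch() standing in for the existing vendor call and the retry policy purely illustrative:

    import java.util.List;
    import java.util.concurrent.TimeUnit;

    // Inside the RichSinkFunction: block on the future so a final failure fails the task
    // and lets Flink's restart strategy take over.
    private void sendWithRetry(List<VendorPojo> batch) throws Exception {
        int attempts = 0;
        while (true) {
            try {
                sendBatch(batch).get(30, TimeUnit.SECONDS);   // sendBatch(): CompletableFuture
                return;
            } catch (Exception e) {
                attempts++;
                if (attempts >= 3) {
                    throw e;                                  // give up after 3 attempts
                }
                Thread.sleep(1_000L * attempts);              // simple linear backoff
            }
        }
    }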

ProcessWindowFunction Parallelism

2023-09-26 Thread patricia lee
Hi, can ProcessWindowFunctions not have more than 1 parallelism? Whenever I set it to 2, I receive the error message "The parallelism of non parallel operator must be 1." dataKafka = kafkaSource(datasource).parallelism(2).rebalance(); dataKafka.windowAll(GlobalWindows.create()).trigge
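
The error is expected: windowAll() creates a non-parallel (all-windows) operator, so its parallelism must stay 1; to run the window with parallelism 2, the stream needs to be keyed first. A minimal sketch (key selector and window size are placeholders):

    import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    dataKafka
        .keyBy(Event::getKey)                                        // placeholder key selector
        .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))   // placeholder window size
        .process(new MyProcessWindowFunction())
        .setParallelism(2);                                          // allowed on a keyed window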

Flink 1.17.1 with 1.8 projects

2023-10-16 Thread patricia lee
Hi, some of my colleagues are using a Flink 1.17.1 server but with projects built against Flink 1.8 libraries; so far the projects have been working without issue for a month now. Will there be any issues that we are just not aware of if we continue with this kind of setup? Appreciate any response. Re

Job multiple instance vs job parallel

2023-10-23 Thread patricia lee
Hi, I'd like to ask about the behavior I am getting. I am using Kafka as a source with a TumblingProcessingTime window. When I set 1 parallelism in the config but submit 2 instances of the same jar to the Flink server, the data consumed by the 2 jobs is the same (duplicated), even though they have the same Kafk

FLINK CONNECTOR 1.18 and Kafka 2.7

2023-11-09 Thread patricia lee
Hi, I am upgrading my project to Flink 1.18, but it seems a Kafka connector 1.18.0 is not available yet? I couldn't pull the flink-connector-kafka jar file. But when I checked the Maven repo, the versions available were 3.0.1-1.18 and 3.0.1-1.17, while the documentation says -1.18. Questions: 1. Is 3.0.1-1.18

Re: FLINK CONNECTOR 1.18 and Kafka 3.4.1

2023-11-09 Thread patricia lee
…ctor-kafka/blob/ea4fac3966c84f4cae8b80d70873254f03b1c333/pom.xml#L53 And you can download it from here: https://flink.apache.org/downloads/#apache-flink-kafka-connector-301 Best, Junrui. patricia lee wrote on Thu, Nov 9, 2023 at 16:00: Hi, I am upgrading my project to

Apache Flink + Java 17 error module

2023-11-13 Thread patricia lee
Hi, I upgraded the project to Flink 1.18.0 and Java 17. I am also using flink-connector-kafka 3.0.1-1.18 from the Maven repository. However, running it shows the error: Unable to make field private final java.lang.Object[] java.util.Arrays$ArrayList.a accessible: module java.base does not "opens java.uti
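
That error is the JDK module system blocking reflective access (typically from Kryo serialization of java.util collections); the usual fix is passing --add-opens flags to the JVM. A sketch assuming a flink-conf.yaml-based setup (if I remember correctly, the 1.18 distribution's default configuration already ships a longer list of such flags, so comparing against it is worthwhile); for local IDE runs, the same flags would go into the VM options:

    env.java.opts.all: --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED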

Error FlinkConsumer in Flink 1.18.0

2023-11-23 Thread patricia lee
Hi, Flink 1.18.0, Kafka connector 3.0.1-1.18, Kafka 3.2.4, JDK 17. I get an error in class org.apache.flink.streaming.runtime.tasks.SourceStreamTask in LegacySourceFunctionThread.run(): "java.util.concurrent.CompletableFuture@12d0b74 [Not completed, 1 dependents]". I am using the FlinkKafkaConsumer.
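
FlinkKafkaConsumer is deprecated in favor of KafkaSource, which also avoids the legacy SourceStreamTask code path that the error points at; a minimal sketch of the replacement (broker, topic and group id are placeholders):

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
    import org.apache.flink.streaming.api.datastream.DataStream;

    KafkaSource<String> source = KafkaSource.<String>builder()
            .setBootstrapServers("broker:9092")                 // placeholder
            .setTopics("my-topic")                              // placeholder
            .setGroupId("my-group")                             // placeholder
            .setStartingOffsets(OffsetsInitializer.earliest())
            .setValueOnlyDeserializer(new SimpleStringSchema())
            .build();

    DataStream<String> stream =
            env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source");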

Parallelism and Target TPS

2024-01-31 Thread patricia lee
Hi, I have a Flink job that consumes from Kafka and sinks to an API. I need to ensure that my Flink job sends within the rate limit of 200 TPS. We are planning to increase the parallelism, but I do not know the right number to set. Does 1 parallelism equal 1 consumer? So if 200, should we s

Flink 1.20 and Flink Kafka Connector 3.2.0-1.19

2024-10-03 Thread patricia lee
Hi, I have upgraded our project to Flink 1.20 and JDK 17. But I noticed there is no Kafka connector for Flink 1.20. I currently use the existing connector versions, but there is an intermittent Kafka-related NoClassDefFoundError. Where can I get the Kafka connector for Flink 1.20? Thanks

Java 17 Support Flink 1.20

2024-11-05 Thread patricia lee
Hi, the team is planning to upgrade our Flink 1.18 to Flink 1.20, and we were mandated to upgrade to Java 17. However, the documentation says Java 17 support is *experimental*. These changes will be done in all environments including *PRODUCTION*. I have read that some have already tried Java 17 with Fli

AWS S3 - Sink and StateBackend

2024-12-18 Thread patricia lee
I have to configure my project to use TWO S3 buckets: 1) S3 for the state backend, 2) S3 for a FileSink. How can I configure 2 different S3 endpoints and different credentials, one for each bucket? When I checked the documentation, there is only a single property for the S3 access keys and bucket endpoint.
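
One avenue worth checking, as an assumption on my part rather than something confirmed by the Flink docs: the Hadoop S3A filesystem supports per-bucket settings (fs.s3a.bucket.<bucket>.endpoint, .access.key, .secret.key), and if the flink-s3-fs-hadoop plugin forwards fs.s3a.* keys from flink-conf.yaml, something like the following sketch could give each bucket its own endpoint and credentials (bucket names and values are placeholders, and the forwarding behavior should be verified):

    # state backend bucket
    fs.s3a.bucket.state-bucket.endpoint: https://s3.endpoint-a.example.com
    fs.s3a.bucket.state-bucket.access.key: STATE_ACCESS_KEY
    fs.s3a.bucket.state-bucket.secret.key: STATE_SECRET_KEY

    # file sink bucket
    fs.s3a.bucket.sink-bucket.endpoint: https://s3.endpoint-b.example.com
    fs.s3a.bucket.sink-bucket.access.key: SINK_ACCESS_KEY
    fs.s3a.bucket.sink-bucket.secret.key: SINK_SECRET_KEY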

Flink 2.0 Migration Execution registerTypes

2025-07-10 Thread patricia lee
Hi, we are currently migrating from Flink version 1.18 to Flink version 2.0. We have this configuration: StreamExecutionEnvironment env = new StreamExecutionEnvironment(); env.setRegisterTypes(MyClass.class); In Flink 2.0, if our understanding is correct, we'll use registerPojoType instead

Flink 2.0 registerTypes question

2025-07-10 Thread patricia lee
Hi, we are currently migrating our Flink projects from version 1.18 to 2.0. We have this part of the code where we register the model class: StreamExecutionEnvironment env = new StreamExecutionEnvironment(); env.setRegisterTypes(MyClass.class); Right now in Flink 2.0, I followed this

Re: Flink 2.0 Migration Execution registerTypes

2025-07-10 Thread patricia lee
…-38078 Best, Zhanghao Chen

Flink 2.0 Migration RocksDBStateBackend

2025-07-14 Thread patricia lee
Hi, we are migrating our Flink 1.20 to Flink 2.0. We have code in our project like this: RocksDBStateBackend rocksDb = new RocksDBStateBackend(config.getPath() + "/myrockdbpath"); rocksDb.setPredefinedOptions(PredefinedOptions.FLASH_SSD_OPTIMIZED); We followed the new way in Flink 2.0 in th
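
A sketch of the post-1.13 style that, as far as I understand, Flink 2.0 keeps: the state backend and the checkpoint storage are configured separately, and the predefined options go on the embedded backend (the checkpoint path is reused from the original code; the exact package of EmbeddedRocksDBStateBackend and the availability of these setters should be verified against the 2.0 flink-statebackend-rocksdb javadoc, since some settings may have moved to configuration options such as state.backend.rocksdb.predefined-options):

    // State backend and checkpoint storage are configured separately now.
    EmbeddedRocksDBStateBackend backend = new EmbeddedRocksDBStateBackend();
    backend.setPredefinedOptions(PredefinedOptions.FLASH_SSD_OPTIMIZED);

    env.setStateBackend(backend);
    env.getCheckpointConfig().setCheckpointStorage(config.getPath() + "/myrockdbpath");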

ElasticsearchConnector for ES version 9.x.x

2025-08-07 Thread patricia lee
Hi, we need to upgrade our Elasticsearch from 8.x.x to 9.x.x. The Flink project connects to ES 8.x.x, and we still use the RestHighLevelClient, which still has compatibility. Since we will be upgrading to ES 9.x.x, we are also upgrading our Flink 1.20 to Flink 2.0, and in the documentation, the conn