I have to configure my project to use TWO S3 buckets:
1. S3 for the state backend
2. S3 for the FileSink
How can I configure two different S3 endpoints and different credentials,
one for each bucket?
When I checked the documentation, there is only a single property for the
S3 access key and the bucket endpoint.
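One possible approach, assuming the Hadoop-based S3 plugin (flink-s3-fs-hadoop) is in use: Flink mirrors flink-conf.yaml keys prefixed with "s3." into the underlying fs.s3a. configuration, and Hadoop's S3A supports per-bucket overrides, so each bucket can carry its own endpoint and credentials. A sketch, with placeholder bucket names and endpoints (the exact key spelling should be double-checked against the S3A per-bucket configuration docs):

    # flink-conf.yaml -- per-bucket S3A overrides, forwarded via the s3. prefix
    s3.bucket.state-bucket.endpoint: https://s3.region-a.example.com
    s3.bucket.state-bucket.access.key: STATE_ACCESS_KEY
    s3.bucket.state-bucket.secret.key: STATE_SECRET_KEY
    s3.bucket.sink-bucket.endpoint: https://s3.region-b.example.com
    s3.bucket.sink-bucket.access.key: SINK_ACCESS_KEY
    s3.bucket.sink-bucket.secret.key: SINK_SECRET_KEY

The state backend would then point at s3://state-bucket/... and the FileSink at s3://sink-bucket/..., each picking up its own settings.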
Hi,
The team is planning to upgrade from Flink 1.18 to Flink 1.20, and we were
mandated to upgrade to Java 17. However, the documentation says Java 17
support is *experimental*.
These changes will be rolled out to all environments, including *PRODUCTION*.
I have read that some have already tried Java 17 with Flink…
Hi,
I have upgraded our project to Flink 1.20 and JDK 17, but I noticed there
is no Kafka connector released specifically for Flink 1.20.
I am currently using the existing versions, but there is an intermittent
Kafka-related NoClassDefFoundError.
Where can I get the Kafka connector for Flink 1.20? Thanks
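The Kafka connector is now released separately from Flink, and the artifact version encodes both the connector release and the Flink minor version it was built against. A hedged pom.xml example (3.3.0-1.20 is, to my knowledge, a pairing published for Flink 1.20, but the current version should be verified on Maven Central):

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka</artifactId>
        <!-- format is "<connector version>-<flink version>" -->
        <version>3.3.0-1.20</version>
    </dependency>

An intermittent Kafka-related NoClassDefFoundError is often a sign of mixed connector/Flink versions on the classpath, so it is worth checking for an older flink-connector-kafka still being pulled in transitively.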
Hi,
I have a Flink job that consumes from Kafka and sinks to an API. I need
to ensure that my Flink job sends within the rate limit of 200 tps. We are
planning to increase the parallelism, but I do not know the right number to
set. Does 1 unit of parallelism equal 1 consumer? So if the limit is 200,
should we s…
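On the parallelism question: each parallel subtask of the Kafka source is one consumer, so a global budget of 200 tps has to be divided across subtasks, not multiplied by them. A minimal sketch of one way to enforce the split, assuming Guava's RateLimiter is acceptable as a dependency (the sink class and the vendor call are hypothetical):

    import com.google.common.util.concurrent.RateLimiter;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

    // Hypothetical sink that spreads a global TPS budget across subtasks.
    public class RateLimitedApiSink extends RichSinkFunction<String> {
        private static final double GLOBAL_TPS = 200.0; // budget across all subtasks
        private transient RateLimiter limiter;

        @Override
        public void open(Configuration parameters) {
            int parallelism = getRuntimeContext().getNumberOfParallelSubtasks();
            // Each subtask gets an equal share of the global budget.
            limiter = RateLimiter.create(GLOBAL_TPS / parallelism);
        }

        @Override
        public void invoke(String value, Context context) {
            limiter.acquire();     // blocks until a permit is available
            callVendorApi(value);  // placeholder for the real HTTP call
        }

        private void callVendorApi(String value) { /* ... */ }
    }

With this shape, raising the parallelism changes throughput headroom but not the overall rate cap.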
Hi,
Flink 1.18.0
Kafka Connector 3.0.1-1.18
Kafka v 3.2.4
JDK 17
I get an error in class
org.apache.flink.streaming.runtime.tasks.SourceStreamTask at
LegacySourceFunctionThread.run():
"java.util.concurrent.CompletableFuture@12d0b74 [Not completed, 1
dependents]"
I am using the FlinkKafkaConsumer.
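FlinkKafkaConsumer is the legacy SourceFunction-based consumer (hence LegacySourceFunctionThread in the stack trace) and has been deprecated for several releases; moving to KafkaSource avoids the legacy source path entirely. A minimal sketch, with topic, servers, and group id as placeholders:

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class KafkaSourceJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            KafkaSource<String> source = KafkaSource.<String>builder()
                    .setBootstrapServers("kafka:9092")  // placeholder
                    .setTopics("my-topic")              // placeholder
                    .setGroupId("my-group")             // placeholder
                    .setStartingOffsets(OffsetsInitializer.earliest())
                    .setValueOnlyDeserializer(new SimpleStringSchema())
                    .build();

            env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source")
               .print();

            env.execute("kafka-source-sketch");
        }
    }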
Hi,
I upgraded the project to Flink 1.18.0 and Java 17. I am also using
flink-connector-kafka 3.0.1-1.18 from the Maven repository.
However, running it shows the error:
Unable to make field private final java.lang.Object[]
java.util.Arrays$ArrayList.a accessible: module java.base does not "opens
java.util" to unnamed module…
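That failure is the Java 17 module system refusing reflective access. One way through it, assuming JVM options can be set in flink-conf.yaml (the default configuration shipped with Flink 1.18 already carries the full list of required --add-opens/--add-exports flags, so copying that over may be enough):

    # flink-conf.yaml -- open the package named in the error to unnamed modules
    # (only the single flag from this error is shown; the shipped 1.18 default
    #  config contains the complete list)
    env.java.opts.all: --add-opens=java.base/java.util=ALL-UNNAMED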
> https://github.com/apache/flink-connector-kafka/blob/ea4fac3966c84f4cae8b80d70873254f03b1c333/pom.xml#L53
>
> And you can download it from here:
> https://flink.apache.org/downloads/#apache-flink-kafka-connector-301
>
> Best,
> Junrui
>
> patricia lee wrote on Thu, Nov 9, 2023 at 16:00:
>
>> Hi,
>>
>> I am upgrading my project to
Hi,
I am upgrading my project to Flink 1.18, but it seems the Kafka connector
for 1.18 is not available yet?
I couldn't pull the flink-connector-kafka jar file. But when I checked the
Maven repo, the versions available were:
3.0.1-1.18
3.0.1-1.17
But the documentation says -1.18.
Questions:
1. Is 3.0.1-1.18
Hi,
I'd like to ask about the behavior I am getting.
I am using Kafka as a source with a TumblingProcessingTime window.
When I run with a parallelism of 1 but submit 2 instances of the same jar
to the Flink server, the data consumed by the 2 jobs is the same
(duplicated), even though they have the same Kafk…
Hi,
Some of my colleagues are using a Flink 1.17.1 server but with projects
built against Flink 1.8 libraries; so far the projects have been working
fine without issue for a month now.
Will there be any issue that we are just not aware of if we continue with
this kind of setup? Appreciate any response.
Regards
Hi,
Can ProcessWindowFunctions not have more than 1 parallelism? Whenever I
set it to 2, I receive the error message "The parallelism of non
parallel operator must be 1."
dataKafka = KafkaSource(datasource)
    .parallelism(2)
    .rebalance();
dataKafka.windowAll(GlobalWindows.create()).trigge…
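The error is expected: windowAll() is a non-parallel operation, so its parallelism must stay at 1 regardless of the upstream setting. Keying the stream first gives parallel windows. A rough sketch continuing the snippet above (the key selector, window size, and function name are placeholders):

    // Keyed windows run in parallel, partitioned by key; windowAll()
    // funnels every record through a single subtask.
    dataKafka
            .keyBy(record -> record.getVendorId())                      // placeholder key
            .window(TumblingProcessingTimeWindows.of(Time.seconds(10))) // placeholder size
            .process(new MyProcessWindowFunction())                     // hypothetical function
            .setParallelism(2);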
I initially used the generic base sink / the RichAsyncFunction, but these
two were not applicable to my use case.
I implemented a CompletableFuture that sends data via sendBatch() to the
vendor API.
Is there a built-in API that supports retry with a custom method in a rich
sink function?
Regards,
Pat
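As far as I know there is no retry hook built into RichSinkFunction itself, but since Flink 1.16 the async I/O operator has native retry support, which may fit if the sendBatch() call can live in a RichAsyncFunction rather than in the sink. A sketch, assuming Flink >= 1.16 (SendBatchAsyncFunction and the Request/Response types are hypothetical):

    import java.util.concurrent.TimeUnit;
    import org.apache.flink.streaming.api.datastream.AsyncDataStream;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.async.AsyncRetryStrategy;
    import org.apache.flink.streaming.util.retryable.AsyncRetryStrategies;
    import org.apache.flink.streaming.util.retryable.RetryPredicates;

    // Fixed-delay retry: up to 3 attempts, 1s apart, retried on an empty
    // result or on any exception; the async operator applies it by itself.
    AsyncRetryStrategy<Response> retryStrategy =
            new AsyncRetryStrategies.FixedDelayRetryStrategyBuilder<Response>(3, 1000L)
                    .ifResult(RetryPredicates.EMPTY_RESULT_PREDICATE)
                    .ifException(RetryPredicates.HAS_EXCEPTION_PREDICATE)
                    .build();

    DataStream<Response> results = AsyncDataStream.unorderedWaitWithRetry(
            input,                        // DataStream<Request>, defined elsewhere
            new SendBatchAsyncFunction(), // hypothetical RichAsyncFunction
            5, TimeUnit.SECONDS,          // per-attempt timeout
            100,                          // max in-flight requests
            retryStrategy);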
Hi,
I have created a counter of records in my RichSinkFunction:
myCounter.inc()
I can see the value in the Job Manager web UI > Running Jobs > sink
function > task > Metrics. However, I am not able to see it in my
Prometheus web UI.
I am running Flink in Docker locally as well as in pr…
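If the counter shows up in the web UI, the metric itself is fine and the gap is usually in the reporter wiring: the Prometheus reporter has to be configured on every JobManager and TaskManager, the flink-metrics-prometheus jar has to be available, and Prometheus has to scrape each container's port. A sketch for flink-conf.yaml, assuming the factory-style reporter configuration of recent releases (the reporter name "prom" and the port range are arbitrary):

    metrics.reporter.prom.factory.class: org.apache.flink.metrics.prometheus.PrometheusReporterFactory
    # each JM/TM takes the first free port in the range; scrape all of them
    metrics.reporter.prom.port: 9250-9260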
…external 3rd party.
Regards,
P
On Thu, Sep 7, 2023, 9:38 PM patricia lee wrote:
> Apology.
>
>
> The question is, from our understanding, we do not need to implement the
> counter for the numRecordsOutPerSecond metric explicitly in our code, as
> this metric is exposed automatically?
We tried to check this metric in our code, but it doesn't increment.
Regards,
Pat
On Thu, Sep 7, 2023, 7:08 PM liu ron wrote:
> Hi,
>
> What's your question?
>
> Best
> Ron
>
> patricia lee wrote on Thu, Sep 7, 2023 at 14:29:
>
>> Hi flink users,
>>
>> I
Hi flink users,
I used async I/O (RichAsyncFunction) for sending 100 txns to a 3rd party.
I checked the runtimeContext and it has the metric numRecordsSent. We want
to expose this metric to our Prometheus server so that we can monitor how
many records we are sending per second. The reason why we ne…
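The built-in numRecords* I/O metrics are registered by the runtime itself and are not something user code increments; if the number of records actually handed to the vendor is what matters, one option is registering a custom Meter in open() and marking it per sent record. A minimal sketch (class and metric names are hypothetical):

    import java.util.Collections;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.metrics.Meter;
    import org.apache.flink.metrics.MeterView;
    import org.apache.flink.streaming.api.functions.async.ResultFuture;
    import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

    public class VendorSendFunction extends RichAsyncFunction<String, String> {
        private transient Meter sendRate;

        @Override
        public void open(Configuration parameters) {
            // MeterView derives a per-second rate over a 60s window.
            sendRate = getRuntimeContext().getMetricGroup()
                    .meter("vendorRecordsSentPerSecond", new MeterView(60));
        }

        @Override
        public void asyncInvoke(String record, ResultFuture<String> resultFuture) {
            sendRate.markEvent(); // one event per record handed to the vendor
            // ... issue the actual vendor call asynchronously here ...
            resultFuture.complete(Collections.singleton(record));
        }
    }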
I'd like to ask if there is a way to send data to a vendor (an SDK plugin,
which is also an HTTP request) asynchronously in Flink 1.17?
After transformation of the data, I usually collate it as a List to my
custom SinkFunction. I initialized a CompletableFuture inside the invoke()
method. However…
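Flink's async I/O operator is the usual fit here: instead of starting a CompletableFuture inside a sink's invoke(), the SDK's future can complete Flink's ResultFuture inside a RichAsyncFunction, and AsyncDataStream runs it with bounded in-flight capacity. A sketch (MyEvent, sendToVendor(), and the timeout values are placeholders):

    import java.util.Collections;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.TimeUnit;
    import org.apache.flink.streaming.api.datastream.AsyncDataStream;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.async.ResultFuture;
    import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

    public class VendorHttpFunction extends RichAsyncFunction<MyEvent, String> {
        @Override
        public void asyncInvoke(MyEvent event, ResultFuture<String> resultFuture) {
            CompletableFuture
                    .supplyAsync(() -> sendToVendor(event)) // placeholder SDK call
                    .whenComplete((response, error) -> {
                        if (error != null) {
                            resultFuture.completeExceptionally(error);
                        } else {
                            resultFuture.complete(Collections.singleton(response));
                        }
                    });
        }

        private String sendToVendor(MyEvent event) { /* vendor SDK call */ return "ok"; }
    }

    // wiring, inside the job's main method:
    // DataStream<String> responses = AsyncDataStream.unorderedWait(
    //         events, new VendorHttpFunction(), 5, TimeUnit.SECONDS, 100);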
> [1]
> https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/datastream/operators/windows/#keyed-vs-non-keyed-windows
>
> [2]
> https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/datastream/operators/windows/#processwindowfunction
>
> [3]
> https://nightlies.apache.org/flink/flink-docs-release-1
Hi,
I have a requirement to send data to a third party with a limited number
of elements per request, with the flow below:
kafkasource
mapToVendorPojo
processfunction
sinkToVendor
My implementation is that I continuously add the elements to my ListState
in the ProcessFunction, and once it reaches 100…
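A rough shape of that pattern, assuming a keyed stream (keyed state such as ListState needs a keyed context) and a hypothetical VendorPojo type: buffer until the batch size is reached, then emit the whole batch downstream and clear. A processing-time timer is worth adding on top so a partial batch eventually flushes too.

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.flink.api.common.state.ListState;
    import org.apache.flink.api.common.state.ListStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    // Buffers events per key and emits them as one batch of BATCH_SIZE.
    public class BatchingFunction
            extends KeyedProcessFunction<String, VendorPojo, List<VendorPojo>> {
        private static final int BATCH_SIZE = 100;
        private transient ListState<VendorPojo> buffer;

        @Override
        public void open(Configuration parameters) {
            buffer = getRuntimeContext().getListState(
                    new ListStateDescriptor<>("buffer", VendorPojo.class));
        }

        @Override
        public void processElement(VendorPojo value, Context ctx,
                Collector<List<VendorPojo>> out) throws Exception {
            buffer.add(value);
            List<VendorPojo> batch = new ArrayList<>();
            buffer.get().forEach(batch::add);
            if (batch.size() >= BATCH_SIZE) {
                out.collect(batch); // hand the full batch to the vendor sink
                buffer.clear();
            }
        }
    }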
> …submitting my Flink application by RestClusterClient for a long time.
>
> Or you can use the CLI by starting a subprocess.
>
> Best
> Jiadong Lu
>
> On 2023/8/17 23:07, jiadong.lu wrote:
> > Hi Patricia
> >
> > Have you tried the url path of '/v1
…endpoint. Is this the expected behavior?
Regards,
Patricia
On Mon, Aug 14, 2023, 5:07 PM patricia lee wrote:
> Hi,
>
> I set web.ui.submit=false; after that, uploading jar files via the
> rest endpoint is now throwing 404. In the documentation it says:
>
> "Even it is disabl…
Hi,
I set web.ui.submit=false; after that, uploading jar files via the REST
endpoint is now throwing 404. The documentation says:
"Even when it is disabled, session clusters still accept jobs through REST
requests (HTTP calls). This flag only guards the feature to upload jobs in
the UI."
I a…
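For what it's worth, the option documented for this is web.submit.enable, and my understanding of the quoted sentence is: disabling it removes the /jars upload-and-run REST handlers that back the UI (hence the 404), while job submission through the /jobs endpoint, which the CLI and RestClusterClient use, keeps working. A sketch:

    # flink-conf.yaml
    # Disables the jar upload/run handlers used by the web UI; the same
    # handlers serve the /jars REST paths, which then return 404. Job
    # submission via /jobs (CLI, RestClusterClient) remains available.
    web.submit.enable: false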
Hi,
I was advised to upgrade the JDK of our Flink 1.17 to 17. However, the
documentation only says, in bold, "Java 11".
Will Java 17 support start with the Flink 1.18 release?
Thanks
…does not authenticate the client, and the
recommendation is to use a proxy.
Thanks!
Regards,
Patricia
On Mon, Jul 10, 2023, 1:33 PM patricia lee wrote:
> Hi,
>
> I just wanted to confirm whether there is really role-based access in
> Flink? We have linked it to our LDAP, but the require…
Hi,
I just wanted to confirm whether there is really role-based access in
Flink? We have linked it to our LDAP, but the requirement is that only the
administrators should be able to upload a jar file.
I am reading the documentation but I couldn't find it; maybe I missed it.
Regards,
Patricia
Hi,
From Flink 1.8 to 1.17, enableTtlCompactionFilter() has been removed.
I have seen some examples that build a factory of options to pass as
arguments for settings; is this the right approach? If not, what is the
best way to enable the compaction filter in the RocksDB state backend?
Thanks
Regards
Hi,
I am currently migrating our Flink project from 1.8 to 1.17.
cleanupInRocksdbCompactFilter() now accepts queryTimeAfterNumEntries as a
parameter. The question is how we would know the right value. We set it to
1000 temporarily; is there a default value to set?
Regards,
Patricia
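On both compaction-filter questions above: with the old enableTtlCompactionFilter() switch gone, cleanup is configured per state descriptor through StateTtlConfig, and 1000 is the value Flink uses by default for the query count, so the temporary setting matches the default. A sketch, assuming Flink 1.17 (state name and TTL are placeholders):

    import org.apache.flink.api.common.state.StateTtlConfig;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.time.Time;

    StateTtlConfig ttlConfig = StateTtlConfig
            .newBuilder(Time.days(1)) // the TTL itself; placeholder value
            // Re-query the current timestamp after every 1000 processed
            // entries; 1000 is also the default when no value is given.
            .cleanupInRocksdbCompactFilter(1000)
            .build();

    ValueStateDescriptor<String> descriptor =
            new ValueStateDescriptor<>("my-state", String.class);
    descriptor.enableTimeToLive(ttlConfig);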
Hi,
I am migrating our old Flink from 1.8 to 1.17.
So far I am just adjusting the classes that were removed, such as
SplitStream and OutputSelector.
I just wanted to ask whether there is a specific version to gradually
upgrade through, in a correct way, that I do not know of yet? Thanks
Regards,
Patricia
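On the removed SplitStream/OutputSelector specifically: side outputs are their replacement. A minimal sketch (tag name and element types are placeholders):

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
    import org.apache.flink.streaming.api.functions.ProcessFunction;
    import org.apache.flink.util.Collector;
    import org.apache.flink.util.OutputTag;

    // Anonymous subclass so Flink can capture the tag's generic type.
    final OutputTag<Integer> oddTag = new OutputTag<Integer>("odd") {};

    SingleOutputStreamOperator<Integer> evens = input // DataStream<Integer>
            .process(new ProcessFunction<Integer, Integer>() {
                @Override
                public void processElement(Integer value, Context ctx,
                        Collector<Integer> out) {
                    if (value % 2 == 0) {
                        out.collect(value);        // main output
                    } else {
                        ctx.output(oddTag, value); // side output, replaces split/select
                    }
                }
            });

    DataStream<Integer> odds = evens.getSideOutput(oddTag);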
Flink 1.17
Elasticsearch 8.1.1
Description:
Upgrading to Flink 1.17 from 1.8. ElasticsearchSink is already deprecated,
and I am using flink-connector-elasticsearch7.
This throws an IOException, unable to parse the response body. But when I
downgraded to Elasticsearch 7.17, it worked.
Question:
I…