Hello Team,
We are using Flink version 1.16.3 and are planning to use the Flink
OpenSearch connector, which requires the
"opensearch-rest-high-level-client" dependency. The latest version of
opensearch-rest-high-level-client is 2.15, which does not work with the Flink
OpenSearch connector 1.x.
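For anyone hitting the same mismatch: as far as I can tell, the 1.x connector is built
against the OpenSearch 1.x REST high-level client, so the 2.15 client should not need to be
pulled onto its classpath. A minimal sketch of how the connector sink is typically wired up
in Java (host, index name and field name below are placeholders, not from this thread):

    // imports: org.apache.flink.connector.opensearch.sink.OpensearchSink,
    //          org.apache.flink.connector.opensearch.sink.OpensearchSinkBuilder,
    //          org.apache.http.HttpHost, org.opensearch.client.Requests, java.util.Collections
    OpensearchSink<String> sink = new OpensearchSinkBuilder<String>()
            .setHosts(new HttpHost("localhost", 9200, "http"))            // placeholder host
            .setEmitter((element, context, indexer) ->
                    indexer.add(Requests.indexRequest()
                            .index("my-index")                            // placeholder index
                            .source(Collections.singletonMap("data", element))))
            .build();
    stream.sinkTo(sink);   // "stream" is assumed to be a DataStream<String>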
Sending my email to join the Apache user mailing list.
Email: mszaci...@bloomberg.net
Hi Mate,
That option might be exactly what I need. Thanks!
Best regards,
Nathan T. A. Lewis
On Sun, 12 May 2024 05:27:10 -0600 czmat...@gmail.com wrote
Hi Nathan,
Job submissions for FlinkSessionJob resources will always be done by first
uploading the JAR file itself from the
Hello,
I am trying to run a Flink Session Job with a jar that is hosted on a maven
repository in Google's Artifact Registry.
The first thing I tried was to just specify the `jarURI` directly:
apiVersion: flink.apache.org/v1beta1
kind: FlinkSessionJob
metadata:
  name: myJobName
Hello,
I have a stateless Flink job that processes an incoming stream as follows:
* For a specific key, an event is produced every 10 sec.
* Every 60 sec, the event is acted upon (the event is modified, enriched).
What is important is that the event gets acted upon every 60 seconds.
* Upon job restart
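A minimal sketch of the "act on the latest event for each key every 60 seconds" step
described above; Event, EnrichedEvent and enrich() are placeholders, not from the
original mail, and "events" is assumed to be a DataStream<Event>:

    // imports: org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows,
    //          org.apache.flink.streaming.api.windowing.time.Time
    DataStream<EnrichedEvent> acted = events
            .keyBy(Event::getKey)                                         // an event arrives ~every 10s per key
            .window(TumblingProcessingTimeWindows.of(Time.seconds(60)))   // fire once per minute
            .reduce((a, b) -> b)                                          // keep the latest event in the window
            .map(e -> enrich(e));                                         // modify/enrich it every 60s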
Hello,
I am running a Flink stateful job where the checkpoint size increases
continuously over time (200+ MB). The actual state size should be < 1 MB.
The source is a Kafka topic. The number of keys in the topic is < 1000
(confirmed by inspecting the topic). Each key needs to store
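For reference, the per-key state that the sizing estimate above refers to is typically
declared like this inside a keyed rich function or KeyedProcessFunction (MyValue is a
placeholder for whatever each key needs to store):

    private transient ValueState<MyValue> perKeyState;

    @Override
    public void open(Configuration parameters) {
        perKeyState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("per-key-state", MyValue.class));
    }

With fewer than 1000 keys and a small value per key, this keyed state by itself stays well
under 1 MB, which matches the expectation above.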
Hi Till,
I’m using Flink 1.11.2 version.
Yes, FlinkKafkaProducer011.setWriteTimestampToKafka(true) was set, and it was
causing the issue.
Thank you for your help!
Regards,
Priyanka
From: Till Rohrmann
Sent: Tuesday, January 12, 2021 3:10 PM
To: Taher Koitawala
Cc: Priyanka Kalra A; user
Sent: Monday, January 11, 2021 6:50 PM
To: Priyanka Kalra A
Cc: user; Sushil Kumar Singh B; Anuj Kumar Jain A; Chirag Dewan;
Pankaj Kumar Aggarwal
Subject: Re: Timestamp Issue with OutputTags
Can you please share your code?
On Mon, Jan 11, 2021, 6:47 PM Priyanka Kalra A
Hi Team,
We are generating multiple side-output tags and using default processing time
on non-keyed stream. The class $YYY extends ProcessFunction and
implementation is provided for processElement method. Upon sending valid data,
it gives error "Invalid timestamp: -9223372036854775808. Time
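For context, -9223372036854775808 is Long.MIN_VALUE, the timestamp a record carries when no
timestamp has been assigned (the default with processing time), and a Kafka producer that is
configured to write timestamps then rejects it, which is what the rest of this thread
converges on. A minimal side-output sketch, with the tag name and element types as
placeholders:

    // imports: org.apache.flink.util.OutputTag, org.apache.flink.util.Collector,
    //          org.apache.flink.streaming.api.functions.ProcessFunction
    final OutputTag<String> sideTag = new OutputTag<String>("side-output") {};   // placeholder tag

    SingleOutputStreamOperator<String> main = input
            .process(new ProcessFunction<String, String>() {
                @Override
                public void processElement(String value, Context ctx, Collector<String> out) {
                    out.collect(value);           // main output
                    ctx.output(sideTag, value);   // side output, routed by the tag
                }
            });

    DataStream<String> side = main.getSideOutput(sideTag);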
Hi,
In the SQL API I see the query below. I want to know how I can make tumbling
windows based on the number of rows, i.e. a window for rows 1-10, 11-20, etc.
It would also be fine if the windowing takes place on an integer ID column.
How can I do this?
Table result1 = tableEnv.sqlQuery
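SQL TUMBLE windows are time-based, so a fixed number of rows per window is not expressed
directly there; two common workarounds are countWindow in the DataStream API, or (since an
integer ID column is available) grouping rows into buckets of 10 by integer division. A
sketch of the latter, with the table and column names assumed rather than taken from the
thread:

    Table result1 = tableEnv.sqlQuery(
        "SELECT (id - 1) / 10 AS bucket, COUNT(*) AS cnt " +   // rows 1-10 -> bucket 0, 11-20 -> bucket 1, ...
        "FROM events " +                                        // placeholder table name
        "GROUP BY (id - 1) / 10");

Note that on a streaming table this is a regular, continuously updating aggregation rather
than a windowed one.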
Hi,
My boss wants to know how many events Flink can process, analyse, etc. per
second. I can't find this in the documentation.
Hi,
I am trying to create a tumbling time window of 2 rows each in Flink Java.
This must be based on the dateTime (TIMESTAMP(3) datatype) or unixDateTime
(BIGINT datatype) column. I've added the code of two different versions below;
the error messages I get are placed above the code.
When I prin
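If the goal is literally "every 2 rows form one window" per key, the count-based window in
the DataStream API is usually the simplest fit; note that it fires on row count, not on the
dateTime/unixDateTime column. A sketch with an assumed event type and key:

    // "Row" and "getId" are placeholders for the actual event type and key.
    input
        .keyBy(r -> r.getId())
        .countWindow(2)                 // every 2 rows per key form one window
        .reduce((a, b) -> b);           // or apply a window function over the pair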
Hi,
OK, thanks. I'll read it. Then I have another problem: I had caught the
exception, but it was still thrown.
At 2019-09-29 17:05:20, "Biao Liu" wrote:
Hi Allan,
It's not a bug. Flink does not support null values; see the discussion [1].
In your example
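As a follow-up to [1], the usual way around the "no null values" limitation is to never emit
null records at all, e.g. skip them in a flatMap; parse() and MyType below are placeholders:

    DataStream<MyType> safe = input.flatMap((String raw, Collector<MyType> out) -> {
        MyType parsed = parse(raw);    // may be null
        if (parsed != null) {
            out.collect(parsed);       // only non-null values go downstream
        }
    }).returns(MyType.class);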