Hello Community,
I need to build a custom parallel data source (Flink ParallelSourceFunction)
for fetching data based on some custom logic. In this source function, I open
multiple threads via a Java thread pool to distribute the work further.
These threads share the Flink-provided 'SourceContext' and co
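The pattern in question can be sketched without Flink at all (plain JDK; every name below is illustrative): a pool of worker threads shares a single emit context and synchronizes on a shared lock before emitting, the same way source workers must synchronize on ctx.getCheckpointLock() before calling ctx.collect(...).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SharedContextSketch {
    // Stands in for Flink's SourceContext: emits are serialized on a lock.
    static final class EmitContext {
        final Object lock = new Object();          // plays the role of the checkpoint lock
        final List<String> emitted = new ArrayList<>();
        void collect(String record) {
            synchronized (lock) {                  // every emit must hold the lock
                emitted.add(record);
            }
        }
    }

    public static List<String> run(int workers, int recordsPerWorker) {
        EmitContext ctx = new EmitContext();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            final int id = w;
            pool.submit(() -> {
                for (int i = 0; i < recordsPerWorker; i++) {
                    ctx.collect("worker-" + id + "-rec-" + i);
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return ctx.emitted;
    }

    public static void main(String[] args) {
        // 4 workers x 100 records each: no emits are lost, because every
        // collect() holds the shared lock.
        System.out.println(run(4, 100).size()); // 400
    }
}
```

Without the shared lock, concurrent appends to the list could be lost; in Flink the same lock additionally keeps emits consistent with checkpoint barriers.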
Hi, Flink SQL doesn't support an inline field in a struct type as a primary
key. You can try raising an issue about this feature in the community [1].
As a quick workaround, you can first transform the stream with the DataStream
API, extracting the 'id', and then convert it to the Table API to use SQL.
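A rough sketch of that workaround, assuming Flink's Java DataStream and Table APIs; the Employee POJO and every identifier below are illustrative, and the fragment is not runnable as-is:

```java
// Sketch only; identifiers are hypothetical.
DataStream<Employee> employees = ...;              // deserialized records
DataStream<Tuple2<String, Employee>> withId =
    employees.map(e -> Tuple2.of(e.getId(), e));   // pull the nested 'id' up to top level
Table t = tableEnv.fromDataStream(withId);         // the extracted key is now a top-level column
```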
[1]
https://issu
I am reading stats from Kinesis, deserializing them into a stat POJO, and
then doing something like this with an aggregating event-time window and no
ProcessWindowFunction defined:
timestampedStats
.keyBy(v -> v.groupKey())
.window(TumblingEventTimeWindows.of(Time.seconds(appCfg.get
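Independent of the truncated snippet above, the contract of the AggregateFunction that such a window applies can be sketched in plain Java; the averaging logic and all names here are illustrative assumptions, not taken from the thread:

```java
// Hypothetical sketch of the AggregateFunction contract Flink applies inside
// a tumbling event-time window: createAccumulator / add / merge / getResult.
public class AvgAggregateSketch {
    static final class Acc { long sum; long count; }

    static Acc createAccumulator() { return new Acc(); }

    // Called once per element assigned to the window.
    static Acc add(long value, Acc acc) {
        acc.sum += value;
        acc.count += 1;
        return acc;
    }

    // Called when two partial accumulators are combined.
    static Acc merge(Acc a, Acc b) {
        a.sum += b.sum;
        a.count += b.count;
        return a;
    }

    // Called when the window fires, producing the final result.
    static double getResult(Acc acc) {
        return acc.count == 0 ? 0.0 : (double) acc.sum / acc.count;
    }

    public static void main(String[] args) {
        Acc acc = createAccumulator();
        for (long v : new long[] {10, 20, 30}) acc = add(v, acc);
        System.out.println(getResult(acc)); // 20.0
    }
}
```

With no ProcessWindowFunction, the window emits getResult(...) directly when it fires.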
Thanks for your proposal, Zhanghao Chen. I think it adds more transparency
to the configuration documentation.
+1 from my side on the proposal
On Wed, Oct 11, 2023 at 2:09 PM Zhanghao Chen wrote:
> Hi Flink users and developers,
>
> Currently, Flink won't generate doc for the deprecated options
Registering the counter is fine, e.g. in `open()`:
lazy val responseCounter: Counter = getRuntimeContext
.getMetricGroup
.addGroup("response_code")
.counter("myResponseCounter")
then, in def asyncInvoke(), I can still only do responseCounter.inc(), but
what I want is responseCou
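One common pattern for what is being asked here (a counter per response code) is to create the counters lazily and cache them by code. A plain-Java sketch of that pattern, using LongAdder in place of Flink's Counter and with all names illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch (plain Java, not Flink's metrics API): one counter per
// response code, created on first sight and reused afterwards. In Flink the
// same shape caches Counter objects built once per code from the metric group.
public class PerCodeCounters {
    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    public void inc(String responseCode) {
        // computeIfAbsent registers the counter exactly once, then reuses it
        counters.computeIfAbsent(responseCode, c -> new LongAdder()).increment();
    }

    public long get(String responseCode) {
        LongAdder a = counters.get(responseCode);
        return a == null ? 0L : a.sum();
    }

    public static void main(String[] args) {
        PerCodeCounters c = new PerCodeCounters();
        c.inc("200"); c.inc("200"); c.inc("500");
        System.out.println(c.get("200") + " " + c.get("500")); // 2 1
    }
}
```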
Hi team,
I'm interested in understanding if there is a method available for clearing
the State Backends in Flink. If so, could you please provide guidance on
how to accomplish this particular use case?
Thanks and regards,
Arjun S
Hi team,
I'm also interested in finding out if there is Java code available to
determine the extent to which a Flink job has processed files within a
directory. Additionally, I'm curious about where the details of the
processed files are stored within Flink.
Thanks and regards,
Arjun S
Hi team!
I came across strange behavior in Flink 1.17.1. If the S3 storage becomes
unavailable while a checkpoint is being built, the current checkpoint
expires by timeout and new ones are not triggered.
Triggering of new checkpoints resumes only after S3 is restored, and
this can be
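For reference, two standard checkpointing options that bound this behavior (a flink-conf.yaml sketch; whether they fully address the stall described above is an assumption, not something tested here):

```
# How long a checkpoint may run before it is declared expired
execution.checkpointing.timeout: 10min
# How many consecutive checkpoint failures to tolerate before failing the job
execution.checkpointing.tolerable-failed-checkpoints: 3
```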
The Apache Flink community is very happy to announce the release of Apache
Flink Kubernetes Operator 1.6.1.
Please check out the release blog post for an overview of the release:
https://flink.apache.org/2023/10/27/apache-flink-kubernetes-operator-1.6.1-release-announcement/
The release is availa
Hi team,
I have a Kafka topic named employee which uses a Confluent Avro schema and
emits the payload below:
{
  "employee": {
    "id": "123456",
    "name": "sampleName"
  }
}
I am using the upsert-kafka connector to consume the events from the above
Kafka topic as below, using the Flink SQL DDL statem
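A sketch of the kind of DDL being described (connector options and names are illustrative, not taken from the thread); as the reply earlier in this digest points out, a nested field such as employee.id cannot currently be declared as the primary key that upsert-kafka requires:

```sql
-- Illustrative only; upsert-kafka requires a PRIMARY KEY, and a nested
-- field such as employee.id cannot currently serve as one.
CREATE TABLE employee_src (
  employee ROW<id STRING, name STRING>
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'employee',
  'properties.bootstrap.servers' = '...',
  'value.format' = 'avro-confluent'
);
```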