Hey everyone,
I have encountered a problem with my Table API program. I am trying to use a
nested attribute as a watermark. My schema is a row which itself has three
rows as attributes, and those in turn have further attributes, in particular
the timestamp that I want to use as the watermark.
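As far as I know, a watermark can only be declared on a top-level column, so the usual workaround is to pull the nested timestamp up with a computed column first. A minimal sketch with the Table API Schema builder (all field names here are made up):

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Schema;

// Field names ("payload", "meta", "event_time") are hypothetical.
Schema schema = Schema.newBuilder()
        .column("payload", DataTypes.ROW(
                DataTypes.FIELD("meta", DataTypes.ROW(
                        DataTypes.FIELD("event_time", DataTypes.TIMESTAMP(3))))))
        // computed column pulls the nested attribute up to the top level
        .columnByExpression("event_ts", "payload.meta.event_time")
        // the watermark is then declared on the top-level column
        .watermark("event_ts", "event_ts - INTERVAL '5' SECOND")
        .build();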
Hi Le Xu,
If the job is a streaming job, all tasks must be scheduled before any
data can flow through the pipeline, and the tasks run in parallel.
I think the Execution Mode docs [1] and FLIP-134 [2] will help you
understand the details.
Best,
Hang
[1] https://nightlies.apache.org/flink/flink-d
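For illustration, a minimal sketch of setting the runtime mode explicitly (the default for unbounded sources is STREAMING):

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// In STREAMING mode all tasks are deployed up front and run concurrently;
// in BATCH mode stages may be scheduled one after another.
env.setRuntimeMode(RuntimeExecutionMode.STREAMING);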
Hi Niklas,
On your confirmations:
a1) The default behaviour is documented at
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/kafka/#event-time-and-watermarks
- Flink uses the timestamp embedded in the Kafka ConsumerRecord.
a2) I'm not 100% sure: since Flink 1.15, t
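To illustrate a1): a sketch of a KafkaSource where no TimestampAssigner is set, so the timestamp embedded in each ConsumerRecord is used by default (broker address and topic are placeholders):

import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("broker:9092")   // placeholder address
        .setTopics("events")                  // placeholder topic
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();

// No explicit TimestampAssigner: the Kafka record timestamp is used as
// the event time; the strategy only controls watermark generation.
env.fromSource(source,
        WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofSeconds(5)),
        "kafka-source");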
Hi Lars,
Have you used any of the new restore modes that were introduced with 1.15?
https://flink.apache.org/2022/05/06/restore-modes.html
Best regards,
Martijn
On Fri, Dec 9, 2022 at 2:52 PM Lars Skjærven wrote:
> Lifecycle rules: None
>
> On Fri, Dec 9, 2022 at 3:17 AM Hangxiang Yu wrote:
>
Hi Surendra,
I think this behaviour is documented at
https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/connectors/datastream/kafka/#consumer-offset-committing
Best regards,
Martijn
On Tue, Dec 13, 2022 at 5:28 PM Surendra Lalwani via user <user@flink.apache.org> wrote:
> Hi Team
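As I understand that page, the new KafkaSource commits offsets back to Kafka only when a checkpoint completes, and those commits are not used for fault tolerance. A minimal sketch (broker, topic and group id are placeholders):

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Offsets are committed when a checkpoint completes; without checkpointing
// enabled there are no checkpoint-based commits at all.
env.enableCheckpointing(60_000);

KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("broker:9092")   // placeholder address
        .setTopics("events")                  // placeholder topic
        .setGroupId("my-group")               // placeholder group id
        .setValueOnlyDeserializer(new SimpleStringSchema())
        // set to "false" to disable committing offsets on checkpoints
        .setProperty("commit.offsets.on.checkpoint", "true")
        .build();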
Hello!
I have a quick question about the slot-sharing behavior described at
https://nightlies.apache.org/flink/flink-docs-master/docs/concepts/flink-architecture/.
My understanding is that each task slot in a Flink cluster represents a
sequentially running operator (and therefore can be se
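For what it's worth, by default one slot can hold one parallel subtask of every operator in the job (slot sharing), not just a single operator. A minimal sketch of opting an operator out of the default group (the group name is made up):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// All operators share the group "default" unless told otherwise, so a
// single slot may run one subtask of the source, the map and the sink.
env.fromElements(1, 2, 3)
        .map(i -> i * 2)
        .slotSharingGroup("heavy")   // made-up group name; forces separate slots
        .print();

env.execute("slot-sharing-sketch");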
Hello,
I have a Kafka source (the new one) in Flink 1.15 that's followed by a
process function with parallelism=2. Some days, I see long periods of
backpressure in the source. During those times, the pool-usage metrics of
all tasks stay between 0 and 1%, but the process function appears 100% busy.
Hi Team,
I am on Flink version 1.13.6. I am reading a couple of streams from Kafka and
applying an interval join with an interval of 2 hours. However, when I check
KafkaConsumer_records_lag_max it is in the thousands, but when I
check the Flink UI there is no backpressure, and also the metrics insi
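A minimal sketch of such an interval join, with a made-up Event POJO standing in for the actual records; note that elements are buffered in state for up to the two-hour interval, which can let consumer lag grow even without visible backpressure:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.co.ProcessJoinFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

// Made-up event type; the real job reads these from the two Kafka topics.
public class Event {
    public String key;
    public long ts;
}

// left and right are the two keyed, timestamped streams (not shown here).
DataStream<String> joined = left
        .keyBy(e -> e.key)
        .intervalJoin(right.keyBy(e -> e.key))
        // a left element joins right elements within [left.ts - 2h, left.ts]
        .between(Time.hours(-2), Time.hours(0))
        .process(new ProcessJoinFunction<Event, Event, String>() {
            @Override
            public void processElement(Event l, Event r, Context ctx,
                                       Collector<String> out) {
                out.collect(l.key);
            }
        });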
Hi,
I've got a Kinesis consumer which reacts to each record by doing some async
work using an implementation of RichAsyncFunction. I'm adding a retry strategy.
After x failed attempts I want this to time out and give up by returning no
data (i.e. not be treated as a failure).
Here is a cut dow
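One way to get the "give up quietly" behaviour is to override the timeout() hook of the async function and complete with an empty collection, so a timed-out element counts as "no data" rather than a failure. A sketch, where callService and its internal retries are placeholders:

import java.util.Collections;
import java.util.concurrent.CompletableFuture;

import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class RetryingEnrichment extends RichAsyncFunction<String, String> {

    @Override
    public void asyncInvoke(String input, ResultFuture<String> resultFuture) {
        CompletableFuture
                .supplyAsync(() -> callService(input))   // placeholder async work
                .whenComplete((result, error) -> {
                    if (error != null) {
                        // exhausted retries: emit nothing instead of failing
                        resultFuture.complete(Collections.emptyList());
                    } else {
                        resultFuture.complete(Collections.singleton(result));
                    }
                });
    }

    @Override
    public void timeout(String input, ResultFuture<String> resultFuture) {
        // the default implementation raises a TimeoutException and fails
        // the job; an empty result drops the element instead
        resultFuture.complete(Collections.emptyList());
    }

    private String callService(String input) {
        return input;   // placeholder for the real service call with retries
    }
}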
Hi,
I'm reading from a Kafka topic and performing some custom AVRO
deserialisation, but now I want to create a changelog stream from the
source Kafka topic.
I'm currently creating a temporary view, then selecting * from that,
and finally converting the resulting table to a changelog stream via
th
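A minimal sketch of that view-to-changelog conversion, assuming "deserialized" is the DataStream produced by the custom AVRO deserialisation:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

// "deserialized" is the stream from the custom AVRO deserialisation step.
tEnv.createTemporaryView("source_view", deserialized);

Table result = tEnv.sqlQuery("SELECT * FROM source_view");

// Each Row in the changelog stream carries a RowKind
// (INSERT, UPDATE_BEFORE, UPDATE_AFTER or DELETE).
DataStream<Row> changelog = tEnv.toChangelogStream(result);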
Hi devs and users,
I'd like to share the highlights of the 1.17 release sync on 12/13/2022.
- Release tracking page:
- 1.17 development is moving forward [1]; we have 5 weeks remaining
- @committers Please continuously update the progress in the 1.17 page
- Externalized Connect