Please send an e-mail to user-subscr...@flink.apache.org to subscribe to
the Flink user mailing list.
Best,
Hang
mark wrote on Wed, Mar 15, 2023 at 22:07:
> subscribe
>
The Apache Flink community is very happy to announce the release of Apache
Flink 1.15.4, which is the fourth bugfix release for the Apache Flink 1.15
series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
I got it to work. Thanks for pointing me in the right direction. I had some Flink dependencies that weren't set to provided, and I removed sql-connector-kafka; that seems to have fixed the problem. Thanks once again.
Med venlig hilsen / Best regards,
Lasse Nedergaard
On 15 Mar 2023 at 15.21, Lasse Nede
>
> CREATE TEMPORARY VIEW filteredResults AS
> SELECT * from suspiciousOrders WHERE small_ts > large_ts;
It looks like, after adding the condition, the final expanded query no longer
matches the condition [1] of an interval join, which is what leads the planner
to recognize it as an interval join. It’s not
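For context, here is a minimal sketch of the predicate shape Flink's planner recognizes as an interval join. The table and column names are hypothetical, not from the original job; the key point is that the join condition must bound the two time attributes against each other with an interval (e.g. BETWEEN). A later filter on the expanded view can change the effective condition so it no longer fits this shape, which may be why the planner falls back to a regular join:

```sql
-- Sketch only: small_orders / large_orders and their columns are hypothetical.
-- Both sides must expose time attributes (e.g. rowtime) for an interval join.
SELECT s.id, s.rowtime AS small_ts, l.rowtime AS large_ts
FROM small_orders s
JOIN large_orders l
  ON s.id = l.id
 AND s.rowtime BETWEEN l.rowtime - INTERVAL '1' HOUR AND l.rowtime;
```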
Hi. Thanks, Shammon. You are right: org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory is not in the file. I'm using the shade plugin as described, and the only difference from my other project is the nested project structure. I have "my project"/Flink/"my flink project"/src/ma
Hello,
I'm seeing some strange behaviour in Flink SQL where adding a new SELECT
statement causes a previously created Interval Join to be changed into a
regular Join. I'm concerned because the Flink docs make clear that regular
Joins are not safe because their memory usage can grow indefinitely.
I
subscribe
Hi, Lasse,
I think you should first verify the situation as Shammon described.
Maybe you need to use the maven-shade-plugin like this to package, and make
sure files in `META-INF/services` are merged together:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.4</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- Merges META-INF/services files from all dependencies -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
Thank you All.
On Tuesday, 14 March 2023 at 07:14:05 am IST, yuxia wrote:
The plan shows the filters have been pushed down. But remember: although
pushed down, the filesystem table won't accept the filter, so it will still
scan all files.
Best regards,
Yuxia
From: "Maryam Moaf
Hi Lasse
I think you can first check whether there is a file
`META-INF/services/org.apache.flink.table.factories.Factory` in your uber
jar and there's
`org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory`
in the file. Flink creates table factories from the classes listed in that file.
Hi.
I have a simple job creating a table from Kafka. It works perfectly on my local
machine, but when I build an uber jar and use the official Flink image I get a
validation exception:
Could not find any factory for identifier 'kafka' that implements
'org.apache.flink.table.factories.DynamicTableFactory' in
Hi Penny,
When you complete step 1 and step 2, it means that you have subscribed to
the User mailing list so you can post the email that you want to send to
the User mailing list by performing step 3. I can see why the email can be
confusing though.
Best regards,
Martijn
On Sat, Mar 11, 2023 at
Hi Alexis
Currently, checkpoints and savepoints do not save watermarks. I
think how to deal with watermarks in checkpoints/savepoints is a good
question; we can discuss it on the dev mailing list.
Best,
Shammon FY
On Wed, Mar 15, 2023 at 4:22 PM Alexis Sarda-Espinosa <
sarda.espin...@gmail.com>
>
> unsubscribe
Please send any email to user-unsubscr...@flink.apache.org to unsubscribe
from the user@flink.apache.org mailing list; sending to user@flink.apache.org
will not unsubscribe you.
> Sent from my iPhone
>
>
> -- Original --
> From: Tony Wei
> Date: Tue, Mar 14, 2023 1:11 PM
> To: David Anderson
> Cc: Hangxiang Yu , user
>
Hi Shammon, thanks for the info. I was hoping the savepoint would include
the watermark, but I'm not sure that would make sense in every scenario.
Regards,
Alexis.
On Tue, Mar 14, 2023 at 12:59, Shammon FY wrote:
> Hi Alexis
>
> In some watermark generators such as BoundedOutOfOrderTimest