陈磊 created FLINK-36661:
--
Summary: a relatively small managed memory setting results
in duplicate processing results for batch tasks
Key: FLINK-36661
URL: https://issues.apache.org/jira/browse/FLINK-36661
Hi Yanquan,
Thanks for checking in - I did not send an email about RC1, because I built
it, then realized we wanted to include an additional PR. I tried reusing
the same rc1 tag, but it interfered with the release scripts (couldn't
upload to dist.apache.org due to the existing directory).
Instead of
I've checked:
- Reviewed the JIRA release notes
- Verified hashes and signatures
- Built successfully from source with JDK 11 & Maven 3.8.6
- Source code artifacts match the current release
I’ve searched the mailing list and could not find the vote about
flink-connector-aws v5.0.0 release candidate #1, s
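For readers following along, the hash and signature checks listed in such verifications can be sketched roughly as below. The artifact name is hypothetical and stands in for the staged release files on dist.apache.org:

```shell
# Sketch of the checksum/signature verification steps; the artifact
# name is illustrative, not a real release file.

# Create a stand-in artifact so the commands below are self-contained.
echo "release payload" > flink-connector-demo-1.0.0-src.tgz

# Checksums: generate (normally published by the release manager), then verify.
sha512sum flink-connector-demo-1.0.0-src.tgz > flink-connector-demo-1.0.0-src.tgz.sha512
sha512sum -c flink-connector-demo-1.0.0-src.tgz.sha512

# Signatures: verified against the project's KEYS file. Shown commented out
# because it needs the real .asc detached signature and imported keys.
# gpg --import KEYS
# gpg --verify flink-connector-demo-1.0.0-src.tgz.asc flink-connector-demo-1.0.0-src.tgz
```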
LvYanquan created FLINK-36660:
-
Summary: Release flink-connector-elasticsearch vx.x.x for Flink 2.0
Key: FLINK-36660
URL: https://issues.apache.org/jira/browse/FLINK-36660
Project: Flink
Issue Type: Sub-task
+1 (binding)
- verified signatures and checksum
- built from source
- reviewed web PR
Best,
Xintong
On Tue, Nov 5, 2024 at 2:04 PM weijie guo wrote:
> +1 (binding)
>
> - verified signatures
> - verified hashsums
> - checked github release tag
> - checked release notes
> - reviewed the web PR
+1 (binding)
- verified signatures and checksum
- built from source
- reviewed web PR
Best,
Xintong
On Tue, Nov 5, 2024 at 2:03 PM weijie guo wrote:
> +1 (binding)
>
> - verified signatures
> - verified hashsums
> - checked github release tag
> - checked release notes
> - reviewed the web PR
LvYanquan created FLINK-36659:
-
Summary: Release flink-connector-jdbc vx.x.x for Flink 2.0
Key: FLINK-36659
URL: https://issues.apache.org/jira/browse/FLINK-36659
Project: Flink
Issue Type: Sub-task
LvYanquan created FLINK-36658:
-
Summary: Support and Release Connectors for Flink 2.0
Key: FLINK-36658
URL: https://issues.apache.org/jira/browse/FLINK-36658
Project: Flink
Issue Type: Improvement
+1 (binding)
- verified signatures
- verified hashsums
- checked github release tag
- checked release notes
- reviewed the web PR
- built from source
Best regards,
Weijie
Hong Liang wrote on Tue, Nov 5, 2024 at 07:28:
> Hi everyone,
> Please review and vote on release candidate #1 for
> flink-connector-prometheus
+1 (binding)
- verified signatures
- verified hashsums
- checked github release tag
- checked release notes
- reviewed the web PR
- built from source
Best regards,
Weijie
Hong Liang wrote on Tue, Nov 5, 2024 at 07:09:
> Hi everyone,
> Please review and vote on release candidate #2 for flink-connector-aws
Hi, Lincoln. Thanks for your response.
> since both scriptPath and script(statements) can be null, we need to
clarify the behavior when both are empty, such as throwing an error
Yes, you are correct. I have updated the FLIP accordingly. When these fields
are both empty, the server throws an exception.
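The agreed behavior (reject a request when both fields are empty) could look roughly like the following sketch. The class and method names here are illustrative only, not the FLIP's actual API:

```java
// Hypothetical sketch of the "both empty" validation discussed above.
// ScriptRequestValidator and its method are illustrative names.
public class ScriptRequestValidator {

    /** Rejects requests where neither a script path nor inline statements are given. */
    public static void validate(String scriptPath, String script) {
        boolean noPath = scriptPath == null || scriptPath.isEmpty();
        boolean noScript = script == null || script.isEmpty();
        if (noPath && noScript) {
            throw new IllegalArgumentException(
                    "Either 'scriptPath' or 'script' must be non-empty.");
        }
    }
}
```

A caller supplying either field passes; supplying neither raises the error, matching the behavior described above.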
Hi, community!
When working on JIRA [1]: during adaptive rescaling, what strategy should be
used to select candidate slots to ensure efficient/expected resource utilization?
We have received some lively discussion and valuable feedback
(thanks to Matthias, Rui, Gyula, Maximilian, Tison,
Hi, Arvid.
It has been a month and we are glad to see that we have completed the release
of Kafka 3.3.0 targeting 1.19 and 1.20.
Considering that Flink 2.0-preview1 has already been released, I would like to
know about our plans and progress for bumping to 2.0-preview1.
I tested the changes req
Thanks Shengkai for driving this! Overall, looks good!
I have two minor questions:
1. Regarding the interface parameters (including REST API
& Java interfaces), since both scriptPath and script (statements)
can be null, we need to clarify the behavior when both are
empty, such as throwing an error
Hi, Ferenc.
Thanks for your clarification. We can hard code these different options in
the sql-gateway module. I have updated the FLIP and PoC branch about this
part. But I think we should provide a unified API to ship artifacts to
different deployments.
Best,
Shengkai
Ferenc Csaky wrote on Nov 4, 2024
Hi everyone,
Please review and vote on release candidate #1 for
flink-connector-prometheus v1.0.0, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
This version supports Flink 1.19 and 1.20.
The complete staging area is available for your review
Hi everyone,
Please review and vote on release candidate #2 for flink-connector-aws
v5.0.0, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
This version supports Flink 1.19 and 1.20.
The complete staging area is available for your review
I presume you're coming from Spark and looking for something like RDD.foreach.
Flink has no such feature. I think you can use a batch job for
processing and storing the data.
Everything else can be done in custom code outside of Flink.
The hard way is to implement a custom connector which is
Matyas Orhidi created FLINK-36657:
-
Summary: Release Flink 1.20
Key: FLINK-36657
URL: https://issues.apache.org/jira/browse/FLINK-36657
Project: Flink
Issue Type: Bug
Reporter: Matyas Orhidi
Hi Timo,
Thanks for the detailed and very well structured FLIP document!
This is an important feature and will enable many more use-cases for Flink
SQL and Table API.
I have a few questions / suggestions:
1. "Scoping and Simplifications", "Partition and Order Semantics":
"By default, we requir
Hello,
I have been looking at the Flink Jira and git. I see a large number of open
Flink Jira issues marked Critical or Blocker:
https://issues.apache.org/jira/browse/FLINK-36655?jql=project%20%3D%20FLINK%20AND%20priority%20in%20(Blocker%2C%20Critical)
I realise some of these issues may not
Hi David, Hi Shengkai,
> can I apply a PTF to a stream that doesn't have a time attribute?
Yes, time attributes are optional. This is why the
REQUIRES_TIME_ATTRIBUTE argument trait exists. If no on_time has been
specified in the SQL call and the REQUIRES_TIME_ATTRIBUTE trait is not
present, t
Hi Shengkai,
Thank you for driving this FLIP! I think this is a good way to
close this gap on the short-term until FLIP-316 can be finished.
I would only like to add one thing: YARN has a `yarn.ship-files`
config option that ships local or DFS files/directories to the
YARN cluster [1].
Best,
Ferenc
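For context, a flink-conf.yaml fragment using that option might look like the following; the paths are hypothetical, and per the Flink documentation the list entries are semicolon-separated:

```yaml
# Hypothetical example of the yarn.ship-files option mentioned above.
# Paths are illustrative; entries are separated by semicolons.
yarn.ship-files: /opt/flink/usrlib/udf.jar;hdfs:///shared/connector-libs
```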
Hi, Yang Li.
IIUC, your issue is similar to the one described in this JIRA [1], right?
[1] https://issues.apache.org/jira/browse/FLINK-33936
--
Best!
Xuyang
At 2024-10-29 15:16:14, "李阳" wrote:
>Hello devs,
>
>I would like to initiate a discussion about the Flink TopNFunction.
Leonard Xu created FLINK-36656:
--
Summary: Flink CDC hits a boolean type conversion error with MySQL
sharding tables
Key: FLINK-36656
URL: https://issues.apache.org/jira/browse/FLINK-36656
Project: Flink
guanghua pi created FLINK-36655:
---
Summary: Using the Flink State Processor API to process big state in
RocksDB is very slow
Key: FLINK-36655
URL: https://issues.apache.org/jira/browse/FLINK-36655
Project: Flink
Hi, Shengkai.
Thank you for your answer. I have no further questions.
--
Best!
Xuyang
At 2024-11-04 10:00:32, "Shengkai Fang" wrote:
>Hi, Xuyang. Thanks a lot for your response!
>
>> Does that means we will support multi DMLs, multi DQLs, mixed DMLs & DQLs
>in one sql script?
>
Shuai Xu created FLINK-36654:
Summary: Dividing Decimal by Integer reports a NullPointerException
Key: FLINK-36654
URL: https://issues.apache.org/jira/browse/FLINK-36654
Project: Flink
Issue Type: Bug