Robert Metzger created FLINK-29492:
--
Summary: Kafka exactly-once sink causes OutOfMemoryError
Key: FLINK-29492
URL: https://issues.apache.org/jira/browse/FLINK-29492
Project: Flink
Issue Type
Jingsong Lee created FLINK-29491:
Summary: Primary key without partition field can be supported from
full changelog
Key: FLINK-29491
URL: https://issues.apache.org/jira/browse/FLINK-29491
Project: Flink
Jingsong Lee created FLINK-29490:
Summary: Timestamp LTZ is unsupported in table store
Key: FLINK-29490
URL: https://issues.apache.org/jira/browse/FLINK-29490
Project: Flink
Issue Type: Bug
+1 (non-binding)
Thank you Gyula,
Helm install from flink-kubernetes-operator-1.2.0-helm.tgz looks good, logs
look normal
podman Dockerfile build from source looks good.
Twistlock security scans of the proposed image look good:
ghcr.io/apache/flink-kubernetes-operator:95128bf
UI and basic
+1 for having an option to store every version of a connector in one repo.
Also, it would be good to have the major(.minor) version of the connected
system in the name of the connector jar, depending on the compatibility. I
think this compatibility is mostly system dependent.
Thanks, Peter
On Fri, Se
Justin created FLINK-29489:
--
Summary: Display issue when querying complex, deeply nested fields
Key: FLINK-29489
URL: https://issues.apache.org/jira/browse/FLINK-29489
Project: Flink
Issue Type: Bug
Ok, I was wrong. The step is actually documented at the end of the Flink
release documentation [1] in item 15) in the "Checklist to declare the
process completed" subsection. I missed that one. Sorry for the confusion.
I revoke my veto and close FLINK-29485 [2].
[1]
https://cwiki.apache.org/confl
Probably, my expectations were wrong here: I expected that these tests
would verify compatibility between different major versions, and that we
would also want to verify the current version of the release branch before
releasing the artifacts. What's the rationale behind doing it after
Could you be more specific as to what you believe should be updated?
IIRC the release-1.16 branch only gets updated at all once the release
is out (e.g., mark docs as stable, update japicmp reference).
On 30/09/2022 15:32, Matthias Pohl wrote:
Looking into the git history, there are numerous locations that need to be
updated in the release-1.16 branch. Yun Gao did a few commits around that
topic (da9e6be..6f69f4e). But these changes were committed close to the
actual release date rather than the release branch creation date. Is this
part
-1 (non-binding)
Hi Xingbo,
I just noticed that we haven't updated the current Flink version in
TypeSerializerUpgradeTest. It is missing in the release-1.16 branch and on
master. That means that the serialization tests are not executed for Flink
1.16. See FLINK-29485 [1].
[1] https://issues.apach
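To make the failure mode concrete: if an upgrade-test base generates its test cases from a hardcoded list of Flink versions, a newly branched version that is missing from the list silently gets zero serializer-migration coverage. The sketch below is hypothetical and not Flink's actual code; the class and method names are illustrative only.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch (not Flink's actual test base): test cases are
// generated per version from a hardcoded list. If the list is not
// extended when a new release branch is cut, no migration test runs
// for the new version, and nothing fails to flag the gap.
public class UpgradeVersionsSketch {
    // Versions the test base was last updated for (assumed for illustration).
    static final List<String> TESTED_VERSIONS = Arrays.asList("1.14", "1.15");

    static boolean isCovered(String flinkVersion) {
        return TESTED_VERSIONS.contains(flinkVersion);
    }

    public static void main(String[] args) {
        System.out.println(isCovered("1.15")); // true
        System.out.println(isCovered("1.16")); // false: silently untested
    }
}
```

This is why deriving the version list from something like the current version constant, rather than a manually maintained list, avoids the class of bug described above.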
Chesnay Schepler created FLINK-29488:
Summary: MetricRegistryImpl should implement AutoCloseableAsync
Key: FLINK-29488
URL: https://issues.apache.org/jira/browse/FLINK-29488
Project: Flink
Chesnay Schepler created FLINK-29487:
Summary: RpcService should implement AutoCloseableAsync
Key: FLINK-29487
URL: https://issues.apache.org/jira/browse/FLINK-29487
Project: Flink
Issue
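The two tickets above ask for components to expose an asynchronous close. As a rough sketch of the pattern, the interface below is modeled after Flink's AutoCloseableAsync (a `closeAsync()` returning a future, with a blocking `close()` defined on top of it); the implementing class and its names are hypothetical.

```java
import java.util.concurrent.CompletableFuture;

// Sketch of the async-close pattern, assuming an interface shaped like
// Flink's AutoCloseableAsync. The MetricService class is illustrative.
public class AsyncCloseSketch {
    interface AutoCloseableAsync extends AutoCloseable {
        CompletableFuture<Void> closeAsync();

        // Blocking close expressed in terms of the async one, so the
        // component still works in try-with-resources.
        @Override
        default void close() throws Exception {
            closeAsync().get();
        }
    }

    static class MetricService implements AutoCloseableAsync {
        private final CompletableFuture<Void> terminationFuture =
                new CompletableFuture<>();

        @Override
        public CompletableFuture<Void> closeAsync() {
            // Idempotent: completing an already-completed future is a no-op.
            terminationFuture.complete(null);
            return terminationFuture;
        }
    }

    public static void main(String[] args) throws Exception {
        try (MetricService svc = new MetricService()) {
            // resources are released when the try block exits
        }
    }
}
```

The benefit of the async variant is that a caller shutting down many components can collect the futures and wait for them together instead of closing each one serially.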
yuzelin created FLINK-29486:
---
Summary: Enable SQL Client to Connect SQL Gateway in Remote Mode
Key: FLINK-29486
URL: https://issues.apache.org/jira/browse/FLINK-29486
Project: Flink
Issue Type: New
Hi everyone,
Please review and vote on the release candidate #1 for the version 1.16.0,
as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
The complete staging area is available for your review, which includes:
* JIRA release notes [1],
*
Matthias Pohl created FLINK-29485:
-
Summary: TypeSerializerUpgradeTestBase doesn't use
FlinkVersions.current()
Key: FLINK-29485
URL: https://issues.apache.org/jira/browse/FLINK-29485
Project: Flink
Thanks Jiabao!
+1 (binding)
Cheers, Martijn
On Fri, Sep 30, 2022 at 11:04 AM jiabao.sun
wrote:
> Hi everyone,
>
>
> Thanks for all your feedback for FLIP-262[1]: MongoDB Connector in the
> discussion thread[2],
> I'd like to start a vote for it.
>
>
> The vote will be open for at least 72 hours
Etienne Chauchot created FLINK-29484:
Summary: Support orderless check of elements in
SourceTestSuiteBase and SinkTestSuiteBase
Key: FLINK-29484
URL: https://issues.apache.org/jira/browse/FLINK-29484
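An "orderless" element check of the kind FLINK-29484 asks for could, in the simplest case, compare sorted copies of the expected and actual elements, since parallel readers and writers may deliver records in any order. The sketch below is a minimal illustration with made-up names, not the actual SourceTestSuiteBase/SinkTestSuiteBase API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal sketch of an order-insensitive element check: sort copies of
// both lists and compare. Duplicates are respected (multiset semantics).
// Names are illustrative, not Flink's test-suite API.
public class OrderlessCheck {
    static <T extends Comparable<T>> boolean matchesIgnoringOrder(
            List<T> expected, List<T> actual) {
        List<T> e = new ArrayList<>(expected);
        List<T> a = new ArrayList<>(actual);
        Collections.sort(e);
        Collections.sort(a);
        return e.equals(a);
    }

    public static void main(String[] args) {
        System.out.println(matchesIgnoringOrder(
                List.of("a", "b", "c"), List.of("c", "a", "b"))); // true
        System.out.println(matchesIgnoringOrder(
                List.of("a", "b"), List.of("a", "a"))); // false
    }
}
```

Sorting requires comparable elements; for arbitrary record types a frequency-map comparison would serve the same purpose.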
jackylau created FLINK-29483:
Summary: flink python udf arrow in thread model bug
Key: FLINK-29483
URL: https://issues.apache.org/jira/browse/FLINK-29483
Project: Flink
Issue Type: Bug
Gyula Fora created FLINK-29482:
--
Summary: Ingress always forces ClusterIP rest service type
Key: FLINK-29482
URL: https://issues.apache.org/jira/browse/FLINK-29482
Project: Flink
Issue Type: Improvement
Hi Qingsheng,
Thanks for the feedback.
We will also implement the other metrics mentioned in FLIP-33.
Best,
Jiabao
--
From:Qingsheng Ren
Send Time: Wednesday, September 28, 2022, 18:36
To:孙家宝
Cc:dev
Subject:Re: [DISCUSS] FLIP-262 MongoDB Connector
Hi everyone,
Please review and vote on the release candidate #2 for the version 1.2.0 of
Apache Flink Kubernetes Operator,
as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
**Release Overview**
As an overview, the release consists of th
Hi Peter,
I think this also depends on the support SLA that the technology that you
connect to provides. For example, with Flink and Elasticsearch, we choose
to follow Elasticsearch supported versions. So that means that when support
for Elasticsearch 8 is introduced, support for Elasticsearch 6 s