Hi Dawid,
Please don't get me wrong. I just described the facts, shared different
opinions, and tried to make sure we are on the same page. My intention is
clearly not to block your effort. If you, after hearing all the different
opinions, still think your solution is the right approach, please go ahead.
Thanks Gordon for the comments!
1. I have changed the FLIP name to the one proposed by you.
2. In the Iceberg sink we need access only to the Flink metrics. We do
not specifically need the job ID in the Committer after the SinkV2
migration (more about that later). This is the reason why
Jing Ge created FLINK-33195:
---
Summary: ElasticSearch Connector should directly depend on
3rd-party libs instead of flink-shaded repo
Key: FLINK-33195
URL: https://issues.apache.org/jira/browse/FLINK-33195
P
Hi Jing,
Yes I agree that if we can get them resolved then that would be ideal.
I guess the worry is that at 1.17, we had a released Flink core and Kafka
connector.
At 1.18 we will have a released Core Flink but no new Kafka connector. So the
last released Kafka connector would now be
https://m
Hi, Zhu Zhu,
Thanks for your feedback!
> I think we can introduce a new config option
> `taskmanager.load-balance.mode`,
> which accepts "None"/"Slots"/"Tasks". `cluster.evenly-spread-out-slots`
> can be superseded by the "Slots" mode and get deprecated. In the future
> it can support more modes,
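To make the quoted proposal concrete, here is a minimal sketch of how a user
might opt into such a mode programmatically. The option key
`taskmanager.load-balance.mode` and the values "None"/"Slots"/"Tasks" are
taken from the quote above and are not final API; the untyped Configuration
setters are only used because no typed option exists yet.

import org.apache.flink.configuration.Configuration;

public class LoadBalanceModeSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Today (would be deprecated by the proposal):
        // conf.setBoolean("cluster.evenly-spread-out-slots", true);

        // Proposed replacement; key and values assumed from the quote above:
        conf.setString("taskmanager.load-balance.mode", "Slots"); // "None" | "Slots" | "Tasks"
    }
}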
For the record, after the rename, the new FLIP link is:
https://cwiki.apache.org/confluence/display/FLINK/FLIP-371%3A+Provide+initialization+context+for+Committer+creation+in+TwoPhaseCommittingSink
Thanks,
Peter
Péter Váry wrote (on Thu, Oct 5, 2023 at 11:02):
> Thanks Gordon for the comments!
Thanks for the efforts, Peter!
I've just analyzed it and I think it's a useful feature.
+1 from my side.
G
On Thu, Oct 5, 2023 at 12:35 PM Péter Váry wrote:
> For the record, after the rename, the new FLIP link is:
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-371%3A+Provid
Hi Team,
In my previous email [1] I described our challenges in migrating the
existing Iceberg SinkFunction based implementation to the new SinkV2 based
implementation.
As a result of the discussion around that topic, I have created FLIP-371
[2] to address the Committer related changes, and
If there are no more questions or concerns, I will start the voting thread
tomorrow.
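As a side note for readers catching up on FLIP-371 in this thread: the kind
of Committer that motivates the change only needs a Flink metric group at
creation time, not the job ID. Below is a rough, non-normative sketch of
such a committer; the constructor argument stands in for whatever the
initialization context will eventually expose, and only Committer and
CommitRequest are existing Flink API.

import java.util.Collection;

import org.apache.flink.api.connector.sink2.Committer;
import org.apache.flink.metrics.Counter;
import org.apache.flink.metrics.MetricGroup;

/**
 * Illustrative committer that only needs a metric group, not the job ID.
 * With FLIP-371, the sink's createCommitter(...) would receive an
 * initialization context from which such a group can be obtained; the
 * exact context interface is what the FLIP and the vote decide.
 */
class MetricReportingCommitter<CommT> implements Committer<CommT> {

    private final Counter committedRequests;

    MetricReportingCommitter(MetricGroup metricGroup) {
        this.committedRequests = metricGroup.counter("committedRequests");
    }

    @Override
    public void commit(Collection<CommitRequest<CommT>> requests) {
        // The real commit of the collected committables would go here
        // (for Iceberg: committing the collected data files to the table).
        committedRequests.inc(requests.size());
    }

    @Override
    public void close() {
        // Nothing to release in this sketch.
    }
}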
On 2022/06/27 13:09:51 Roc Marshal wrote:
> Hi, all,
>
> I would like to open a discussion on porting JDBC Source to new Source API
> (FLIP-27[1]).
>
> Martijn Visser, Jing Ge and I had a preliminary d
Hi,
I have opened a draft PR [1] that shows the minimal required changes and a
suggested unit test setup for Java version specific tests.
There is still some work to be done (run all benchmarks, add more tests for
compatibility/migration).
If you have time, please review / comment on the approach here.
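Without claiming this is what the draft PR does, one possible shape for
Java version specific tests is JUnit 5's JRE conditions; the class name,
version bounds and assertions below are purely illustrative.

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.condition.EnabledForJreRange;
import org.junit.jupiter.api.condition.EnabledOnJre;
import org.junit.jupiter.api.condition.JRE;

// Illustrative only: pick bounds matching whatever Java versions the PR targets.
class JavaVersionSpecificTest {

    @Test
    @EnabledOnJre(JRE.JAVA_11)
    void runsOnlyOnJava11() {
        // Executed only when the test JVM is Java 11.
        assertTrue(Runtime.version().feature() == 11);
    }

    @Test
    @EnabledForJreRange(min = JRE.JAVA_17)
    void runsOnJava17OrNewer() {
        // Executed on Java 17 or any newer JVM.
        assertTrue(Runtime.version().feature() >= 17);
    }
}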
After digging into the flink-python code, it seems that if
`PYFLINK_GATEWAY_DISABLED` is set to false as an environment variable, then
using Types.LIST(Types.ROW([...])) does not have any issue once the Java
Gateway is launched.
It was unexpected that, for a local Flink run, this flag has to be set to
false explicitly.
Hi David,
It’s a deliberate choice to decouple the connectors. We shouldn’t block
Flink 1.18 on connector statuses. There’s already work being done to fix
the Flink Kafka connector. Any Flink connector comes after the new minor
version, similar to how it has been for all other connectors with Flin
Thanks for creating RC1
* Downloaded artifacts
* Built from sources
* Verified checksums and gpg signatures
* Verified versions in pom files
* Checked NOTICE, LICENSE files
The strange thing I faced is that
CheckpointAfterAllTasksFinishedITCase.testRestoreAfterSomeTasksFinished
fails on AZP [1], which