[jira] [Created] (FLINK-37851) Migrate state processor API from source API v1 to source API v2

2025-05-26 Thread Gabor Somogyi (Jira)
Gabor Somogyi created FLINK-37851:
-

 Summary: Migrate state processor API from source API v1 to source 
API v2
 Key: FLINK-37851
 URL: https://issues.apache.org/jira/browse/FLINK-37851
 Project: Flink
  Issue Type: Improvement
  Components: API / State Processor
Affects Versions: 2.1.0
Reporter: Gabor Somogyi
Assignee: Gabor Somogyi








Re: [DISCUSS] FLIP-XXX: Flink Events Reporter System

2025-05-26 Thread Li Wang
Hi Kartikey,

Thanks for the FLIP. I think the Events Reporter System is a very good idea
for Flink. We truly need deeper insight than metrics alone can provide,
especially for achieving autonomous operations.

The V1 plan looks very practical. Focusing on the core parts and using
asynchronous dispatch for stability is a good strategy. Regarding future
reporters, I have one question: I would like to build a Kafka Events Reporter
at a later time. How does the V1 design enable developing such a Kafka
reporter effectively?

Thanks


On Sat, May 24, 2025 at 11:07 PM Kartikey Pant 
wrote:

> Hi Flink Devs,
>
> I’m Kartikey Pant. Drawing on my experience with large-scale Flink
> pipelines and AI/ML, I believe Flink needs richer, structured event data
> for advanced tuning, AIOps, and deeper observability - moving beyond
> current metrics and log scraping.
>
> To help with this, I've drafted a proposal for a new Flink EventsReporter
> System. The core idea is to create something familiar, based on how
> MetricReporters work, but focused on emitting key operational events in a
> structured way.
>
> For V1, I'm suggesting we start focused and prioritize stability:
>
> - Build the basic asynchronous reporting framework.
> - Emit critical events like Job Status changes & Checkpoint results (as JSON).
> - Include a simple FileEventsReporter so it's useful right away.
>
> You can read the full proposal here:
>
> https://docs.google.com/document/d/1R4fmOTQDLZcUQwgmCCxoRb74MGPZOypiScUbKL43AL4
>
> I'm eager for your feedback. Does this V1 approach make sense, or am I
> overlooking anything? I'm looking to get more involved in Flink
> development, and your insights and guidance here would be incredibly
> helpful.
>
> Thanks a lot,
>
> Kartikey Pant
>
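
A rough sketch of what such a pluggable reporter SPI could look like, loosely
modeled on how MetricReporter plugins work (every name below is hypothetical
and not taken from the proposal document):

import java.util.Map;

/** Hypothetical SPI sketch; a Kafka reporter would be just another implementation. */
interface EventsReporter {
    /** Called once with reporter-specific configuration. */
    void open(Map<String, String> config);

    /** Invoked from the asynchronous dispatch thread; must never block the job. */
    void report(FlinkEvent event);

    void close();
}

/** Minimal immutable event carrier; V1 would render this as JSON. */
final class FlinkEvent {
    final String type; // e.g. "JOB_STATUS_CHANGED" or "CHECKPOINT_COMPLETED"
    final long timestampMillis;
    final Map<String, String> attributes;

    FlinkEvent(String type, long timestampMillis, Map<String, String> attributes) {
        this.type = type;
        this.timestampMillis = timestampMillis;
        this.attributes = attributes;
    }
}

Under this shape, a KafkaEventsReporter would only need to implement report()
with a producer send; the asynchronous dispatch in V1 is what keeps such I/O
off the hot path.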


Re: [VOTE] FLIP-313: Add support of User Defined AsyncTableFunction

2025-05-26 Thread Becket Qin
Thanks for pointing to FLIP-498, Timo. I missed that.

From the FLIP-498 discussion thread, it is unclear to me whether people had
agreed to "discard" FLIP-313. The FLIP-498 discussion mentioned that we may
still add the hint-based options later, which is what FLIP-313 proposed. And
I think we already see use cases for per-function-instance options instead of
job-level configs.

@Timo, can you clarify: by -1, do you mean to veto the technical proposal of
FLIP-313, or that you want yet another FLIP (other than FLIP-498) to add the
hint-based options? And why?

BTW, I feel that we can do better at dealing with similar FLIPs from
different contributors, as well as FLIPs that have been dormant for a long
time. I'll start a separate discussion on that rather than derail this thread.

Thanks,

Jiangjie (Becket) Qin


On Mon, May 26, 2025 at 3:22 AM Timo Walther  wrote:

> -1
>
> This FLIP has been subsumed by FLIP-498: AsyncTableFunction for async
> table function support [1]. In the discussion for FLIP-498, we decided
> to discard FLIP-313 as it has been abandoned for a while.
>
> I hope this is OK for everyone. @Alan, could you give a timeline for when
> this feature will land?
>
> Cheers,
> Timo
>
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-498%3A+AsyncTableFunction+for+async+table+function+support
>
>
> On 26.05.25 07:51, Teunissen, F.G.J. (Fred) wrote:
> > +1 (non-binding)
> >
> > We currently use a custom async table source and join it using FOR
> > SYSTEM_TIME AS OF. This approach has some challenges, especially when
> > used after aggregations.
> >
> > Introducing support for an async UDTF would allow us to perform the join
> using LATERAL TABLE, which would greatly simplify the query structure and
> improve maintainability.
> >
> > Kind regards,
> > Fred Teunissen
> >
> > From: Becket Qin 
> > Date: Thursday, 22 May 2025 at 17:08
> > To: dev@flink.apache.org 
> > Subject: Re: [VOTE] FLIP-313: Add support of User Defined
> AsyncTableFunction
> >
> > I just realized this FLIP has never been voted to pass.
> >
> > +1 to the FLIP.
> > This is actually something long overdue. I feel it is even more like a
> bug
> > that we need to fix than a feature.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> > On Sun, Jun 25, 2023 at 7:36 PM Aitozi  wrote:
> >
> >> Hi devs:
> >>  The last comments in [1] have been addressed; I'd like to restart
> this
> >> vote thread.
> >> The vote will be open for at least 72 hours (until June 29th, 10:00AM
> GMT)
> >> unless there is an objection or an insufficient number of votes.
> >>
> >> [1]
> >> https://lists.apache.org/thread/7vk1799ryvrz4lsm5254q64ctm89mx2l
> >> [2]
> >> https://cwiki.apache.org/confluence/display/FLINK/FLIP-313%3A+Add+support+of+User+Defined+AsyncTableFunction
> >>
> >> Best regards,
> >> Aitozi
> >>
> >> On Wed, Jun 14, 2023 at 09:47, Aitozi wrote:
> >>
> >>> Hi all,
> >>>  Thanks for all the feedback about FLIP-313: Add support of User
> >>> Defined AsyncTableFunction[1]. Based on the discussion [2], we have
> come
> >> to
> >>> a consensus, so I would like to start a vote.
> >>> The vote will be open for at least 72 hours (until June 19th, 10:00AM
> >> GMT)
> >>> unless there is an objection or an insufficient number of votes.
> >>>
> >>>
> >>> [1]
> >>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-313%3A+Add+support+of+User+Defined+AsyncTableFunction
> >>> [2]
> >>> https://lists.apache.org/thread/7vk1799ryvrz4lsm5254q64ctm89mx2l

Re: Getting "sun.rmi.registry is not visible" exception while running main method of CliFrontend.java class of flink-clients package

2025-05-26 Thread pranav tiwari
Hi,
Could someone please guide me on the problem described in the mail below?


On Mon, 26 May 2025 at 7:09 PM, pranav tiwari  wrote:

> Hi,
>
> I recently cloned the Flink source code and am trying to run the main
> method of the CliFrontend.java class of the flink-clients package,
> but I am getting the following error message:
>
> package sun.rmi.registry is not visible
> (package sun.rmi.registry is declared in module java.rmi, which does not
> export it to the unnamed module)
>
> I am running this in IntelliJ. I have added the options below to the VM
> options of the IntelliJ run configuration, but I am still getting the
> same error.
>
> --add-exports=java.base/sun.net.util=ALL-UNNAMED
> --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED
> --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED
> --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED
> --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED
> --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED
> --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED
> --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED
> --add-opens=java.base/java.lang=ALL-UNNAMED
> --add-opens=java.base/java.net=ALL-UNNAMED
> --add-opens=java.base/java.io=ALL-UNNAMED
> --add-opens=java.base/java.nio=ALL-UNNAMED
> --add-opens=java.base/sun.nio.ch=ALL-UNNAMED
> --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
> --add-opens=java.base/java.text=ALL-UNNAMED
> --add-opens=java.base/java.time=ALL-UNNAMED
> --add-opens=java.base/java.util=ALL-UNNAMED
> --add-opens=java.base/java.util.concurrent=ALL-UNNAMED
> --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED
> --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED
>
> Please let me know how this can be fixed.
>
>
> Thank you
> Pranav
>
>
>


Re: Getting "sun.rmi.registry is not visible" exception while running main method of CliFrontend.java class of flink-clients package

2025-05-26 Thread Ferenc Csaky
Hi,

What branch and JDK version are you using? It should not be necessary to
configure `--add-opens` by hand. In the past, I ran into problems like this
when I was moving back and forth between JDK versions; sometimes the IDE can
get stuck in a confused state.

In general, I would suggest invalidating caches ('File -> Invalidate
Caches...') and marking everything there except the embedded browser cleanup.
To start from a clean slate, you can also close IntelliJ and remove the
`.idea` directory and all `*.iml` files from the cloned repository. Assuming
the `release-2.0` or `master` branch, IMO JDK 17 should be used for
development. So before anything else, I tend to build the project from the
terminal with JDK 17 + Maven 3.8.6 via
`mvn clean install -DskipTests -Dfast -T1`; this will definitely succeed.

From this point, when you reopen the project in IntelliJ, make sure the
Project SDK is set to JDK 17. In the general settings, under 'Build,
Execution, Deployment -> Compiler -> Java Compiler' you can check the applied
'Project bytecode version' and 'Per-module bytecode version', but by default
those should match the project SDK.

+1 hint: sometimes it can be confusing why IntelliJ cannot build Scala
modules. In my experience, most of the time the cause is that the Scala
compiler is configured for a different JDK version than the project. That can
be checked under 'Build, Execution, Deployment -> Compiler -> Scala Compiler
-> Scala Compile Server', making sure the JDK configured there matches the
one configured as the project SDK.

Cheers,
Ferenc



On Tuesday, May 27th, 2025 at 04:10, pranav tiwari  wrote:

> 
> 
> Hi,
> Could someone please guide me on the problem described in the mail below?
> 
> 
> On Mon, 26 May 2025 at 7:09 PM, pranav tiwari pranav...@gmail.com wrote:
> 
> > Hi,
> > 
> > I recently cloned the Flink source code and am trying to run the main
> > method of the CliFrontend.java class of the flink-clients package,
> > but I am getting the following error message:
> > 
> > package sun.rmi.registry is not visible
> > (package sun.rmi.registry is declared in module java.rmi, which does not
> > export it to the unnamed module)
> > 
> > I am running this in IntelliJ. I have added the options below to the VM
> > options of the IntelliJ run configuration, but I am still getting the
> > same error.
> > 
> > --add-exports=java.base/sun.net.util=ALL-UNNAMED
> > --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED
> > --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED
> > --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED
> > --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED
> > --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED
> > --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED
> > --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED
> > --add-opens=java.base/java.lang=ALL-UNNAMED
> > --add-opens=java.base/java.net=ALL-UNNAMED
> > --add-opens=java.base/java.io=ALL-UNNAMED
> > --add-opens=java.base/java.nio=ALL-UNNAMED
> > --add-opens=java.base/sun.nio.ch=ALL-UNNAMED
> > --add-opens=java.base/java.lang.reflect=ALL-UNNAMED
> > --add-opens=java.base/java.text=ALL-UNNAMED
> > --add-opens=java.base/java.time=ALL-UNNAMED
> > --add-opens=java.base/java.util=ALL-UNNAMED
> > --add-opens=java.base/java.util.concurrent=ALL-UNNAMED
> > --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED
> > --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED
> > 
> > Please let me know how this can be fixed.
> > 
> > Thank you
> > Pranav
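
A note on the likely root cause (my reading of the error text, not something
confirmed in this thread): "package ... is not visible" is a javac
compile-time diagnostic, while VM options only take effect at runtime, so
they cannot make the compile error go away. If IntelliJ compiles the module
itself instead of delegating the build to Maven, the same export also has to
be passed to the compiler, e.g. under 'Build, Execution, Deployment ->
Compiler -> Java Compiler -> Additional command line parameters':

--add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED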


[jira] [Created] (FLINK-37854) UnsupportedTemporalTypeException: Unsupported field: OffsetSeconds

2025-05-26 Thread Jacky Lau (Jira)
Jacky Lau created FLINK-37854:
-

 Summary: UnsupportedTemporalTypeException: Unsupported field: 
OffsetSeconds
 Key: FLINK-37854
 URL: https://issues.apache.org/jira/browse/FLINK-37854
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 2.1.0
Reporter: Jacky Lau
 Fix For: 2.1.0


Spark supports this, while Flink doesn't:
{code:java}
spark-sql (default)> SELECT DATE_FORMAT(CURRENT_TIMESTAMP, 'yyyy-MM-dd HH:mm:ssXXX');
2025-05-27 10:23:50+08:00
Time taken: 4.358 seconds, Fetched 1 row(s)
spark-sql (default)> SELECT DATE_FORMAT(CURRENT_TIMESTAMP, 'yyyy-MM-dd HH:mm:ssX');
2025-05-27 10:23:57+08
Time taken: 0.062 seconds, Fetched 1 row(s)
spark-sql (default)> SELECT DATE_FORMAT(CURRENT_TIMESTAMP, 'yyyy-MM-dd HH:mm:ssXX');
2025-05-27 10:24:17+0800 {code}
 
{code:java}
Caused by: java.time.temporal.UnsupportedTemporalTypeException: Unsupported field: OffsetSeconds
    at java.base/java.time.LocalDate.get0(LocalDate.java:709)
    at java.base/java.time.LocalDate.getLong(LocalDate.java:688)
    at java.base/java.time.LocalDateTime.getLong(LocalDateTime.java:718)
    at java.base/java.time.format.DateTimePrintContext.getValue(DateTimePrintContext.java:308)
    at java.base/java.time.format.DateTimeFormatterBuilder$OffsetIdPrinterParser.format(DateTimeFormatterBuilder.java:3628)
    at java.base/java.time.format.DateTimeFormatterBuilder$CompositePrinterParser.format(DateTimeFormatterBuilder.java:2402)
    at java.base/java.time.format.DateTimeFormatter.formatTo(DateTimeFormatter.java:1849)
    at java.base/java.time.format.DateTimeFormatter.format(DateTimeFormatter.java:1823)
    at java.base/java.time.LocalDateTime.format(LocalDateTime.java:1746)
    at org.apache.flink.table.utils.DateTimeUtils.formatTimestamp(DateTimeUtils.java:799)
    at org.apache.flink.table.utils.DateTimeUtils.formatTimestamp(DateTimeUtils.java:764)
 {code}
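
For reference, the failure can be reproduced with the JDK alone: patterns
containing 'X' require a UTC offset, which the LocalDateTime used internally
by DateTimeUtils.formatTimestamp does not carry. A minimal sketch (class name
invented):
{code:java}
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class OffsetPatternRepro {
    public static void main(String[] args) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ssXXX");
        // Works: a ZonedDateTime carries a UTC offset, so 'XXX' can be printed.
        System.out.println(ZonedDateTime.now(ZoneId.of("Asia/Shanghai")).format(fmt));
        // Throws UnsupportedTemporalTypeException (Unsupported field: OffsetSeconds):
        // a LocalDateTime has no offset field for the formatter to read.
        System.out.println(LocalDateTime.now().format(fmt));
    }
}
{code}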





[Discuss] Flink CDC 3.4 Kick Off, [Discuss] Flink CDC 3.5 Kick Off

2025-05-26 Thread Yanquan Lv
Hi devs,

As Flink CDC 3.4 was released recently, it's a good time to kick off the
upcoming Flink CDC 3.5 release cycle.

In the past two major versions, we have made significant optimizations to
the schema evolution capability, and this feature is now quite stable. In
this version, we will focus on the following development tasks:

   1. Expand the source connectors that pipeline jobs can support.
   Integrate the PostgreSQL/Kafka connectors as pipeline sources, which are
   in high demand based on feedback from community users.
   2. Improve usability in multi-table synchronization scenarios (tables
   with significant differences in primary keys and data distribution).
   3. Further improve the robustness of the Transform module in complex
   scenarios.

For developers who are interested in participating and contributing new
features in this release cycle, please feel free to create a Jira ticket
targeting cdc-3.5.0 to track your planned features [1].

I'm happy to volunteer as a release manager, and I am of course open to
working on this together with someone.
We plan to complete the development of Flink CDC 3.5 by the end of August
2025.


Best,
Yanquan

[1]
https://issues.apache.org/jira/browse/FLINK-37839?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%20cdc-3.5.0


Re: [Discuss] Flink CDC 3.4 Kick Off, [Discuss] Flink CDC 3.5 Kick Off

2025-05-26 Thread Yanquan Lv
Sorry for the wrong title; it should be:
[Discuss] Flink CDC 3.5 Kick Off


On Tue, May 27, 2025 at 11:36, Yanquan Lv wrote:

> Hi devs,
>
> As Flink CDC 3.4 was released recently, it's a good time to kick off the
> upcoming Flink CDC 3.5 release cycle.
>
> In the past two major versions, we have made significant optimizations to
> the schema evolution capability, and this feature is now quite stable. In
> this version, we will focus on the following development tasks:
>
>1. Expand the source connectors that pipeline jobs can support.
>Integrate the PostgreSQL/Kafka connectors as pipeline sources, which are
>in high demand based on feedback from community users.
>2. Improve usability in multi-table synchronization scenarios (tables
>with significant differences in primary keys and data distribution).
>3. Further improve the robustness of the Transform module in complex
>scenarios.
>
> For developers who are interested in participating and contributing new
> features in this release cycle, please feel free to create a Jira ticket
> targeting cdc-3.5.0 to track your planned features [1].
>
> I'm happy to volunteer as a release manager, and I am of course open to
> working on this together with someone.
> We plan to complete the development of Flink CDC 3.5 by the end of August
> 2025.
>
>
> Best,
> Yanquan
>
> [1]
> https://issues.apache.org/jira/browse/FLINK-37839?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%20cdc-3.5.0
>


Re: [Discuss] Flink CDC 3.4 Kick Off, [Discuss] Flink CDC 3.5 Kick Off

2025-05-26 Thread Xiqian YU
+1, thanks for driving this release!

Best Regards,
Xiqian

> On May 27, 2025 at 11:38, Yanquan Lv wrote:
> 
> Sorry for the wrong title; it should be:
> [Discuss] Flink CDC 3.5 Kick Off
> 
> 
> On Tue, May 27, 2025 at 11:36, Yanquan Lv wrote:
> 
>> Hi devs,
>> 
>> As Flink CDC 3.4 was released recently, it's a good time to kick off the
>> upcoming Flink CDC 3.5 release cycle.
>> 
>> In the past two major versions, we have made significant optimizations to
>> the schema evolution capability, and this feature is now quite stable. In
>> this version, we will focus on the following development tasks:
>> 
>>   1. Expand the source connectors that pipeline jobs can support.
>>   Integrate the PostgreSQL/Kafka connectors as pipeline sources, which are
>>   in high demand based on feedback from community users.
>>   2. Improve usability in multi-table synchronization scenarios (tables
>>   with significant differences in primary keys and data distribution).
>>   3. Further improve the robustness of the Transform module in complex
>>   scenarios.
>> 
>> For developers who are interested in participating and contributing new
>> features in this release cycle, please feel free to create a Jira ticket
>> targeting cdc-3.5.0 to track your planned features [1].
>> 
>> I'm happy to volunteer as a release manager, and I am of course open to
>> working on this together with someone.
>> We plan to complete the development of Flink CDC 3.5 by the end of August
>> 2025.
>> 
>> 
>> Best,
>> Yanquan
>> 
>> [1]
>> https://issues.apache.org/jira/browse/FLINK-37839?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%20cdc-3.5.0
>> 



[DISCUSS] Deprecating/dropping support for Python 3.8

2025-05-26 Thread Mika Naylor
Hi all,

I recently wanted to look into using the new dependency groups mechanism for 
centralising testing/development requirements in one place, in a consistent 
format, rather than scattered across various places. I opened FLINK-37775 to 
track this, but had to drop it because the feature only exists in new versions 
of pip, and new versions of pip are no longer being released for our minimum 
Python version, 3.8.

3.8 officially reached end of life last year[1], but is still used[2]. With 
3.8 at EOL, and 3.9 reaching EOL at the end of this year, I wanted to ask for 
feedback on two questions:

 • Do we have some sort of policy or intuition around deprecating/dropping 
support for Python versions to ease the maintenance burden? Should this be 
based on versions reaching EOL, on wider language-version usage, or on insight 
into which Python versions users of the latest Flink releases are running?
 • Should we deprecate a version first (as in FLINK-28195[3]), and for how 
many minor releases should the deprecation last before support is dropped 
altogether (just one?)
Kind regards,
Mika

[1] https://devguide.python.org/versions/
[2] https://w3techs.com/technologies/history_details/pl-python/3
[3] https://issues.apache.org/jira/browse/FLINK-28195

Re: [DISCUSS] Projection pushdown support in Flink Kafka Table connector

2025-05-26 Thread Farooq Qaiser
Hey folks,

Just wanted to give this thread a quick bump for visibility.

Thanks,
Farooq

On Fri, May 9, 2025 at 12:04 PM Farooq Qaiser 
wrote:

> Hi folks,
>
> Just wanted to share a PR I've been working on to add support for
> projection pushdown to the Flink Kafka Table connector:
> https://github.com/apache/flink-connector-kafka/pull/174
>
> I think this could improve performance and even resiliency in some cases
> (see the PR description for more details).
>
> Would love to hear thoughts from the community on the proposed change and
> if there are people who are interested in helping review this.
>
> Thanks,
> Farooq
>
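
For context, projection pushdown for Table API sources is driven by the
SupportsProjectionPushDown ability interface. A rough sketch of how a source
can implement it (simplified and hypothetical, not the actual code from the PR
above; signature as in recent Flink releases):

import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.connector.source.ScanTableSource;
import org.apache.flink.table.connector.source.abilities.SupportsProjectionPushDown;
import org.apache.flink.table.types.DataType;

/** Sketch only: remembers the narrowed schema so the format can skip unused fields. */
public class ProjectableSourceSketch implements ScanTableSource, SupportsProjectionPushDown {

    private DataType producedDataType;

    @Override
    public boolean supportsNestedProjection() {
        return false; // this sketch handles top-level fields only
    }

    @Override
    public void applyProjection(int[][] projectedFields, DataType producedDataType) {
        // The planner narrows the schema; the deserializer can then decode
        // only the projected fields instead of materializing full rows.
        this.producedDataType = producedDataType;
    }

    @Override
    public ChangelogMode getChangelogMode() {
        return ChangelogMode.insertOnly();
    }

    @Override
    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext context) {
        throw new UnsupportedOperationException("sketch only");
    }

    @Override
    public DynamicTableSource copy() {
        return this;
    }

    @Override
    public String asSummaryString() {
        return "ProjectableSourceSketch";
    }
}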


[jira] [Created] (FLINK-37852) SQL Server CDC cannot capture changes from tables with special characters

2025-05-26 Thread Sergei Morozov (Jira)
Sergei Morozov created FLINK-37852:
--

 Summary: SQL Server CDC cannot capture changes from tables with 
special characters
 Key: FLINK-37852
 URL: https://issues.apache.org/jira/browse/FLINK-37852
 Project: Flink
  Issue Type: Bug
  Components: Flink CDC
Affects Versions: cdc-3.4.0
Reporter: Sergei Morozov


Configure a connector against a table containing an opening square bracket in 
its name (e.g. {{customers [1]}}).
h5. Expected behavior

The connector captures the changes as usual.
h5. Actual behavior

The connector cannot introspect the table schema and fails as follows:

{noformat}
org.apache.flink.runtime.source.coordinator.SourceCoordinator - Uncaught exception in the SplitEnumerator for Source Source: customers[1] while handling operator event RequestSplitEvent (host='localhost') from subtask 0 (#0). Triggering job failover.
org.apache.flink.util.FlinkRuntimeException: Chunk splitting has encountered exception
at org.apache.flink.cdc.connectors.base.source.assigner.SnapshotSplitAssigner.checkSplitterErrors(SnapshotSplitAssigner.java:687) ~[classes/:?]
at org.apache.flink.cdc.connectors.base.source.assigner.SnapshotSplitAssigner.getNext(SnapshotSplitAssigner.java:402) ~[classes/:?]
at org.apache.flink.cdc.connectors.base.source.assigner.HybridSplitAssigner.getNext(HybridSplitAssigner.java:182) ~[classes/:?]
at org.apache.flink.cdc.connectors.base.source.enumerator.IncrementalSourceEnumerator.assignSplits(IncrementalSourceEnumerator.java:219) ~[classes/:?]
at org.apache.flink.cdc.connectors.base.source.enumerator.IncrementalSourceEnumerator.handleSplitRequest(IncrementalSourceEnumerator.java:117) ~[classes/:?]
at org.apache.flink.runtime.source.coordinator.SourceCoordinator.handleRequestSplitEvent(SourceCoordinator.java:637) ~[flink-runtime-1.20.1.jar:1.20.1]
at org.apache.flink.runtime.source.coordinator.SourceCoordinator.lambda$handleEventFromOperator$3(SourceCoordinator.java:306) ~[flink-runtime-1.20.1.jar:1.20.1]
at org.apache.flink.runtime.source.coordinator.SourceCoordinator.lambda$runInEventLoop$10(SourceCoordinator.java:533) ~[flink-runtime-1.20.1.jar:1.20.1]
at org.apache.flink.util.ThrowableCatchingRunnable.run(ThrowableCatchingRunnable.java:40) [flink-core-1.20.1.jar:1.20.1]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_452]
at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266) [?:1.8.0_452]
at java.util.concurrent.FutureTask.run(FutureTask.java) [?:1.8.0_452]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_452]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_452]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_452]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_452]
at java.lang.Thread.run(Thread.java:750) [?:1.8.0_452]
Caused by: java.lang.IllegalStateException: Error when splitting chunks for customer.dbo.customers [1]
at org.apache.flink.cdc.connectors.base.source.assigner.SnapshotSplitAssigner.splitTable(SnapshotSplitAssigner.java:361) ~[classes/:?]
at org.apache.flink.cdc.connectors.base.source.assigner.SnapshotSplitAssigner.splitChunksForRemainingTables(SnapshotSplitAssigner.java:670) ~[classes/:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_452]
at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266) ~[?:1.8.0_452]
at java.util.concurrent.FutureTask.run(FutureTask.java) ~[?:1.8.0_452]
... 3 more
Caused by: java.lang.RuntimeException: Fail to analyze table in chunk splitter.
at org.apache.flink.cdc.connectors.base.source.assigner.splitter.JdbcSourceChunkSplitter.analyzeTable(JdbcSourceChunkSplitter.java:368) ~[classes/:?]
at org.apache.flink.cdc.connectors.base.source.assigner.splitter.JdbcSourceChunkSplitter.generateSplits(JdbcSourceChunkSplitter.java:112) ~[classes/:?]
at org.apache.flink.cdc.connectors.base.source.assigner.SnapshotSplitAssigner.splitTable(SnapshotSplitAssigner.java:359) ~[classes/:?]
at org.apache.flink.cdc.connectors.base.source.assigner.SnapshotSplitAssigner.splitChunksForRemainingTables(SnapshotSplitAssigner.java:670) ~[classes/:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_452]
at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266) ~[?:1.8.0_452]
at java.util.concurrent.FutureTask.run(FutureTask.java) ~[?:1.8.0_452]
... 3 more
Caused
{noformat}
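
A likely root-cause sketch (an assumption based on the symptom, not confirmed
in this report): in T-SQL bracket quoting, only the closing bracket needs
escaping, by doubling it, so any code that wraps identifiers in plain brackets
produces invalid SQL for a table named {{customers [1]}}. A minimal
illustration (helper name hypothetical):
{code:java}
class Identifiers {
    /** T-SQL bracket quoting: escape ']' by doubling it; '[' needs no escaping. */
    static String quoteSqlServerIdentifier(String name) {
        return "[" + name.replace("]", "]]") + "]";
    }
}
// Identifiers.quoteSqlServerIdentifier("customers [1]")  ->  "[customers [1]]]"
{code}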

Re: [VOTE] FLIP-530: Dynamic job configuration

2025-05-26 Thread Roman Khachatryan
Hi everyone,

I'm happy to announce that FLIP-530: Dynamic job configuration
[1] has been accepted.

There were 8 approving votes, 4 of which were binding:

- Piotr Nowojski (binding)
- Hao Li (non-binding)
- Junrui Lee (binding)
- Hangxiang Yu (binding)
- Fefan Wang (non-binding)
- Rui Fan (binding)
- Kartikey Pant (non-binding)
- Gustavo de Morais (non-binding)

There were no disapproving votes.

Thanks everyone!

[1]
https://cwiki.apache.org/confluence/x/uglKFQ

Regards,
Roman


On Wed, May 21, 2025 at 9:56 AM Gustavo de Morais 
wrote:

> Hi,
>
> +1 (non-binding).
>
> Thanks,
> Gustavo
>
> On Wed, 21 May 2025 at 08:32, Kartikey Pant 
> wrote:
>
> > Hi,
> >
> > +1 (non-binding)
> >
> > Thanks, Kartikey.
> >
> > On Wed, May 21, 2025 at 12:59 AM Roman Khachatryan 
> > wrote:
> >
> > > Hi everyone,
> > >
> > > I'd like to start a vote on FLIP-530: Dynamic job configuration
> > > [1] which has been discussed in this thread [2].
> > >
> > > The vote will be open for at least 72 hours unless there is an
> objection
> > > or not enough votes.
> > >
> > > [1]
> > > https://cwiki.apache.org/confluence/x/uglKFQ
> > >
> > > [2]
> > > https://lists.apache.org/thread/w1m420jx6h5cjv4rfy229xs00mmn7pwg
> > >
> > > Regards,
> > > Roman
> > >
> >
>


[jira] [Created] (FLINK-37853) FLIP-530: Dynamic job configuration

2025-05-26 Thread Roman Khachatryan (Jira)
Roman Khachatryan created FLINK-37853:
-

 Summary: FLIP-530: Dynamic job configuration
 Key: FLINK-37853
 URL: https://issues.apache.org/jira/browse/FLINK-37853
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Configuration
Reporter: Roman Khachatryan
Assignee: Roman Khachatryan


Umbrella ticket for implementation of 
https://cwiki.apache.org/confluence/x/uglKFQ 





Re: [DISCUSS] FLIP-531: Initiate Flink Agents as a new Sub-Project

2025-05-26 Thread Jing Ge
Fair enough! Thanks Xintong for the clarification! Looking forward to it!

Best regards,
Jing

On Mon, May 26, 2025 at 3:48 AM Xintong Song  wrote:

> @Robert,
>
> 1. I wasn't aware of the ASF subproject concept. Yes, the intention here is
> to create a repository, just like flink-cdc, flink-kubernetes-operator.
> I'll add a clarification in the FLIP.
>
> 2. Sorry for the confusion. I think we are just using Kafka as an example
> here. I'll correct it in the FLIP.
>
> I think there are two different ways to build a multi-agent system, and we
> plan to support both.
>
>- Running multiple agents in the same Flink job. This means managing the
>lifecycle of the agents as a whole, and end-to-end checkpointing
>consistency across them. For this case, we do plan to leverage StateFun
>for the communication. In the first step, we will probably simply depend
>on StateFun to quickly get it working. In the long term, I think it makes
>sense to move the code that we want to reuse from StateFun into the new
>project, rather than depending on a no-longer-maintained project.
>- Running agents as separate Flink jobs. In this way, I agree we should
>leverage Flink's connector framework.
>
> Thanks for pointing out the issues.
>
> Best,
>
> Xintong
>
>
>
> On Sun, May 25, 2025 at 10:38 AM Yuan Mei  wrote:
>
> > Thanks Xintong, Sean and Chris.
> >
> > This is a great step forward for the future of Flink. I'm really looking
> > forward to it!
> >
> > Best,
> > Yuan
> >
> > On Sat, May 24, 2025 at 10:00 PM Robert Metzger 
> > wrote:
> >
> > > Thanks for the nice proposal.
> > >
> > > One question: The proposal talks a lot about establishing a "sub
> > project".
> > > If I understand correctly, the ASF has a concept of subprojects, with
> > > sub-project committers, mailing lists, jira projects, .. etc. [1][2].
> > >
> > > Is the intention of this proposal to establish such a sub project?
> > > Or is the intention to basically create a "flink-agents" git repository
> > > that all existing Flink committers have access to, with the Flink PMC
> > > voting on releases? (I assume this is the intention.) If so, I would
> > > update the proposal to talk about a new repository, or at least clarify
> > > the immediate implications for the project.
> > >
> > > My second question is about this key feature:
> > > > *Inter-Agent Communication:* Built-in support for asynchronous
> > > agent-to-agent communication using Kafka.
> > >
> > > Does this mean the code from the flink-agents repo will have a
> dependency
> > > on AK? One of the big benefits of Flink is that it is independent of
> the
> > > underlying message streaming system. Wouldn't it be more elegant and
> > > actually easier to rely on the Flink connector framework here, and
> leave
> > > the concrete implementation to the user?
> > > Also, I wonder why we need to rely on an external message streaming
> > > system at all. Is it because we want to be able to send messages in
> > > arbitrary directions? If so, maybe we can re-use code from Flink
> > > Statefun? I personally would think that relying on Flink's internal
> > > data transfer model by default brings a lot of cost, performance,
> > > operations and implementation benefits ... and users can still manually
> > > set up a connector using a Kafka, Pulsar or PubSub connection. WDYT?
> > >
> > > Best,
> > > Robert
> > >
> > >
> > > [1]
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/CASSANDRA/Cassandra+Sub+Projects
> > > [2]
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/HADOOP/Apache+Hadoop+Ozone+-+sub-project+to+Apache+TLP+proposal
> > >
> > >
> > > On Fri, May 23, 2025 at 6:14 AM Xintong Song 
> > > wrote:
> > >
> > > > @Jing,
> > > >
> > > > I think the FLIP already included the high-level design goals, by
> > listing
> > > > the key features that we plan to support in the Proposed Solution
> > > section,
> > > > and demonstrating how using the framework may look with the code
> > > > examples. Of course the high-level goals need to be further detailed,
> > > which
> > > > will be the next step. The purpose of this FLIP is to get community
> > > > consensus on initiating this new project. On the other hand,
> technical
> > > > design takes time to discuss, and likely requires continuous
> iteration
> > as
> > > > the project is being developed. So I think it makes sense to separate
> > the
> > > > design discussions from the initiation proposal.
> > > >
> > > > Of course any contributor's thoughts and inputs are valuable to the
> > > > project. And efficiency also matters; as the agentic AI industry grows
> > > > fast, we really need to keep up with the pace. I believe it would be
> > more
> > > > efficient to come up with some initial draft design / implementation
> > that
> > > > everyone can comment on, compared to just randomly collecting ideas
> > when
> > > we
> > > > have nothing. Fortunately, the project 

Getting "sun.rmi.registry is not visible" exception while running main method of CliFrontend.java class of flink-clients package

2025-05-26 Thread pranav tiwari
Hi,

I recently cloned the Flink source code and am trying to run the main
method of the CliFrontend.java class of the flink-clients package, but I
am getting the following error message:

package sun.rmi.registry is not visible
(package sun.rmi.registry is declared in module java.rmi, which does not
export it to the unnamed module)

I am running this in IntelliJ. I have added the options below to the VM
options of the IntelliJ run configuration, but I am still getting the same
error.

--add-exports=java.base/sun.net.util=ALL-UNNAMED
--add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED
--add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED
--add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED
--add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED
--add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED
--add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED
--add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED
--add-opens=java.base/java.lang=ALL-UNNAMED
--add-opens=java.base/java.net=ALL-UNNAMED
--add-opens=java.base/java.io=ALL-UNNAMED
--add-opens=java.base/java.nio=ALL-UNNAMED
--add-opens=java.base/sun.nio.ch=ALL-UNNAMED
--add-opens=java.base/java.lang.reflect=ALL-UNNAMED
--add-opens=java.base/java.text=ALL-UNNAMED
--add-opens=java.base/java.time=ALL-UNNAMED
--add-opens=java.base/java.util=ALL-UNNAMED
--add-opens=java.base/java.util.concurrent=ALL-UNNAMED
--add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED
--add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED

Please let me know how this can be fixed.


Thank you
Pranav


Re: [VOTE] FLIP-313: Add support of User Defined AsyncTableFunction

2025-05-26 Thread Timo Walther

-1

This FLIP has been subsumed by FLIP-498: AsyncTableFunction for async 
table function support [1]. In the discussion for FLIP-498, we decided 
to discard FLIP-313 as it has been abandoned for a while.


I hope this is OK for everyone. @Alan, could you give a timeline for when
this feature will land?


Cheers,
Timo


[1] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-498%3A+AsyncTableFunction+for+async+table+function+support



On 26.05.25 07:51, Teunissen, F.G.J. (Fred) wrote:

+1 (non-binding)

We currently use a custom async table source and join it using FOR
SYSTEM_TIME AS OF. This approach has some challenges, especially when used
after aggregations.

Introducing support for an async UDTF would allow us to perform the join using 
LATERAL TABLE, which would greatly simplify the query structure and improve 
maintainability.

Kind regards,
Fred Teunissen

From: Becket Qin 
Date: Thursday, 22 May 2025 at 17:08
To: dev@flink.apache.org 
Subject: Re: [VOTE] FLIP-313: Add support of User Defined AsyncTableFunction

I just realized this FLIP has never been voted to pass.

+1 to the FLIP.
This is actually something long overdue. I feel it is even more like a bug
that we need to fix than a feature.

Thanks,

Jiangjie (Becket) Qin

On Sun, Jun 25, 2023 at 7:36 PM Aitozi  wrote:


Hi devs:
 The last comments in [1] have been addressed; I'd like to restart this
vote thread.
The vote will be open for at least 72 hours (until June 29th, 10:00AM GMT)
unless there is an objection or an insufficient number of votes.

[1] https://lists.apache.org/thread/7vk1799ryvrz4lsm5254q64ctm89mx2l
[2]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-313%3A+Add+support+of+User+Defined+AsyncTableFunction

Best regards,
Aitozi

On Wed, Jun 14, 2023 at 09:47, Aitozi wrote:


Hi all,
 Thanks for all the feedback about FLIP-313: Add support of User
Defined AsyncTableFunction[1]. Based on the discussion [2], we have come

to

a consensus, so I would like to start a vote.
The vote will be open for at least 72 hours (until June 19th, 10:00AM

GMT)

unless there is an objection or an insufficient number of votes.


[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-313%3A+Add+support+of+User+Defined+AsyncTableFunction
[2] https://lists.apache.org/thread/7vk1799ryvrz4lsm5254q64ctm89mx2l

Best regards,
Aitozi
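
To visualize the two query shapes Fred contrasts above, a sketch (table,
column, and function names invented; they would need to be registered first):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class JoinShapes {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Today: async lookup via a temporal join on a lookup table.
        tEnv.executeSql(
                "SELECT o.id, r.rate FROM orders AS o"
                        + " JOIN rates FOR SYSTEM_TIME AS OF o.proc_time AS r"
                        + " ON o.currency = r.currency");
        // With an async UDTF (FLIP-313 / FLIP-498): a lateral join on the function.
        tEnv.executeSql(
                "SELECT o.id, t.rate FROM orders AS o,"
                        + " LATERAL TABLE(fetch_rates(o.currency)) AS t(rate)");
    }
}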









Re:Re: Re: Re: Re: [DISCUSS] FLIP-486: Introduce a new DeltaJoin

2025-05-26 Thread Xuyang
Hi Gustavo, I apologize for not noticing your reply earlier.

1. The quick answer is that as long as at least one streaming join is
transformed into a delta join, FORCE will not raise an error.

2. Related to FLIP-516, I am also gradually working on a POC for a multi-way
delta join, but it may take some more time; some utility classes may be
reusable. Overall, cascading joins with the same key can, similarly to
FLIP-516, be optimized into a single large delta join operator. In contrast,
cascading joins with different keys will be optimized into multiple
operators (delta lookup, delta merger, ...).
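
To illustrate the same-key vs. different-key distinction, a sketch (all table
and column names invented; the tables would need to be registered first):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CascadingJoins {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // Same join key (a.user_id) in both joins: per the answer above, such
        // a cascade can collapse into one large delta join operator.
        tEnv.executeSql(
                "SELECT * FROM a JOIN b ON a.user_id = b.user_id"
                        + " JOIN c ON a.user_id = c.user_id");
        // Different join keys: optimized into multiple operators instead
        // (delta lookup, delta merger, ...).
        tEnv.executeSql(
                "SELECT * FROM a JOIN b ON a.user_id = b.user_id"
                        + " JOIN c ON b.order_id = c.order_id");
    }
}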




--

Best!
Xuyang





At 2025-05-24 00:24:37, "Gustavo de Morais"  wrote:
>Hey Xuyang,
>
>Happy to see the FLIP being approved. Could you reply to the questions
>above if you have some time?
>
>Thanks,
>Gustavo
>
>On Tue, 20 May 2025 at 08:56, Gustavo de Morais 
>wrote:
>
>> Hey Xuyang,
>>
>> Thanks for proposing this and driving the discussion. In general, a very
>> interesting idea.
>>
>> - Can you go into a bit more detail on how the optimizer will work in
>> AUTO/FORCE modes? When are we returning an error or falling back to the
>> regular
>> streaming join operator? Do we support, for example, a delta join for the
>> first level and then regular join operator for the next join levels or will
>> the optimizer just return an error if you have FORCE set and use multiple
>> chained joins?
>>
>> - FLIP-516 adds support for a multiple join operator, which eliminates
>> intermediate state between joins. I see the potential of combining this
>> with FLIP-516 for some use cases. The same strategy described here could be
>> used in combination with it to also reduce the amount of state used by the
>> input streams in a second iteration. Or did you have a different strategy
>> to approach cascading joins?
>>
>> Best,
>> Gustavo
>>
>> On Thu, 15 May 2025 at 04:53, Ron Liu  wrote:
>>
>>> Hi, Xuyang
>>>
>>> Thanks for your reply, looks good to me.
>>>
>>> Best,
>>> Ron
>>>
>>> On Tue, May 13, 2025 at 15:00, Xuyang wrote:
>>>
>>> > Hi, Feng. Let me address your questions:
>>> >
>>> > 1. As you mentioned, these filters should be recalculated. We can apply
>>> > the filters after looking up the source, and then perform the join.
>>> > Moreover, we can consider pushing the filters down to the lookup source
>>> > to achieve a more efficient lookup.
>>> >
>>> > 2. You are correct that in the Calc, the join key must be referenced
>>> > directly, without any transformations. Otherwise, we won't be able to
>>> > revert the transformed join key back to the original one. Let me clarify
>>> > this part in the FLIP.
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> >
>>> > Best!
>>> > Xuyang
>>> >
>>> >
>>> >
>>> >
>>> >
> On 2025-05-12 at 10:33:41, "Feng Jin" wrote:
>>> > >Hi, xuyang
>>> > >
>>> > >Thanks for proposing this FLIP — it’s very helpful for scenarios
>>> involving
>>> > >large states.
>>> > >
>>> > >I have a few questions regarding the node whitelist section:
>>> > >
>>> > >1. If the TableSource implements FilterPushdownSpec, take LEFT JOIN as
>>> an
>>> > >example — when the right side supports filter pushdown, the source
>>> should
>>> > >be able to apply the filter. However, in the case of a LookupJoin, it
>>> > seems
>>> > >that filter conditions cannot be pushed down. In such cases, are the
>>> > >filters re-applied after the join?
>>> > >
>>> > >2. When there is a Calc node between the source and the join, and the
>>> join
>>> > >key involves some computation, does this prevent the plan from being
>>> > >transformed into a Delta Join?
>>> > >
>>> > >
>>> > >Best,
>>> > >Feng
>>> > >
>>> > >
>>> > >On Fri, May 9, 2025 at 11:34 AM Xuyang  wrote:
>>> > >
>>> > >> Hi Ron, thanks for your attention to this FLIP.
>>> > >>
>>> > >> 1. Initially, inner/left/right/full joins will be supported. I have
>>> > >> updated the FLIP accordingly.
>>> > >>
>>> > >> 2. Are you referring to the situation where the downstream primary key
>>> > >> (PK) differs from the upstream join key? Once the sink requires the
>>> > >> upstream to send complete -U and +U data to satisfy its idempotent
>>> > >> updates (for example, 1. when the sink PK does not match the upstream
>>> > >> upsert key, which results in a sink materialization node, or 2. when
>>> > >> the sink is a retract sink instead of an upsert sink), a deduplication
>>> > >> node will be introduced after the delta join to replay the +U data as
>>> > >> both -U and +U. More details can be found in the section "When and How
>>> > >> to Convert." Please correct me if I have misread your meaning.
>>> > >>
>>> > >> 3. Yes, the majority of the design can be shared, including the index
>>> > API,
>>> > >> optimization phases, and parts of the operator implementation. The
>>> main
>>> > >> difference is that strong consistency semantics require additional
>>> > >> interfaces to establish certain agreements with the connector, s