Hi, LiHao.

Thanks for the proposal. The idea of a connection resource is very
attractive and would really help Flink users manage their secrets. But I
have some concerns about this FLIP:

*Proposal for a Top-Level Connection Concept in Flink SQL*
I believe connection should be elevated to a core, top-level concept in
Flink SQL. This would allow users to leverage connection metadata to
create catalogs, tables, and other resources, since both catalogs and
tables inherently require external system integrations. To streamline
management, introducing a connection store to securely organize and
access all connections/secrets could be a valuable addition.

*Improving Syntax Clarity for Connection References*
The current syntax for referencing connections feels overly implicit. +1
to @Ryan's proposal to use SQL to specify connections explicitly.
However, I recommend conducting a deeper analysis of existing use cases
and community feedback to ensure this aligns with user expectations and
broader design goals.
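
To make the discussion concrete, here is a rough sketch of what an
explicit connection reference might look like. Note that the CREATE
CONNECTION statement, its option keys, and the 'connection' table option
below are my own illustrative assumptions, not syntax settled by the
FLIP:

```sql
-- Hypothetical sketch only: statement shape and option keys are
-- illustrative assumptions, not the FLIP's final syntax.
CREATE CONNECTION kafka_prod WITH (
  'type' = 'kafka',
  'endpoint' = 'broker-1:9092',
  'secret' = '******'
);

-- A table definition could then reference the connection explicitly,
-- keeping credentials out of the table's own options.
CREATE TABLE orders (
  order_id BIGINT,
  amount DECIMAL(10, 2)
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'connection' = 'kafka_prod'
);
```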

Best,
Shengkai

On Wed, Jun 4, 2025 at 02:04, Martijn Visser <martijnvis...@apache.org> wrote:

> Hi all,
>
> First of all, I think having a Connection resource is something that will
> be beneficial for Apache Flink. I could see that being extended in the
> future to allow for easier secret handling [1].
> In my mind, I'm comparing this proposal against SQL/MED from the ISO
> standard [2]. I do think that SQL/MED isn't a very user-friendly
> syntax though, looking at Postgres for example [3].
>
> I think it's a valid question whether Connection should have a
> catalog-level or database-level scope. @Ryan, can you share some more,
> since you've mentioned "Note: I much prefer catalogs for this case.
> Which is what we use internally to manage connection properties"? It
> looks like there isn't one strongly favoured approach among other
> vendors (for example, Databricks scopes it at the Unity Catalog level,
> Snowflake at the database level).
>
> Also looking forward to Leonard's input.
>
> Best regards,
>
> Martijn
>
> [1] https://issues.apache.org/jira/browse/FLINK-36818
> [2] https://www.iso.org/standard/84804.html
> [3] https://www.postgresql.org/docs/current/sql-createserver.html
>
> On Fri, May 30, 2025 at 5:07 AM Leonard Xu <xbjt...@gmail.com> wrote:
>
> > Hey Mayank.
> >
> > Thanks for the FLIP. I went through it quickly and found some issues
> > that I think we need to discuss in depth later. As we're on a short
> > Dragon Boat Festival holiday, could you kindly hold this thread? We
> > will come back and continue the FLIP discussion afterwards.
> >
> > Best,
> > Leonard
> >
> >
> > > On Apr 29, 2025 at 23:07, Mayank Juneja <mayankjunej...@gmail.com> wrote:
> > >
> > > Hi all,
> > >
> > > I would like to open up for discussion a new FLIP-529 [1].
> > >
> > > Motivation:
> > > Currently, Flink SQL handles external connectivity by defining
> > > endpoints and credentials in table configuration. This approach
> > > prevents these connections from being reused and makes table
> > > definitions less secure by exposing sensitive information.
> > > We propose the introduction of a new "connection" resource in
> > > Flink. This will be a pluggable resource configured with a remote
> > > endpoint and an associated access key. Once defined, connections
> > > can be reused across table definitions, and eventually for model
> > > definitions (as discussed in FLIP-437) for inference, enabling
> > > seamless and secure integration with external systems.
> > > The connection resource will provide a new, optional way to manage
> > > external connectivity in Flink. Existing methods for table
> > > definitions will remain unchanged.
> > >
> > > [1] https://cwiki.apache.org/confluence/x/cYroF
> > >
> > > Best Regards,
> > > Mayank Juneja
> >
> >
>
