Thanks yuxia, you're right and I'll add the new database to
AlterDatabaseEvent.

I added `ignoreIfNotExists` for AlterDatabaseEvent because it is a
parameter in the `Catalog.alterDatabase` method. Although this value is
currently always false in `AlterDatabaseOperation`, I think it's better
to stay consistent with `Catalog.alterDatabase`. What do you think?
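
To make it concrete, here is a rough sketch of how I picture
AlterDatabaseEvent after adding the new database; the names below are only
illustrative, not the final interface:

import org.apache.flink.table.catalog.CatalogDatabase;

/** Sketch only: names are illustrative, not the final FLIP interface. */
public interface AlterDatabaseEvent extends DatabaseModificationEvent {
    // The original CatalogDatabase before the change is available from
    // DatabaseModificationEvent (assumed accessor: database()).

    /** The new database after the modification. */
    CatalogDatabase newDatabase();

    /** Mirrors the ignoreIfNotExists parameter of Catalog.alterDatabase. */
    boolean ignoreIfNotExists();
}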

Best,
Shammon FY

On Thu, Jun 1, 2023 at 10:25 AM yuxia <luoyu...@alumni.sjtu.edu.cn> wrote:

> Hi, Shammon.
> I mean, do we need to include the new database after the alter in
> AlterDatabaseEvent, so that the listener can know what has been modified
> in the database? Or does the listener not need to care about the actual
> modification?
> Also, I'm wondering whether AlterDatabaseEvent needs to include an
> ignoreIfNotExists method, since the alter database operation doesn't have
> syntax like 'alter database if exists xxx'.
>
> Best regards,
> Yuxia
>
> ----- Original Message -----
> From: "Shammon FY" <zjur...@gmail.com>
> To: "dev" <dev@flink.apache.org>
> Sent: Wednesday, May 31, 2023 2:55:26 PM
> Subject: Re: [DISCUSS] FLIP-294: Support Customized Job Meta Data Listener
>
> Hi yuxia
>
> Thanks for your input. The `AlterDatabaseEvent` extends
> `DatabaseModificationEvent` which has the original database.
>
> Best,
> Shammon FY
>
> On Wed, May 31, 2023 at 2:24 PM yuxia <luoyu...@alumni.sjtu.edu.cn> wrote:
>
> > Thanks Shammon for driving it.
> > The FLIP generally looks good to me. I only have one question.
> > WRT AlterDatabaseEvent, IIUC, it'll contain the original database name and
> > the new CatalogDatabase after modification. Is it enough to only pass the
> > original database name? Would it be better to include the original
> > CatalogDatabase so that the listener has a way to know what has changed?
> >
> > Best regards,
> > Yuxia
> >
> > ----- Original Message -----
> > From: "ron9 liu" <ron9....@gmail.com>
> > To: "dev" <dev@flink.apache.org>
> > Sent: Wednesday, May 31, 2023 11:36:04 AM
> > Subject: Re: [DISCUSS] FLIP-294: Support Customized Job Meta Data Listener
> >
> > Hi, Shammon
> >
> > Thanks for driving this FLIP. It will strengthen Flink's metadata
> > capability from the platform production perspective. The overall design
> > looks good to me, I just have a few small questions:
> > 1. Regarding the CatalogModificationListenerFactory#createListener method,
> > I think it would be better to pass a Context as its parameter instead of
> > two specific objects. In this way, we can easily extend it in the future
> > without compatibility problems (see the sketch after these questions).
> > Refer to
> > https://github.com/apache/flink/blob/9880ba5324d4a1252d6ae1a3f0f061e4469a05ac/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/DynamicTableFactory.java#L81
> > 2. In the FLIP, you mentioned that multiple Flink tables may refer to the
> > same physical table, so will the listener report this physical table
> > repeatedly?
> > 3. When a listener object is registered, will it connect to an external
> > system such as DataHub? If the listener registration times out due to
> > permission issues, it will affect the execution of all subsequent SQL;
> > what should we do in this case?
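> >
> > To make question 1 concrete, here is a rough sketch of the Context style I
> > have in mind, borrowed from DynamicTableFactory.Context; the names below
> > are only illustrative, not a proposal for the final interface:
> >
> > import org.apache.flink.configuration.Configuration;
> >
> > /** Sketch only: a Context bundles creation parameters so they can grow. */
> > public interface CatalogModificationListenerFactory {
> >
> >     CatalogModificationListener createListener(Context context);
> >
> >     /** Adding a new getter here later does not break existing factories. */
> >     interface Context {
> >         Configuration getConfiguration();
> >
> >         ClassLoader getUserClassLoader();
> >     }
> > }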
> >
> > Best,
> > Ron
> >
> > On Wed, May 31, 2023 at 08:53, Shammon FY <zjur...@gmail.com> wrote:
> >
> > > Thanks Feng, the catalog modification listener is only used to report
> > > read-only DDL information to other components or systems.
> > >
> > > > 1. Will an exception thrown by the listener affect the normal
> > > > execution process?
> > >
> > > Users need to handle the exception in the listener themselves. Many
> > > DDLs such as drop tables and alter tables cannot be rolled back, so
> > > Flink cannot handle these exceptions for the listener. An exception
> > > thrown from the listener will cause the operation to exit, but the
> > > executed DDL will still have succeeded.
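> > >
> > > For example, a listener that reports to an external system can guard
> > > itself like the following minimal sketch (assuming the
> > > onEvent(CatalogModificationEvent) method from the FLIP; the class and
> > > helper names here are hypothetical):
> > >
> > > import org.slf4j.Logger;
> > > import org.slf4j.LoggerFactory;
> > >
> > > /** Sketch only: swallow reporting failures so DDL execution is unaffected. */
> > > public class ReportingCatalogListener implements CatalogModificationListener {
> > >     private static final Logger LOG =
> > >             LoggerFactory.getLogger(ReportingCatalogListener.class);
> > >
> > >     @Override
> > >     public void onEvent(CatalogModificationEvent event) {
> > >         try {
> > >             // Hypothetical call that pushes the event to an external
> > >             // metadata system such as DataHub or Atlas.
> > >             reportToExternalSystem(event);
> > >         } catch (Exception e) {
> > >             // The DDL has already finished at this point, so only log.
> > >             LOG.warn("Failed to report catalog event {}", event, e);
> > >         }
> > >     }
> > >
> > >     private void reportToExternalSystem(CatalogModificationEvent event) {
> > >         // Placeholder for the actual reporting logic.
> > >     }
> > > }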
> > >
> > > > 2. What is the order of execution? Is the listener executed first or
> > > > are specific operations executed first? If I want to perform DDL
> > > > permission verification (such as integrating with Ranger based on the
> > > > listener), is that possible?
> > >
> > > The listener will be notified to report catalog modifications after
> > > DDLs are successful, so you cannot do permission verification for DDL
> > > in the listener. As mentioned above, Flink will not roll back the DDL
> > > even when the listener throws an exception. I think permission
> > > verification is another issue and can be discussed separately.
> > >
> > >
> > > Best,
> > > Shammon FY
> > >
> > > On Tue, May 30, 2023 at 1:07 AM Feng Jin <jinfeng1...@gmail.com> wrote:
> > >
> > > > Hi, Shammon
> > > >
> > > > Thanks for driving this FLIP. [Support Customized Job Meta Data
> > > > Listener] will make it easier for Flink to collect lineage information.
> > > > I fully agree with the overall solution and have a couple of small
> > > > questions:
> > > >
> > > > 1. Will an exception thrown by the listener affect the normal
> > > > execution process?
> > > >
> > > > 2. What is the order of execution? Is the listener executed first or
> > > > are specific operations executed first? If I want to perform DDL
> > > > permission verification (such as integrating with Ranger based on the
> > > > listener), is that possible?
> > > >
> > > >
> > > > Best,
> > > > Feng
> > > >
> > > > On Fri, May 26, 2023 at 4:09 PM Shammon FY <zjur...@gmail.com> wrote:
> > > >
> > > > > Hi devs,
> > > > >
> > > > > We would like to bring up a discussion about FLIP-294: Support
> > > > > Customized Job Meta Data Listener[1]. We have had several discussions
> > > > > with Jark Wu, Leonard Xu, Dong Lin, Qingsheng Ren and Poorvank about
> > > > > the functions and interfaces, and thanks for their valuable advice.
> > > > > The overall job and connector information is divided into metadata
> > > > > and lineage; this FLIP focuses on metadata, and lineage will be
> > > > > discussed in another FLIP in the future. In this FLIP we want to add
> > > > > a customized listener in Flink to report catalog modifications to
> > > > > external metadata systems such as datahub[2] or atlas[3]. Users can
> > > > > view the specific information of connectors such as source and sink
> > > > > for Flink jobs in these systems, including fields, watermarks,
> > > > > partitions, etc.
> > > > >
> > > > > Looking forward to hearing from you, thanks.
> > > > >
> > > > >
> > > > > [1]
> > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-294%3A+Support+Customized+Job+Meta+Data+Listener
> > > > > [2] https://datahub.io/
> > > > > [3] https://atlas.apache.org/#/
> > > > >
> > > >
> > >
> >
>
