Hi Xuyang.

Yes, the goal is somewhat like a logging system: to expose data details 
to external systems for record keeping, regulation, auditing, etc.


I have tried to exploit the current logging system: by putting logback and a 
kafka-appender on the classpath, and modifying the JDBC connector source code, 
I could construct a special logger that sends the row data with context info 
(job ID) to Kafka for further processing by external systems.
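For reference, the logback configuration I am experimenting with looks roughly 
like this (just a sketch: it assumes the third-party logback-kafka-appender is 
on the classpath, and the topic, broker address, logger name, and the jobId MDC 
key are all illustrative choices of mine, not anything Flink provides):

```xml
<configuration>
  <!-- Sketch only: assumes the community logback-kafka-appender library -->
  <appender name="AUDIT_KAFKA" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder>
      <!-- jobId is put into the MDC by my modified connector code -->
      <pattern>%X{jobId} %msg%n</pattern>
    </encoder>
    <topic>flink-audit</topic>
    <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
  </appender>

  <!-- Only the dedicated audit logger goes to Kafka; additivity is off
       so audit rows stay out of the normal task logs. -->
  <logger name="audit.jdbc" level="INFO" additivity="false">
    <appender-ref ref="AUDIT_KAFKA"/>
  </logger>
</configuration>
```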


This approach requires going deep into deployments and tweaking logback 
configuration files. Ideally I would like to be able to configure the logging 
system from within flink-conf.yaml. Maybe I could borrow the 
configuration/initialization concept from the Metrics modules, and make a 
lightweight auditing module.
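To illustrate what I have in mind, modeled after the metrics.reporter.* 
initialization pattern (every key below is hypothetical, none of these 
options exist in Flink today):

```yaml
# Hypothetical sketch of an auditing section in flink-conf.yaml,
# following the factory-class pattern used by metrics reporters.
audit.reporter.kafka.factory.class: org.example.audit.KafkaAuditReporterFactory
audit.reporter.kafka.topic: flink-audit
audit.reporter.kafka.bootstrap.servers: localhost:9092
audit.reporter.kafka.scope: jdbc-sinks
```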






                       
Original Email
                       
                     

Sender: "Xuyang" <xyzhong...@163.com>

Sent Time: 2023/11/7 10:25

To: "Bo" <99...@qq.com>

Cc: "Chen Yu" <yuchen.e...@gmail.com>; "user" <user@flink.apache.org>

Subject: Re: Re: Auditing sink using table api


Hi, Bo.
Do you mean adding a logger sink after the actual sink? IMO, that is 
impossible.


But there is another way. If the sink is provided by Flink, you can modify its 
code, e.g. adding an INFO-level log or printing a clearer exception, and then 
re-build the specific connector.







--
    Best!
    Xuyang







On 2023-11-04 17:37:55, "Bo" <99...@qq.com> wrote:
Hi, Yu


Thanks for the suggestion.


Ideally the data needs to come from the sink being audited. Adding another 
sink serves part of the purpose, but if anything goes wrong in the original 
sink, I presume it won't be reflected in the additional sink (correct me if 
I'm mistaken).


I may have to make a custom sink based on the official ones.
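The shape I have in mind is a thin wrapper that logs every row and any write 
failure, then delegates to the real sink. This is only a self-contained 
sketch: the `Sink` interface and all the names below are illustrative, not 
Flink's actual sink API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative stand-in for a sink interface; NOT Flink's real API.
interface Sink<T> {
    void write(T record);
}

// Decorator that audits everything flowing into the wrapped sink.
class AuditingSink<T> implements Sink<T> {
    private final Sink<T> delegate;          // the "official" sink being audited
    private final Consumer<String> auditLog; // e.g. a Kafka-backed audit logger

    AuditingSink(Sink<T> delegate, Consumer<String> auditLog) {
        this.delegate = delegate;
        this.auditLog = auditLog;
    }

    @Override
    public void write(T record) {
        auditLog.accept("IN: " + record);    // log every row entering the sink
        try {
            delegate.write(record);
        } catch (RuntimeException e) {
            // failures in the real sink become visible to the auditor too
            auditLog.accept("FAIL: " + record + " cause=" + e.getMessage());
            throw e;
        }
    }
}
```

This way both the row data and any write anomaly end up in the same audit 
stream, which is exactly what a second, independent sink cannot guarantee.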




------------------ Original ------------------
From: Chen Yu <yuchen.e...@gmail.com>
Date: Sat, Nov 4, 2023 3:31 PM
To: Bo <99...@qq.com>, user <user@flink.apache.org>
Subject: Re: Auditing sink using table api



Hi Bo,

How about writing the data to the Print Connector[1] simultaneously via 
insertInto[2]? It will print the data into the TaskManager's log.
Of course, you can choose an appropriate connector according to your audit 
log storage.

Best,
Yu Chen

[1] https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/table/print/
[2] https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/common/#emit-a-table
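To make it concrete, in the SQL flavor of the Table API a statement set can 
feed the real sink and a print sink from the same query. This is just a 
sketch; the table names `orders` and `jdbc_sink` and the column list are 
made-up examples:

```sql
-- Audit copy of the sink input: every row also goes to a print table,
-- which ends up in the TaskManager log.
CREATE TABLE audit_print (
  id BIGINT,
  payload STRING
) WITH (
  'connector' = 'print'
);

EXECUTE STATEMENT SET
BEGIN
  INSERT INTO jdbc_sink SELECT id, payload FROM orders;
  INSERT INTO audit_print SELECT id, payload FROM orders;
END;
```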
  
 
 
 
From: Bo <99...@qq.com>
Sent: November 4, 2023 13:53
To: user <user@flink.apache.org>
Subject: Auditing sink using table api
 
Hello community,

I am looking for a way to perform auditing of the various sinks (mostly 
JdbcDynamicTableSink) using the table api.
By "auditing", I mean to log details of every row of data coming into the 
sink, and any anomalies when the sink writes to external systems.
 
 
Does Flink have some kind of auditing mechanism in place? The only way I can 
see now is to make a custom sink that supports detailed logging to external 
systems.
 
 
 Any thoughts/suggestions?
 
 
 Regards,
 
 
 Bo
