Hi Dawid,

Thanks for driving this FLIP, big +1 for the proposed feature.

About the connector.properties part, I suggest avoiding 'timestamp' as the 
property key, because TIMESTAMP is a keyword in DDL (a data type) and users 
may be confused. Would 'timestamp.field' or 'source.timestamp' be better?

```
CREATE TABLE kafka_table (
  id BIGINT,
  eventType STRING,
  timestamp TIMESTAMP(3)
) WITH (
  'connector.type' = 'kafka',
  'value.format.type' = 'avro',
  'timestamp' = 'timestamp'
)
```
Another minor comment: we could use `timestamp` (escaped with backticks) 
instead of a bare timestamp in the column definition of the example, since 
it is a reserved keyword.
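For clarity, a sketch of how the example might look with both suggestions 
applied (the key 'timestamp.field' is just one of the alternatives above, 
not a settled name):

```
CREATE TABLE kafka_table (
  id BIGINT,
  eventType STRING,
  `timestamp` TIMESTAMP(3)   -- escaped, since TIMESTAMP is a reserved keyword
) WITH (
  'connector.type' = 'kafka',
  'value.format.type' = 'avro',
  'timestamp.field' = 'timestamp'
)
```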

Best,
Leonard


> On Mar 1, 2020, at 22:30, Dawid Wysakowicz <dwysakow...@apache.org> wrote:
> 
> Hi,
> 
> I would like to propose an improvement that would enable reading table
> columns from different parts of source records. Besides the main payload,
> the majority (if not all) of the sources expose additional information. It
> can be simply read-only metadata such as offset or ingestion time, or
> read/write parts of the record that contain data but additionally serve
> different purposes (partitioning, compaction, etc.), e.g. key or
> timestamp in Kafka.
> 
> We should make it possible to read and write data from all of those
> locations. In this proposal I discuss reading partitioning data; for
> completeness, the proposal also discusses partitioning when writing
> data out.
> 
> I am looking forward to your comments.
> 
> You can access the FLIP here:
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-107%3A+Reading+table+columns+from+different+parts+of+source+records?src=contextnavpagetreemode
> 
> Best,
> 
> Dawid
> 
> 
