Hi Leonard,

thanks for your feedback.

(1) Actually, I already discuss this in the FLIP. But let me summarize our options again in case it was not clear enough there:

a) CREATE TABLE t (a AS CAST(SYSTEM_METADATA("offset") AS INT))
pro: readable, complex arithmetic possible, more SQL compliant, SQL Server compliant
con: long

b) CREATE TABLE t (a INT AS SYSTEM_METADATA("offset"))
pro: shorter
con: neither SQL nor SQL Server compliant, requires parser changes, no complex arithmetic like `computeSomeThing(SYSTEM_METADATA("offset"))` possible

c) CREATE TABLE t (a AS SYSTEM_METADATA("offset", INT))
pro: shorter, very readable, complex arithmetic possible
con: non-SQL expression, requires parser changes

So I decided for a) because it has the fewest disadvantages.
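
For completeness, here is a sketch of how option a) would look in a full DDL. The table name, other columns, and connector options are made up for illustration; only the computed-column syntax follows the FLIP's proposal:

  CREATE TABLE kafka_orders (
    id BIGINT,
    name STRING,
    -- computed column reading the connector-provided "offset" metadata,
    -- cast explicitly because SYSTEM_METADATA itself carries no return type
    record_offset AS CAST(SYSTEM_METADATA("offset") AS INT)
  ) WITH (
    'connector' = 'kafka',
    'topic' = 'orders',
    'format' = 'json'
  );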

2) Yes, a format can expose its metadata through the interfaces mentioned in the FLIP. I added an example to the FLIP.
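
As a sketch of what that could look like from the user's perspective (the metadata key "ingestion-timestamp" and the format below are purely illustrative, not keys agreed on in the FLIP):

  CREATE TABLE debezium_source (
    id BIGINT,
    -- hypothetical metadata key exposed by the format via the FLIP's interfaces
    ingestion_ts AS CAST(SYSTEM_METADATA("ingestion-timestamp") AS TIMESTAMP(3))
  ) WITH (
    'connector' = 'kafka',
    'format' = 'debezium-json'
  );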

3) The concept of a key or value format is connector specific. And since the table sources/table sinks are responsible for returning the metadata columns, we can allow this in the future thanks to the flexibility of the design. But I also don't think that we need this case for now. I think we can focus on the value format and ignore metadata from the key.
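
For context, a setup along the lines of the options quoted below might look like this (table name, columns, and option values are illustrative only), where the key fields are a subset of the schema and the value format covers all physical columns:

  CREATE TABLE kafka_users (
    id BIGINT,
    name STRING,
    age INT
  ) WITH (
    'connector' = 'kafka',
    'topic' = 'users',
    'format' = 'json',
    'key.fields' = 'id, name',
    'value.fields-include' = 'ALL'
  );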

Regards,
Timo


On 07.09.20 11:03, Leonard Xu wrote:
Ignore my question (4), I’ve found the answer in the doc:
'value.fields-include' = 'EXCEPT_KEY' (all fields of the schema minus fields of
the key)

On Sep 7, 2020, at 16:33, Leonard Xu <xbjt...@gmail.com> wrote:

(4) About reading and writing from the key and value sections: we require that the
fields of the key part belong to the fields of the value part, according to the
options 'key.fields' = 'id, name' and 'value.fields-include' = 'ALL'. Is this
by design? I think the key fields and value fields are independent of each other
in Kafka.
