[ https://issues.apache.org/jira/browse/FLINK-25483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17467621#comment-17467621 ]

陈磊 commented on FLINK-25483:
----------------------------

Hi [~MartijnVisser], thanks for replying to my question. In the current 
implementation, when Flink SQL writes to ES it writes every field of a row into 
the ES document. If the written row contains a null field, that null also 
overwrites the existing value of the corresponding field in the ES document. 
However, this is not what many users expect. In some real business scenarios, 
the user only wants the non-null fields to be written, and a null field should 
not overwrite the original field value already stored in ES.

For example: the source data has 3 fields, a, b, c
insert into table2
select
a,b,c
from table1

When field b is null, the user expects only a_value and c_value to actually be 
written into the ES document, leaving the existing value of b untouched.

In fact, what is written to ES is: a_value, null, c_value, so the null 
overwrites the value that was already stored for b.
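To make the scenario concrete, here is a minimal sketch of such a pipeline. The 
schemas, the choice of primary key, and the connector options below are only 
illustrative assumptions, not taken from the actual job:

-- Minimal sketch; schemas, primary key, and connector options are assumptions.
CREATE TABLE table1 (
  a STRING,
  b STRING,
  c STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'source_topic',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json',
  'scan.startup.mode' = 'earliest-offset'
);

CREATE TABLE table2 (
  a STRING,
  b STRING,
  c STRING,
  PRIMARY KEY (a) NOT ENFORCED  -- assumed document id, so the sink runs in upsert mode
) WITH (
  'connector' = 'elasticsearch-7',
  'hosts' = 'http://localhost:9200',
  'index' = 'table2'
);

-- When a row arrives with b = NULL, the sink currently indexes
-- {"a": "a_value", "b": null, "c": "c_value"}, which overwrites any
-- value of b already present in the ES document whose id is a_value.
INSERT INTO table2
SELECT a, b, c
FROM table1;

With a setup like this, the request is that the null field b be skipped when the 
document is written or updated, instead of being written as null.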

> When FlinkSQL writes ES, it will not write and update the null value field
> --------------------------------------------------------------------------
>
>                 Key: FLINK-25483
>                 URL: https://issues.apache.org/jira/browse/FLINK-25483
>             Project: Flink
>          Issue Type: New Feature
>          Components: Table SQL / Ecosystem
>            Reporter: 陈磊
>            Priority: Minor
>
> When using Flink SQL to consume from Kafka and write to ES, some fields are 
> sometimes missing, and those missing fields should not be written to ES. How 
> should this situation be handled?
> For example: the source data has 3 fields, a, b, c
> insert into table2
> select
> a,b,c
> from table1
> When b=null, only a and c should be written
> When c=null, only a and b should be written
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)