[ https://issues.apache.org/jira/browse/FLINK-16070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
godfrey he updated FLINK-16070:
-------------------------------
    Fix Version/s: 1.11.0

> Blink planner can not extract correct unique key for UpsertStreamTableSink
> ---------------------------------------------------------------------------
>
>                 Key: FLINK-16070
>                 URL: https://issues.apache.org/jira/browse/FLINK-16070
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Planner
>    Affects Versions: 1.11.0
>            Reporter: Leonard Xu
>            Assignee: godfrey he
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 1.10.1, 1.11.0
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> I can reproduce the Elasticsearch6UpsertTableSink issue that a user reported on the mailing list [1]: the Blink planner cannot extract the correct unique key for the following query, while the legacy planner works well.
> {code:java}
> // user code
> INSERT INTO ES6_ZHANGLE_OUTPUT
> SELECT aggId, pageId, ts_min as ts,
>        count(case when eventId = 'exposure' then 1 else null end) as expoCnt,
>        count(case when eventId = 'click' then 1 else null end) as clkCnt
> FROM (
>     SELECT
>         'ZL_001' as aggId,
>         pageId,
>         eventId,
>         recvTime,
>         ts2Date(recvTime) as ts_min
>     from kafka_zl_etrack_event_stream
>     where eventId in ('exposure', 'click')
> ) as t1
> group by aggId, pageId, ts_min
> {code}
> I found that the Blink planner does not extract the correct unique key in `*FlinkRelMetadataQuery.getUniqueKeys(relNode)*`, whereas the legacy planner works well in `*org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*`. A simple ETL job that reproduces this issue can be found at [2].
>
> [1] http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html
> [2] https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
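For reference, below is a minimal Java sketch (assuming Flink 1.10 Table API packages) that rebuilds the reported aggregation on a small in-memory stream so the plan each planner produces for it can be compared locally via explain(). The class name, the stand-in ts2Date implementation, the sample rows and field types are placeholders, not taken from the report; the actual job with the Kafka source and the ES6_ZHANGLE_OUTPUT upsert sink is the one linked in [2].

{code:java}
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.table.functions.ScalarFunction;

public class UniqueKeyRepro {

    // Hypothetical stand-in for the user's ts2Date UDF:
    // truncates an epoch-millis timestamp to a minute-resolution string.
    public static class Ts2Date extends ScalarFunction {
        public String eval(Long recvTime) {
            return new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm")
                    .format(new java.util.Date(recvTime));
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()   // swap for useOldPlanner() to compare with the legacy planner
                .inStreamingMode()
                .build();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);
        tEnv.registerFunction("ts2Date", new Ts2Date());

        // In-memory stand-in for the Kafka topic: (pageId, eventId, recvTime) with made-up rows.
        DataStream<Tuple3<String, String, Long>> events = env.fromElements(
                Tuple3.of("page_1", "exposure", 1581900000000L),
                Tuple3.of("page_1", "click", 1581900030000L));
        tEnv.createTemporaryView("kafka_zl_etrack_event_stream",
                tEnv.fromDataStream(events, "pageId, eventId, recvTime"));

        // Same aggregation as in the report; the ES6 upsert sink is deliberately left out.
        Table result = tEnv.sqlQuery(
                "SELECT aggId, pageId, ts_min as ts, " +
                "  count(case when eventId = 'exposure' then 1 else null end) as expoCnt, " +
                "  count(case when eventId = 'click' then 1 else null end) as clkCnt " +
                "FROM ( " +
                "  SELECT 'ZL_001' as aggId, pageId, eventId, recvTime, ts2Date(recvTime) as ts_min " +
                "  from kafka_zl_etrack_event_stream where eventId in ('exposure', 'click') " +
                ") as t1 group by aggId, pageId, ts_min");

        // Per the report, the legacy planner derives (aggId, pageId, ts) as the unique key for this
        // result when it is written to an UpsertStreamTableSink, while the Blink planner does not;
        // printing the plan from each planner is one way to inspect the difference.
        System.out.println(tEnv.explain(result));
    }
}
{code}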