[ https://issues.apache.org/jira/browse/FLINK-35341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847771#comment-17847771 ]

Benchao Li commented on FLINK-35341:
------------------------------------

I think it's a known issue, and it has been resolved by the
"table.optimizer.non-deterministic-update.strategy" [1] feature; you can try
that.

[1] https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/dev/table/config/#table-optimizer-non-deterministic-update-strategy
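
For example, in the SQL client (assuming Flink 1.16 or later, where this option
is available) you can enable it per job with:

{code:sql}
-- Ask the planner to try to resolve non-deterministic update problems
-- instead of ignoring them (the default strategy is IGNORE).
SET 'table.optimizer.non-deterministic-update.strategy' = 'TRY_RESOLVE';
{code}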

> Retraction stops working with clock-dependent function in Filter
> ----------------------------------------------------------------
>
>                 Key: FLINK-35341
>                 URL: https://issues.apache.org/jira/browse/FLINK-35341
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Runtime
>    Affects Versions: 1.17.1
>            Reporter: Lim Qing Wei
>            Priority: Major
>
>  
> Say we have a Flink SQL view where
>  # we use a clock-dependent function such as `UNIX_TIMESTAMP()` in the query 
> filter, e.g. a WHERE clause like table.timestamp < UNIX_TIMESTAMP(), and
>  # the source record is retracted at a time when the filter evaluates to false.
> We expect a retraction to be produced from the view, but in practice nothing 
> happens.
>  
> We are using Kafka as a source; here's a small snippet that shows the problem.
>  
> {code:sql}
> CREATE TEMPORARY VIEW my_view AS
>     SELECT key,
>            someData,
>            expiry
>     FROM upstream
>     WHERE expiry > UNIX_TIMESTAMP();
> 
> SELECT * FROM my_view WHERE key = 5574332;
> {code}
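> For reference, the upstream source is declared roughly as below (a minimal 
> sketch: the upsert-kafka connector, topic name and formats are assumptions, 
> not our exact production DDL):
> {code:sql}
> CREATE TABLE upstream (
>     key BIGINT,
>     someData STRING,
>     expiry BIGINT,
>     PRIMARY KEY (key) NOT ENFORCED  -- Kafka tombstones arrive as DELETE changes
> ) WITH (
>     'connector' = 'upsert-kafka',
>     'topic' = 'upstream-topic',
>     'properties.bootstrap.servers' = 'localhost:9092',
>     'key.format' = 'json',
>     'value.format' = 'json'
> );
> {code}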
>  
>  
> The actual query is a bit more complicated, but this simplified one should 
> illustrate the issue. Below are the events in chronological order:
>  
>  # Run this query as a stream.
>  # Create a record in upstream where key = 5574332 and expiry is 3 minutes in 
> the future.
>  # Observe insertion of the record, as expected.
>  # Wait for 3 minutes.
>  # The record should now have expired, but since there is no update, there is 
> no change to the stream output yet.
>  # Delete the upstream record (using a tombstone in Kafka).
>  # Observe no change in the stream output, even though we expect a retraction 
> (i.e. a deletion); see the sketch below.
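> To make the expectation concrete, the changelog we expect on my_view looks 
> roughly like this (illustrative values only; the op column is how the SQL 
> client renders changes):
> {code}
> op    key        someData    expiry
> +I    5574332    ...         <now + 3 min>   -- step 3: the insert we do observe
> -D    5574332    ...         <now + 3 min>   -- step 7: expected after the tombstone, never arrives
> {code}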
>  
> Is this a known issue? I've searched Jira but couldn't find any. I observed 
> this from 1.15 through 1.17; I haven't tested with 1.18 and above.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
