Hi, Krzysztof   

> * I have a high pace stream of events coming in Kafka. 
> * I have some dimension tables stored in Hive. These tables are changed 
> daily. I can keep a snapshot for each day. 

For this use case, Flink now supports a temporal join against the latest Hive partition as a temporal table. You can refer to the example in the docs [1]; the feature will ship with the upcoming Flink 1.12 release.
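
Roughly, the wiring looks like the sketch below. It is only a minimal Table API sketch, not copied from the docs: the catalog/table/column names, the Kafka address, and the Hive conf dir are placeholders, and the streaming-source options follow the example in [1]. The idea is to give the Kafka stream a processing-time attribute, configure the Hive dimension table to always reload its latest partition, and then join with FOR SYSTEM_TIME AS OF:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class LatestPartitionJoin {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

        // Register the existing Hive catalog so the daily dimension tables are visible.
        tEnv.registerCatalog("myhive", new HiveCatalog("myhive", "default", "/opt/hive/conf"));
        tEnv.useCatalog("myhive");

        // Tell Flink to keep re-reading the newest partition of the dimension table.
        // (Alternatively, set these as TBLPROPERTIES when the Hive table is created.)
        tEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
        tEnv.executeSql(
                "ALTER TABLE dim_products SET TBLPROPERTIES (" +
                "  'streaming-source.enable' = 'true'," +
                "  'streaming-source.partition.include' = 'latest'," +
                "  'streaming-source.monitor-interval' = '12 h'," +
                "  'streaming-source.partition-order' = 'partition-name')");
        tEnv.getConfig().setSqlDialect(SqlDialect.DEFAULT);

        // Kafka fact stream; the processing-time attribute drives the temporal join.
        tEnv.executeSql(
                "CREATE TABLE events (" +
                "  product_id STRING," +
                "  amount DOUBLE," +
                "  proctime AS PROCTIME()" +
                ") WITH (" +
                "  'connector' = 'kafka'," +
                "  'topic' = 'events'," +
                "  'properties.bootstrap.servers' = 'kafka:9092'," +
                "  'scan.startup.mode' = 'latest-offset'," +
                "  'format' = 'json')");

        // Each event is enriched with the dimension row as of the latest loaded
        // Hive partition at processing time.
        tEnv.executeSql(
                "SELECT e.product_id, e.amount, d.product_name" +
                " FROM events AS e" +
                " JOIN dim_products FOR SYSTEM_TIME AS OF e.proctime AS d" +
                " ON e.product_id = d.product_id").print();
    }
}

The same statements can of course be submitted directly through the SQL client; the important parts are the streaming-source options on the Hive table and the FOR SYSTEM_TIME AS OF clause on a processing-time attribute.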

Best,
Leonard
[1] https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/hive/hive_read_write.html#temporal-join-the-latest-partition
