
Hi,

I have a use case that involves computing each customer's lifetime order
count in real time. To reduce the memory footprint, I plan to run a batch
job on stored data every morning (say at 5 am) that calculates the total
order count up to that moment. I then want to deploy a Flink job that reads
from Kafka and keeps the count up to date after 5 am. Is it possible for
this job to combine the batch job's result with the new orders arriving on
Kafka, so that it always serves a consistent, accurate total count?
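
To make the question concrete, here is a minimal sketch (plain Python, not the Flink API) of the merge logic I have in mind. It assumes the 5 am batch job emits `(customer_id, baseline_count)` pairs that the streaming job can load into keyed state, and that every order event carries an event timestamp so orders already covered by the snapshot can be skipped; the cutoff value and function names are just placeholders:

```python
from datetime import datetime, timezone

# Hypothetical snapshot time produced by the 5 am batch job (placeholder value).
CUTOFF = datetime(2024, 1, 1, 5, 0, tzinfo=timezone.utc)

def bootstrap_state(baseline_rows):
    """Seed per-customer state from the batch job's output."""
    return {customer_id: count for customer_id, count in baseline_rows}

def apply_order(state, customer_id, event_time):
    """Update the lifetime count for one streaming order event.

    Events with a timestamp before the snapshot cutoff are skipped,
    because the batch job has already counted them.
    """
    if event_time < CUTOFF:
        return state.get(customer_id, 0)  # already in the baseline
    state[customer_id] = state.get(customer_id, 0) + 1
    return state[customer_id]
```

The open question for me is how to do this bootstrap step idiomatically in Flink, e.g. whether something like the State Processor API or a hybrid batch-then-stream source is the right fit.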

Please let me know if you have solved this use case before, or if you have
any ideas on how to proceed.
