Hi there,

I am currently reading the Kafka: The Definitive Guide book, as we want to
architect an application that uses Kafka as the primary data store. In the Log
Compaction section you mention that dirty records are compacted away, leaving
only the latest record for each key in the topic. The application we are
architecting needs to maintain bi-temporal data, with every message version for
the same key kept intact. What properties should we set to make sure none of
the messages get compacted? Or would it be safer to also store the messages in
another data store (PostgreSQL), rather than relying on Kafka as our only data
store?
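
For context, here is a minimal sketch of what we were planning to try, assuming
the Java AdminClient (the topic name, bootstrap address, and partition/replica
counts below are placeholders): a topic created with cleanup.policy=delete and
unlimited retention, so that records are neither compacted nor expired. Please
correct us if this is not the right set of properties.

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateNonCompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Keep every record: plain "delete" cleanup policy (no compaction)
            // plus unlimited time- and size-based retention.
            Map<String, String> configs = Map.of(
                    "cleanup.policy", "delete",
                    "retention.ms", "-1",
                    "retention.bytes", "-1");

            // Placeholder topic name, partition count, and replication factor.
            NewTopic topic = new NewTopic("bitemporal-events", 3, (short) 3)
                    .configs(configs);

            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}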

PS: The application that will use Kafka as the data store is expected to hold
less than 1 GB of data across 200+ topics.

Regards,
Barathan Kulothongan
