Log compaction, though, allows Kafka to work quite well as a data store for some use cases. It's exactly why I started looking hard at Kafka lately.
"The general idea is quite simple. Rather than maintaining only recent log entries in the log and throwing away old log segments we maintain the most recent entry for each unique key. This ensures that the log contains a complete dataset and can be used for reloading key-based state." https://cwiki.apache.org/confluence/display/KAFKA/Log+Compaction > Date: Tue, 6 Jan 2015 16:34:06 -0800 > Subject: Re: Is it possible to enforce an "unique constraint" through Kafka? > From: wiz...@gmail.com > To: users@kafka.apache.org > > Kafka is more of a message queue than a data store. You can use it to store > history of the queue (certainly a powerful use case for disaster recovery), > but it's still not really a data store. > > From the Kafka website (kafka.apache.org): > Apache Kafka is a publish-subscribe messaging [queue] rethought as a > distributed commit log. > > -Mark > > On Tue, Jan 6, 2015 at 3:14 PM, Joseph Pachod <joseph.pac...@gmail.com> > wrote: > > > Hi > > > > Having read a lot about kafka and its use at linkedin, I'm still unsure > > whether Kafka can be used, with some mindset change for sure, as a general > > purpose data store. > > > > For example, would someone use Kafka to enforce an "unique constraint"? > > > > A simple use case is, in the case of linkedin, unicity of users' login. > > > > What would be you recommended implementation for such a need? > > > > Thanks in advance > > > > Best, > > Joseph > >