Let me explain my use case:

We have an ELK setup in which logstash-forwarders push logs from different services to a Logstash instance. Logstash then pushes them to Kafka, and a Logstash consumer pulls them out of Kafka and indexes them into an Elasticsearch cluster.
We are trying to ensure that no single service's logs overwhelm the system. So I was thinking that if each service's logs went into their own Kafka topic, and if we could specify a maximum length for a topic, then the producer of that topic could block when the topic is full.

> AFAIK there is no such notion as a maximum length of a topic, i.e. the
> offset has no limit, except Long.MAX_VALUE I think, which should be enough
> for a couple of lifetimes (about 9 * 10^18, i.e. nine quintillion). What
> would be the purpose of that, besides being a nice foot-gun :)
>
> Marko Bonaći
> Monitoring | Alerting | Anomaly Detection | Centralized Log Management
> Solr & Elasticsearch Support
> Sematext <http://sematext.com/> | Contact <http://sematext.com/about/contact.html>
>
> On Sat, Nov 28, 2015 at 2:13 PM, Debraj Manna <subharaj.ma...@gmail.com> wrote:
>> Hi,
>>
>> Can someone please let me know the following:
>>
>> 1. Is it possible to specify the maximum length of a particular topic (in
>> terms of number of messages) in Kafka?
>> 2. How does Kafka behave when a particular topic gets full?
>> 3. Can the producer be blocked if a topic gets full, rather than deleting
>> old messages?
>>
>> I have gone through the documentation
>> <http://kafka.apache.org/081/documentation.html#basic_ops_add_topic> but
>> could not find anything of what I am looking for.
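For what it's worth, the closest thing Kafka offers to a per-topic "maximum length" is size/time-based retention, set as a topic-level config override. Note this caps the topic by deleting old log segments, not by blocking the producer. A sketch (the topic name `service-a-logs` and ZooKeeper address are hypothetical; syntax is from the 0.8.x-era kafka-topics.sh tool):

```shell
# Cap each partition of the topic at ~1 GiB; Kafka enforces this by
# deleting the oldest log segments, never by rejecting/blocking writes.
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
  --topic service-a-logs \
  --config retention.bytes=1073741824
```

`retention.ms` works the same way for a time-based cap. There is no broker-side setting that makes a producer block when a topic reaches a given size.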
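The only blocking you can get is on the producer side: the producer client can block when its own in-memory send buffer fills (e.g. the old Java producer's `block.on.buffer.full=true`), which gives you backpressure against a slow broker but not against topic length. A minimal sketch of that client-side semantics, using a bounded queue as a stand-in for the producer buffer (this is an illustration, not Kafka itself):

```python
import queue

# A bounded queue models the producer's client-side send buffer:
# once it is full, the next put() blocks the producing thread,
# which is the only "producer blocks" behavior Kafka can give you.
send_buffer = queue.Queue(maxsize=3)  # hypothetical tiny buffer

for i in range(3):
    send_buffer.put("log-%d" % i, block=False)  # fills without blocking

print(send_buffer.full())  # True: a fourth put() would now block
```

So if you want per-service throttling, it has to happen in (or in front of) the producer, e.g. rate-limiting each logstash-forwarder, rather than in the broker.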