Hi,
   We are prototyping Kafka + Storm for our stream processing / event
processing needs. One issue we face is a huge influx of stream data from
one of our customers. If we use a single topic for this stream across all
customers, the other customers queued behind the big customer's stream
would starve for a significant time until their turn comes.
    One idea is to create a topic per customer per use case, implement a
fairness algorithm on top of the high-level consumer using
*createMessageStreamsByFilter*, and use that to build a Storm spout.
However, this also means tens of thousands of topics and several tens of
thousands (possibly hundreds of thousands) of partitions on a single Kafka
cluster.
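Roughly, the fairness idea could be sketched like this: keep one queue per
customer and drain at most one message per customer per pass, so a large
backlog from one customer cannot starve the rest. This is only a minimal
standalone sketch; FairScheduler, enqueue, and nextBatch are made-up names,
not Kafka or Storm APIs, and a real spout would feed the queues from the
streams returned by *createMessageStreamsByFilter*.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Round-robin fairness sketch: one in-memory queue per customer topic.
// Each scheduling pass takes at most one message from every customer,
// so no single customer's backlog can monopolize the downstream spout.
public class FairScheduler {
    // LinkedHashMap keeps a stable customer order across passes.
    private final Map<String, Deque<String>> perCustomer = new LinkedHashMap<>();

    // In a real setup this would be fed by the consumer streams.
    public void enqueue(String customer, String message) {
        perCustomer.computeIfAbsent(customer, c -> new ArrayDeque<>()).add(message);
    }

    // One scheduling pass: emit at most one message per customer.
    public List<String> nextBatch() {
        List<String> batch = new ArrayList<>();
        for (Deque<String> q : perCustomer.values()) {
            if (!q.isEmpty()) {
                batch.add(q.poll());
            }
        }
        return batch;
    }

    public static void main(String[] args) {
        FairScheduler s = new FairScheduler();
        // A big customer with a backlog, and a small one with a single message.
        for (int i = 0; i < 3; i++) {
            s.enqueue("bigCustomer", "big-" + i);
        }
        s.enqueue("smallCustomer", "small-0");

        List<String> first = s.nextBatch();
        // The small customer is served in the very first pass despite the backlog.
        if (!first.contains("small-0")) {
            throw new AssertionError("small customer was starved");
        }
        if (first.size() != 2) {
            throw new AssertionError("expected one message per customer");
        }
        System.out.println("first pass: " + first);
    }
}
```

The same per-pass budget could be raised from one message to N per
customer to trade fairness against throughput.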
     I remember reading that the broker is effectively limited by file
handles. Has anyone tried such a setup?

Thanks!
-Neelesh
