I am using Apache Kafka and Apache Storm together, and I need to design a
model. Here is the specification of my topology:

I have configured a topic in Kafka, say *customer1*. The Storm bolts read
data from the *customer1* Kafka spout, process it, and write it into
MongoDB and Cassandra. The database names are the same as the Kafka topic,
*customer1*. The table structure and everything else stay the same.

Now, suppose I get a new customer, say *customer2*. I need to read data
from a *customer2* Kafka spout and write it into MongoDB and Cassandra
databases named *customer2*.
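For reference, here is a minimal sketch of how such a topology might be
parameterized by customer name, assuming the storm-kafka-client spout. The
broker address and the bolt class names (ProcessBolt, MongoWriterBolt,
CassandraWriterBolt) are placeholders for whatever the real topology uses:

```java
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.generated.StormTopology;
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.topology.TopologyBuilder;

public class CustomerTopology {

    /** Wires one customer's pipeline; topic and dbs share the customer name. */
    public static StormTopology build(String customer) {
        // The spout reads from the Kafka topic named after the customer.
        KafkaSpoutConfig<String, String> spoutConfig =
                KafkaSpoutConfig.builder("kafka-broker:9092", customer).build();

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout<>(spoutConfig));
        // ProcessBolt, MongoWriterBolt and CassandraWriterBolt stand in for
        // the existing processing and persistence bolts; each writer is
        // pointed at the database named after the customer.
        builder.setBolt("process", new ProcessBolt())
                .shuffleGrouping("kafka-spout");
        builder.setBolt("mongo", new MongoWriterBolt(customer))
                .shuffleGrouping("process");
        builder.setBolt("cassandra", new CassandraWriterBolt(customer))
                .shuffleGrouping("process");
        return builder.createTopology();
    }

    public static void main(String[] args) throws Exception {
        String customer = args[0];  // e.g. "customer1" or "customer2"
        StormSubmitter.submitTopology(customer + "-topology",
                new Config(), build(customer));
    }
}
```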

I can think of two ways to do it:

   1. Write a bolt that gets triggered whenever a new customer name is
      added to a Kafka topic. That bolt will create and submit the new
      topology to the cluster (see the sketch after this list).

   2. Create an independent jar for each customer and submit each
      topology manually.
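For option 1, the triggering bolt might look roughly like the sketch below.
It listens on a hypothetical control topic whose messages carry new
customer names and calls StormSubmitter from inside the bolt. Note that
StormSubmitter uploads the jar named by the `storm.jar` system property,
so the running worker must know where its own jar lives (the path below is
made up), which is part of what makes this approach awkward:

```java
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Tuple;

/** Listens on a control topic whose messages carry new customer names. */
public class TopologyLauncherBolt extends BaseBasicBolt {

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        // storm-kafka-client emits the message payload in the "value" field.
        String customer = input.getStringByField("value");
        try {
            // StormSubmitter uploads the jar named by the "storm.jar"
            // property, so the bolt must know where its own jar lives
            // (this path is made up).
            System.setProperty("storm.jar", "/opt/storm/customer-topology.jar");
            StormSubmitter.submitTopology(customer + "-topology",
                    new Config(), CustomerTopology.build(customer));
        } catch (Exception e) {
            // e.g. AlreadyAliveException if the topology already exists.
            throw new RuntimeException(
                    "Could not submit topology for " + customer, e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Nothing is emitted downstream.
    }
}
```

With option 2, the same build(...) method would instead be driven from the
command line (storm jar customer-topology.jar CustomerTopology customer2),
one submission per customer, which avoids the jar-location problem
entirely.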

I have searched a lot but could not figure out which approach is better.

What are the pros and cons of the approaches specified above in terms of
efficiency, code maintainability, and adding new changes to the existing
model?

Is there any other way to handle this?
