I have a question regarding performance. Here are the two solutions I can think of:

1. I will write a bolt which gets triggered whenever a new customer name is added to a Kafka topic. That bolt will contain the code which submits the new topology.
2. I will create an independent jar for each customer and submit those topologies separately.

Which approach is better and more efficient? Will deploying multiple jars create any problems? And in the first case, what happens if the Storm Nimbus fails; will that be a problem? I have put rough sketches of both approaches below the quoted thread.

Thanks !!

On Fri, Aug 7, 2015 at 12:07 PM, Abhishek Agarwal <abhishc...@gmail.com> wrote:

> Yes. Since Kafka takes the topic in the configuration, you will have to
> add a new spout with a different config. Either you resubmit the topology
> (along with the jar) or you have a different topology for each consumer.
>
> On Fri, Aug 7, 2015 at 11:52 AM, Ritesh Sinha <
> kumarriteshranjansi...@gmail.com> wrote:
>
>> Do you mean creating a new jar for different customers and deploying it
>> on the cluster?
>>
>> On Fri, Aug 7, 2015 at 11:45 AM, Abhishek Agarwal <abhishc...@gmail.com>
>> wrote:
>>
>>> You will have to re-deploy your topology, with a new Kafka spout for
>>> the new topic.
>>>
>>> On Fri, Aug 7, 2015 at 10:54 AM, Ritesh Sinha <
>>> kumarriteshranjansi...@gmail.com> wrote:
>>>
>>>> I have a topology which runs in the following way:
>>>> it reads data from a Kafka topic and passes it to Storm. Storm
>>>> processes the data and stores it into two different DBs (MongoDB &
>>>> Cassandra).
>>>>
>>>> Here, the Kafka topic is the name of the customer, and the database
>>>> name in MongoDB and Cassandra is the same as the name of the Kafka
>>>> topic.
>>>>
>>>> Now, suppose I have submitted the topology, it is running, and I get
>>>> a new customer.
>>>>
>>>> I will add a new topic name in my Kafka. So, is it possible to make
>>>> Storm read data from that Kafka topic while the cluster is running?
>>>>
>>>> Thanks
>>>
>>> --
>>> Regards,
>>> Abhishek Agarwal
>
> --
> Regards,
> Abhishek Agarwal
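
Sketch of what I mean by a per-customer topology (the thing either approach has to submit). This is a minimal, untested outline against the storm-kafka KafkaSpout (Storm 0.9.x API); the ZooKeeper address is a placeholder, and MongoWriterBolt / CassandraWriterBolt stand in for my own writer bolts, they are not library classes:

import java.util.UUID;

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.TopologyBuilder;
import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

public class CustomerTopology {

    // Builds and submits a topology whose Kafka spout reads the topic
    // named after the customer.
    public static void submitFor(String customer) throws Exception {
        BrokerHosts hosts = new ZkHosts("zkhost:2181"); // placeholder ZK quorum
        SpoutConfig spoutConfig =
                new SpoutConfig(hosts, customer, "/" + customer, UUID.randomUUID().toString());
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout(spoutConfig), 1);
        // My own bolts; they take the customer name so they know which
        // MongoDB / Cassandra database to write to.
        builder.setBolt("mongo-writer", new MongoWriterBolt(customer), 2)
               .shuffleGrouping("kafka-spout");
        builder.setBolt("cassandra-writer", new CassandraWriterBolt(customer), 2)
               .shuffleGrouping("kafka-spout");

        Config conf = new Config();
        conf.setNumWorkers(2);
        // Topology name includes the customer, so each customer runs independently.
        StormSubmitter.submitTopology(customer + "-topology", conf, builder.createTopology());
    }

    public static void main(String[] args) throws Exception {
        submitFor(args[0]); // customer/topic name passed on the command line
    }
}

If this works, approach 2 would not actually need a separate jar per customer; I could submit the same jar once per customer:

storm jar customer-topology.jar CustomerTopology <customer-name>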
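And a rough sketch of approach 1: a small dispatcher topology whose spout consumes the topic where new customer names are published, and whose bolt submits a per-customer topology for each name. As far as I understand, StormSubmitter running inside a worker needs the topology jar reachable via the storm.jar system property, and if Nimbus is down the submit call simply throws (already-running topologies keep running without Nimbus), so I fail the tuple and let the Kafka spout replay it later:

import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

// Hypothetical dispatcher bolt: each tuple carries a new customer name,
// and we submit a fresh per-customer topology for it.
public class TopologySubmitterBolt extends BaseRichBolt {

    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        String customer = tuple.getString(0);
        try {
            // Assumes the per-customer topology jar is on this worker and the
            // "storm.jar" system property points at it, otherwise
            // StormSubmitter cannot upload it to Nimbus.
            CustomerTopology.submitFor(customer);
            collector.ack(tuple);
        } catch (Exception e) {
            // If Nimbus is unreachable, fail the tuple so the Kafka spout
            // replays it once Nimbus is back, instead of losing the customer.
            collector.fail(tuple);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Terminal bolt, no output stream.
    }
}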