Hi all,

We are currently evaluating Flink for processing Kafka messages and are
running into some issues. The basic problem we are trying to solve is
allowing our end users to dynamically create jobs that alert based on the
messages coming from Kafka. At launch we figure we need to support at least
15,000 jobs (3,000 customers with 5 jobs each). I have the example Kafka job
running and it is working great. The questions I have are:

   1. On my local machine (admittedly much less powerful than what we would
   be using in production) things fall apart once I get to around 75 jobs.
   Can Flink handle a situation like this, where we are looking at thousands
   of jobs?
   2. Is this approach even the right way to go? Is there a different
   approach that would make more sense? Everything will be listening to the
   same Kafka topic, so the other thought we had was to have one job that
   processed everything and was configured by a separate control Kafka topic.
   The concern there was that we would almost completely lose insight into
   what was going on if there were a slowdown.
   3. The current approach we are using for creating dynamic jobs is
   building a common jar and then starting a separate instance of it with
   the configuration data for each individual job. Does this sound
   reasonable?


If any of these questions are answered elsewhere, I apologize; I couldn't
find any of this discussed before.

Thanks for your help.

David
