I would implement them directly in Flink / the Flink Table API.

I don’t think Drools plays well in this distributed scenario; it expects a 
centralized rule store and evaluation.
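
For example, rule 2 below (B is down and fewer than 2 of its As are down 
within the same minute) maps to a one-minute tumbling event-time window 
with a HAVING clause. A rough sketch with the Table API, purely 
illustrative: the table name (events), the columns (entity_type, status, 
b_id, ts) and the assumption that both A and B events carry the id of the 
affected B in b_id are made up, so adjust them to your actual schema.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class DownRuleJob {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
            EnvironmentSettings.newInstance().inStreamingMode().build());

        // The DDL for the Kafka-backed source table "events" is omitted
        // here; it would declare ts as the event-time attribute with a
        // watermark so the window can close.

        // "B is actually down": at least one 'B down' event for this b_id
        // and fewer than two 'A down' events within the same minute.
        Table bDown = tEnv.sqlQuery(
            "SELECT b_id, " +
            "       TUMBLE_START(ts, INTERVAL '1' MINUTE) AS window_start " +
            "FROM events " +
            "GROUP BY b_id, TUMBLE(ts, INTERVAL '1' MINUTE) " +
            "HAVING SUM(CASE WHEN entity_type = 'B' AND status = 'DOWN' " +
            "               THEN 1 ELSE 0 END) >= 1 " +
            "   AND SUM(CASE WHEN entity_type = 'A' AND status = 'DOWN' " +
            "               THEN 1 ELSE 0 END) < 2");

        // bDown.executeInsert("result_sink") would then write the results
        // to a (hypothetical) Kafka sink table.
    }
}

With an event-time window, the "wait one minute, then check" part of your 
NOT condition comes for free: the window only fires once the watermark 
passes the end of the minute, so no explicit buffer timer is needed.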

> On 23.06.2020 at 21:03, Jaswin Shah <jaswin.s...@outlook.com> wrote:
> 
> 
> Hi, I am thinking of using a rule engine like Drools with Flink to solve the 
> problem described below:
> 
> I have a stream of events coming from a Kafka topic, and I want to analyze 
> those events based on some rules and emit results to a result stream when the 
> rules are satisfied.
> 
> Now, I am able to solve this problem entirely with Flink, but I have to write 
> hard-coded conditions in Flink for the rules. In the future I want to keep my 
> Flink job generic, so that if any rules change I do not need to redeploy the 
> Flink job.
> 
> Use case:
> Consider events coming in like A, B, C, D, ... denoting that entity A is 
> down, B is down, C is down, etc.
> 
> Now, there are many rules like:
> 1. A is actually down if A is down and 3 Bs belonging to A are down (the A 
> entity can reference its Bs in the event JSON).
> 
> 2. B is actually down if B is down and 2 As for B are not down.
> 
> Both rules consider events from the same minute.
> 
> Note the NOT condition here: when I receive a 'B down' event, I wait for a 
> buffer time, say 1 minute, and if after that 1 minute I have not received 2 
> 'A down' events, I declare B as down in the result stream.
> 
> Here we basically check events within a given minute, so we keyBy minute. ==> 
> very important
> 
> I need help on how I can use the Drools engine to pull those rules out of the 
> business logic while also maintaining maximum partitioning, as I am able to do 
> with static rules.
> 
> Any help will be really appreciated. 
> 
> Thanks,
> Jaswin 
