Hi Dipanjan,
Based on your description, I think Flink could handle this use case.
Don't worry about Flink not coping with data at this scale: Flink is a
distributed engine. As long as data skew is carefully avoided, the input
throughput can be handled with appropriate resources.
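To make that concrete, here is a minimal DataStream sketch (Java) of the
pipeline you described. The class names (GraphRulePipeline, GraphElement,
RuleResult, GraphResultCollator), the Kafka topic, and the keying choices are
assumptions on my side, and the graph parsing and rule evaluation are only
stubs standing in for your Siddhi logic; the point is mainly where the keyBy
decisions affect skew.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class GraphRulePipeline {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // 1. Read serialized graphs from Kafka (broker address and topic name are assumptions).
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")
                .setTopics("resource-graphs")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> graphs =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "graphs");

        // 2. Decompose each graph into individual nodes/edges, each tagged with its graphId.
        DataStream<GraphElement> elements = graphs
                .flatMap((String json, Collector<GraphElement> out) -> {
                    for (GraphElement e : parseGraph(json)) {  // stubbed parser
                        out.collect(e);
                    }
                })
                .returns(GraphElement.class);

        // 3. Enrich each element (the REST metadata lookups would normally use Flink's
        //    Async I/O operator) and 4. evaluate the rules per element (stubbed as a map).
        DataStream<RuleResult> results = elements
                // keying by element rather than by graph spreads one huge graph over
                // many subtasks, which is the main lever against data skew here
                .keyBy(e -> e.elementId)
                .map(GraphRulePipeline::evaluateRules)
                .returns(RuleResult.class);

        // 5. Collate all per-element results belonging to the same graph.
        results
                .keyBy(r -> r.graphId)
                .process(new GraphResultCollator())
                .print();

        env.execute("graph-rule-pipeline");
    }

    // --- stubs standing in for the application-specific pieces ---
    static java.util.List<GraphElement> parseGraph(String json) {
        return java.util.Collections.emptyList();
    }

    static RuleResult evaluateRules(GraphElement e) {
        return new RuleResult(e.graphId, "OK");
    }

    public static class GraphElement {
        public String graphId;
        public String elementId;
    }

    public static class RuleResult {
        public String graphId;
        public String verdict;
        public RuleResult() {}
        public RuleResult(String graphId, String verdict) {
            this.graphId = graphId;
            this.verdict = verdict;
        }
    }

    // In a real job this would buffer results in keyed state and emit one combined
    // record per graph once all of its elements have been evaluated.
    public static class GraphResultCollator
            extends KeyedProcessFunction<String, RuleResult, String> {
        @Override
        public void processElement(RuleResult value, Context ctx, Collector<String> out) {
            out.collect(value.graphId + ": " + value.verdict);
        }
    }
}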

Best,
JING ZHANG

Dipanjan Mazumder <java...@yahoo.com> wrote on Friday, September 10, 2021 at 11:11 AM:

> Hi,
>
>    I am working on a use case and am thinking of using Flink for it.
> I will have many large resource graphs; for each graph I need to parse
> every node and edge and evaluate each of them against some Siddhi rules.
> The implementation for evaluating individual entities with Flink and
> Siddhi is already in place, but I am in a dilemma about whether I should
> do the graph processing in Flink as well. This is what I am planning to do:
>
> I will fetch the graph from Kafka, decompose it into nodes and edges,
> fetch additional metadata for each node and edge from different REST
> APIs, and then pass the individual nodes and edges (which are resources)
> to different substreams that are already in place. Rules will run on the
> individual substreams to process the nodes and edges, and finally they
> will emit the rule output into a stream. I will collate all of the results
> from that stream by graph id using another operator and send the final
> result to an output stream.
>
> This is what I am thinking. I now need input from all of you on whether
> this is a fair use case for Flink, and whether Flink will be able to
> handle this level of processing at scale and volume.
>
> Any input will ease my understanding and help me go ahead with this idea.
>
> Regards,
> dipanjan
>
