Problem: Log analytics.

Solutions:
1) Aggregating logs with Flume and storing the aggregations in Cassandra. Spark reads the data from Cassandra, makes some computations,
and writes the results to distinct tables, still in Cassandra (rough batch sketch below).
2) Aggregating logs with Flume to a sink and streaming the data directly into Spark. Spark makes some computations and stores the results in Cassandra (rough streaming sketch below).
3) *** your solution ***
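
For 1), roughly what I have in mind for the batch side, assuming the DataStax spark-cassandra-connector; the keyspace, table, and column names are just placeholders:

import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

object CassandraBatchJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("log-analytics-batch")
      .set("spark.cassandra.connection.host", "127.0.0.1") // placeholder host
    val sc = new SparkContext(conf)

    // Read the raw aggregations that Flume wrote into Cassandra
    val logs = sc.cassandraTable("logs", "raw_events")

    // Example computation: count events per log level
    val counts = logs
      .map(row => (row.getString("level"), 1L))
      .reduceByKey(_ + _)

    // Write the results to a distinct table in the same keyspace
    counts.saveToCassandra("logs", "level_counts", SomeColumns("level", "hits"))

    sc.stop()
  }
}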
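
For 2), a rough sketch of the streaming path, assuming Spark Streaming's Flume receiver (spark-streaming-flume) and the same connector; host, port, and table names are placeholders:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils
import com.datastax.spark.connector._
import com.datastax.spark.connector.streaming._

object FlumeToCassandra {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("log-analytics-streaming")
      .set("spark.cassandra.connection.host", "127.0.0.1") // placeholder host
    val ssc = new StreamingContext(conf, Seconds(10))

    // Flume pushes events here via an Avro sink pointed at this host:port
    val events = FlumeUtils.createStream(ssc, "localhost", 4141)

    // Example computation: count log lines per level (level assumed to be the first token)
    val counts = events
      .map(e => new String(e.event.getBody.array()))
      .map(line => (line.split(" ")(0), 1L))
      .reduceByKey(_ + _)

    // Store the results in Cassandra
    counts.saveToCassandra("logs", "level_counts", SomeColumns("level", "hits"))

    ssc.start()
    ssc.awaitTermination()
  }
}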

Which is the best workflow for this task?
I would like to set up something flexible enough to allow me to use both batch processing and real-time streaming without major fuss.

Thank you in advance.


