Nick,

Have you looked at Apache Flink? It has very powerful APIs: you can stream 
aggregations, filters, etc. right into Druid, and it also has very robust 
state management that might be a good fit for your use case.

https://flink.apache.org/
https://github.com/druid-io/tranquility
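
Very roughly, a Flink job over your topic could look something like the sketch 
below: it reads the JSON messages from Kafka, keys the stream by 
plant_equipment_id, and rolls each id up into one-second windows. The broker 
address, the topic name ("sensor-readings"), the window size, and the 
max-per-window rollup are all placeholder assumptions on my part, and the Druid 
sink is left out (you could write the result back to Kafka for Druid's Kafka 
indexing service, or push it with Tranquility).

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class SensorRollupJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.setProperty("group.id", "sensor-rollup");

        // Read the raw JSON messages from the single sensor topic
        // ("sensor-readings" is a placeholder topic name).
        DataStream<String> raw = env.addSource(
                new FlinkKafkaConsumer<>("sensor-readings", new SimpleStringSchema(), props));

        DataStream<Tuple2<String, Double>> rolledUp = raw
                // Parse {"plant_equipment_id": ..., "sensorvalue": ...} into a pair.
                .map(json -> {
                    JsonNode node = new ObjectMapper().readTree(json);
                    return Tuple2.of(node.get("plant_equipment_id").asText(),
                                     node.get("sensorvalue").asDouble());
                })
                .returns(Types.TUPLE(Types.STRING, Types.DOUBLE))
                // Key by equipment id so each of the ~2,000-20,000 ids is tracked separately.
                .keyBy(t -> t.f0)
                // One-second tumbling windows, keeping the max reading per window.
                .timeWindow(Time.seconds(1))
                .maxBy(1);

        // Sink omitted in this sketch; print() just dumps the rolled-up stream.
        rolledUp.print();

        env.execute("sensor-rollup");
    }
}

The keyed state Flink keeps per equipment id is also what makes things like 
per-sensor thresholds or rolling baselines reasonably straightforward as you 
grow from 2,000 to 20,000 ids.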

Thanks,
Kenny Gorman
https://www.eventador.io

> On Apr 9, 2019, at 3:26 PM, Nick Torenvliet <natorenvl...@gmail.com> wrote:
> 
> Hi all,
> 
> Just looking for some general guidance.
> 
> We have a Kafka -> Druid pipeline we intend to use in an industrial setting
> to monitor process data.
> 
> Our Kafka system receives messages on a single topic.
> 
> The messages are {"timestamp": yy:mm:ddThh:mm:ss.mmm, "plant_equipment_id":
> "id_string", "sensorvalue": float}
> 
> For our POC there are about 2,000 unique plant_equipment_ids; this will
> quickly grow to 20,000.
> 
> The Kafka topic streams into Druid.
> 
> We are building some Node.js/React browser-based apps for analytics and
> real-time stream monitoring.
> 
> We are thinking that for visualizing historical data sets we will hit Druid
> for the data.
> 
> For real time streaming we are wondering what our best option is.
> 
> One option is to just poll Druid semi-regularly and update the on-screen
> visualization as new data arrives there.
> 
> Another option is to stream a subset of the topic (somehow) from Kafka using
> some streams interface.
> 
> With all the stock ticker apps out there, I have to imagine this is a
> really common use case.
> 
> Anyone have any thoughts as to what we are best to do?
> 
> Nick
