I am implementing the Lambda architecture using Apache Spark for both streaming and batch processing. For real-time queries I'm using Spark Streaming with Cassandra, and for batch queries I'm using Spark SQL and Spark MLlib. The problem I'm facing now is that I need to implement a serving layer, i.e., a database capable of random reads for storing my pre-computed batch views. The ones I was considering (Druid and Splout SQL) don't have native Spark connectors. Is it possible to integrate Druid or Splout with Spark? Any other suggestions? Thanks!
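For context, a common workaround when a serving store lacks a native Spark connector is to push the batch view out of Spark with foreachPartition, opening one client per partition against whatever write API the store does expose (HTTP, JDBC, a bulk loader). Below is a minimal sketch of that pattern; ServingStoreClient is a hypothetical placeholder standing in for the store's real client API, not an actual Druid or Splout library.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical client standing in for whatever write API the serving
// store exposes (e.g. an HTTP or JDBC endpoint); not a real library.
class ServingStoreClient(endpoint: String) {
  def upsert(key: String, value: Long): Unit = { /* store-specific write */ }
  def close(): Unit = {}
}

object BatchViewWriter {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("batch-view-writer"))

    // Pre-computed batch view: (key, aggregate) pairs from the batch layer.
    val batchView = sc.textFile("hdfs:///events/*")
      .map(line => (line.split(",")(0), 1L))
      .reduceByKey(_ + _)

    // No native connector: create one client per partition on the executors
    // and push rows through the store's own write API.
    batchView.foreachPartition { rows =>
      val client = new ServingStoreClient("serving-host:8080") // hypothetical endpoint
      try {
        rows.foreach { case (key, count) => client.upsert(key, count) }
      } finally {
        client.close()
      }
    }

    sc.stop()
  }
}
```

The per-partition client avoids serializing a connection object from the driver, which is the usual pitfall when writing to external stores from Spark.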