Hello,

I understand that Spark Streaming implements streaming with
micro-batches, while traditional streaming systems use a
record-at-a-time processing model. The tradeoff, as I understand it,
is that the former favors throughput and the latter favors latency.
I'm wondering what it would take to implement record-at-a-time
processing in Spark Streaming. Would that be feasible to prototype in
one or two months?
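
For concreteness, here's roughly how I picture the micro-batch model
with the standard DStream API (the socket source on localhost:9999 is
just a placeholder). The batch interval lower-bounds end-to-end
latency, which is what a record-at-a-time model would avoid:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object MicroBatchSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("MicroBatchSketch")
          .setMaster("local[2]")
        // The 1-second batch interval bounds latency from below: a
        // record arriving just after a batch boundary waits up to
        // ~1s before it is even scheduled for processing.
        val ssc = new StreamingContext(conf, Seconds(1))

        // Placeholder input source; any receiver behaves the same way.
        val lines = ssc.socketTextStream("localhost", 9999)
        lines.count().print()

        ssc.start()
        ssc.awaitTermination()
      }
    }

With record-at-a-time, each record would instead be handed to an
operator as soon as it arrives, rather than waiting for the batch
boundary.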

Thanks,

Jianneng