work between daily-growing large tables, for
>>
>> both Spark SQL and Cassandra. I can see that the [1] use case helps
>> FiloDB achieve columnar storage and query performance, but we have no
>> further knowledge.
>>
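
For context, a minimal sketch of the plain "Spark SQL over Cassandra" path that a columnar store such as FiloDB is meant to speed up, assuming Spark 1.4+ with the DataStax spark-cassandra-connector DataFrame source; the metrics.device_readings keyspace/table and its columns are hypothetical placeholders:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object CassandraSparkSqlExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("CassandraSparkSqlExample")
      .set("spark.cassandra.connection.host", "127.0.0.1")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    // Expose a Cassandra table (placeholder names) as a DataFrame through the
    // spark-cassandra-connector data source.
    val readings = sqlContext.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "metrics", "table" -> "device_readings"))
      .load()

    readings.registerTempTable("device_readings")

    // Plain Spark SQL over the Cassandra-backed table; a columnar layout is
    // aimed at making scans and aggregations like this cheaper.
    sqlContext.sql(
      "SELECT device_id, avg(value) AS avg_value FROM device_readings GROUP BY device_id"
    ).show()
  }
}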
Storm writes the data to both Cassandra and Kafka; Spark reads the
>>> actual data from Kafka, processes it, and writes it to Cassandra.
>>> The second approach avoids the additional hit of reading from Cassandra
>>> every minute, a device has written data to Cassandra at the
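
A minimal sketch of that second pipeline (Kafka -> Spark Streaming -> Cassandra), assuming Spark 1.3+ with the direct Kafka stream and the DataStax spark-cassandra-connector; the topic, keyspace, table, and record format below are hypothetical placeholders:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import com.datastax.spark.connector._
import com.datastax.spark.connector.streaming._

object KafkaToCassandra {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("KafkaToCassandra")
      .set("spark.cassandra.connection.host", "127.0.0.1")
    // One-minute micro-batches, matching the per-minute processing described above.
    val ssc = new StreamingContext(conf, Seconds(60))

    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("device-events"))

    // Parse each "deviceId,timestamp,value" record and write the batch straight
    // to Cassandra, so Spark never re-reads the raw data from Cassandra each minute.
    stream.map { case (_, line) =>
      val Array(deviceId, ts, value) = line.split(",")
      (deviceId, ts.toLong, value.toDouble)
    }.saveToCassandra("metrics", "device_readings", SomeColumns("device_id", "ts", "value"))

    ssc.start()
    ssc.awaitTermination()
  }
}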
s used to be inherent to the
> “commercial” vendors, but I can confirm as a fact that it is also in effect in
> the “open source movement” (because human nature remains the same).
>
>
>
> *From:* David Morales [mailto:dmora...@stratio.com]
> *Sent:* Thursday, May 14, 2015 4:30 PM
>
ery similar… I will contact you to
> understand whether we can contribute some piece to you!
>
> Best
>
> Paolo
>
> *From:* Evo Eftimov
> *Sent:* Thursday, May 14, 2015 17:21
> *To:* 'David Morales', Matei Zaharia
>
> *Cc:* user@spark.apac
> >> Regards.
> >>
> >>
> >>
> >> --
> >> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/SPARKTA-a-real-time-aggregation-engine-based-on-Spark-Streaming-tp22883.html
> >> Sent from the Apache Spark User List mailing list archive at Nabble.com.