[moving to user@]
This would typically be accomplished with a union() operation. You
can't mutate an RDD in place, but you can create a new RDD with
union(), which is an inexpensive operation.
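Roughly, a sketch of what that pattern looks like with the Spark 1.1-era
SQL API (the paths, table name, and loadBatch helper below are just
placeholders, not a definitive recipe):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SchemaRDD}

object AppendHourlyBatches {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("long-lived-sql-server").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)

    // Hypothetical helper: read one hourly batch as a SchemaRDD.
    def loadBatch(path: String): SchemaRDD = sqlContext.jsonFile(path)

    // Start with the first batch and cache it in the in-memory columnar store.
    var all: SchemaRDD = loadBatch("hdfs:///events/2014-09-12-00")
    all.registerTempTable("events")
    sqlContext.cacheTable("events")

    // When the next hourly batch arrives, union it in and re-register/re-cache.
    // unionAll() is cheap: it only builds a new SchemaRDD on top of the old one.
    def appendBatch(path: String): Unit = {
      val combined = all.unionAll(loadBatch(path))
      sqlContext.uncacheTable("events")      // drop the old cached copy
      combined.registerTempTable("events")   // same name, new (larger) SchemaRDD
      sqlContext.cacheTable("events")
      all = combined
    }

    appendBatch("hdfs:///events/2014-09-12-01")
    sqlContext.sql("SELECT COUNT(*) FROM events").collect().foreach(println)
  }
}

One caveat: uncaching and re-caching means the columnar cache for the whole
union is rebuilt from the source data the next time the table is scanned, so
only the union itself is cheap.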
On Fri, Sep 12, 2014 at 5:28 AM, Archit Thakur wrote:
> Hi,
>
> We have a use case where we are planning to keep the SparkContext alive in a
> server and run queries on it. But the issue is we have a continuous flow of
> data that comes in batches of constant duration (say, 1 hour). Now we want to
> exploit the SchemaRDD and its benefits of columnar caching and compression ...