Does anyone have any idea on this?
On Tue, Jul 22, 2014 at 7:02 PM, hsy...@gmail.com wrote:
But how do they do the interactive SQL in the demo?
https://www.youtube.com/watch?v=dJQ5lV5Tldw
And if it can work in local mode, I think it should be able to work in cluster mode as well, correct?
On Tue, Jul 22, 2014 at 5:58 PM, Tobias Pfeiffer wrote:
Hi,
as far as I know, after the Streaming Context has started, the processing pipeline (e.g., filter.map.join.filter) cannot be changed. As your SQL statement is transformed into RDD operations when the Streaming Context starts, I think there is no way to change the statement that is executed on the stream afterwards.
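(To make that constraint concrete, here is a minimal sketch; the socket source and the filter/map steps are purely illustrative and not taken from the thread.)

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object FixedPipelineSketch {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(
      new SparkConf().setAppName("fixed-pipeline").setMaster("local[2]"), Seconds(5))

    // Illustrative source; any DStream behaves the same way.
    val lines = ssc.socketTextStream("localhost", 9999)

    // This pipeline is what gets translated into RDD operations for each batch.
    lines.filter(_.nonEmpty)
         .map(_.toUpperCase)
         .print()

    ssc.start()
    // From here on, the set of transformations above is fixed; only the data
    // flowing through them changes from batch to batch.
    ssc.awaitTermination()
  }
}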
For example, this is what I tested, and it works in local mode. What it does is get both the data and the SQL query from Kafka, run the SQL on each RDD, and output the result back to Kafka again.
I defined a var called *sqlS*. In the streaming part, as you can see, I change the SQL statement whenever a new one is consumed.
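(The code itself did not come through in the archive; the following is a rough sketch of the approach described above, assuming Spark 1.1/1.2-era APIs. The topic names, ZooKeeper address, and the Record schema are illustrative only.)

import org.apache.spark.SparkConf
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

case class Record(key: String, value: Int)

object DynamicSqlOnStream {
  // Driver-side variable holding the current query; updated whenever a new
  // statement arrives on the "sql" topic.
  @volatile var sqlS = "SELECT key, SUM(value) FROM records GROUP BY key"

  def main(args: Array[String]): Unit = {
    // Two Kafka receivers plus processing need several local threads.
    val ssc = new StreamingContext(
      new SparkConf().setAppName("dynamic-sql").setMaster("local[4]"), Seconds(5))
    val sqlContext = new SQLContext(ssc.sparkContext)
    import sqlContext.createSchemaRDD

    val zk = "localhost:2181"
    val dataStream = KafkaUtils.createStream(ssc, zk, "data-group", Map("data" -> 1)).map(_._2)
    val sqlStream  = KafkaUtils.createStream(ssc, zk, "sql-group",  Map("sql"  -> 1)).map(_._2)

    // If a batch contains a new SQL statement, remember it (this body runs on the driver).
    sqlStream.foreachRDD { rdd =>
      val latest = rdd.collect()
      if (latest.nonEmpty) sqlS = latest.last
    }

    // Apply whatever statement is current to each batch of data.
    dataStream.foreachRDD { rdd =>
      val records = rdd.map(_.split(","))
                       .filter(_.length == 2)
                       .map(a => Record(a(0), a(1).toInt))
      records.registerTempTable("records")
      val result = sqlContext.sql(sqlS).collect()
      // The rows in `result` could be sent back to Kafka here with a plain producer.
      result.foreach(println)
    }

    ssc.start()
    ssc.awaitTermination()
  }
}

The reason this works is that the body of foreachRDD runs on the driver for every batch, so the current value of sqlS is picked up each time; only the resulting RDD operations are shipped to the workers.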
Can you paste a small code example to illustrate your questions?
On Tue, Jul 22, 2014 at 5:05 PM, hsy...@gmail.com wrote:
Sorry, typo. What I meant is sharing. If the SQL is changing at runtime, how do I broadcast the SQL to all the workers that are doing the SQL analysis?
Best,
Siyuan
On Tue, Jul 22, 2014 at 4:15 PM, Zongheng Yang wrote:
Do you mean that the texts of the SQL queries are hardcoded in the code? What do you mean by "cannot shar the sql to all workers"?
On Tue, Jul 22, 2014 at 4:03 PM, hsy...@gmail.com wrote:
Hi guys,
I'm able to run some Spark SQL examples, but the SQL is static in the code. I would like to know whether there is a way to read the SQL from somewhere else (a shell, for example).
I could read the SQL statement from Kafka/ZooKeeper, but I cannot share the SQL to all the workers. Broadcast does not seem to work for updating it.
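(For reference, a small sketch of the limitation mentioned above, with made-up names: a broadcast variable captures its value once on the driver, so reassigning the driver-side variable later is not visible to the tasks that read it.)

import org.apache.spark.{SparkConf, SparkContext}

object BroadcastIsReadOnly {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("broadcast-demo").setMaster("local[2]"))

    var query = "SELECT * FROM t"        // driver-side variable
    val bq = sc.broadcast(query)         // snapshot of the value at this moment

    query = "SELECT count(*) FROM t"     // later change on the driver...
    val seen = sc.parallelize(1 to 2).map(_ => bq.value).collect()

    println(seen.mkString(", "))         // ...tasks still see the old statement
    sc.stop()
  }
}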