g/docs/0.9.1/streaming-programming-guide.html)
>
> So, in order to handle a stream, you should handle each RDD in that
> stream. This means that everything you want to do with your new data
> should go into the 'process_rdd' function. There's nothing returned in the output of
>
ct_feature(rf_model, x))
> # do whatever you want (saving, ...)
>
> stream.foreachRDD(process_rdd)
>
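To make the shape of this concrete, here is a standalone sketch of the foreachRDD pattern described above. It is a hedged illustration, not the poster's actual code: a small `FakeRDD` class stands in for a real Spark RDD, and `rf_predict` and the `saved` list are hypothetical stand-ins for the model and the output sink, so the sketch runs without a cluster.

```python
# Hedged sketch of the foreachRDD pattern: all work on each new batch
# goes inside process_rdd, and foreachRDD itself returns nothing.
# FakeRDD, rf_predict and saved are hypothetical stand-ins so the
# sketch runs without Spark.

class FakeRDD:
    """Stand-in for pyspark.RDD: wraps a list and supports map/collect."""
    def __init__(self, data):
        self.data = list(data)

    def map(self, f):
        return FakeRDD(f(x) for x in self.data)

    def collect(self):
        return self.data

saved = []  # stand-in for a db/file sink

def rf_predict(features):
    # hypothetical model: predicts 1 when the feature sum is positive
    return 1 if sum(features) > 0 else 0

def process_rdd(rdd):
    # everything you want to do with the new data goes here
    pairs = rdd.map(lambda x: (x, rf_predict(x)))
    saved.extend(pairs.collect())  # side effect only; nothing is returned

# stream.foreachRDD(process_rdd) would call this once per micro-batch;
# here we simply simulate two batches
for batch in [FakeRDD([[1, 2], [-3, 1]]), FakeRDD([[0, 5]])]:
    process_rdd(batch)

print(saved)  # -> [([1, 2], 1), ([-3, 1], 0), ([0, 5], 1)]
```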
> 2016-05-31 12:57 GMT+07:00 obaidul karim :
>
>> foreachRDD does not return any value. It can be used just to send results
>> to another place/context, like db, file
Sorry for lots of typos (writing from mobile)
On Tuesday, 31 May 2016, obaidul karim wrote:
> foreachRDD does not return any value. It can be used just to send results to
> another place/context, like db, file etc.
> I could use that, but it seems like the overhead of having another hop.
>
RDD? I think this is much better than your trick.
>
>
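The (truncated) suggestion above appears to point at DStream.transform, which, unlike foreachRDD, returns a new RDD for each batch, so predictions can stay in the stream without an extra hop. Below is a hedged, standalone sketch of that idea; `FakeRDD` and `rf_predict_all` are hypothetical stand-ins for a Spark RDD and an MLlib model's predict call, so it runs without Spark.

```python
# Hedged sketch of the DStream.transform idea: the function passed to
# transform maps each batch RDD to a new RDD, e.g. features zipped with
# their predictions. FakeRDD and rf_predict_all are hypothetical.

class FakeRDD:
    """Stand-in for pyspark.RDD with the two methods this sketch needs."""
    def __init__(self, data):
        self.data = list(data)

    def map(self, f):
        return FakeRDD(f(x) for x in self.data)

    def zip(self, other):
        return FakeRDD(zip(self.data, other.data))

    def collect(self):
        return self.data

def rf_predict_all(rdd):
    # stand-in for an MLlib model's predict(rdd), which maps an RDD of
    # feature vectors to an RDD of predictions
    return rdd.map(lambda x: 1 if sum(x) > 0 else 0)

def with_predictions(rdd):
    # the function you would hand to stream.transform(...): it RETURNS
    # a new RDD, so downstream DStream operations keep working on it
    return rdd.zip(rf_predict_all(rdd))

batch = FakeRDD([[1, 2], [-3, 1]])
print(with_predictions(batch).collect())  # -> [([1, 2], 1), ([-3, 1], 0)]
```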
> 2016-05-31 12:32 GMT+07:00 obaidul karim :
>
>> Hi Guys,
>>
>> In the end, I am using below.
>> The trick is using "native python map" along with "spark streaming
>> transf
May 30, 2016 at 8:43 PM, nguyen duc tuan wrote:
> DStream has a method foreachRDD, so you can walk through all RDDs inside
> a DStream as you want.
>
>
> https://spark.apache.org/docs/1.4.0/api/java/org/apache/spark/streaming/dstream/DStream.html
>
> 2016-05-30 19:30 GMT+07:00 o
Hi,
Does anybody have any idea on the below?
-Obaid
On Friday, 27 May 2016, obaidul karim wrote:
> Hi Guys,
>
> This is my first mail to the spark users mailing list.
>
> I need help with a DStream operation.
>
> In fact, I am using an MLlib random forest model to predict using spark
>
Hi Guys,
This is my first mail to the spark users mailing list.
I need help with a DStream operation.
In fact, I am using an MLlib random forest model to predict using spark
streaming. In the end, I want to combine the feature DStream & prediction
DStream together for further downstream processing.
I am