trying to implement mini-batch GD in pyspark

2016-03-22 Thread sethirot
Hello, I want to be able to update an existing model with new data without having to re-run batch GD on all of the data. I would rather use the native MLlib functions and avoid the streaming module. The way I thought of doing this is to use the *initialWeights* input argument to load my previous model's weights.
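A minimal sketch of that warm-start idea, assuming pyspark.mllib's LinearRegressionWithSGD (the same pattern works with LogisticRegressionWithSGD): train once, keep model.weights, then pass them back through initialWeights when new data arrives, with miniBatchFraction < 1.0 for mini-batch rather than full-batch GD. The data and parameter values below are placeholders for illustration, not anything from the original post.

from pyspark import SparkContext
from pyspark.mllib.regression import LabeledPoint, LinearRegressionWithSGD

sc = SparkContext(appName="warm-start-sketch")

# First batch of (label, features) pairs -- placeholder data.
old_data = sc.parallelize([
    LabeledPoint(1.0, [1.0, 0.0]),
    LabeledPoint(2.0, [2.0, 1.0]),
    LabeledPoint(3.0, [3.0, 2.0]),
])

# Initial fit; miniBatchFraction < 1.0 samples a fraction of the data per step.
model = LinearRegressionWithSGD.train(
    old_data, iterations=100, step=0.01, miniBatchFraction=0.5)

# Later, when new data arrives, warm-start from the previous weights instead
# of re-running GD over the full history.
new_data = sc.parallelize([
    LabeledPoint(4.0, [4.0, 3.0]),
    LabeledPoint(5.0, [5.0, 4.0]),
])

updated_model = LinearRegressionWithSGD.train(
    new_data, iterations=20, step=0.01, miniBatchFraction=0.5,
    initialWeights=model.weights)

print(updated_model.weights)

Whether this converges to something sensible depends on step size and how different the new data is from the old; it only continues GD from the previous weights, it is not a principled online-learning update.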

Re: How to use pyspark streaming module "slice"?

2016-05-09 Thread sethirot
Hi, have you managed to solve this? I just got stuck with this as well.
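For reference, a minimal self-contained sketch of how DStream.slice is documented to be used in pyspark.streaming; this is illustrative only and may well hit the same problem being discussed in the thread. A queueStream feeds a few batches, ssc.remember() keeps the generated RDDs around, and slice(begin, end) then returns the RDDs whose batch time falls in that window. All names and timings are placeholders.

import time
from datetime import datetime, timedelta
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="slice-sketch")
ssc = StreamingContext(sc, batchDuration=1)

# Self-contained input: a queue of small RDDs, consumed one per batch.
stream = ssc.queueStream(
    [sc.parallelize(range(i * 10, (i + 1) * 10)) for i in range(5)])
stream.count().pprint()

# Keep generated RDDs long enough for slice() to still find them.
ssc.remember(60)

ssc.start()
time.sleep(5)  # let a few batches complete

# slice() accepts datetime objects (or unix timestamps) and returns the
# list of RDDs whose batch time lies in [begin, end].
end = datetime.now()
begin = end - timedelta(seconds=5)
for rdd in stream.slice(begin, end):
    print(rdd.collect())

ssc.stop(stopSparkContext=True, stopGraceFully=False)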

Re: How to use pyspark streaming module "slice"?

2016-05-11 Thread sethirot
OK, thanks anyway.

On Wed, May 11, 2016 at 12:15 AM, joyceye04 [via Apache Spark User List] <ml-node+s1001560n26919...@n3.nabble.com> wrote:
> Not yet. And I turned to another way to bypass it just to finish my work.
> Still waiting for answers :(