Hi Imran, you are right: purely sequential processing does not make sense with Spark.

I think sequential processing of mini-batches still works if the batch for each iteration is large 
enough (each batch itself can then be processed in parallel).

My point is that we should not run the mini-batches in parallel, but it is still 
possible to use a large batch and parallelize the work inside each batch (this seems 
to be the way SGD is implemented in MLlib?).
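
Here is a rough sketch of what I mean (not tested; I'm using Breeze vectors and a plain 
squared-loss gradient just for illustration, and the parameter names are made up): the outer 
loop runs sequentially on the driver, but each sampled mini-batch is processed in parallel 
with treeAggregate.

    import breeze.linalg.{DenseVector => BDV}
    import org.apache.spark.rdd.RDD

    // Rough sketch: sequential iterations on the driver, but the gradient of
    // each sampled mini-batch is computed in parallel across the cluster.
    // Assumes squared loss on (label, features) pairs.
    def miniBatchSGD(data: RDD[(Double, BDV[Double])],
                     numIterations: Int,
                     miniBatchFraction: Double,
                     stepSize: Double,
                     dim: Int): BDV[Double] = {
      var weights = BDV.zeros[Double](dim)
      for (iter <- 1 to numIterations) {
        val w = weights  // local copy captured by the closure
        val (gradSum, count) = data
          .sample(withReplacement = false, miniBatchFraction, seed = 42L + iter)
          .treeAggregate((BDV.zeros[Double](dim), 0L))(
            seqOp = { case ((grad, n), (label, features)) =>
              val err = (features dot w) - label   // squared-loss residual
              (grad + features * err, n + 1)
            },
            combOp = { case ((g1, n1), (g2, n2)) => (g1 + g2, n1 + n2) })
        if (count > 0) weights = weights - (gradSum / count.toDouble) * stepSize
      }
      weights
    }

So the iterations themselves stay sequential, but all of the per-batch work is distributed.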


-- 
Earthson Lu

On December 16, 2014 at 04:02:22, Imran Rashid (im...@therashids.com) wrote:

I'm a little confused by some of the responses.  It seems like there are two 
different issues being discussed here:

1.  How to turn a sequential algorithm into something that works on Spark, e.g. 
dealing with the fact that data is split into partitions which are processed in 
parallel (though within a partition, data is processed sequentially).  I'm 
guessing folks are particularly interested in online machine learning algos, 
which often have a point update and a mini-batch update.

2.  How to take a one-point-at-a-time view of the data and convert it into a 
mini-batches view of the data.

(2) is pretty straightforward, e.g. with iterator.grouped(batchSize), or by 
manually putting data into your own buffer, etc.  This works for creating mini-batches 
*within* one partition in the context of Spark.
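
For example, something like this (just a rough sketch, the names are made up):

    import org.apache.spark.rdd.RDD

    // Rough sketch: turn each partition's one-point-at-a-time iterator into an
    // iterator of mini-batches of (at most) batchSize points.
    def perPartitionBatches[T](data: RDD[T], batchSize: Int): RDD[Seq[T]] =
      data.mapPartitions(_.grouped(batchSize))

    // then e.g. run some local update on every batch:
    //   perPartitionBatches(points, 128).map(batch => update(batch))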

But problem (1) is completely separate, and there is no general solution.  It 
really depends on the specifics of what you're trying to do.

Some of the suggestions on this thread seem like they are basically just 
falling back to sequential data processing ... but really inefficient 
sequential processing.  E.g., it doesn't make sense to do a full scan of your 
data with Spark and ignore all the records except the few that are in the next 
mini-batch.

It's completely reasonable to just sequentially process all the data if that 
works for you.  But then it doesn't make sense to use Spark; you're not gaining 
anything from it.

Hope this helps, apologies if I just misunderstood the other suggested 
solutions.

On Dec 14, 2014 8:35 PM, "Earthson" <earthson...@gmail.com> wrote:
I think it could be done like this:

1. use mapPartitions to randomly drop some partitions
2. randomly drop some elements (within the selected partitions)
3. calculate the gradient step from the selected elements
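
Something like this (a rough sketch, the parameter names are just for illustration; I'm 
using mapPartitionsWithIndex so the partition id can seed the RNG):

    import scala.reflect.ClassTag
    import scala.util.Random
    import org.apache.spark.rdd.RDD

    // Rough sketch of steps 1-2: keep a partition with probability
    // partitionFraction, then keep each element of a kept partition with
    // probability elementFraction. Step 3 would compute the gradient on the
    // result (e.g. with treeAggregate).
    def sampleForOneStep[T: ClassTag](data: RDD[T],
                                      partitionFraction: Double,
                                      elementFraction: Double,
                                      seed: Long): RDD[T] =
      data.mapPartitionsWithIndex { (pid, iter) =>
        val rng = new Random(seed ^ pid)
        if (rng.nextDouble() < partitionFraction)
          iter.filter(_ => rng.nextDouble() < elementFraction)
        else
          Iterator.empty
      }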

I don't think a fixed batch size is needed, but fixed-size batches could be done like this:

1. zipWithIndex
2. create a ShuffledRDD keyed by the index (e.g. using index / 10 as the key)
3. use mapPartitions to calculate each batch
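
A rough sketch of this (batch size 10 only to match the index / 10 example above; a real 
batch would be much larger):

    import scala.reflect.ClassTag
    import org.apache.spark.SparkContext._  // pair-RDD functions (groupByKey)
    import org.apache.spark.rdd.RDD

    // Rough sketch: key every element by index / batchSize and shuffle, so that
    // each fixed-size batch ends up grouped together.
    def fixedBatches[T: ClassTag](data: RDD[T],
                                  batchSize: Int,
                                  numPartitions: Int): RDD[(Long, Iterable[T])] =
      data.zipWithIndex()
          .map { case (x, i) => (i / batchSize, x) }
          .groupByKey(numPartitions)

    // e.g. fixedBatches(points, 10, 100)

Each (batchId, batch) group can then be handled with a normal map; if the batches have to 
be consumed strictly one after another, you would loop over the batch ids from the driver.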

I also have a question:

Can mini-batches run in parallel?
I think running all the batches in parallel is just like full-batch GD in some cases.



