Subject: Re: Data and Model Parallelism in MLPC
Hi,
I went through the code of the MLPC implementation and couldn't understand why
stacking/unstacking of the input data is done. The description says "Block size
for stacking input data in matrices to speed up the computation."
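My current understanding of the stacking (a NumPy sketch of the idea, not Spark's actual code; sizes and values here are made up) is that it turns many matrix-vector products into one matrix-matrix product, which BLAS executes much faster:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, block = 4, 3, 128          # layer sizes and a block of 128 inputs
W = rng.standard_normal((n_out, n_in))  # one layer's weights
xs = [rng.standard_normal(n_in) for _ in range(block)]

# Without stacking: one matrix-vector product per example.
one_by_one = np.stack([W @ x for x in xs], axis=1)

# With stacking: pack the block of inputs into a single n_in x block
# matrix and do one matrix-matrix product instead.
X = np.stack(xs, axis=1)                 # shape (n_in, block)
stacked = W @ X                          # shape (n_out, block)

assert np.allclose(one_by_one, stacked)  # same result, far fewer BLAS calls
```

So the results are identical; stacking only changes how the work is batched.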

…parallelism would be to represent the network as the graph and use GraphX to
write forward and back propagation. However, this option does not seem very
practical to me.

Best regards, Alexander

From: Disha Shrivastava [mailto:dishu@gmail.com]
Sent: Tuesday, December 08, 2015 11:19 AM
To: Ulanov, Alexander
Cc: dev@spark.apache.org
Subject: Re: Data and Model Parallelism in MLPC
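To illustrate the graph idea in plain Python (just a sketch; real GraphX code would use `aggregateMessages` over an RDD-backed graph, and the tiny network and weights here are made up): each neuron is a vertex, and forward propagation becomes message passing along weighted edges.

```python
import math

# A tiny 2-3-1 network as a weighted edge list: (src, dst, weight).
edges = [(0, 2, 0.1), (0, 3, -0.2), (0, 4, 0.3),
         (1, 2, 0.4), (1, 3, 0.5), (1, 4, -0.6),
         (2, 5, 0.7), (3, 5, 0.8), (4, 5, 0.9)]
layers = [[0, 1], [2, 3, 4], [5]]        # vertices grouped by layer

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

values = {0: 1.0, 1: 0.5}                # input vertex activations
for layer in layers[1:]:
    for v in layer:
        # Each vertex aggregates messages (weight * activation) from
        # its in-edges, then applies the activation function.
        z = sum(w * values[s] for s, d, w in edges if d == v)
        values[v] = sigmoid(z)

output = values[5]
```

Back propagation would be the same pattern with messages flowing along reversed edges, which is doable but, as noted above, probably not practical at scale.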
Hi Disha,
Multilayer perceptron classifier in Spark implements data parallelism.
Best regards, Alexander
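For concreteness, data parallelism here means each partition computes a gradient on its own slice of the data and the driver averages the results. A toy NumPy sketch (a hypothetical linear model with squared loss, not the MLPC code; data and partition count are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))        # 100 examples, 5 features
y = X @ np.array([1., 2., 3., 4., 5.])   # targets from a known linear model
w = np.zeros(5)                          # current model, broadcast to workers

def gradient(Xp, yp, w):
    """Mean squared-error gradient over one data partition."""
    return 2 * Xp.T @ (Xp @ w - yp) / len(yp)

# Data parallelism: split the *examples* across 4 equal "partitions",
# compute a local gradient on each, then average on the driver.
parts = np.array_split(np.arange(100), 4)
local = [gradient(X[p], y[p], w) for p in parts]
parallel_grad = np.mean(local, axis=0)

# With equal-sized partitions this matches the full-data gradient.
full_grad = gradient(X, y, w)
assert np.allclose(parallel_grad, full_grad)
```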
From: Disha Shrivastava [mailto:dishu@gmail.com]
Sent: Tuesday, December 08, 2015 12:43 AM
To: dev@spark.apache.org; Ulanov, Alexander
Subject: Data and Model Parallelism in MLPC
Hi,
I would like to know if the implementation of MLPC in the latest released
version of Spark (1.5.2) implements model parallelism and data parallelism as
done in the DistBelief model implemented by Google:
http://static.googleusercontent.com/media/research.google.com/hi//archive/large_deep_netw
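By model parallelism I mean DistBelief-style splitting of the weights themselves across workers, roughly like this (a NumPy sketch of the concept, not anything in Spark; the layer sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((8, 4))     # one layer: 8 output units, 4 inputs
x = rng.standard_normal(4)

# Model parallelism: split the *weights* across 2 workers by output rows;
# each worker holds only its shard and computes its slice of W @ x.
shards = np.array_split(W, 2, axis=0)
slices = [shard @ x for shard in shards]
model_parallel = np.concatenate(slices)

# Concatenating the slices reproduces the full layer output.
assert np.allclose(model_parallel, W @ x)
```

This differs from data parallelism, where every worker holds the whole model and only the examples are split.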