coming from the way
input data is read and stored.
Please correct me if I am wrong and clarify my doubt.
Thanks and Regards,
Disha
On Tue, Dec 29, 2015 at 5:40 PM, Disha Shrivastava
wrote:
> Hi Alexander,
>
> Thanks a lot for your response. Yes, I am considering the use case when the
option does not seem very practical to me.
>
>
>
> Best regards, Alexander
>
>
>
> *From:* Disha Shrivastava [mailto:dishu@gmail.com]
> *Sent:* Tuesday, December 08, 2015 11:19 AM
> *To:* Ulanov, Alexander
> *Cc:* dev@spark.apache.org
> *Subject:* Re: Data and Model Pa
Hi,
Suppose I have a file locally on my master machine and the same file is
also present at the same path on all the worker machines, say
/home/user_name/Desktop. I wanted to know whether, when we partition the data
using sc.parallelize, Spark actually broadcasts parts of the RDD to all
the worker m
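A minimal sketch of the two access patterns in question (run in local mode; the path below is the illustrative one from the email, not a real file, so the textFile line is left commented out):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object PartitioningSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("partitioning-sketch").setMaster("local[2]"))

    // sc.parallelize: the collection lives on the driver; Spark slices it
    // into partitions and ships each slice to an executor as its task runs.
    // This is per-task scheduling, not a broadcast of the whole dataset.
    val rdd = sc.parallelize(1 to 1000, numSlices = 4)
    println(s"parallelize partitions: ${rdd.partitions.length}")

    // sc.textFile: each worker reads its own split from the given path, so a
    // local path like the one below must exist on every worker machine
    // (or use a shared filesystem such as HDFS/S3 that all workers can see).
    // val lines = sc.textFile("/home/user_name/Desktop/data.txt")

    sc.stop()
  }
}
```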
op.oreilly.com/product/0636920033073.do> (O'Reilly)
>> Typesafe <http://typesafe.com>
>> @deanwampler <http://twitter.com/deanwampler>
>> http://polyglotprogramming.com
>>
>> On Sat, Dec 26, 2015 at 12:54 PM, Ted Yu wrote:
>>
>>> Do
Hi,
I wanted to know how to use the Akka framework with Spark, starting from the basics.
I saw online that Spark uses the Akka framework internally, but I am not really sure
whether I can define my own Actors and use them in Spark.
Also, how do I integrate Akka with Spark, i.e. how will I know how many Akka
actors are running on each of
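For reference, a minimal sketch of a user-defined classic Akka actor. This is plain Akka, independent of Spark: Spark 1.x uses Akka internally for its own RPC, and actors you define run in your own ActorSystem, not inside Spark's. The actor and system names here are made up for illustration:

```scala
import akka.actor.{Actor, ActorSystem, Props}

// A minimal actor that echoes back any String it receives.
class EchoActor extends Actor {
  def receive = {
    case msg: String => sender() ! s"echo: $msg"
  }
}

object AkkaSketch extends App {
  val system = ActorSystem("sketch")
  val echo = system.actorOf(Props[EchoActor], "echo")

  // Fire-and-forget send; replies go to the implicit sender (deadLetters here).
  echo ! "hello"

  Thread.sleep(500)
  system.terminate() // use system.shutdown() on the Akka 2.3 line shipped with Spark 1.x
}
```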
> Multilayer perceptron classifier in Spark implements data parallelism.
>
>
>
> Best regards, Alexander
>
>
>
> *From:* Disha Shrivastava [mailto:dishu@gmail.com]
> *Sent:* Tuesday, December 08, 2015 12:43 AM
> *To:* dev@spark.apache.org; Ulanov, Alexander
Hi,
I would like to know if the implementation of MLPC in the latest released
version of Spark (1.5.2) implements model parallelism and data
parallelism as done in the DistBelief model implemented by Google:
http://static.googleusercontent.com/media/research.google.com/hi//archive/large_deep_netw
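As a point of reference, a small hedged sketch of driving MLPC through the 1.5-era ML API (toy XOR-style data; the layer sizes are made up for illustration). Per the reply in this thread, training is data-parallel: each partition computes gradients over its slice of the data, while the full weight vector is replicated on every worker rather than partitioned:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.ml.classification.MultilayerPerceptronClassifier

object MlpcSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("mlpc-sketch").setMaster("local[2]"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Tiny toy dataset (XOR-like): a label and a 2-dimensional feature vector.
    val data = Seq(
      (0.0, Vectors.dense(0.0, 0.0)),
      (1.0, Vectors.dense(0.0, 1.0)),
      (1.0, Vectors.dense(1.0, 0.0)),
      (0.0, Vectors.dense(1.1, 1.1))
    ).toDF("label", "features")

    // layers: 2 inputs, one hidden layer of 4 units, 2 output classes.
    val mlpc = new MultilayerPerceptronClassifier()
      .setLayers(Array(2, 4, 2))
      .setMaxIter(50)

    // Data parallelism: gradients are computed per partition and aggregated;
    // the model (the weight vector) itself is not split across machines.
    val model = mlpc.fit(data)
    println(s"number of weights: ${model.weights.size}")

    sc.stop()
  }
}
```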
ccess to shared memory) in RNN. I also look forward
> to contributing in this respect.
>
> On 03/11/2015, at 16:00, Disha Shrivastava
> wrote:
>
> I would love to work on this and would like ideas on how it can be done, or
> suggestions for some papers as a starting point. Also, I w
https://issues.apache.org/jira/browse/SPARK-9273
>
> Roadmap of MLlib deep learning
> https://issues.apache.org/jira/browse/SPARK-5575
>
> I think it may be good to join the discussion on SPARK-5575.
> Best
>
> Kai Sasaki
>
>
> On Nov 2, 2015, at 1:59 PM, Disha Shrivas
Hi,
I wanted to know if someone is working on implementing RNN/LSTM in Spark or
has already done so. I am also willing to contribute to it and would appreciate
some guidance on how to go about it.
Thanks and Regards
Disha
Masters Student, IIT Delhi
to force Spark to distribute the
> data across all nodes; however, it does not seem to be worthwhile for this
> rather small dataset.
>
>
>
> Best regards, Alexander
>
>
>
> *From:* Disha Shrivastava [mailto:dishu@gmail.com]
> *Sent:* Sunday, October 11, 201
u're thinking by changing the partitioning.
>
> On 10/11/15, Disha Shrivastava wrote:
> > Dear Spark developers,
> >
> > I am trying to study the effect of increasing the number of cores (CPUs) on
> > speedup and accuracy (scalability with Spark ANN) performan
"label")
val evaluator = new MulticlassClassificationEvaluator().setMetricName("precision")
println("Precision:" + evaluator.evaluate(predictionAndLabels))
Can you please suggest how I can ensure that the data/tasks are divided
equally among all the worker machines?
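One way, sketched here under the assumption that an explicit shuffle is acceptable for the dataset size, is to repartition the RDD so records are spread evenly across partitions, then count the records per partition to check the balance:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RepartitionSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("repartition-sketch").setMaster("local[4]"))

    // Start with few partitions; repartition shuffles the records evenly
    // across the requested number of partitions (and hence across executors).
    val rdd = sc.parallelize(1 to 100000, 2)
    val balanced = rdd.repartition(8)

    // Count the records in each partition to inspect the balance;
    // the counts should come out roughly equal.
    val sizes = balanced.mapPartitions(it => Iterator(it.size)).collect()
    println(sizes.mkString(", "))

    sc.stop()
  }
}
```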
Thanks and Regards,
Disha Shrivastava
Masters student, IIT Delhi