JCuda: No, I'm not willing to rely on servers having NVidia cards (someone
who is more familiar with server hardware may correct me, in which case
I'll say, "No, because *my* servers don't have NVidia cards; someone else
can add").
Parallelization: Yes. Admittedly, very clever use of Python could p
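To put that "yes" in concrete terms, here is a rough, purely illustrative
sketch of plain data parallelism on Flink's DataSet API (the class name and
the placeholder "gradient" are made up, not code anyone in this thread has
written): the model is replicated on every worker, each worker computes a
local gradient over its slice of the data, and the partials are summed in a
reduce.

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class DataParallelGradientSketch {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical training samples (last entry is the label).
        DataSet<double[]> samples = env.fromElements(
                new double[]{0.1, 0.2, 1.0},
                new double[]{0.3, 0.4, 0.0},
                new double[]{0.5, 0.6, 1.0});

        // Each worker computes a local gradient; reduce sums them into one update.
        // The model is replicated on every worker, so this is data parallelism only.
        DataSet<double[]> summedGradient = samples
                .map(new MapFunction<double[], double[]>() {
                    @Override
                    public double[] map(double[] sample) {
                        // Placeholder: a real job would evaluate the current model here.
                        return sample;
                    }
                })
                .reduce(new ReduceFunction<double[]>() {
                    @Override
                    public double[] reduce(double[] a, double[] b) {
                        double[] out = new double[a.length];
                        for (int i = 0; i < a.length; i++) {
                            out[i] = a[i] + b[i];
                        }
                        return out;
                    }
                });

        summedGradient.print();
    }
}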
Asking as someone who has never done NNs on Flink: would you implement it
using JCuda? And would you implement it with model parallelization? Is
there any theoretical limit to implementing "model and data parallelism" in
Flink? If you don't use GPUs and you don't parallelize models and data at
the same time
Agreed. Our reasoning for contributing straight to Flink was that we plan
on doing a lot of weird monkeying around with these things, and we were
going to have to get our hands dirty with some code eventually anyway. The
LSTM isn't *that* difficult to implement, and it seems easier to write our
own t
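For reference, the forward pass of a single LSTM cell really is small. A
rough, illustrative Java sketch of one time step, using the standard gate
formulation (all names, sizes and weights here are made up, not our actual
code):

public class LstmCellSketch {

    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // One time step: input x, previous hidden state h, previous cell state c.
    // Each W* is a [hiddenSize][inputSize + hiddenSize] weight matrix, b* a bias vector.
    static double[][] step(double[] x, double[] h, double[] c,
                           double[][] Wi, double[][] Wf, double[][] Wo, double[][] Wg,
                           double[] bi, double[] bf, double[] bo, double[] bg) {
        int n = h.length;
        double[] z = concat(x, h);
        double[] newC = new double[n];
        double[] newH = new double[n];
        for (int j = 0; j < n; j++) {
            double i = sigmoid(dot(Wi[j], z) + bi[j]);   // input gate
            double f = sigmoid(dot(Wf[j], z) + bf[j]);   // forget gate
            double o = sigmoid(dot(Wo[j], z) + bo[j]);   // output gate
            double g = Math.tanh(dot(Wg[j], z) + bg[j]); // candidate cell state
            newC[j] = f * c[j] + i * g;                  // update cell state
            newH[j] = o * Math.tanh(newC[j]);            // update hidden state
        }
        return new double[][]{newH, newC};
    }

    static double dot(double[] w, double[] z) {
        double s = 0.0;
        for (int k = 0; k < z.length; k++) s += w[k] * z[k];
        return s;
    }

    static double[] concat(double[] a, double[] b) {
        double[] out = new double[a.length + b.length];
        System.arraycopy(a, 0, out, 0, a.length);
        System.arraycopy(b, 0, out, a.length, b.length);
        return out;
    }
}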
On Fri, Feb 12, 2016 at 8:45 AM, Trevor Grant wrote:
> Hey all,
>
> I had a post a while ago about needing neural networks. We specifically
> need a very special type that is good for time series/sensor data, called
> LSTM. We had a talk about the pros/cons of using deeplearning4j for this
> use case a