Yeah, I'd agree with Nick.
To have an implementation of RNN/LSTM in Spark, you may need a comprehensive
abstraction of neural networks that is general enough to represent the
computation (think of Torch, Keras, TensorFlow, MXNet, Caffe, etc.), and you
may need to modify the current computation engine to work with v
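As an aside, the per-timestep recurrence such an abstraction would have to express looks roughly like the sketch below: a single LSTM cell step in NumPy, driven by a plain Python loop over time. The function and parameter names here are illustrative, not taken from any of the frameworks mentioned above; the point is only that the engine must support this kind of stateful, sequential computation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4n, d), U: (4n, n), b: (4n,).

    The four gates are slices of one fused affine projection,
    which is how most frameworks lay out LSTM parameters.
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b        # fused pre-activations, shape (4n,)
    i = sigmoid(z[0:n])               # input gate
    f = sigmoid(z[n:2 * n])           # forget gate
    o = sigmoid(z[2 * n:3 * n])       # output gate
    g = np.tanh(z[3 * n:4 * n])       # candidate cell state
    c = f * c_prev + i * g            # new cell state
    h = o * np.tanh(c)                # new hidden state
    return h, c

# Unrolling over a sequence is just a loop carrying (h, c) forward:
rng = np.random.default_rng(0)
n, d, T = 4, 3, 5                     # hidden size, input size, sequence length
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for t in range(T):
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
```

The loop-carried dependence between timesteps is exactly what a static, feed-forward computation engine cannot represent without being extended.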
Hi Yuhao,
BigDL looks very promising, and it's a framework we're considering using. The
general approach to high-performance DL seems to be via GPUs, yet your project
mentions performance on a Xeon comparable to that of a GPU. Where does this
claim come from? Can you provide benchmarks?
Thanks,