I'm using rprop (not dependent on the error function in this case, i.e.
standard rprop vs. irprop or arprop) for an MLP (tanh, sigmoid) neural net as
part of a hybrid model. I guess I was using a little Matlab thinking
when I wrote the SSE function.  My batches are about 25,000 x 80, so my
absolute error (difference between net outputs and desired outputs) when
using *one* output unit is shape(~25000,). Am I wrong to assume
trace(error*transpose(error)) is the sum of the squared errors, which
should be a shape(1,)?  I'm just now starting to dig a little deeper
into scipy, and I need to get the full docs.
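To make the question concrete, here's a minimal numpy sketch of what I mean
(toy data, and names like net_out/target are just placeholders, not my actual
model code; the array is smaller than my real ~25000-row batch so the
outer-product version fits in memory):

import numpy as np

# Toy stand-ins for one batch with a single output unit.
rng = np.random.default_rng(0)
net_out = rng.standard_normal(2500)
target = rng.standard_normal(2500)
error = net_out - target                 # 1-D residuals, shape (2500,)

# Matlab-style: trace(e * e') is mathematically the sum of squared errors,
# but it materializes an N x N outer-product matrix first.
sse_trace = np.trace(np.outer(error, error))

# Same scalar without the big intermediate: dot the residuals with themselves.
sse = np.dot(error, error)               # equivalently (error ** 2).sum()

assert np.allclose(sse_trace, sse)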

Thanks for all your input.
