Does anyone here have a simple vanilla RNN trained with backprop that I can
compare to my simple Markov chain? The RNN must not have any semantics,
residuals, gates, ensembling, data augmentation, etc.: *only* the ability to
predict text (it must model order, I suppose, hence an RNN) and the ability
to use backprop to learn how to predict. I'm trying to investigate whether
(and by how much) backprop beats a net trained without backprop, and why
(what rules backprop learns). A sketch of the kind of net I mean follows.
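
To be concrete, here is a minimal sketch of what I'm asking for, under my
own assumptions: character-level text, one tanh hidden layer, cross-entropy
loss, truncated backprop through time, plain SGD. The corpus, sizes, and
learning rate are placeholders, not anyone's published settings.

import numpy as np

text = "hello world. hello world. hello world. "   # toy corpus; substitute your own
chars = sorted(set(text))
V = len(chars)                                     # vocabulary size
ix = {c: i for i, c in enumerate(chars)}
data = [ix[c] for c in text]

H, T, lr = 32, 8, 0.1                              # hidden size, BPTT window, learning rate
rng = np.random.default_rng(0)
Wxh = rng.normal(0, 0.01, (H, V))                  # input -> hidden
Whh = rng.normal(0, 0.01, (H, H))                  # hidden -> hidden (this carries the order)
Why = rng.normal(0, 0.01, (V, H))                  # hidden -> output
bh, by = np.zeros(H), np.zeros(V)

def step(inputs, targets, hprev):
    # Forward over one window, then backprop through time.
    xs, hs, ps, loss = {}, {-1: hprev}, {}, 0.0
    for t, (i, j) in enumerate(zip(inputs, targets)):
        xs[t] = np.zeros(V); xs[t][i] = 1.0        # one-hot input character
        hs[t] = np.tanh(Wxh @ xs[t] + Whh @ hs[t-1] + bh)
        y = Why @ hs[t] + by
        ps[t] = np.exp(y - y.max()); ps[t] /= ps[t].sum()  # softmax over next char
        loss -= np.log(ps[t][j])                   # cross-entropy
    grads = [np.zeros_like(w) for w in (Wxh, Whh, Why, bh, by)]
    dWxh, dWhh, dWhy, dbh, dby = grads
    dhnext = np.zeros(H)
    for t in reversed(range(len(inputs))):
        dy = ps[t].copy(); dy[targets[t]] -= 1.0   # gradient at the logits
        dWhy += np.outer(dy, hs[t]); dby += dy
        dh = Why.T @ dy + dhnext                   # gradient flowing back through time
        draw = (1.0 - hs[t] ** 2) * dh             # through the tanh
        dWxh += np.outer(draw, xs[t]); dWhh += np.outer(draw, hs[t-1]); dbh += draw
        dhnext = Whh.T @ draw
    for g in grads:
        np.clip(g, -5, 5, out=g)                   # keep vanilla-RNN gradients sane
    return loss, grads, hs[len(inputs) - 1]

h = np.zeros(H)
for epoch in range(201):
    total = 0.0
    for p in range(0, len(data) - T - 1, T):
        loss, grads, h = step(data[p:p+T], data[p+1:p+T+1], h)
        total += loss
        for w, g in zip((Wxh, Whh, Why, bh, by), grads):
            w -= lr * g                            # plain SGD, nothing else
    if epoch % 50 == 0:
        print(f"epoch {epoch}: loss {total:.2f}")

The fair comparison against the no-backprop baseline would then be: fit an
order-k Markov chain on the same text by counting, and compare per-character
cross-entropy on held-out text.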

Another question: does backprop learn rules like XOR? What about physics
inferences, like reading reflections off wood that in actuality encode a
cat's face not visible to the naked eye? (There is such an algorithm in its
early stages.) Or is backprop in a vanilla net only ever going to learn
syntax / backoff / random-forest / dropout-style behavior? A toy XOR check
follows.
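
For the XOR part, here is a toy check under my own assumptions: a small
2-4-1 sigmoid MLP trained with plain gradient descent on squared error; the
hidden size, seed, and step size are arbitrary choices. The known fact is
that a linear model with no hidden layer cannot represent XOR, while
backprop through one hidden layer usually learns it.

import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)          # XOR truth table

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)    # 2 inputs -> 4 hidden units
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)    # 4 hidden -> 1 output
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sig(X @ W1 + b1)                           # hidden layer
    out = sig(h @ W2 + b2)                         # output
    d_out = (out - y) * out * (1 - out)            # output delta (squared error)
    d_h = (d_out @ W2.T) * h * (1 - h)             # hidden delta, via backprop
    W2 -= h.T @ d_out; b2 -= d_out.sum(0)          # step size 1, kept plain
    W1 -= X.T @ d_h;   b1 -= d_h.sum(0)

print(np.round(out.ravel(), 2))                    # usually converges to ~[0, 1, 1, 0]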