Compressed Network Search Finds Complex Neural Controllers with a Million
Weights
First Deep Learner to learn control policies directly from high-dimensional
sensory input using reinforcement learning
Jürgen Schmidhuber, 2013
http://people.idsia.ch/~juergen/compressednetworksearch.html

On Fri, Oct 5, 2018 at 4:05 PM Jim Bromer via AGI <[email protected]>
wrote:

> A good goal for a next-generation compression system is to allow
> functional transformations to operate on compressed data without
> needing to decompress it first. (I forgot what this is called, but
> there is a Wikipedia entry on something similar in cryptography.)
> This is how multiplication works, by the way.
> 
> If a 'dynamic compression' were performed in stages using 'components'
> which had certain abstract attributes that could be used in
> computations that were done in multiple passes, then it might be
> possible to postpone a complete analysis or computation until the data
> was presented in a more abstract format (relative to the given
> problem). The goal is to find a way to make each pass effective but
> far less complicated. The idea is that the data 'components'
> (the data produced by a previous pass) might have certain abstract
> properties that were general, and subsequent passes might then operate
> on narrower classes. (This is how many algorithms work, now that I
> think about it, but they are not described and defined using the
> concept of compression abstractions as a fundamental principle.)
> Jim Bromer
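
Re the first point (applying functional transformations to compressed data
without decompressing it first): here is a toy sketch of my own, just to make
the idea concrete. It uses run-length encoding as the "compressed" form, and
the helper names (rle_encode, rle_sum, rle_map) are mine, not anything from an
existing library. A sum and an elementwise map are computed directly on the
(value, count) pairs, without ever expanding them:

# Hypothetical sketch: run-length encoding as a toy "compressed" representation.
# Some computations can be carried out on the (value, count) pairs directly,
# i.e. without decompressing back to the raw sequence.

def rle_encode(xs):
    """Compress a sequence into [value, run_length] pairs."""
    out = []
    for x in xs:
        if out and out[-1][0] == x:
            out[-1][1] += 1
        else:
            out.append([x, 1])
    return out

def rle_sum(encoded):
    """Sum of the original sequence, computed on the compressed form."""
    return sum(value * count for value, count in encoded)

def rle_map(f, encoded):
    """Apply f elementwise without decompressing (runs may merge if f is not injective)."""
    return [[f(value), count] for value, count in encoded]

data = [2, 2, 2, 7, 7, 1, 1, 1, 1]
enc = rle_encode(data)                     # [[2, 3], [7, 2], [1, 4]]
assert rle_sum(enc) == sum(data)           # 24, computed without decompressing
# Holds here because v * 10 keeps distinct values distinct:
assert rle_map(lambda v: v * 10, enc) == rle_encode([v * 10 for v in data])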

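And re the multi-pass idea: the sketch below is my own guess at a minimal shape
for it, not a description of anything Jim has specified. The names (pass_one,
pass_two), the choice of attributes, and the two-pass split are all assumptions.
A first pass compresses raw text into "components" carrying a couple of abstract
attributes; a later pass never re-reads the raw data and operates only on the
narrower class of components it cares about:

# Hypothetical two-pass sketch; names and attributes are illustrative only.
import re
from collections import Counter

def pass_one(text):
    """First pass: reduce raw text to components with abstract attributes."""
    counts = Counter(re.findall(r"\w+", text.lower()))
    return [{"token": tok,
             "kind": "number" if tok.isdigit() else "word",
             "count": n}
            for tok, n in counts.items()]

def pass_two(components, kind):
    """Later pass: operate only on one abstract class of components."""
    relevant = [c for c in components if c["kind"] == kind]
    return sorted(relevant, key=lambda c: c["count"], reverse=True)

comps = pass_one("7 cats, 7 dogs, and 42 cats in 7 rooms")
print(pass_two(comps, "word")[:3])     # most frequent word components
print(pass_two(comps, "number")[:3])   # most frequent number components
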
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T55454c75265cabe2-M01199666719c06c491928b24
Delivery options: https://agi.topicbox.com/groups/agi/subscription
