The idea that an AGI program has to be able to 'grow' knowledge is not conceptually radical, but the idea that a program might be seeded with certain kinds of insights does make me think about the problem in a slightly different way. If a program is built on principles that let it build on insights supplied to it as it explores different kinds of subjects, then I can see the theory as a transition: from programming discrete instructions that correspond to particular sequences of computer operations, to programming with instructions that have the potential to grow relationships between items of knowledge.

The kinds of relationships do not need to be absolutely pre-determined, because basic relationships and references to specific ideas can implicitly develop into more sophisticated relationships that would only need to be recognized. For example, the abstraction of generalization seems pretty fundamental to Old AI. However, I believe that just by using more basic relationships, which can refer to other specific ideas and to groups of ideas, relationships that effectively amount to a kind of abstraction may develop naturally, in primitive forms. It would then be necessary to 'teach' the AGI program to recognize and appreciate these abstractions so that it could use abstraction more explicitly.

Jim Bromer
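P.S. To make the idea of basic relationships growing into a primitive abstraction a little more concrete, here is a minimal Python sketch of one way it might look. It is only an illustration under my own assumptions, not a design: the names (KnowledgeStore, relate, propose_abstractions) are hypothetical, and the 'recognition' step is just a pattern scan that notices when several specific ideas share the same basic relationship to the same target and proposes a named abstraction for them.

    from collections import defaultdict

    class KnowledgeStore:
        def __init__(self):
            # (relation, target) -> the set of ideas that share that relationship
            self.edges = defaultdict(set)

        def relate(self, source, relation, target):
            # Record one basic relationship between two ideas.
            self.edges[(relation, target)].add(source)

        def propose_abstractions(self, min_members=3):
            # Recognize a primitive abstraction: several specific ideas
            # standing in the same basic relationship to the same target.
            proposals = []
            for (relation, target), sources in self.edges.items():
                if len(sources) >= min_members:
                    name = f"things-that-{relation}-{target}"
                    proposals.append((name, sorted(sources)))
            return proposals

    store = KnowledgeStore()
    for bird in ("sparrow", "crow", "finch"):
        store.relate(bird, "can", "fly")
    store.relate("penguin", "cannot", "fly")

    # The shared relationship is surfaced as a candidate abstraction that
    # later instructions could refer to explicitly.
    print(store.propose_abstractions())
    # e.g. [('things-that-can-fly', ['crow', 'finch', 'sparrow'])]

The point of the toy is only that nothing about 'things that can fly' was pre-determined; the abstraction falls out of accumulated basic relationships and merely has to be noticed and given a handle the program can use later.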