I was thinking about it, and the No Free Lunch theorem is relevant to a large
number of computational algorithms, but it is not a problem when the time
differences (between runs of an algorithm) do not bother us. By thinking about
simpler problems I was able to start to see some of the related issues in
greater detail. Trivial pseudo-solutions, like making sure every use of an
algorithm took the same amount of time, just shift the problem. But I do feel
that by examining sub-problems in greater detail I was able to explore some
ideas that seemed new to me. A purely mathematical implementation of general
intelligence does not seem possible to me, but there are two caveats. One is
that mathematics is an essential part of using contemporary computers, so math
must be a part of artificial general intelligence; the second is that
anything done on a computer is hypothetically open to (some kind of)
mathematical treatment. One issue that I have is that you have to explore less likely
possibilities (when it is reasonably safe and reasonably ethical) in order to
explore the neighborhood. General knowledge must (to some extent) be
reason-based, and you need to fit new fragments of ideas into other relevant ideas.
That means that the most likely interpretation of some situational data is not
always the best interpretation to use.
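To make the No Free Lunch point concrete, here is a minimal sketch (the tiny domain, the two fixed visit orders, and the helper name `best_after` are all my own illustrative choices, not anything from the theorem's formal statement): two deterministic search strategies that visit the same points in opposite orders achieve exactly the same average performance when averaged over every possible objective function on the domain.

```python
from itertools import product

domain = [0, 1, 2]   # points the search can evaluate
values = [0, 1, 2]   # possible objective values at each point

def best_after(order, f, k):
    """Best objective value found after k evaluations in the given visit order."""
    return max(f[x] for x in order[:k])

forward = [0, 1, 2]   # one search strategy
backward = [2, 1, 0]  # a "different" strategy: same points, reverse order

# Enumerate every objective function f: domain -> values (27 of them here).
functions = [dict(zip(domain, vs)) for vs in product(values, repeat=len(domain))]

for k in (1, 2, 3):
    avg_fwd = sum(best_after(forward, f, k) for f in functions) / len(functions)
    avg_bwd = sum(best_after(backward, f, k) for f in functions) / len(functions)
    # Averaged over all functions, the two strategies are indistinguishable.
    assert avg_fwd == avg_bwd
    print(f"after {k} evaluations: forward avg={avg_fwd:.3f}, backward avg={avg_bwd:.3f}")
```

Any particular function favors one order over the other, but no strategy wins on average over all of them, which is why the theorem only bites when you care about performance differences across the whole problem class.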
------------------------------------------
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/Ta433301e9ac5fb42-M738e95cc240932d03e67c6f0
Delivery options: https://agi.topicbox.com/groups/agi/subscription