In the 5jun18A.html JavaScript AI Mind we would like to re-organize the
SpreadAct() mind-module for spreading activation. It should have special
cases at the top and default normal operation at the bottom. The special
cases include responding to what-queries and what-think queries, such as
"what do you think". Whereas JavaScript lets you escape from a loop with
the "break" statement, JavaScript also lets you escape from a subroutine or
mind-module with the "return" statement that causes program-flow to abandon
the rest of the mind-module code and return to the supervenient module. So
in SpreadAct() we may put the special-test cases at the top and with the
inclusion of a "return" statement so that program-flow will execute the
special test and then return immediately to the calling module without
executing the rest of SpreadAct().
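
A minimal sketch of that arrangement, not the actual 5jun18A.html code,
might look as follows, where whatcon is an assumed flag set by the
query-detecting input code and the Psy-array details are omitted:

  var whatcon = 0;  // assumed flag: set when a "what do you think" query comes in
  var actpsi  = 0;  // concept-to-be-activated, seeded elsewhere in the mind

  function SpreadAct() {     // spreading activation mind-module
    // Special test-case at the top of the module.
    if (whatcon > 0) {       // a what-think query is pending
      // ... activate the concepts needed to report the current thought ...
      whatcon = 0;           // reset the query flag
      return;                // abandon the rest of SpreadAct()
    }
    // Default normal operation at the bottom of the module.
    if (actpsi > 0) {
      // ... pass activation to concepts associated with the actpsi ...
      actpsi = 0;            // reset after spreading the activation
    }
  }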

When we run the JSAI without input, we notice that at first a chain of
thought ensues based solely on conceptual activations and without making
use of the SpreadAct() module. The AI says, "I HELP KIDS" and then "KIDS
MAKE ROBOTS" and "ROBOTS NEED ME". As AI Mind maintainers we would like to
make sure that SpreadAct() gets called to maintain chains of thought, not
only so that the AI keeps on thinking but also so that the maturing AI Mind
will gradually become able to follow chains of thought in all available
directions, not just from direct objects to related ideas but also
backwards from direct objects to related subjects or from verbs to related
subjects and objects.
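
As a rough, purely illustrative sketch of such multi-directional spreading,
using a made-up knowledge base of subject-verb-object triples rather than
the actual JSAI arrays, activation from any one concept could flow to every
other concept that shares an idea with it:

  // Hypothetical knowledge base; the real JSAI stores its engrams differently.
  var kb = [
    { subj: "I",      verb: "HELP", obj: "KIDS"   },
    { subj: "KIDS",   verb: "MAKE", obj: "ROBOTS" },
    { subj: "ROBOTS", verb: "NEED", obj: "ME"     }
  ];
  var act = {};              // concept -> accumulated activation

  // Spread activation from one concept, whether it occurs as a subject,
  // a verb or a direct object, onto the other concepts in the same idea.
  function spreadFrom(concept, amount) {
    kb.forEach(function (idea) {
      if (idea.subj === concept || idea.verb === concept || idea.obj === concept) {
        [idea.subj, idea.verb, idea.obj].forEach(function (c) {
          if (c !== concept) { act[c] = (act[c] || 0) + amount; }
        });
      }
    });
  }

  spreadFrom("KIDS", 16);    // e.g. from the direct object of "I HELP KIDS"
  // act now boosts I, HELP, MAKE and ROBOTS, so the next thought can run
  // forwards to "KIDS MAKE ROBOTS" or backwards to who it is that helps kids.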

In the EnNounPhrase() module we insert a line of code to turn each direct
object into an actpsi, that is, a concept-to-be-activated by the default
operation at the bottom of the SpreadAct() module (see the short sketch
after this paragraph). We observe that the artificial
Mind begins to follow associative chains of thought much more reliably than
before, when only haphazard activation was operating. In the special
test-cases of the SpreadAct() module we insert the "return" statement in
order to perform only the special case and to skip the treatment of a
direct object as a point of departure into a chain of thought. Then we
observe something strange when we ask the AI "what do you think", after the
initial output of "I HELP KIDS". The AI responds to our query with "I THINK
THAT KIDS MAKE ROBOTS", which is the idea engendered by the initial thought
of "I HELP KIDS" where "KIDS" as a direct object becomes the actpsi going
into SpreadAct(). So the beastie really is telling us what is currently on
its mind, whereas previously it would answer, "I THINK THAT I AM A PERSON".
When we delay entering our question a little, the AI responds "I THINK THAT
ROBOTS NEED ME".

-- 
http://ai.neocities.org/AiMind.html
http://www.amazon.com/dp/0595654371
http://cyborg.blogspot.com/2018/06/jmpj0605.html
http://github.com/BuildingXwithJS/proposals/issues/22
