Alan

I suppose you already possess a computational framework to process the research 
you are proposing as critical? In other words, I think that without the 
intelligence architecture, none of the functionality would be AGI-effective.

Perhaps it is thought that an AGI architecture would just automatically evolve 
from additional functionality, as a spontaneous brain of sorts? That seems to 
be the hypothesis of a number of players today. I do not hold this view, simply 
because it is not prescribed by studies of adaptive functionality in nature, 
but I understand how some may feel such a progression is worthwhile.

Comments?

Rob

________________________________
From: Alan Grimes <[email protected]>
Sent: Thursday, 07 June 2018 10:28 PM
To: [email protected]
Subject: [agi] The four things needed to solve AGI.

I'm becoming increasingly horrified by how people are entertaining
Mentifex. We really really don't have any more time to waste on that
crackpot asshole. =|

We are right on the eve of the singularity. We have no more time for
bullshit.

In a desperate effort to steer things back on course, here is a list of
things, from my POV, from my limited understanding, that we need to do
to get to AGI.


1. Our current neural models are fairly good, but there is a major trick
they seem to be missing: the space-time rotation. For example, in
the hearing system, the ear translates the temporal frequency
information into a "tonotopic" map on the auditory cortex. This is a
time -> space rotation of the signal. I'm pretty sure the visual system
rotates the spatial information from the eyes into a temporal signal
that is used to detect patterns. In computer vision, the latest
technique I know of botches this by scanning the perceptron across
the input. This is a hack! Rotate it into a temporal signal and it
becomes much easier to analyze.
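To make the time -> space rotation concrete: the classic engineering analogue of the cochlea's tonotopic map is a short-time Fourier transform, which turns a purely temporal signal into a (time, frequency) map. This is only an illustrative sketch of that analogy (the frame length, hop size, and test tone are arbitrary choices, not anything from the original post):

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """STFT magnitude: rotates a 1-D temporal signal into a
    (time, frequency) map, loosely analogous to the ear projecting
    temporal frequency content onto a spatial tonotopic axis."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (num_frames, frame_len // 2 + 1)

# A 440 Hz tone sampled at 8 kHz concentrates its energy in one
# frequency bin: bin index ~= 440 * frame_len / sample_rate ~= 14.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spec = spectrogram(tone)
peak_bin = int(spec.mean(axis=0).argmax())
```

Each column of `spec` plays the role of one "place" on the tonotopic map; a frequency that was only implicit in the waveform's timing becomes an explicit spatial coordinate.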


2. There is quite a bit of research into evolving better deep networks,
tweaking the number and characteristics of deep networks to try to
achieve various metrics. THIS IS WRONG. WRONG!!!! I SAY...
Wroooooong!!!!!!!!!!!!!!   What the brain does is have a very small
number of stereotypical neural circuits that it trains on different
inputs/behaviors and recruits them as needed. Search for the
cortico-thalamo-cortical loop.
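The contrast in point 2 can be sketched in a few lines: instead of evolving a bespoke architecture per task, keep one stereotyped circuit with shared weights and recruit it for different input streams. This is a toy illustration, not a claim about how the cortico-thalamo-cortical loop actually computes; the module sizes and inputs are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

class CanonicalCircuit:
    """One stereotyped micro-circuit (a tiny linear + ReLU block).
    Its weights are shared: the same circuit is 'recruited' for
    different modalities rather than redesigned per task."""
    def __init__(self, dim_in, dim_out):
        self.W = rng.normal(scale=0.1, size=(dim_out, dim_in))

    def __call__(self, x):
        return np.maximum(0.0, self.W @ x)

circuit = CanonicalCircuit(dim_in=64, dim_out=32)
vision_patch = rng.normal(size=64)   # hypothetical visual input
audio_frame = rng.normal(size=64)    # hypothetical auditory input
v_out = circuit(vision_patch)        # same weights...
a_out = circuit(audio_frame)         # ...recruited for both streams
```

The point of the sketch is that `circuit.W` is a single parameter block: what varies is the wiring (which inputs get routed to it), not the circuit itself.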


3. >>>> Perception is imagination <<<<. -> Figure out how to make the
above powerful enough to produce an imagination process sufficient to
make high-quality short-term (on the order of a fraction of a second)
predictions of the input signal.
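A minimal stand-in for "predict the input signal a fraction of a second ahead" is a linear autoregressive model fit by least squares: forecast the next sample from the last few. This is only a toy sketch of the prediction idea (the model order and the 5 Hz test signal are assumptions for illustration), far weaker than the imagination process the post is asking for:

```python
import numpy as np

def fit_ar_predictor(signal, order=8):
    """Least-squares autoregressive model: predicts the next input
    sample from the previous `order` samples -- a toy version of
    continuously forecasting the sensory stream a short time ahead."""
    X = np.array([signal[i:i + order] for i in range(len(signal) - order)])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

sr = 1000
t = np.arange(2 * sr) / sr
x = np.sin(2 * np.pi * 5 * t)   # a perfectly predictable 5 Hz input
w = fit_ar_predictor(x)
pred = x[-8:] @ w               # the "imagined" next sample
```

For a clean periodic input the prediction is essentially exact; the interesting (and open) part is scaling this up to rich, noisy sensory streams.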


4. A robotic and/or virtual avatar system so that the AI can experience
a reasonable approximation of humanness, to facilitate psychological
development, communication, and education. It may not be strictly
necessary, but it will make it much, much easier for sub-geniuses to
develop and use AI systems.


Ideally, there would be more powerful self-optimization processes to
tune the dimensionality of the neural matrices and such, but that is not
really necessary at this point to reach human equivalence.





--
Please report bounces from this address to [email protected]

Powers are not rights.


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T507c404b4595c71c-Mae309058b80202dfe2d0bb24
Delivery options: https://agi.topicbox.com/groups
