I wanted to code my great work while holding a humble job that would not distract me.
But all jobs are taken.
A fraction of my great work:
https://www.academia.edu/37275998/A_Nice_Artificial_General_Intelligence_How_To_Make_A_Nice_Artificial_General_Intelligence
Your connection with other people,
On 9/27/19, Steve Richfield wrote:
> YKY,
>
> The most basic function of neurons is process control. That is where
> evolution started - and continues. We are clearly an adaptive control
> system. Unfortunately, there has been little study of the underlying
> optimal "logic" of adaptive control systems
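A rough sketch of that adaptive-control picture, just to make it concrete (my own toy illustration, not anything from Steve's work; the plant dynamics, the gain-update rule, and all names are assumptions):

# One neuron-like unit with a single adaptive gain steering a simple
# first-order process toward a setpoint (toy sketch, all values assumed).
def adaptive_control_demo(setpoint=1.0, steps=50, lr=0.1):
    state = 0.0   # the controlled process variable
    gain = 0.0    # adaptive weight, tuned online like a synapse
    for _ in range(steps):
        error = setpoint - state      # sensed error
        action = gain * error         # proportional control output
        state += 0.5 * action         # toy first-order plant response
        gain += lr * error * error    # crude adaptation: raise gain while error persists
    return state, gain

print(adaptive_control_demo())  # state ends up close to the setpoint

Nothing optimal about it, but it shows the flavor: the "logic" is a loop of sensing, acting, and adapting.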
Uh... so where is it on GitHub?
On Sun, 22 Sep 2019 at 01:41, YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:
> Anyone interested in a genetic-evolution approach to learning logic rules?
> Each logic rule would be encoded as a gene (individual), and the whole set
> of rules would evolve as a population.
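To make the proposal concrete, here is a hedged toy sketch of that kind of setup (my own illustration, not YKY's code; the rule encoding, fitness function, and target concept are all assumptions): each individual encodes one conjunctive rule over three boolean inputs, and a small population evolves by mutation plus selection.

import random

# Gene: one literal per variable; 1 = require True, -1 = require False, 0 = don't care.
def random_rule():
    return [random.choice([-1, 0, 1]) for _ in range(3)]

def rule_fires(rule, x):
    return all(lit == 0 or (lit == 1 and xi) or (lit == -1 and not xi)
               for lit, xi in zip(rule, x))

def fitness(rule, data):
    # Fraction of examples where the rule's firing matches the label.
    return sum(rule_fires(rule, x) == y for x, y in data) / len(data)

def mutate(rule):
    child = rule[:]
    child[random.randrange(len(child))] = random.choice([-1, 0, 1])
    return child

# Toy target concept: y = x0 AND (NOT x2), over all 8 input combinations.
data = [((a, b, c), bool(a and not c))
        for a in (0, 1) for b in (0, 1) for c in (0, 1)]

population = [random_rule() for _ in range(20)]
for generation in range(50):
    population.sort(key=lambda r: fitness(r, data), reverse=True)
    parents = population[:10]                       # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

best = max(population, key=lambda r: fitness(r, data))
print(best, fitness(best, data))  # typically reaches fitness 1.0 on this toy target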
Neuronal Pools and Neural Processing:
https://www.youtube.com/watch?v=QJ8AW5pi2T4
Boolean logic is a subset of neural networks: a single neuron can implement
AND, OR, or NOT (and, composed into a network, any boolean function). Assume
the output is clamped between 0 and 1.
A and B = A + B - 1.
A or B = A + B.
Not A = -A + 1.
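A minimal executable sketch of the claim above (my illustration; the helper name and clamping function are mine, not Matt's): one linear unit with its output clamped to [0, 1] reproduces the three gates.

def neuron(weights, bias, inputs):
    # Weighted sum, then clamp the output to [0, 1].
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return min(1, max(0, s))

AND = lambda a, b: neuron([1, 1], -1, [a, b])   # A + B - 1, clamped
OR  = lambda a, b: neuron([1, 1],  0, [a, b])   # A + B, clamped
NOT = lambda a:    neuron([-1],    1, [a])      # -A + 1, clamped

for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
    assert NOT(a) == 1 - a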
But first order logic is not so simple. We also know from 35 years of
experience (beginning with Cyc) that
On Mon, Sep 30, 2019, 7:31 AM wrote:
> I wanted to code my great work while holding a humble job that would not
> distract me.
> But all jobs are taken.
>
That's not how it works. I had this great idea in 1999 for testing language
models using text compression. So I did lots of experiments and published
Matt is right - logic needs to be grounded in experience.
http://matt.colorado.edu/teaching/highcog/readings/b8.pdf
Your paper is not bad. Pattern loops, or even physical temporal pattern
loops - I'll remember that. Also the "object identity memory" level you have
described is interesting. AI really just consists of *many smart procedures*,
and I'm making them daily.
On Mon, 30 Sep 2019 at 15:31, wrote:
> I wa
Thanks Stefan.
Not to argue with Matt's well-founded main point, but here is a pedantic nit:
Any deterministic dynamical system can be simulated to an arbitrary degree of
precision with a finite directed *cyclic* NOR (or NAND) graph. There was even
some guy at the second IJCNN in San Diego (1990) with a "trainable
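For a concrete taste of why the *cyclic* part matters (my own sketch, not the 1990 IJCNN work; the gate names and update scheme are assumptions): two cross-coupled NAND gates form an SR latch, a stateful system that no acyclic gate graph reproduces.

def NAND(a, b):
    return 1 - (a & b)

def sr_latch(s, r, q, qbar, iters=4):
    # Iterate the cyclic two-gate graph until its outputs settle.
    for _ in range(iters):
        q, qbar = NAND(s, qbar), NAND(r, q)
    return q, qbar

q, qbar = 0, 1
q, qbar = sr_latch(s=0, r=1, q=q, qbar=qbar)   # set (inputs are active-low) -> q = 1
q, qbar = sr_latch(s=1, r=1, q=q, qbar=qbar)   # hold -> the latch remembers q = 1
q, qbar = sr_latch(s=1, r=0, q=q, qbar=qbar)   # reset -> q = 0
print(q, qbar)                                  # (0, 1)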
On Mon, Sep 30, 2019 at 11:04 PM Stefan Reich via AGI
wrote:
> Uh... so where is it on GitHub?
>
The code is here (still under development):
https://github.com/Cybernetic1/GILR
There are further explanations in the README and some screenshots.
😊