The brain has enormous redundancy everywhere. It uses 6 x 10^14 synapses to
store about 10^9 bits of long-term memory plus another 10^9 bits of inherited
knowledge. But parallel systems are like that. Each of the million or so CPUs
in Google's server farms carries an identical copy of Linux. Each of your
10^13 cell nuclei has an identical copy of your DNA. It's a trade-off between
speed and memory.
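As a sanity check on the redundancy factor, a back-of-the-envelope sketch
(all inputs are the order-of-magnitude estimates above):

    # Synapses per stored bit, using the rough figures above.
    synapses = 6e14            # synapses in the human brain
    stored_bits = 1e9 + 1e9    # long-term memory + inherited knowledge
    print(f"{synapses / stored_bits:g} synapses per bit")  # ~3e5

So storage is redundant by a factor of roughly 10^5, which is the price
parallel hardware pays for speed.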

On Sun, Oct 13, 2019, 12:36 PM James Bowery <[email protected]> wrote:

> There is an enormous amount of redundancy in the abstract thalamocortical
> architecture, which suggests a small Kolmogorov complexity of its
> description.  While I understand "the devil is in the details" of this
> evolved structure (not least that the abstraction elides the cerebellum,
> whose neuron count is a super-majority of the brain's), there seems to be
> a vast theoretic vacuum where the requisite simplicity should be.  It's
> the dog that didn't bark.
>
> That's why I take Hecht-Nielsen's Confabulation Theory seriously:  not
> because I believe, as he did, that he "solved the problem of cognition",
> but because he has a first-order approximation of the neocortical (indeed
> thalamocortical) structure -- at least one barking dog -- an _approach_.
> It's more like a framework for compression, such as a mixture of models,
> than the models themselves.
>
> On Sun, Oct 13, 2019 at 12:07 PM Matt Mahoney <[email protected]>
> wrote:
>
>>
>>
>> On Sun, Oct 13, 2019, 10:09 AM <[email protected]> wrote:
>>
>>> Isn't that massively inefficient? It'd take 100 times more
>>> storage/computation to do the same thing as a weighted net, no?
>>>
>>
>> The neural models I use in the top-ranked text compressors use a lot less
>> than 12-24 petaflops and a petabyte of RAM. But the language modeling is
>> rather rudimentary, nowhere near AGI. Still, I would be happy for you to
>> prove my estimate wrong.
>>
>> And one more thing: that's one human brain. To automate all labor, you
>> need several billion times that. Current technology uses about 1 megawatt
>> per petaflop. Maybe neuromorphic computing could get it down to 100 kW per
>> brain. Maybe economies of specialization could reduce it to 1 kW per
>> brain, which across billions of brains is still about 50% of global energy
>> production. But shrinking transistors alone won't do it. If that
>> optimization falls short, it's going to take nanotechnology, moving atoms
>> instead of electrons. The brain uses 20 watts. It can be done.
>>
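For the curious, the "neural models" in those compressors, and the "mixture
of models" framing James mentions, amount to logistic mixing: each context
model predicts the next bit, and a learned weight vector combines the
predictions in the logistic domain. A minimal sketch, simplified from what
PAQ-style mixers actually do (real ones select weights by context and mix
hundreds of models):

    import math

    def stretch(p):
        # Map a probability to the real line (the logit function).
        return math.log(p / (1.0 - p))

    def squash(x):
        # Inverse of stretch (the logistic function).
        return 1.0 / (1.0 + math.exp(-x))

    def mix(probs, weights):
        # Combine per-model bit predictions in the logistic domain.
        return squash(sum(w * stretch(p) for p, w in zip(probs, weights)))

    def update(probs, weights, bit, lr=0.01):
        # Online gradient step on coding cost: models that predicted the
        # observed bit well gain weight.
        err = bit - mix(probs, weights)
        return [w + lr * err * stretch(p) for w, p in zip(weights, probs)]

    probs = [0.9, 0.3]                  # two models disagree on the next bit
    weights = [0.5, 0.5]
    print(mix(probs, weights))          # combined prediction, ~0.66
    weights = update(probs, weights, bit=1)

The weight update is gradient descent on the compressed size, so whichever
models predict well on the current data quickly dominate the mix.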
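And to put numbers on the quoted power estimates (the 10^10 worker count and
the ~18 TW figure for world primary energy production are my own round
assumptions):

    # Order-of-magnitude check on the power estimates above.
    workers = 1e10          # "several billion" brain-equivalents, rounded up
    world_tw = 18.0         # rough world primary energy production, in TW

    # 12,000 kW/brain = 12 petaflops at today's ~1 MW per petaflop;
    # 100 kW/brain = the neuromorphic hope; 1 kW/brain = specialization.
    for kw_per_brain in (12000, 100, 1):
        tw = workers * kw_per_brain * 1e3 / 1e12
        print(f"{kw_per_brain} kW/brain -> {tw:g} TW "
              f"({100 * tw / world_tw:.0f}% of world supply)")

Only the 1 kW case lands anywhere near feasible (~10 TW, about half of world
supply), which is the point: transistor shrinkage alone doesn't get there,
but the brain's 20 watts shows the physics allows it.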

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td4a5dff7d017676c-Mb6dec56c4ea8c19e7751d482
Delivery options: https://agi.topicbox.com/groups/agi/subscription
