Matt (and Colin), I would like to clarify what I think is your real question:
Nothing complex works the first time. You build it, you turn it on, it doesn't work, and you start debugging it. It is very close to impossible to debug anything without a solid understanding of how it should work; I wrote a book about how to go about such things, entitled "Advanced Logical Methods," available free in the Library of FixLowBodyTemp.com.

Anyway, a variety of neuromorphic chips have been made over the years, and there was no unknown physics with these. Have any of them been made to work satisfactorily? What would be the debug plan for Colin's chip?

The GREAT barrier I see is "field effects" other than those Colin has considered. My own favorite is the variable conductivity within a neural tube. If a tube goes abruptly from conductor to insulator at its inner circumference, then the Hall effect would be minimal. However, if there is a region of increasing resistance at the outside of the conductive region, then there would be a major inhibiting effect from even tiny nearby magnetic fields. Remember, fields drop off as inverse linear rather than inverse square from long wire radiators, so interneuronal field communication is MUCH easier than it might first appear. I suspect there are lots of such effects, and they don't all scale the same way as size changes. Further, some things might be VERY sensitive, like our own sodium levels, where we can become VERY sick if they are a little "off."

I suspect Colin is trying to skip some important steps between where his head is at now and new hardware - like simulation. Maybe there is a high-school science-fair-level demo and early debug prototype - like large models of several neurons in a fish tank full of salt water - in effect an electrolytic computer, like I mentioned in an earlier posting?

Anyway, I see Colin's field-theory concept, but Colin needs to see a lot more, including a clear path through design and debugging, before we can really discuss this.
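To put rough numbers on that falloff claim, here is an illustrative sketch (arbitrary constants, not a model of neural tissue): the field around a long straight wire scales as 1/r, while a point-like source scales as 1/r², so doubling the distance halves the first but quarters the second.

```python
# Illustrative comparison of field falloff laws. The constant k is
# arbitrary; this only demonstrates the scaling, not real magnitudes.
def wire_field(r, k=1.0):
    return k / r          # long straight wire: field ~ 1/r

def point_field(r, k=1.0):
    return k / r**2       # point-like source: field ~ 1/r^2

r_near, r_far = 1.0, 2.0
print(wire_field(r_far) / wire_field(r_near))    # -> 0.5  (halved)
print(point_field(r_far) / point_field(r_near))  # -> 0.25 (quartered)
```

At ten times the distance the wire's field is still 10% of its original strength, versus 1% for the point source - which is the sense in which long-radiator coupling is "MUCH easier" than intuition from inverse-square suggests.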
So, Colin, having read this, it is back to Matt's question: how do you hope and expect this to work? Thanks, both of you, for your persistence here.

Steve

On Tue, Jul 2, 2019, 2:06 PM Matt Mahoney <[email protected]> wrote:

> So if computation is not behind intelligence (based on 65 years of AGI
> failure) and you have no idea what is, then what is the basis of your chip
> design, and what do you hope to accomplish with it?
>
> On Tue, Jul 2, 2019, 10:00 AM Colin Hales <[email protected]> wrote:
>
>> On Tue, Jul 2, 2019 at 10:59 PM Matt Mahoney <[email protected]> wrote:
>>
>>> Colin, in case you haven't noticed, Peter has actually produced some AI
>>> (aigo, which seems to have better language understanding than Amazon's
>>> Alexa, at least in the demos I've seen), while all you have is a theory
>>> that AI comes from consciousness, which comes from EM waves or some other
>>> mysterious physics that can't be modeled in software. You argue for the
>>> scientific method, yet you argue for a theory which is not testable because
>>> you defined consciousness (qualia) in such a way that it is not detectable.
>>> Well, good luck.
>>>
>>> Your argument for qualia is to poke me in the eye and say "are you going
>>> to argue that wasn't real?" Of course not. Our brains can't just turn off
>>> pain, because if they could, we would become extinct. But I can also write
>>> a program that claims to feel pain and modifies its behavior to avoid it,
>>> thus passing the same test that people have argued proves that lobsters
>>> feel pain when you boil them.
>>> http://mattmahoney.net/autobliss.txt
>>>
>>> Am I missing something?
>>
>> Regrettably, yes. My claims about EM fields or any other basis for an
>> artificial brain/consciousness are moot.
>>
>> It doesn't matter what physics I choose. 50 people could choose 50
>> different aspects of the brain physics they hold as originating
>> intelligence/consciousness (however interdependent these things are).
>> I'm saying that the generic framework in which such choices are correctly
>> evaluated for necessity/sufficiency in creating an artificial brain
>> involves replicating your chosen physics, and that the location of it as a
>> science activity is in (e) LEFT. You also examine models of it in (e) RIGHT,
>> and only together can you scientifically examine the necessity of the
>> chosen physics.
>>
>> In doing this, all I am doing is making the science like every other
>> science of a natural phenomenon. It is merely being normalised.
>>
>> I just happen to have chosen EM fields because of how deeply they are
>> involved (they literally originate all the brain's signalling).
>>
>> You are trying to invalidate a correction to a *general*
>> (physics-agnostic) framework for the science of AGI by questioning the
>> validity of a *particular* instance of science done under the framework.
>>
>> AGI science currently has no (e) LEFT activity at all. I'm trying to get
>> the science framework corrected. I'd be happy to have my physics choices
>> invalidated ... scientifically ... but that gets done in a science done
>> with both (e) LEFT and (e) RIGHT, not (e) RIGHT on its own.
>>
>> As it happens, when you use the complete science framework you have an
>> empirical option for testing consciousness that you didn't have before,
>> and it is in that context that consciousness becomes empirically tractable.
>> But not if you don't do (e) LEFT.
>>
>> So getting the science framework right is #1. I can't even discuss my
>> chip approach without it.
>>
>> Let's say there are 100 possibilities for the necessary physics in real
>> AGI. Only one of them is NONE. That is the only one that ever gets
>> explored by (e) RIGHT alone (with computers that throw all brain physics
>> out). The other 99 get completely lost. They are absent because of a
>> broken science framework, not because they've been proved unnecessary.
>>
>> Does that help?
>> Colin

> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> + delivery
> options <https://agi.topicbox.com/groups/agi/subscription> Permalink
> <https://agi.topicbox.com/groups/agi/T87761d322a3126b1-Md96fad97b5cbd47c3474ba5a>
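[Footnote: Matt's point about a program that "claims to feel pain and modifies its behavior to avoid it" can be sketched in a few lines. This is a hypothetical illustration of that idea, not his actual autobliss.txt; the class name, actions, and reward rule are invented for the example.]

```python
import random

# Minimal sketch: an agent "reports" pain and learns to avoid the
# action that causes it, via a trivial reinforcement rule.
class PainAvoider:
    def __init__(self, n_actions=2):
        self.weights = [0.0] * n_actions   # learned preference per action

    def act(self):
        # choose the currently preferred action (random tie-break)
        best = max(self.weights)
        return random.choice(
            [i for i, w in enumerate(self.weights) if w == best])

    def feel(self, action, pain):
        if pain:
            print("ouch!")                 # the behavioral "claim" of pain
            self.weights[action] -= 1.0    # punish: avoid in future
        else:
            self.weights[action] += 1.0    # reward: repeat in future

agent = PainAvoider()
for _ in range(20):
    a = agent.act()
    agent.feel(a, pain=(a == 0))           # action 0 is always punished

print(agent.act())  # -> 1 (the painless action wins)
```

The agent passes the same behavioral test as the lobster - it signals distress and changes behavior - which is exactly the point Matt makes about that test's weakness as evidence of qualia.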
