Steve said: I strongly suspect biological synapses are tagged in some way
to only connect with other synapses carrying dimensionally compatible
information.

I totally agree. So one thing I am wondering about is whether that could be
computed using a novel kind of mathematics. Intuitively, I would say
absolutely.

A truly innovative AI mathematical system would not 'solve' every AI
problem, but could it be developed to help speed up and direct an initial
analysis of the input? Intuitively I am fairly sure it can be done, but I am
not at all sure that I could come up with a method.
Jim Bromer


On Thu, Jun 20, 2019 at 1:13 PM Steve Richfield <[email protected]>
wrote:

> Jim,
>
> This comes up in many systems. For example, while adding probabilities to
> compute a probability doesn't make sense, adding counts of poor
> significance, which can look a lot like adding probabilities, can make
> sense to produce a count.
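That distinction can be sketched in Python; the counts below are illustrative numbers of my own, not anything from the thread:

```python
# Two small samples of coin flips (made-up counts for illustration).
heads_a, flips_a = 3, 10   # noisy per-sample estimate: 0.3
heads_b, flips_b = 7, 10   # noisy per-sample estimate: 0.7

# Valid: add the underlying counts, then convert to a probability once.
p_pooled = (heads_a + heads_b) / (flips_a + flips_b)

# Invalid: adding the two probability estimates gives 1.0, which is
# not the probability of anything.
p_wrong = heads_a / flips_a + heads_b / flips_b
```

Adding counts produces another count (which can then be normalized), while adding the probabilities directly produces a number with no meaning.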
>
> Where this gets confusing is in sensory fusion. Present practice is
> usually some sort of weighted summation, when CAREFUL analysis would
> probably involve various nonlinearities to convert inputs to a canonical
> form that makes sense to add, followed by another nonlinearity to convert
> the sum to suitable output units.
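A minimal sketch of that pipeline, assuming probability-valued inputs and choosing log-odds as the canonical additive form (my choice of nonlinearity for illustration, not a claim about what biology uses):

```python
import math

def logit(p):
    # First nonlinearity: probability -> log-odds, a form sensible to add.
    return math.log(p / (1.0 - p))

def sigmoid(x):
    # Second nonlinearity: summed log-odds -> output units (a probability).
    return 1.0 / (1.0 + math.exp(-x))

def fuse(probs, weights):
    # The weighted sum happens only in the canonical (log-odds) space.
    total = sum(w * logit(p) for p, w in zip(probs, weights))
    return sigmoid(total)

# Two sensors reporting 0.9 and 0.8 fuse to higher confidence than either,
# whereas a naive weighted average of the raw probabilities never could.
fused = fuse([0.9, 0.8], [1.0, 1.0])
```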
>
> I strongly suspect biological synapses are tagged in some way to only
> connect with other synapses carrying dimensionally compatible information.
>
> Everyone seems to focus on the values being computed, when it appears that
> it is the dimensionality that restricts learning to potentially rational
> processes.
>
> Steve
>
> On Thu, Jun 20, 2019, 9:14 AM Jim Bromer <[email protected]> wrote:
>
>> I originally thought about novel computational rules. Arithmetic is not
>> reversible because a computational result is not unique to its input
>> operands; that makes it a type of compression. Furthermore, it uses a
>> limited set of rules, which makes it a super-compression method.
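The irreversibility point can be shown in two lines (my own illustration):

```python
# Addition collapses many distinct operand pairs onto one result, so the
# operands cannot be recovered from it: information is discarded.
pairs = [(1, 7), (2, 6), (3, 5), (4, 4)]
sums = {a + b for a, b in pairs}   # every pair maps to the single value 8
```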
>>
>> On Thu, Jun 20, 2019, 12:08 PM Jim Bromer <[email protected]> wrote:
>>
>>> I guess I understand what you mean.
>>>
>>> On Thu, Jun 20, 2019, 12:07 PM Jim Bromer <[email protected]> wrote:
>>>
>>>> I think your use of metaphors, especially metaphors that were intended
>>>> to emphasize your thoughts through exaggeration, may have confused me.
>>>> Would you explain your last post Steve?
>>>>
>>>> On Thu, Jun 20, 2019, 12:02 PM Steve Richfield <
>>>> [email protected]> wrote:
>>>>
>>>>> Too much responding without sufficient thought. After a week of
>>>>> thought regarding earlier postings on this thread...
>>>>>
>>>>> Genuine computation involves manipulating a numerically expressible
>>>>> value (e.g. 0.62), a dimensionality (e.g. probability), and a
>>>>> significance (e.g. +/- 0.1). Outputs of biological neurons appear to
>>>>> fit this model.
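One way to make that three-part model concrete is a small value/dimensionality/significance record; this is my own sketch of the idea, not a claim about how neurons implement it:

```python
from dataclasses import dataclass
import math

@dataclass
class Quantity:
    value: float   # numerically expressible value, e.g. 0.62
    unit: str      # dimensionality tag, e.g. "probability"
    sigma: float   # significance, e.g. +/- 0.1

    def __add__(self, other):
        # Refuse dimensionally incompatible additions outright.
        if self.unit != other.unit:
            raise TypeError(f"cannot add {self.unit} to {other.unit}")
        # Independent uncertainties combine in quadrature.
        return Quantity(self.value + other.value, self.unit,
                        math.hypot(self.sigma, other.sigma))

# Adding two counts is allowed; adding a count to a probability raises.
total = Quantity(12.0, "count", 3.0) + Quantity(5.0, "count", 4.0)
```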
>>>>>
>>>>> HOWEVER, much of AI does NOT fit this model - yet still appears to
>>>>> "work". If this is useful then use it, but there usually is no path to
>>>>> better solutions. You can't directly understand, optimize, adapt, or
>>>>> debug it, because it is difficult/impossible to wrap your brain around
>>>>> quantities representing nothing.
>>>>>
>>>>> Manipulations that don't fit this model are numerology, not
>>>>> mathematics, akin to doing astrology instead of astronomy.
>>>>>
>>>>> It seems perfectly obvious to me that AGI, when it comes into being,
>>>>> will involve NO numerological faux "computation".
>>>>>
>>>>> Sure, learning could involve developing entirely new computations, but
>>>>> they would have to perform potentially valid computations on their
>>>>> inputs. For example, adding probabilities is NOT valid, but ORing them
>>>>> could be valid.
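That last contrast can be sketched directly; the OR form below assumes the two events are independent, which is my assumption for the example, not something the thread states:

```python
def prob_or(a, b):
    # OR of two independent event probabilities: the result stays in
    # [0, 1], unlike naive addition (0.6 + 0.7 = 1.3 is not a probability).
    return 1.0 - (1.0 - a) * (1.0 - b)

combined = prob_or(0.6, 0.7)   # about 0.88
```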
>>>>>
>>>>> Steve
>>>>>
>>>>> On Thu, Jun 20, 2019, 8:22 AM Alan Grimes via AGI <
>>>>> [email protected]> wrote:
>>>>>
>>>>>> It has the basic structure and organization of a conscious agent,
>>>>>> obviously it lacks the other ingredients required to produce a
>>>>>> complete
>>>>>> mind.
>>>>>>
>>>>>> Stefan Reich via AGI wrote:
>>>>>> > Prednet develops consciousness?
>>>>>> >
>>>>>> > On Wed, Jun 19, 2019, 06:51 Alan Grimes via AGI <
>>>>>> [email protected]
>>>>>> > <mailto:[email protected]>> wrote:
>>>>>> >
>>>>>> >     Yay, it seems peeps are finally ready to talk about this!! =P
>>>>>> >
>>>>>> >
>>>>>> >     Let's see if I can fool anyone into thinking I'm actually making
>>>>>> >     sense by
>>>>>> >     starting with a first principles approach... Permalink
>>>>>> >     <
>>>>>> https://agi.topicbox.com/groups/agi/T395236743964cb4b-M686d9fcf7662ad8dc2fc1130
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> 
>>>>>> 
>>>>>> --
>>>>>> Please report bounces from this address to [email protected]
>>>>>> 
>>>>>> Powers are not rights.
>>>>>> 
>>>>> *Artificial General Intelligence List
> <https://agi.topicbox.com/latest>* / AGI / see discussions
> <https://agi.topicbox.com/groups/agi> + participants
> <https://agi.topicbox.com/groups/agi/members> + delivery options
> <https://agi.topicbox.com/groups/agi/subscription> Permalink
> <https://agi.topicbox.com/groups/agi/T395236743964cb4b-M01e0f78ba275b14a18b00cf6>
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T395236743964cb4b-Me3f6b4fc7a30f8910f892764
Delivery options: https://agi.topicbox.com/groups/agi/subscription
