To be fair, I think YKY put perspiration into the mathematical
structure, and it looks like a decent attempt at a fusion of logic and
neural networks. But it seems like the next step could be an embryonic
program. Personally, I feel I spent too long on my overall design;
some things become clear only through experimentation. AGI is a game
of nuanced distinctions, as is reality.

Mike Archbold

On 4/30/19, Matt Mahoney <[email protected]> wrote:
> The revised paper is a bit better, but it really doesn't address my
> main concerns. As Edison put it, the 1% inspiration is done and just the 99%
> perspiration is left to do. Yeah, actually doing experiments and writing up
> the results is hard work, but that's how papers get published. Nobody cares
> about untested ideas.
>
> Maybe write up a paper on past work like Genifer from 2010.
> http://strong-ai.info/blog/ai/2010/08/08/genifer-general-inference-engine
> Why did it fail? What lessons were learned?
>
> On Tue, Apr 30, 2019, 5:36 AM John Rose <[email protected]> wrote:
>
>> Matt > "The paper looks like a collection of random ideas with no
>> coherent
>> structure or goal...."
>>
>>
>>
>> Argh... I love this style of paper; whenever YKY publishes something, my
>> eyes are on it. So few (if any) are written this way. It's a terse jazz-fusion
>> improv of mecho-logical-mathematical thought physics, needed to describe an
>> AGI concept.
>>
>>
>>
>> Immediately, on the first version, when I saw the navigation of the labyrinth
>> of "thinking", I thought of the quantum many-paths simultaneity in
>> photosynthesis, and YKY's mention of a possible correlation between
>> Schrödinger and RL... but that item was yanked in the second iteration.
>> That's OK; sometimes, while on the vanguard of thought, viewers' eyes must be
>> shielded from that which they explicitly fear the most... which, coincidentally,
>> is sometimes totally obvious, thus suspending disbelief while
>> maintaining referential propriety and contemporary academic
>> interestingness.
>>
>>
>>
>> Also yanked was the notion that AGI requires approximating Kolmogorov
>> complexity, which I agree is where all the good stuff is…. generally
>> and/or specifically… IMO this is where the multi-agent consciousness
>> mechanics come in, but I’ll shield some eyes on that one :)
>>
>>
>>
>> John
>>
>>
>>
>> *From:* Stefan Reich via AGI <[email protected]>
>> *Sent:* Friday, April 19, 2019 4:21 PM
>> *To:* AGI <[email protected]>
>> *Subject:* Re: [agi] My AGI 2019 paper draft
>>
>>
>>
>> Good review
>>
>>
>>
>> On Fri, Apr 19, 2019, 22:02 Matt Mahoney <[email protected]> wrote:
>>
>> It would help to get your paper published if it had an experimental
>> results section. How do you propose to test your system? How do you plan
>> to
>> compare the output with prior work on comparable systems? What will you
>> measure? What benchmarks will you use (for example, image recognition,
>> text
>> prediction, robotic performance)?
>>
>>
>>
>> The paper looks like a collection of random ideas with no coherent
>> structure or goal. The math seems to confuse or mislead rather than
>> explain. For example, you show father(x,y) as a function in the real plane
>> rather than as a predicate over discrete variables. This is interesting for a
>> moment, but doesn't go anywhere, so you move on to the next topic. The
>> whole paper is like this: plugging variables from one field of study into
>> equations from another and hoping something useful comes out.
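The representational distinction raised above can be made concrete with a short sketch. This is a hypothetical illustration only, not code or notation from YKY's paper: a classical predicate father(x,y) is a boolean relation over discrete individuals, whereas a neural-symbolic relaxation scores the same relation as a real-valued function of learned embeddings (all names and numbers here are made up for illustration).

```python
import math

# Discrete predicate: father(x, y) is a boolean relation over a finite domain.
father_facts = {("tom", "bob")}  # hypothetical fact: tom is the father of bob

def father(x, y):
    """Classical predicate: True/False over discrete individuals."""
    return (x, y) in father_facts

# Real-valued relaxation: embed individuals as points in the plane and
# score the relation continuously in (0, 1], making it trainable in principle.
embedding = {"tom": (1.0, 0.0), "bob": (0.9, 0.1), "ann": (0.0, 1.0)}

def father_score(x, y):
    """Smooth surrogate: similarity-based score; 1.0 when embeddings coincide."""
    (x1, x2), (y1, y2) = embedding[x], embedding[y]
    dist2 = (x1 - y1) ** 2 + (x2 - y2) ** 2
    return math.exp(-dist2)
```

The sketch shows why the relaxation is "interesting for a moment": father_score gives nonzero values even for false facts like father(tom, ann), so without a training objective and an evaluation, nothing connects the continuous scores back to the discrete relation.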
>>
>>
>>
>> I know that you are just full of ideas. But actually writing some code
>> that does something interesting might really help in sorting out the useful
>> ideas from the ones that go nowhere, and in advancing the field of AGI.
>>
>>
>>
>> On Fri, Apr 19, 2019, 9:15 AM YKY (Yan King Yin, 甄景贤) <
>> [email protected]> wrote:
>>
>> Hi,
>>
>>
>>
>> This is my latest draft paper:
>>
>> https://drive.google.com/open?id=12v_gMtq4GzNtu1kUn9MundMc6OEhJdS8
>>
>>
>>
>> I submitted the same basic idea to AGI 2016, but it was rejected for some
>> rather superficial reasons.  At that time, reinforcement learning for AI
>> was not widely heard of, but since then it has become a ubiquitous hot
>> topic.  I hope this time I can get published, as it would allow me to share
>> my ideas more easily with other researchers and mathematicians, so that I
>> could solicit their help and improve my theory, possibly starting the
>> coding project as well.
>>
>>
>>
>> Comments and suggestions are welcome 😊
>>
>> --
>>
>> *YKY*
>>
>> *"The ultimate goal of mathematics is to eliminate any need for
>> intelligent thought"* -- Alfred North Whitehead
>>
>> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
>> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
>> participants <https://agi.topicbox.com/groups/agi/members> + delivery
>> options <https://agi.topicbox.com/groups/agi/subscription> Permalink
>> <https://agi.topicbox.com/groups/agi/T3cad55ae5144b323-M5270f3477e3d62edc3b33160>
>>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3cad55ae5144b323-M014202a0f8ac5a7843f55a52
Delivery options: https://agi.topicbox.com/groups/agi/subscription
