Just read it again (well, tried to). I'd like to understand the paper;
the introduction is quite intriguing, but then it just goes over my head...

Link Grammars sound interesting. However, I increasingly find myself
dismissing any knowledge structure that is not directly based on natural
language. I want to get rid of all the baggage of maintaining specialized
language and actually use English for everything. So my proposal for
grammar rules is something like this:

"Subject predicate object" is a good sentence structure if subject and
predicate match in case.
A noun alone is not a sentence.
The word "a" + an adjective + a noun makes a new noun.
A question should end with a question mark.

etc.

Why not make a logic engine that takes rules like these and reasons about
them? (The problem of expressing English grammar in English is obviously
recursive, but it's not so bad: the rules themselves can be parsed with
fairly basic functions.)
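To make the idea concrete, here is a minimal sketch of such a "basic functions" parser. Everything in it is my own assumption for illustration: the two recognized rule sentences, the function names (`parse_rule`, `acceptable`), and the crude heuristics (question words, word counting) are all placeholders, not a real grammar engine.

```python
import re

def parse_rule(rule_text):
    """Turn a restricted-English grammar rule into a checker function.

    Only two toy rule shapes are recognized here (both invented for
    illustration). Returns a predicate: sentence -> bool, where True
    means the sentence satisfies (or is not covered by) the rule.
    """
    r = rule_text.strip().rstrip('.').lower()

    if r == 'a question should end with a question mark':
        question_words = ('who', 'what', 'when', 'where', 'why', 'how')
        def check(sentence):
            s = sentence.strip()
            if s.lower().startswith(question_words):
                return s.endswith('?')
            return True  # not a question: rule does not apply
        return check

    if r == 'a noun alone is not a sentence':
        def check(sentence):
            # A single word with no verb cannot form a sentence.
            words = re.findall(r"[A-Za-z']+", sentence)
            return len(words) != 1
        return check

    raise ValueError(f"unrecognized rule: {rule_text!r}")

# Rules are stated in (restricted) English, then compiled to checkers.
rules = [
    parse_rule('A question should end with a question mark.'),
    parse_rule('A noun alone is not a sentence.'),
]

def acceptable(sentence):
    """A sentence is acceptable if every parsed rule passes."""
    return all(check(sentence) for check in rules)

print(acceptable('Where is the cat?'))  # True
print(acceptable('Where is the cat'))   # False: question without '?'
print(acceptable('Cat.'))               # False: lone noun
```

A real engine would of course need a general reader for the rule sentences rather than hard-coded shapes, but the point stands: the rules stay in English, and the machinery underneath is ordinary string processing.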

On Mon, 4 Feb 2019 at 23:21, Linas Vepstas <[email protected]> wrote:

>
>
> On Mon, Feb 4, 2019 at 6:02 AM Stefan Reich via AGI <[email protected]>
> wrote:
>
>> > Many commentators here agreed (over time) how agi development requires
>> a radically-different approach to all other computational endeavors to date.
>>
>> Not sure what that means. A really good NLU will go a very long way, and
>> then we'll have to find a new "magic learner" module that replaces neural
>> networks, both for image/audio recognition and learning logic. I suggest
>> evolutionary algorithms.
>>
>
> Here's my "magic learner" proposal. Actually, it is much less than that;
> it just shows how symbolic computing and neural net computing are two sides
> of the same coin.  The idea is that once you see the correspondence, then
> you have a clear path to the kind of symbolic computing that lots of people
> want to do, and a way of uncloaking the "black box" aspects of neural nets.
>
>
> https://github.com/opencog/opencog/raw/master/opencog/nlp/learn/learn-lang-diary/skippy.pdf
>
> FYI, so far, everyone I have shown this to has replied by saying "I read
> it but I skipped the math", which is an odd thing to do, since it's
> essentially a math paper.  The whole point is that, if you want to
> understand how neural nets and symbolic learning can be placed on the same
> footing, then you have to understand how both systems work, and "skipping
> the math" is equivalent to "skipping the actual explanation".
>
> (I used to have a non-technical way of explaining this, but everyone who
> read that was underwhelmed.)
>
> --linas
>
> --
> cassette tapes - analog TV - film cameras - you
>


-- 
Stefan Reich
BotCompany.de // Java-based operating systems

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta6fce6a7b640886a-M1dbb7f081eb5880bd007b6a5
Delivery options: https://agi.topicbox.com/groups/agi/subscription
