Mike, I beg to differ and to agree on various points. I differ on the
perspective of learning. Typical learning processes in humans involve the
generic functions of continuous deabstraction, classification, association,
prioritization, storage, and recall, plus perhaps a few more.
The fascinating thing for me about this discussion is the notion that
compression is just the psychological equivalent of learning an idea. In
philosophy it is like determining what is essential and universal. In old
AI it would be like learning the rules. It's generalization.
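One standard way to make this concrete (not something proposed in this thread)
is normalized compression distance: two strings count as similar when
compressing them together saves space over compressing them separately. A
minimal sketch, with zlib standing in for any real compressor:

    import zlib

    def compressed_size(data: bytes) -> int:
        """Length of the zlib-compressed form of data."""
        return len(zlib.compress(data, 9))

    def ncd(a: bytes, b: bytes) -> float:
        """Normalized compression distance: near 0 for shared structure, near 1 for none."""
        ca, cb = compressed_size(a), compressed_size(b)
        cab = compressed_size(a + b)
        return (cab - min(ca, cb)) / max(ca, cb)

    if __name__ == "__main__":
        regular = b"the cat sat on the mat " * 20   # rule-like, easy to generalize
        noise = bytes(range(256)) * 4               # little structure shared with text
        print(ncd(regular, regular))  # near 0: the "rule" is already learned
        print(ncd(regular, noise))    # near 1: nothing to generalize across

The point of the sketch is that the compressor never sees labels or rules;
finding the shared regularity it can exploit is exactly the generalization
being described above.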
This is such a weird statement. It reads as if you are trying to make humans
look stupid, but for AI production it is actually smarter to have smart
humans. I am inclined to conclude you are not in the AI game yourself.
On Mon, 8 Oct 2018 at 18:03, Matt Mahoney via AGI wrote:
We're not making AGI immediately anyway. We will make many steps of
general-purpose AI, gradually increasing in complexity, and we can fully
control all of these steps.
On Tue, 9 Oct 2018 at 16:58, Stefan Reich <stefan.reich.maker.of@googlemail.com> wrote:
Hehe. Maybe we misunderstood each other. What I mean is that I want to
understand how the AI that I make works.
On Mon, 8 Oct 2018 at 18:03, Matt Mahoney via AGI wrote:
> On Mon, Oct 8, 2018, 9:44 AM Stefan Reich via AGI wrote:
>> Matt Mahoney via AGI wrote on Sun, 7 Oct 2018:
> -----Original Message-----
> From: Jim Bromer via AGI
>
> Operating on compressed data without having to decompress it is the goal
> I am thinking of, so being able to access internal relations would be
> important. There can be some compressed data that does not contain
> explicit internal relations.
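For concreteness, here is a minimal sketch of one scheme with that property
(dictionary encoding; this is only an illustration I am supplying, not Jim's
design, and the names are made up). A column stores small integer codes and
the value table is kept separately, so equality tests and counts run directly
on the codes with no decompression step:

    def dict_encode(values):
        """Return (codes, table): codes[i] is the integer id of values[i]."""
        table = {}                      # value -> code, assigned in first-seen order
        codes = []
        for v in values:
            codes.append(table.setdefault(v, len(table)))
        return codes, list(table)       # list index == code (dicts keep insertion order)

    if __name__ == "__main__":
        column = ["red", "blue", "red", "red", "green", "blue"] * 1000
        codes, table = dict_encode(column)
        target = table.index("red")               # one lookup in the small table
        print(sum(c == target for c in codes))    # 3000, computed on codes alone

Here the internal relation (equality of values) survives compression because
equal values always map to equal codes; a scheme like general-purpose zlib
output would be an example of compressed data where no such explicit relation
is accessible.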