Your attempt to declare that two opinions I stated were
"self-refuting" simply because you disagreed with them is nonsense. If
I had any idea how I could help you, I would try.

I never said anything about bootstrapping, so that criticism is simply
irrelevant.
The second criticism sounded as if it might be relevant to what I said,
but I cannot see how it applies to my opinions about the potential for
complex insights that advanced artificial intelligence and artificial
reasoning could produce.
Jim Bromer
On Wed, Sep 12, 2018 at 3:13 PM Nanograte Knowledge Technologies via
AGI <agi@agi.topicbox.com> wrote:
>
> Jim
>
> Now, in my view, you made two assertions that were self-refuting. I strongly 
> disagree that bootstrapping and/or relationship-based approaches would enable 
> AGI, now or in the future. By focusing on those two main items, I am not 
> saying everything else you said should not be taken seriously. For the 
> following reasons, I decided to respond to those two items of architectural 
> relevance:
>
> Bootstrapping is effectively learning done for the machine, by pre-empting 
> what the machine must itself find and frame as environmental information (via 
> experience). This goes against the commonly held view that AI should aim for 
> intelligence autonomy.
>
> Second, a relationship is a parent-child structure implying causality and 
> functional dependency. No matter how "not quite determining" you consider it 
> to be, it is primarily based on relational theory. This goes against the 
> commonly held view that AI should aim for autonomy, not smart automation.
>
> In that sense, the relevance of those statements to the overall development 
> of AGI has been refuted.
>
> I proposed alternatives and even allowed for different terminology, but you 
> elected to ignore those and play hide-and-seek with alleged hidden messages 
> you did not care to discuss.
>
> In the interest of transparency, I would encourage you to say what you 
> want to say, or not. Please be clear and defend your point. I would welcome 
> the debate, but if you do not care to, then rather admit you are just 
> trying to throw a spanner in the works of this most useful, constructive 
> discussion.
>
> Rob
> ________________________________
> From: Jim Bromer via AGI <agi@agi.topicbox.com>
> Sent: Wednesday, 12 September 2018 8:21 PM
> To: AGI
> Subject: Re: [agi] Growing Knowledge
>
> How could I possibly know what you missed (without extensive and
> tedious meta-conversation about the exchange we just had)? You
> made an exaggerated statement, and from it I was able to conclude
> that you had probably missed some subtleties in my quick comments.
>
> You cannot 'refute' an open-ended statement like 'x will lead to new
> thinking.' You may state an opinion about it, or you might say that
> there is some premise in the preliminary comments which makes it
> unlikely. For example, I say that quantum entanglement is not actually
> an AI theory. You can speculate that quantum entanglement might explain
> consciousness in some way, but that theory is not grounded in feasible
> engineering at this time. There is enough wrong with the theory to be
> confidently dismissive of the idea that quantum entanglement will lead
> to new ideas in AGI in the next decade. So, if we agree to your
> definition of the word 'refute', I would say that the theory that
> quantum entanglement will lead to new advances in AGI during the next
> 10 years can be refuted. (It is my opinion that it can be refuted, if
> not by argument then by waiting 10 years and seeing what happens. I
> would not typically use the word 'refute' for a simple speculation of
> opinion, no matter how unlikely it is that the theory being criticized
> is valid.)
> Jim Bromer
> On Wed, Sep 12, 2018 at 11:42 AM Nanograte Knowledge Technologies via
> AGI <agi@agi.topicbox.com> wrote:
> >
> > Jim
> >
> > Not refuting your thinking, but rather the premise you proposed. At least I 
> > stated my argument for why the notion (if you are OK with that term) is 
> > refutable. I had forgotten how sensitive you can be.
> >
> > So, instead of feeling slighted, why not expound on the subtleties I may 
> > have missed?
> >
> > Rob
> > ________________________________
> > From: Jim Bromer via AGI <agi@agi.topicbox.com>
> > Sent: Wednesday, 12 September 2018 4:47 PM
> > To: AGI
> > Subject: Re: [agi] Growing Knowledge
> >
> > In general, you can't actually "refute" my thinking. If I made some
> > hypothesis which could be tested in an experiment, you might refute the
> > hypothesis, but even that could be questioned: I would have to agree
> > that the experiment was a good test of my hypothesis, or there would
> > have to be a consensus of opinion that it was indeed a good test. You
> > might also 'refute' my recollection of some fact, especially if there
> > were evidence that would support different recollections or
> > conclusions. Rather than accepting the nonsense that you could refute
> > my thinking, my first guess is that you have just missed some subtlety
> > in the expression of my thoughts.
> > Jim Bromer
> >
> > On Wed, Sep 12, 2018 at 9:25 AM Nanograte Knowledge Technologies via
> > AGI <agi@agi.topicbox.com> wrote:
> > >
> > > Jim
> > >
> > > Bootstrapping a computational platform with domain knowledge (seeding 
> > > it with insights) was already done a few years ago by the ex-head of AI 
> > > research in France. I need to find his blogs again, but apparently he had 
> > > amazing results with regard to re-solving classical mathematical problems.
> > >
> > > Our question is: would that constitute AGI?
> > >
> > > I appreciate your comment that such an approach would not be 
> > > considered radical at all. However, the claim you make immediately 
> > > thereafter, that the approach would help one think of the problem in a 
> > > different way, is refutable.
> > >
> > > Thinking in terms of relationships suffers the same fate. Not radical, 
> > > and not thinking in a new or different way.
> > >
> > > As such, we need to think as radically as we possibly can. We need 
> > > to find a few radical approaches and see whether they could be focused on 
> > > a few avenues of pragmatic research. May the best approach win.
> > >
> > > For example, instead of relationships, think in terms of free-will 
> > > (random) associations. This is not a semantic ploy, but a radical 
> > > departure in terms of AGI architecture.
> > >
> > > Furthermore, instead of thinking of seeding, rather allow the 
> > > computational platform to Find, Frame, Make and Share. This would denote 
> > > another radical departure from current thinking (I did come across a 
> > > similar approach recently).
> > >
> > > Rob
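
To make Rob's contrast concrete, here is a minimal, purely illustrative
sketch; the names and data are invented for this example and are not taken
from any system mentioned in the thread. A parent-child relationship fixes
which item depends on which up front, while a free-will association simply
pairs items at random and leaves any structure to be recognized afterwards:

    import random

    ideas = ["heat", "pressure", "sound", "light", "motion"]

    # Relationship-based: a fixed parent-child structure that encodes
    # causality / functional dependency in advance (child -> parent).
    causal_links = {"pressure": "heat", "motion": "heat"}

    # Free-will (random) associations: undirected pairs formed with no
    # built-in causality; any structure must be discovered later.
    def associate(pool, n=3, rng=random.Random(0)):
        return [tuple(rng.sample(pool, 2)) for _ in range(n)]

    print(associate(ideas))  # e.g. [('light', 'heat'), ('sound', 'motion'), ...]

The sketch only illustrates the architectural distinction Rob draws: the
first structure commits to dependencies in advance, the second does not.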
> > >
> > > ________________________________
> > > From: Jim Bromer via AGI <agi@agi.topicbox.com>
> > > Sent: Wednesday, 12 September 2018 2:25 PM
> > > To: a...@listbox.com
> > > Subject: [agi] Growing Knowledge
> > >
> > > The idea that an AGI program has to be able to 'grow' knowledge is not
> > > conceptually radical, but the idea that a program might be seeded with
> > > certain kinds of insights does make me think about the problem in a
> > > slightly different way. Suppose a program is built on principles meant
> > > to let it build on insights provided to it as it explores different
> > > kinds of subjects. I can then see this theory as a transition: from
> > > programming discrete instructions that correspond to a particular
> > > sequence of computer operations, to programming with instructions that
> > > have the potential to grow relationships between the knowledge data.
> > > The kinds of relationships do not need to be absolutely pre-determined,
> > > because the use of basic relationships and references to specific ideas
> > > can implicitly develop into more sophisticated relationships that would
> > > only need to be recognized. For example, the abstraction of
> > > generalization seems pretty fundamental to Old AI. However, I believe
> > > that just by using more basic relationships, which can refer to other
> > > specific ideas and to groups of ideas, relationships that effectively
> > > refer to a kind of abstraction may develop naturally, in primitive
> > > forms. It would then be necessary to 'teach' the AGI program to
> > > recognize and appreciate these abstractions so that it could use
> > > abstraction more explicitly.
> > > Jim Bromer
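
As a minimal sketch of the structure Jim seems to describe, assuming nothing
beyond the message above: ideas are linked by basic, explicitly named
relationships, and a 'generalization' is not pre-programmed but recognized
after the fact as a recurring pattern among those basic links. All
identifiers here (KnowledgeGraph, recognize_generalizations, the sample
ideas) are invented for illustration:

    from collections import defaultdict

    class KnowledgeGraph:
        """Ideas linked by basic, explicitly named relationships."""

        def __init__(self):
            # relation name -> set of (source idea, target idea) pairs
            self.edges = defaultdict(set)

        def relate(self, relation, source, target):
            self.edges[relation].add((source, target))

        def recognize_generalizations(self, relation="kind-of", threshold=2):
            # A target that several ideas point to via the same basic
            # relationship is treated as an emergent abstraction.
            members = defaultdict(set)
            for source, target in self.edges[relation]:
                members[target].add(source)
            return {t: s for t, s in members.items() if len(s) >= threshold}

    g = KnowledgeGraph()
    g.relate("kind-of", "sparrow", "bird")
    g.relate("kind-of", "crow", "bird")
    g.relate("kind-of", "bird", "animal")
    print(g.recognize_generalizations())
    # {'bird': {'sparrow', 'crow'}} -- 'bird' emerges as an abstraction

The point of the sketch is only that the abstraction is recognized from the
pattern of basic links rather than declared up front, which matches the
'teach the program to recognize' step Jim mentions.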

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T032c6a46f393dbd9-M90693ad3bcd01906792e819e
Delivery options: https://agi.topicbox.com/groups/agi/subscription
