I did not mean to be overly critical in my last email. I was just trying to
be objective, to the best of my ability.
A major underlying goal in discrete AI has been to find a simple set of
rules and discrete object types that could serve as a basis for all
knowledge. I believe this goal was driven by a few historical factors:
the importance of arithmetic and logic, both of which can be derived from
a relatively small set of rules; the value of arithmetic to humankind,
which could be developed with a minimum of rules; and the severe
constraints of early computer technology. I think the learned development
of many rules may now be achievable with modern computers, but that
presupposes more than one level of abstraction. So my ideas about AI are
still based largely on discrete methods, but the abstractions of the
system have to be acquired and learned alongside other kinds of knowledge
(such as facts and procedures), and made to fit the knowledge that is
learned.
No one else in this group talks about this in these terms. Of course, a
critic could say that all of this is implicit in someone else's ideas -
except for the parts I am getting wrong. To put it more accurately, the
necessity of acquiring 'abstractions' along with 'knowledge endpoints'
(so to speak) is implicit in many if not most AI theories. So how is mine
original, the critic might ask? That would be a dismissive argument. The
theory is my own because other people do not bother to discuss these
questions. It may not be startlingly 'original', but it is my own theory
(it is not based on some white paper or on something that someone else
has presented in these groups). You can find theories about 'abstraction',
but there is less written about the necessity of developing abstractions
as knowledge is developed, and even less that suggests these relationships
might be expressed mathematically - at least to some extent.
While I am interested in developing a fast mathematical index into
knowledge (based on discrete representations of knowledge), I now believe
that a great deal of meaning can be -learned- and baked into the
mathematical indexing system. For example, mathematics does not have to
be restricted to a narrowing process (one that produces a single
numerical result); it can also be an expanding process (one that produces
a set of results which might themselves be expressed numerically).
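One way to picture the narrowing/expanding distinction is a small code
sketch. This is only my own minimal illustration - the function names
and the toy index table are assumptions, not anything from an existing
system:

```python
# Hypothetical sketch of "narrowing" vs "expanding" operations.
# narrow(): many values -> one numerical result.
# expand(): one numerical index -> a set of results.
# The `knowledge` table is a toy stand-in for a mathematical index.

def narrow(values):
    """Narrowing process: reduce a collection to a single number."""
    return sum(values)

def expand(index, table):
    """Expanding process: map a numerical index to a set of results.
    `table` is an assumed mapping from indices to sets of items."""
    return table.get(index, set())

# Toy index: each number references a set of knowledge items.
knowledge = {7: {"fact-a", "fact-b"}, 12: {"fact-c"}}

result = narrow([3, 4])          # narrowing: [3, 4] -> 7
related = expand(result, knowledge)  # expanding: 7 -> {"fact-a", "fact-b"}
```

The point of the sketch is only that the same numerical machinery can run
in both directions: a value computed by a narrowing step can serve as a
reference into sets of results.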
Again, a critic could take a dismissive attitude and say, for example,
that fuzzy logic or any number of weighted methods can already do that!
Ok, but I would reply that I am interested in mathematical references to
sets that could, at least in theory, typically be derived from the values
themselves.
Radical originality is not one of my goals. Individual uniqueness is,
however, because I believe individuality is needed to develop useful
computational methods that do not yet exist. There are many unique
aspects of my theories; the challenge is to find those which can be made
to function as a whole, and eventually to develop computer programs that
demonstrate their usefulness in future AI and AGI programs.
Jim Bromer


On Tue, Jun 4, 2019 at 6:07 PM <[email protected]> wrote:

> Ok, sorry about that misunderstanding.
>
> You need to form the model of the working area of the a.i., it's
> true for us all, but how you do it is what makes what you're doing original.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T395236743964cb4b-M8984999a5040c2b43963b404
Delivery options: https://agi.topicbox.com/groups/agi/subscription
