Agreed. According to your version of such a system, you would probably run out 
of processing power.

Consider Michael Schumacher, the seven-time Formula 1 world champion. It was 
said of him that he was so gifted he used only 10% of his brain power to drive 
the car, and the other 90% to plan how to win the race. In other words, he was 
a predictive genius.

Suppose, then, that 1 of the 16 levels was a generic, regenerative, 
methodology-producing engine whose sole intent was to achieve and maintain 
the highest-potential level of competency in any situational domain (autonomous 
effective complexity with the least "brain power" required). Suppose we viewed 
this as part of AGI "DNA".

How would such a computational architecture differ from your version?


________________________________
From: Jim Bromer via AGI <[email protected]>
Sent: Friday, 12 October 2018 7:35 PM
To: AGI
Subject: Re: [agi] Compressed Algorithms that can work on compressed data

The potential to create specialized data structures for AGI might carry
a (specialized) advantage unlike anything you are familiar with. These
data structures would be compressed, but the compression might exist at
different levels or different depths. They might exist at different
depths because they refer to concepts (groups of interrelated concepts)
that exist at different conceptual levels of 'resolution'. For example,
there are distinctions between particulars and generalizations, but
there are also differences between general subject matter and vaguely
understood references. There could also be different levels of
(effective) compression based on different kinds of relationships
between concepts. As another example of how these computable references
might be used: in some cases they would try to come up with specifics
given some situation, but in other cases they might generate possible
variations of what could be relevant to a situation, along with some
possibilities for how the program might react to find more information.
There is only one thing wrong with this plan: it would be too slow,
unless these reactions could be computed quickly and efficiently. If
an artificial system for storing concepts was designed to efficiently
produce results that could be used to design and shape interactions
with the user, the program might get enough information to figure out
how to interpret what was being said.
Jim Bromer
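Jim's notion of concepts stored at several levels of compression, with full detail recovered only when needed, could be pictured in a minimal way. This is purely a hypothetical illustration; the `ConceptStore` class and its methods are invented here, not part of any system described in the thread:

```python
import zlib

class ConceptStore:
    """Toy sketch: each concept keeps a cheap low-resolution summary
    plus a zlib-compressed full description, expanded only on demand."""

    def __init__(self):
        self._summaries = {}   # concept -> short summary (always in memory)
        self._full = {}        # concept -> compressed full description

    def add(self, name, summary, full_text):
        self._summaries[name] = summary
        self._full[name] = zlib.compress(full_text.encode("utf-8"))

    def skim(self, name):
        # Fast path: answer from the summary without decompressing.
        return self._summaries[name]

    def expand(self, name):
        # Slow path: decompress only when full resolution is needed.
        return zlib.decompress(self._full[name]).decode("utf-8")

store = ConceptStore()
store.add("bird", "flying animal", "Birds are warm-blooded vertebrates " * 50)
print(store.skim("bird"))  # cheap query, no decompression
print(store.expand("bird") == "Birds are warm-blooded vertebrates " * 50)  # True
```

The point of the sketch is only that most queries could stay on the fast path, which is one way "too slow" might be avoided.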

On Fri, Oct 12, 2018 at 9:28 AM Nanograte Knowledge Technologies via
AGI <[email protected]> wrote:
>
> I think, for your AGI ideas, you'll require a Symbolism Management subsystem. 
> But first, answer the question:
> "What is Jim's version of AGI Symbolism Management?"
>
> As a case in point, what you thus might call 'Symbolism Management', I might 
> just call 'Illusion Management'.
>
> In my mind, my system would potentially cope with up to 16 real-time, 
> integrated levels of abstraction. Furthermore, this has the potential to open 
> the door for access to the magical 256 NP-Complete findings.
>
> Rob
> ________________________________
> From: Jim Bromer via AGI <[email protected]>
> Sent: Friday, 12 October 2018 11:27 AM
> To: AGI
> Subject: Re: [agi] Compressed Algorithms that can work on compressed data.
>
> The idea of the relative randomness of a given compression is kind of interesting.
> Some compressed representations may be transformed without being fully
> decompressed. In fact LZ, if I understand correctly, uses what has
> already been compressed to append the next section to be compressed. And
> computational mathematics, using n-ary or base-n representation, is
> actually a case of applying functions to compressed data. And most
> database functions that use one part of many data records to compute
> some value are examples of effectively using compressed data without
> fully decompressing it. (The one part of the data being used is an
> abstraction of a transaction, for instance.)
> I am really thinking about specialized fields of data. And I do have
> an idea for AGI in which data may be stored in various levels of
> compression. Ideas may refer to a subject matter (or subjects) in
> various levels of resolution which can also overlap other concepts in
> various ways. I am almost sure that I could make this work for some
> artificial data (or artificial formations of references) and then use
> it to make successive computations of how concepts might interact. But
> right now I am interested in seeing if there is any way I can use any of
> these ideas to create a novel way to represent logical relations.
> Jim Bromer
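Jim's examples above (LZ-style incremental compression, base-n arithmetic, database queries over one field of a record) share the idea of computing on the compressed form directly. A minimal sketch of that idea, using run-length encoding rather than LZ purely for simplicity:

```python
# Some queries can be answered on the compressed form directly,
# without reconstructing the original string.

def rle_encode(s):
    """'aaabbc' -> [('a', 3), ('b', 2), ('c', 1)]"""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def compressed_length(runs):
    # Total length computed directly from run counts.
    return sum(n for _, n in runs)

def compressed_count(runs, ch):
    # Character frequency without decompressing.
    return sum(n for c, n in runs if c == ch)

runs = rle_encode("aaaabbbaaacc")
print(compressed_length(runs))      # 12
print(compressed_count(runs, "a"))  # 7
```

Both queries run in time proportional to the number of runs, not the decompressed length, which is the kind of advantage Jim is pointing at.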
>
> On Fri, Oct 12, 2018 at 4:59 AM Andrés Gómez Emilsson via AGI
> <[email protected]> wrote:
> >
> > If the algorithm for compression is good then forget about it. In that case 
> > the best (and near only) way is to uncompress the file and then re-compress 
> > it with the new, more effective algorithm.
> >
> > On Thu, Oct 11, 2018 at 10:53 PM Nanograte Knowledge Technologies via AGI 
> > <[email protected]> wrote:
> >>
> >> A discussion centered around pseudo randomness.
> >>
> >> As a private experiment on randomness, I once took published data of 
> >> cosmic noise and tabled it in an appropriate way. Within fewer than 54 
> >> iterations, a consistent, embedded pattern emerged. My conclusion was that 
> >> cosmic noise is pseudo-random. Would my experiment destroy the lava-lamp 
> >> theory of true randomness? Possibly.
> >>
> >> Recently, someone quoted Gell-Mann. His established view on randomness is 
> >> most enlightening.
> >>
> >> As far as I can tell, true randomness cannot be observed, because the 
> >> instant it is observed, the energy of observation destroys its purity (or 
> >> truth). Unless you're a remote viewer or a supernatural observer, it 
> >> would seem that science has fallen foul of its own need for empirical 
> >> evidence. Solve the problem: how does one observe without observing at all?
> >>
> >> Matt, I think you have earned an olive branch in that, within a bridging 
> >> scientific theory (Existentialism), you may call anything whatever you 
> >> want, for as long as you have it clearly objectified: defined in terms of 
> >> meaningfulness and applied in a consistent, semantic manner. I think the 
> >> prior statement contains a hidden key.
> >>
> >> If so, then you may rely on the probability of your accepted version of 
> >> that thing. Further, you would need to ensure it remains correct and 
> >> complete within your particular system. How do you do that?
> >>
> >> Still, it would be easy to translate across boundaries as well.
> >>
> >> *One's shoe may be another's steak. That is the nature of true relativity 
> >> in motion.
> >>
> >> Rob
> >> ________________________________
> >> From: Jim Bromer via AGI <[email protected]>
> >> Sent: Friday, 12 October 2018 3:34 AM
> >> To: AGI
> >> Subject: Re: [agi] Compressed Algorithms that can work on compressed data.
> >>
> >> Matt said, "A string is random if there is no shorter description of
> >> the string."
> >>
> >> That is a conjecture, or a hypothesis.
> >>
> >> Matt said, "... but there is no general algorithm to distinguish them in
> >> any language. Encrypted data appears random if you don't know the key. But
> >> it is not random because it has a short description (compressed plaintext +
> >> key). Kolmogorov proved that there is no general algorithm to tell the
> >> difference."
> >>
> >> If there is no general algorithm to distinguish or detect them, then
> >> the hypothesis cannot be validated. You might present a string and
> >> declare it to be "random", but since you cannot prove that no shorter
> >> description of the string exists, and therefore that it is purely
> >> random, the conjecture cannot be sustained.
> >> Jim Bromer
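Jim's objection can be made concrete: a practical compressor only ever establishes an upper bound on a string's shortest description, so it can demonstrate non-randomness but never certify randomness. A small sketch of that asymmetry, with zlib standing in for "any compressor" and arbitrary thresholds:

```python
import os
import zlib

def apparent_randomness(data: bytes) -> float:
    """Ratio of zlib-compressed size to original size.
    A real compressor gives only an UPPER BOUND on description
    length: a low ratio proves non-randomness, but a ratio near
    (or above) 1.0 never proves randomness, since a shorter
    program might still exist (e.g. the output of a seeded RNG)."""
    return len(zlib.compress(data, 9)) / len(data)

structured = b"abcd" * 256   # obviously compressible
noise = os.urandom(1024)     # looks incompressible to zlib

print(apparent_randomness(structured) < 0.1)  # True: provably non-random
print(apparent_randomness(noise) > 0.9)       # True in practice, but proves nothing
```

This is exactly the one-sidedness Jim describes: the first result refutes randomness, while the second merely fails to refute it.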
> >> On Thu, Oct 11, 2018 at 1:37 PM Matt Mahoney via AGI
> >> <[email protected]> wrote:
> >> >
> >> > On Thu, Oct 11, 2018 at 12:38 PM John Rose <[email protected]> 
> >> > wrote:
> >> > > OK, what then is between a compression agent's perspective (or any 
> >> > > agent's, for that matter) and randomness? Including shades of randomness 
> >> > > up to relatively "pure" randomness.
> >> >
> >> > A string is random if there is no shorter description of the string.
> >> > Obviously this depends on which language you use to write
> >> > descriptions. Formally, a description is a program that outputs the
> >> > string. There are no "shades" of randomness. A string is random or
> >> > not, but there is no general algorithm to distinguish them in any
> >> > language. If there were, then AIXI and thus general intelligence would
> >> > be computable.
> >> >
> >> > > From an information theoretic (and thermodynamic) viewpoint in your 
> >> > > mind what happens when you see the symbol for infinity? 
> >> > > Semi-quantitatively describe the thought processes?
> >> >
> >> > The same thing that happens when you see any other symbols like "2" or
> >> > "+". Mathematics is the art of discovering rules for manipulating
> >> > symbols that help us make real world predictions.
> >> >
> >> > --
> >> > -- Matt Mahoney, [email protected]
> >
> >
> >
> > --
> > Andrés Leonardo Gómez Emilsson
> > Sentient Being (or Consciousness Narrative Stream, depending on how you 
> > want to look at it)

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T4629b4e0158d34e1-Me8a37f5d7d138948ec43600c
Delivery options: https://agi.topicbox.com/groups/agi/subscription
