But you are still missing a definition of qualia. Wikipedia has an
entry on it, and I am sure the SEP does as well. Because there are
reports of subjective experience, we know that we share something of
the nature of experience. Common sense tells us that computers do
not. But how do we know that computers do not share the nature of
conscious experience (Chalmers's hard problem of consciousness:
https://en.wikipedia.org/wiki/Hard_problem_of_consciousness)? It is
not an ontologically salient question for a group focused on
technology. So it is relevant to the philosophical issues of
intelligence, but once you grasp it you have to move on. It is not a
fruitful discussion unless you can derive something interesting from
it. There is no test for qualia because there is no explanation for
them. A profound mystery cannot be reduced to a contrived
technological test, nor simply dismissed. That kind of thinking is
not good science, and it is not good philosophy.
So John's attempt to craft a definition around compressing something
complicated so that it can be communicated might be the start of
something related to contemporary AI, but the claim that this
defines qualia is so naïve that it is not really relevant to the
subject of AGI.
When you have a profound mystery you have to create ways to examine
it. This is where it relates to AGI. How do you fit it in with other
knowledge? What observations do you have to work with? What theories
do you have that you can use to work with it? Can you measure it?
Are there indirect ways to measure it? During these initial stages
you have to expect that many of your initial ideas are going to be
wrong or poorly constructed. The major motivation then should not be
to salvage some initial primitive theories but to reshape them
completely. To test a hypothesis about a radical theory of a
profound mystery, you first have to create theories of how you might
conduct your experiment. If your initial theories lead you to enact
major redefinitions that change the subject of the theory, that is a
good sign that you are not ready to test the theory.
Jim
On Sat, Sep 22, 2018 at 8:11 AM John Rose <johnr...@polyplexic.com> wrote:
> > -----Original Message-----
> > From: Nanograte Knowledge Technologies via AGI <agi@agi.topicbox.com>
> >
> > That's according to John's definition thereof. The rest of us do not
> > necessarily agree with such a limited view. At this stage, it cannot
> > be absolutely stated what qualia is. For example, mine is a lot more
> > fuzzy and abstract in terms of autonomous, identifier signalling. And
> > that is but one view of many regarding a feature of biology, which I
> > contend could ultimately be transposed into a synthetically-framed
> > platform as its own, unique version.
> >
>
> "autonomous, identifier signaling"
>
> We are on a similar wavelength :) Compression is a big word. I've not talked 
> about consciousness topology and kernels yet...
>
>
> > One needs to define a term first, before trying to apply
> > it to the collective consciousness of AGI.
> >
> 
> I disagree. Many AGI researchers have two overwhelming biases:
> 
> One person is a general intelligence.
> One person is a general consciousness.
> 
> Both I believe are false.
> 
> Seeing the forest when you are a tree requires an outside view.
> 
> John
> 

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Maebd1c44e6464b2086fa7693
Delivery options: https://agi.topicbox.com/groups/agi/subscription
