Matt

One could easily argue that, since you cannot scientifically prove any of the 
absolute assertions you just made, you cannot be correct. But to what end? 
Let's unpack your perspective.

1) "Science doesn't explain everything." vs Science explains everything it 
explains.
2) => Philosophy explains why the universe exists vs Can you explain which 
universe within science you are referring to?
3) "It exists as a necessary condition for the question to exist." vs It only 
exists because humankind thought of it as an original thought.
4) "Science can explain...[X]" vs That's what we already stated and implied.
5) "Science explains that your brain runs a program." vs So does philosophy. 
Are they both equally correct, therefore philosophy = science?

Inter alia, you still did not explain much of anything, did you?

Rob
________________________________
From: Matt Mahoney via AGI <agi@agi.topicbox.com>
Sent: Sunday, 23 September 2018 4:02 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings... 
(Intelligence=math*consciousness^2 ?)

Science doesn't explain everything. It just tries to. It doesn't explain why 
the universe exists. Philosophy does. It exists as a necessary condition for 
the question to exist.

Chalmers is mystified by consciousness like most of us are. Science can explain 
why we are mystified. It explains why it seems like a hard problem. It explains 
why you keep asking the wrong question. Math explains why no program can model 
itself. Science explains that your brain runs a program.
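
A minimal sketch, in Python, of the diagonal argument behind "no program can 
model itself". The predictor models() here is a hypothetical assumption for 
illustration, not a real library call; the point is the contradiction:

    # Assume a complete self-model: models(program, inp) claims to return
    # whatever `program` outputs on input `inp`.
    def models(program, inp):
        raise NotImplementedError  # no total predictor like this can exist

    def contrary(inp):
        # Ask the model what we will output, then do the opposite.
        prediction = models(contrary, inp)
        return not prediction

    # If models() were correct, models(contrary, x) would have to equal
    # contrary(x), which is defined as `not models(contrary, x)`. That
    # contradiction is why no program can contain a complete model of itself.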

On Sat, Sep 22, 2018, 11:53 AM Nanograte Knowledge Technologies via AGI 
<agi@agi.topicbox.com<mailto:agi@agi.topicbox.com>> wrote:
Perhaps not explain, then, but we could access accepted theory on a topic via 
a sound method of integration, in order to construct enough of an 
understanding to at least be sure what we are talking about. Should we not at 
least be trying to do that?

Maybe it's a case of no one really being curious and passionate enough to take 
the time to do the semantic work. Or is it symptomatic of another problem? I 
think it's a very brave thing to talk publicly about a subject we all agree we 
seemingly know almost nothing about. Yet, we should at least try to do that as 
well.

Therefore, to explain is to know?

Rob
________________________________
From: Jim Bromer via AGI <agi@agi.topicbox.com>
Sent: Saturday, 22 September 2018 6:12 PM
To: AGI
Subject: Re: [agi] E=mc^2 Morphism Musings... 
(Intelligence=math*consciousness^2 ?)

The theory that contemporary science can explain everything requires a 
fundamental denial of history and a kind of denial about the limits of 
contemporary science. That sort of denial of common knowledge is ill-suited 
for adaptation. It will interfere with your ability to use the scientific 
method.
Jim Bromer


On Sat, Sep 22, 2018 at 11:42 AM Jim Bromer 
<jimbro...@gmail.com<mailto:jimbro...@gmail.com>> wrote:
Qualia is what perceptions feel like, feelings are computable, and they 
condition us to believe there is something magical and mysterious about it? 
This is science fiction. So science has already explained Chalmers's Hard 
Problem of Consciousness. He just got it wrong? Is that what you are saying?
Jim Bromer


On Sat, Sep 22, 2018 at 11:07 AM Matt Mahoney via AGI 
<agi@agi.topicbox.com<mailto:agi@agi.topicbox.com>> wrote:
I was applying John's definition of qualia, not agreeing with it. My 
definition is that qualia is what perception feels like. Perception and 
feelings are both computable. But the feelings condition you to believe there 
is something magical and mysterious about it.

On Sat, Sep 22, 2018, 8:44 AM Jim Bromer via AGI 
<agi@agi.topicbox.com<mailto:agi@agi.topicbox.com>> wrote:
There is a distinction between the qualia of human experience and the 
consciousness of what the mind is presenting. If you deny that you have the 
kind of experience that Chalmers talks about, then there is a question of why 
you are denying it. So your remarks are relevant to AGI, but not in the way 
you are talking about them. If Matt says qualia is not real, then he is saying 
that it is imaginary, because I am pretty sure that he experiences things in 
ways similar to Chalmers and a lot of other people I have talked to. There are 
people who have claimed that I would not be able to create an artificial 
imagination. That is nonsense. An artificial imagination is easy; the 
complexity of doing it well is not. That does not mean, however, that the hard 
problem of consciousness is just complexity. One is doable in computer 
programming, in spite of any skepticism; the other is not.
Jim Bromer


On Sat, Sep 22, 2018 at 10:03 AM John Rose 
<johnr...@polyplexic.com<mailto:johnr...@polyplexic.com>> wrote:
> -----Original Message-----
> From: Jim Bromer via AGI <agi@agi.topicbox.com<mailto:agi@agi.topicbox.com>>
>
> So John's attempt to create a definition of compression of something
> complicated so that it can be communicated might be the start of the
> development of something related to contemporary AI but the attempt to
> claim that it defines qualia is so naïve that it is not really relevant to 
> the subject
> of AGI.


Jim,

Engineers make a box with lines on it and label it "magic goes here".

Algorithmic information theory does the same and calls it compression.

The word "subjective" is essentially the same thing.

AGI requires sensory input.

Human beings are compression engines... each being unique (a toy sketch 
follows the questions below).

The system of human beings is a general intelligence.

How do people communicate?

What are the components of AGI and how will they sense and communicate?
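
A toy illustration of the compression claim, using nothing beyond Python's 
standard zlib: a compressor "understands" its input exactly to the extent 
that it finds structure in it, which is the algorithmic-information-theory 
sense meant above.

    # Sketch: compressed size as a proxy for captured structure.
    import os
    import zlib

    structured = b"the cat sat on the mat. " * 100  # lawful, predictable
    noise = os.urandom(len(structured))             # patternless bytes

    print(len(structured), len(zlib.compress(structured)))  # shrinks a lot
    print(len(noise), len(zlib.compress(noise)))            # barely shrinks

Two compressors with different models shrink the same stream by different 
amounts, which is one reading of "each being unique".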

The guys that came up with the term "qualia" left it as a mystery, so it's 
getting hijacked 😊 This is extremely common with scientific terminology.

BTW, I'm not trying to define it universally, just as something 
computationally useful in AGI R&D... but it's starting to seem like it's 
central. Thus IIT? Tononi or Tononi-like? Not sure... have to do some reading, 
but I'd prefer coming up with something simple and practical without getting 
too ethereal. TGD consciousness is pretty interesting, though, and a good 
exercise in some mind-bending reading.

John





Artificial General Intelligence List<https://agi.topicbox.com/latest> / AGI / 
see discussions<https://agi.topicbox.com/groups/agi> + 
participants<https://agi.topicbox.com/groups/agi/members> + delivery 
options<https://agi.topicbox.com/groups/agi/subscription> 
Permalink<https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Md397dae74fe5d9429c20ff8f>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-M75fa1c2d76faee8ee1d7ee1e
Delivery options: https://agi.topicbox.com/groups/agi/subscription
