On 6/11/2020 9:03 AM, Bruno Marchal wrote:
On 9 Jun 2020, at 19:08, Jason Resch <[email protected]> wrote:
For the present discussion/question, I want to ignore the testable
implications of computationalism on physical law, and instead focus
on the following idea:
“How can we know if a robot is conscious?”
That question is very different from “is
functionalism/computationalism unfalsifiable?”.
Note that in my older paper, I relate computationalism to Putnam's
ambiguous functionalism, defining computationalism as the assertion
that there exists a level of description of my body/brain such that I
survive (my consciousness remains relatively invariant) when a digital
machine (supposedly physically implemented) replaces my body/brain.
Let's say there are two brains, one biological and one an exact
computational emulation, meaning exact functional equivalence.
I guess you mean “for all possible inputs”.
Then let's say we can exactly control sensory input and perfectly
monitor motor control outputs between the two brains.
Given that computationalism implies functional equivalence, identical
inputs yield identical internal behavior (nerve activations, etc.) and
identical outputs, in terms of muscle movements, facial expressions,
and speech.
If we stimulate nerves in the person's back to cause pain, and ask
them both to describe the pain, both will speak identical sentences.
Both will say it hurts when asked, and if asked to write a paragraph
describing the pain, will provide identical accounts.
Does the definition of functional equivalence mean that any objective,
third-person scientific analysis or test is doomed to find no
distinction in behavior, and thus necessarily fails to disprove
consciousness in the functionally equivalent robot mind?
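The futility of any such behavioral test can be sketched in a toy program (purely illustrative, not from the thread: the two "brains", the stimuli, and the responses are hypothetical stand-ins, not a model of any real system):

```python
# Toy illustration of functional equivalence: two systems with
# different internal structure but the same input/output mapping.

def brain_biological(stimulus: str) -> str:
    # Hypothetical implementation: a lookup table.
    responses = {"pain": "It hurts.", "greeting": "Hello."}
    return responses.get(stimulus, "...")

def brain_emulated(stimulus: str) -> str:
    # Different internal mechanism, same behavior for every input.
    if stimulus == "pain":
        return "It hurts."
    if stimulus == "greeting":
        return "Hello."
    return "..."

def behavioral_test(a, b, stimuli) -> bool:
    """Return True iff no stimulus in the list distinguishes a from b."""
    return all(a(s) == b(s) for s in stimuli)

stimuli = ["pain", "greeting", "unknown"]
print(behavioral_test(brain_biological, brain_emulated, stimuli))  # True
```

Because the two implementations differ internally yet realize the same input/output mapping, no third-person probe of this kind can separate them; that is the point of the thought experiment.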
With computationalism (and perhaps without it), we cannot prove that
anything is conscious: we can know our own consciousness, but still
cannot justify it in any public or third-person communicable way.
Is computationalism as far as science can go on a theory of mind
before it reaches this testing roadblock?
Computationalism is indirectly testable. By verifying the physics
implied by the theory of consciousness, we verify it indirectly.
As you know, I define consciousness by that indubitable truth that
every universal machine, cognitively rich enough to know that it is
universal, finds by looking inward (in the Gödel-Kleene sense), and
which is also non-provable (not rationally justifiable) and even
non-definable without invoking *some* notion of truth. Such
consciousness then appears to be a fixed point of the doubting
procedure, as in Descartes, and it gets a key role: self-speeding-up
relative to the universal machine(s).
So, it seems so clear to me that nobody can prove that anything is
conscious that I make this one of the main ways to characterise it.
Of course as a logician you tend to use "proof" to mean deductive
proof...but then you switch to a theological attitude toward the
premises you've used and treat them as given truths, instead of mere
axioms. I appreciate your categorization of logics of self-reference.
But I doubt that it has anything to do with human (or animal)
consciousness. I don't think my dog is unconscious because he doesn't
understand Gödelian incompleteness. And I'm not conscious because I
do. I'm conscious because of the Darwinian utility of being able to
imagine myself in hypothetical situations.
Consciousness is already very similar to consistency, which is (for
effective theories and sound machines) equivalent to a belief in some
reality. No machine can prove its own consistency, and no machine can
prove that there is a reality satisfying its beliefs.
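The appeal to consistency here rests on Gödel's second incompleteness theorem, which can be stated informally as follows (my gloss, not a line from the thread):

```latex
% Gödel's second incompleteness theorem (informal statement).
% For any consistent, effectively axiomatized theory T that
% interprets elementary arithmetic:
T \nvdash \mathrm{Con}(T)
% where Con(T) is the arithmetized sentence asserting that no
% contradiction is derivable in T.
```

So a machine whose beliefs form such a theory cannot establish, from within, even the consistency of those beliefs, let alone the existence of a reality satisfying them.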
First, I can't prove it because such a proof would be relative to
premises which would simply be my beliefs. Second, I can prove it in
the sense of jurisprudence, i.e. beyond reasonable doubt. Science
doesn't care about "proofs", only about evidence.
Brent
In any case, it is never the machine per se that is conscious, but
the first person associated with the machine. There is a core
universal person common to each of “us” (with “us” taken in a very
large sense of universal numbers/machines).
Consciousness is not much more than knowledge, and in particular
indubitable knowledge.
Bruno
Jason
--
You received this message because you are subscribed to the Google
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it,
send an email to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUhpWiuoSoOyeW2DS3%2BqEaahequxkDcGK-bF2qjgiuqrAg%40mail.gmail.com.