On 6/9/2020 7:48 PM, Stathis Papaioannou wrote:


On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List <[email protected]> wrote:



    On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:


    On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List
    <[email protected]> wrote:



        On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:


        On Wed, 10 Jun 2020 at 03:08, Jason Resch
        <[email protected]> wrote:

            For the present discussion/question, I want to ignore
            the testable implications of computationalism on
            physical law, and instead focus on the following idea:

            "How can we know if a robot is conscious?"

            Let's say there are two brains, one biological and one
            an exact computational emulation, meaning
            exact functional equivalence. Then let's say we can
            exactly control the sensory inputs and perfectly monitor
            the motor outputs of the two brains.

            Given that computationalism implies functional
            equivalence, identical inputs yield identical internal
            behavior (nerve activations, etc.) and identical outputs
            in terms of muscle movement, facial expressions, and
            speech.

            If we stimulate nerves in each subject's back to cause
            pain and ask both to describe it, both will speak
            identical sentences. Both will say it hurts when asked,
            and if asked to write a paragraph describing the pain,
            will provide identical accounts.

            Does the definition of functional equivalence mean that
            any objective, scientific, third-person analysis or test
            is doomed to fail to find any distinction in behavior,
            and thus necessarily fails to disprove consciousness in
            the functionally equivalent robot mind?

            Is computationalism as far as science can go on a theory
            of mind before it reaches this testing roadblock?


        We can’t know if a particular entity is conscious,

        If the term means anything, you can know one particular
        entity is conscious.


    Yes, I should have added that we can’t know that a particular
    entity other than oneself is conscious.

        but we can know that if it is conscious, then a functional
        equivalent, as you describe, is also conscious.

        So any entity functionally equivalent to yourself, you must
        know is conscious.  But "functionally equivalent" is vague,
        ambiguous, and certainly needs qualifying by environment and
        other factors.  Is a dolphin functionally equivalent to me?
        Not in swimming.


    Functional equivalence here means that you replace a part with a
    new part that behaves in the same way. So if you replaced the
    copper wires in a computer with silver wires, the silver wires
    would be functionally equivalent, and you would notice no change
    in using the computer. Copper and silver have different physical
    properties such as conductivity, but the replacement would be
    chosen so that this is not functionally relevant.

    But functional equivalence at that microscopic level is
    worthless in judging which entities are conscious.  The whole
    reason for bringing it up was that it was supposed to provide a
    criterion for recognizing consciousness at the entity level.


The thought experiment involves removing a part of the brain whose loss would normally result in an obvious deficit in qualia and replacing it with a non-biological component that replicates its interactions with the rest of the brain. Remove the visual cortex, and the subject becomes blind, staggering around, walking into things, saying "I'm blind, I can't see anything, why have you done this to me?" But if you replace it with an implant that processes input and sends output to the remaining neural tissue, the subject will have normal input to his leg muscles and his vocal cords, so he will be able to navigate his way around a room and will say "I can see everything normally, I feel just the same as before". This follows necessarily from the assumptions. But does it also follow that the subject will have normal visual qualia? If not, something very strange would be happening: he would be blind, but would behave normally, including in communicating that everything feels normal.

I understand the "Yes doctor" experiment.  But Jason was asking about being able to recognize consciousness from the function of the entity, and I think that is a different problem, one that needs to take into account the possibility of different kinds and degrees of consciousness.  The YD question makes it binary by equating consciousness with being exactly the same as pre-doctor.  Applying that to Jason's question, you would conclude that you cannot infer that other people are conscious because, while they are functionally equivalent in a loose sense, they are not exactly the same as you.  They don't give exactly the same answers to questions.  They may not even be able to see or hear things you do.

I think what you refer to as "very strange" is possible, given a little fuzziness about what counts as functionally identical.  Suppose his vision were replaced by some combination of sonar and radar.  In his answers he could be as close to you as a color-blind person is.


Brent
