On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:


On Wed, 10 Jun 2020 at 09:15, Jason Resch <[email protected]> wrote:



    On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou
    <[email protected]> wrote:



        On Wed, 10 Jun 2020 at 03:08, Jason Resch
        <[email protected]> wrote:

            For the present discussion/question, I want to ignore the
            testable implications of computationalism on physical law,
            and instead focus on the following idea:

            "How can we know if a robot is conscious?"

            Let's say there are two brains, one biological and one an
            exact computational emulation, meaning exact functional
            equivalence. Then let's say we can exactly control the
            sensory inputs and perfectly monitor the motor outputs of
            both brains.

            Given that computationalism implies functional
            equivalence, identical inputs yield identical internal
            behavior (nerve activations, etc.) and identical outputs:
            muscle movements, facial expressions, and speech.

            If we stimulate nerves in each subject's back to cause
            pain and ask both to describe it, both will speak
            identical sentences. Both will say it hurts when asked,
            and if asked to write a paragraph describing the pain,
            both will provide identical accounts.
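
            As a rough sketch of how such a test might look (the
            Brain interface and its step() method here are
            hypothetical stand-ins for the whole
            stimulation-and-monitoring apparatus, and the outputs
            stand for speech, movement, facial expression, and so
            on):

            # Feed the same stimulus sequence to both systems and
            # look for any divergence in their observable outputs.
            def behaviorally_distinguishable(brain_a, brain_b, stimuli):
                for stimulus in stimuli:
                    out_a = brain_a.step(stimulus)
                    out_b = brain_b.step(stimulus)
                    if out_a != out_b:
                        return True   # a third-person difference exists
                return False  # this stimulus set cannot tell them apart

            If functional equivalence holds by definition, this
            function can only ever return False, whatever stimuli we
            choose.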

            Does the definition of functional equivalence mean that
            any objective, third-person scientific analysis or test
            is doomed to find no distinction in behavior, and thus
            necessarily fails to disprove consciousness in the
            functionally equivalent robot mind?

            Is computationalism as far as science can go on a theory
            of mind before it reaches this testing roadblock?


        We can’t know if a particular entity is conscious, but we can
        know that if it is conscious, then a functional equivalent, as
        you describe, is also conscious. This is the subject of David
        Chalmers’ paper:

        http://consc.net/papers/qualia.html


    Chalmers' argument is that if the functionally equivalent brain
    is not conscious, then somewhere along the way we get either
    suddenly disappearing or fading qualia, which I agree are
    philosophically distasteful.

    But what if someone is fine with philosophical zombies and
    suddenly disappearing qualia? Is there any impossibility proof for
    such things?


Philosophical zombies are less problematic than partial philosophical zombies. Partial philosophical zombies would render the idea of qualia absurd, because it would mean that we might be completely blind, for example, without realising it.

Isn't this what blindsight exemplifies?

As an absolute minimum, although we may not be able to test for or define qualia, we should know if we have them. Take this requirement away, and there is nothing left.

Suddenly disappearing qualia are logically possible, but it is difficult to imagine how they could work. We would be normally conscious while our neurons were being replaced, but when one special glutamate receptor in a special neuron in the left parietal lobe was replaced, or when exactly 35.54876% of all neurons had been replaced, the internal lights would suddenly go out.
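
As a toy illustration (the replace_neuron() call and the behavioural check below are hypothetical; nothing in the model refers to consciousness at all):

# Replace neurons one at a time with functional equivalents. By
# hypothesis every intermediate brain maps the same inputs to the
# same outputs, so no step produces any observable change; the
# supposed "lights out" moment would be invisible even in the
# subject's own reports.
def gradual_replacement(brain, total_neurons):
    for i in range(total_neurons):
        brain.replace_neuron(i)  # functionally identical swap
        percent_replaced = 100 * (i + 1) / total_neurons
        # Holds at every percentage; there is no principled point
        # (35.54876% or any other) for the lights to go out:
        assert brain.same_input_output_behaviour_as_original(), percent_replaced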

I think this all-or-nothing picture is misconceived. It's not internal cognition that might vanish suddenly, it's some specific aspect of experience. There are people who, through brain injury, lose the ability to recognize faces; recognition is a quale. Of course, people's frequency range of hearing fades (don't ask me how I know). My mother, when she was 95, lost color vision in one eye but not the other. Some people, it seems, cannot do higher mathematics. So how would you know if you lost the quale of empathy, for example? Could it not just fade, i.e. become evoked less and less?

Brent

--
Stathis Papaioannou

