On Thu, 11 Jun 2020 at 01:50, Jason Resch <[email protected]> wrote:

>
>
> On Tuesday, June 9, 2020, Stathis Papaioannou <[email protected]> wrote:
>
>>
>>
>> On Wed, 10 Jun 2020 at 13:25, 'Brent Meeker' via Everything List <
>> [email protected]> wrote:
>>
>>>
>>>
>>> On 6/9/2020 7:48 PM, Stathis Papaioannou wrote:
>>>
>>>
>>>
>>> On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List <
>>> [email protected]> wrote:
>>>
>>>>
>>>>
>>>> On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:
>>>>
>>>>
>>>>
>>>> On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <
>>>> [email protected]> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
>>>>>
>>>>>
>>>>>
>>>>> On Wed, 10 Jun 2020 at 03:08, Jason Resch <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> For the present discussion/question, I want to ignore the testable
>>>>>> implications of computationalism on physical law, and instead focus on 
>>>>>> the
>>>>>> following idea:
>>>>>>
>>>>>> "How can we know if a robot is conscious?"
>>>>>>
>>>>>> Let's say there are two brains, one biological and one an exact
>>>>>> computational emulation, meaning exact functional equivalence. Then let's
>>>>>> say we can exactly control sensory input and perfectly monitor motor
>>>>>> control outputs between the two brains.
>>>>>>
>>>>>> Given that computationalism implies functional equivalence, identical
>>>>>> inputs yield identical internal behavior (nerve activations, etc.) and
>>>>>> identical outputs, in terms of muscle movements, facial expressions, and
>>>>>> speech.
>>>>>>
>>>>>> If we stimulate nerves in each subject's back to cause pain, and ask
>>>>>> them both to describe the pain, both will speak identical sentences. Both
>>>>>> will say it hurts when asked, and if asked to write a paragraph
>>>>>> describing the pain, both will provide identical accounts.
>>>>>>
>>>>>> Does the definition of functional equivalence mean that any
>>>>>> scientific objective third-person analysis or test is doomed to fail to
>>>>>> find any distinction in behaviors, and thus necessarily fails in its
>>>>>> ability to disprove consciousness in the functionally equivalent robot 
>>>>>> mind?
>>>>>>
>>>>>> Is computationalism as far as science can go on a theory of mind
>>>>>> before it reaches this testing roadblock?
>>>>>>
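To make the black-box point concrete, here is a toy sketch in Python
(purely illustrative; the brain functions and the test are hypothetical
stand-ins): functional equivalence means the two systems realise the same
input/output mapping, so a third-person test, which can only feed inputs
and observe outputs, must return the same result for both.

    from typing import Callable

    def biological_brain(stimulus: str) -> str:
        # stand-in for the biological brain's input/output mapping
        return "That hurts!" if stimulus == "back_pain" else "No sensation."

    def emulated_brain(stimulus: str) -> str:
        # exact functional emulation: the same mapping by construction
        return "That hurts!" if stimulus == "back_pain" else "No sensation."

    def behavioural_test(brain: Callable[[str], str]) -> list:
        # a third-person test can only feed inputs and record outputs
        return [brain(s) for s in ("back_pain", "light_touch")]

    # Equal by definition of functional equivalence, so no such test can
    # distinguish the two, let alone detect absent qualia.
    assert behavioural_test(biological_brain) == behavioural_test(emulated_brain)
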
>>>>>
>>>>> We can’t know if a particular entity is conscious,
>>>>>
>>>>>
>>>>> If the term means anything, you can know one particular entity is
>>>>> conscious.
>>>>>
>>>>
>>>> Yes, I should have added that we can’t know that a particular entity
>>>> other than oneself is conscious.
>>>>
>>>>> but we can know that if it is conscious, then a functional equivalent,
>>>>> as you describe, is also conscious.
>>>>>
>>>>>
>>>>> So any entity functionally equivalent to yourself, you must know is
>>>>> conscious.  But "functionally equivalent" is vague, ambiguous, and
>>>>> certainly needs qualifying by environment and other factors.  Is a dolphin
>>>>> functionally equivalent to me?  Not in swimming.
>>>>>
>>>>
>>>> Functional equivalence here means that you replace a part with a new
>>>> part that behaves in the same way. So if you replaced the copper wires in a
>>>> computer with silver wires, the silver wires would be functionally
>>>> equivalent, and you would notice no change in using the computer. Copper
>>>> and silver have different physical properties such as conductivity, but the
>>>> replacement would be chosen so that this is not functionally relevant.
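As an aside, the same point can be put in software terms with a toy
sketch of my own (the classes are hypothetical, not any real system):
swap an internal component for one with different internal properties
but identical interface behaviour, and nothing observable at the level
of use changes.

    class CopperWire:
        resistance_ohms = 0.017  # a physical property that differs

        def carry(self, signal: int) -> int:
            return signal  # what the rest of the system sees

    class SilverWire:
        resistance_ohms = 0.016  # a physical property that differs

        def carry(self, signal: int) -> int:
            return signal  # identical interface behaviour

    class Computer:
        def __init__(self, wire):
            self.wire = wire

        def run(self, data: int) -> int:
            return self.wire.carry(data) + 1

    # The replacement is invisible at the level of use:
    assert Computer(CopperWire()).run(41) == Computer(SilverWire()).run(41)
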
>>>>
>>>>
>>>> But that functional equivalence at a microscopic level is worthless in
>>>> judging what entities are conscious.  The whole reason for bringing it up
>>>> is that it provides a criterion for recognizing consciousness at the
>>>> entity level.
>>>>
>>>
>>> The thought experiment involves removing a part of the brain that would
>>> normally result in an obvious deficit in qualia and replacing it with a
>>> non-biological component that replicates its interactions with the rest of
>>> the brain. Remove the visual cortex, and the subject becomes blind,
>>> staggering around, walking into things, saying "I'm blind, I can't see
>>> anything, why have you done this to me?" But if you replace it with an
>>> implant that processes input and sends output to the remaining neural
>>> tissue, the subject will have normal input to his leg muscles and his vocal
>>> cords, so he will be able to navigate his way around a room and will say "I
>>> can see everything normally, I feel just the same as before". This follows
>>> necessarily from the assumptions. But does it also follow that the subject
>>> will have normal visual qualia? If not, something very strange would be
>>> happening: he would be blind, but would behave normally, including his
>>> behaviour in communicating that everything feels normal.
>>>
>>>
>>> I understand the "Yes doctor" experiment.  But Jason was asking about
>>> being able to recognize consciousness by function of the entity, and I
>>> think that is a different problem that needs to take into account the
>>> possibility of different kinds and degrees of consciousness.  The YD
>>> question makes it binary by equating consciousness with being exactly the
>>> same as pre-doctor.  Applying that to Jason's question you would conclude
>>> that you cannot infer that other people are conscious because, while they
>>> are functionally equivalent in a loose sense, they are not exactly the same
>>> as you.  They don't give exactly the same answers to questions.  They may
>>> not even be able to see or hear things you do.
>>>
>>
>> My answer to Jason's question was that it is not possible to know that
>> another entity is conscious, but it is possible to know that if it is
>> conscious, replicating its behaviour would replicate its consciousness.
>>
>
> I think this is right if you add the following assumptions:
> 1. Fading qualia are impossible
> 2. Suddenly disappearing qualia are impossible
>

Not logically impossible, but absurd. Though it is hard to pin down
absurdity.

> Otherwise I think rather than say "it is possible to know if it is
> conscious", we need to amend it to "it is impossible to disprove that it
> is conscious".
>
> Though perhaps there's an argument to be made from the Church-Turing
> thesis, which pertains to the possible states of knowledge accessible to a
> computer program/software. If consciousness is viewed as software, then the
> Church-Turing thesis implies that software could never know/realize if its
> ultimate computing substrate changed.
>
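The substrate point can be illustrated with a toy sketch (mine, and only
suggestive): the same pure function run natively and run under a
hand-written interpreter produces identical results, and nothing inside
the function can reveal which substrate executed it.

    def report(pain_level: int) -> str:
        # the "software": a pure function of its inputs alone
        return "it hurts" if pain_level > 5 else "it's fine"

    # Substrate 1: native execution
    native = report(7)

    # Substrate 2: a trivial interpreter for the same logic
    program = {"threshold": 5, "if_true": "it hurts", "if_false": "it's fine"}

    def interpret(prog: dict, pain_level: int) -> str:
        if pain_level > prog["threshold"]:
            return prog["if_true"]
        return prog["if_false"]

    emulated = interpret(program, 7)

    # Identical outputs; the computation has no access to whether it ran
    # natively or under the interpreter.
    assert native == emulated
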
> But this is assuming the thing we're trying to prove, so I'm not sure it
> helps establish computationalism definitively.
>
> Jason
>
>
>>
>>
>>> I think what you refer to as "very strange" is possible given a little
>>> fuzziness about being functionally identical.  Suppose his vision was
>>> replaced by some combination of sonar and radar.  He could be as close to
>>> you as a color blind person in his answers.
>>>
>>
>> If the subject suddenly became colour blind or his vision were replaced
>> by a combination of sonar and radar, while he may be able to navigate his
>> way around normally, there would be a test that could distinguish the
>> change, like trying to pick out a number in a coloured pattern, or simply
>> asking him if he feels the same. Otherwise, in what sense is it meaningful
>> to say there has been a change in qualia?
>>
>>
>> --
>> Stathis Papaioannou
>>
-- 
Stathis Papaioannou
