On 10-06-2020 18:00, Jason Resch wrote:
On Wednesday, June 10, 2020, smitra <[email protected]> wrote:

On 09-06-2020 19:08, Jason Resch wrote:

For the present discussion/question, I want to ignore the testable
implications of computationalism on physical law, and instead
focus on
the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact
computational emulation, meaning exact functional equivalence.
Then
let's say we can exactly control sensory input and perfectly
monitor
motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then
identical inputs yield identical internal behavior (nerve
activations,
etc.) and outputs, in terms of muscle movement, facial
expressions,
and speech.

If we stimulate nerves in the person's back to cause pain, and ask
them both to describe the pain, both will speak identical
sentences.
Both will say it hurts when asked, and if asked to write a
paragraph
describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any
scientific
objective third-person analysis or test is doomed to fail to find
any
distinction in behaviors, and thus necessarily fails in its
ability to
disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind
before it reaches this testing roadblock?

I think it can be tested indirectly, because generic computational
theories of consciousness imply a multiverse. If my consciousness is
the result of a computation, then, because on the one hand any such
computation necessarily involves a vast number of elementary bits,
and on the other hand whatever I'm conscious of is describable using
only a handful of bits, the mapping between computational states and
states of consciousness is N to 1, where N is astronomically large.
So the laws of physics we already know about must be effective laws
into which the statistical effects due to a self-localization
uncertainty are already built.
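The counting behind "N is astronomically large" can be made concrete with a back-of-the-envelope sketch. The specific numbers below are illustrative assumptions, not figures from the discussion: suppose a brain-scale computation involves on the order of 10^15 bits of state, while the content of a moment of experience is describable with, say, 10^7 bits.

```python
# Illustrative sketch of the N-to-1 counting argument.
# Both bit counts are assumptions chosen only for scale.
n_bits = 10**15   # bits in the underlying computational state (assumed)
k_bits = 10**7    # bits sufficient to describe the conscious content (assumed)

# Each k-bit conscious state is then compatible with roughly
# 2**(n_bits - k_bits) distinct computational microstates,
# so the mapping from computational states to conscious states is N to 1.
log2_N = n_bits - k_bits
print(f"N ~ 2^{log2_N}")
```

Any such numbers give the same qualitative conclusion: as long as the computation uses vastly more bits than the experience it renders, N is astronomically large.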

Bruno has argued on this basis to motivate his theory, but it is a
generic feature of any theory that assumes a computational theory of
consciousness. In particular, a computational theory of consciousness
is incompatible with a single-universe theory. So, if you prove that
only a single universe exists, that disproves the computational
theory of consciousness. The details here involve the fact that
computations are not well defined with reference to a single instant
of time; you need to appeal at least to a sequence of states the
system goes through. Consciousness could then not be located at a
single instant, in conflict with our own experience. Therefore either
single-world theories are false or the computational theory of
consciousness is false.

Saibal

Hi Saibal,

I agree indirect mechanisms like looking at the resulting physics may
be the best way to test it. I was curious whether there are any
direct ways to test it. It seems not, given the lack of any direct
tests of consciousness.

Though most people admit other humans are conscious, many would reject
the idea of a conscious computer.

Computationalism seems right, but it also seems like something that by
definition can't result in a failed test. So it has the appearance of
not being falsifiable.

A single universe, or digital physics, would be evidence that either
computationalism is false or the ontology is sufficiently small, but a
finite/small ontology is doubtful for many reasons.

Jason


Yes, I agree that there is no hope for a direct test. Based on the finite information a conscious agent has, which is less than the amount of information contained in the system that renders the consciousness, a conscious agent should not be thought of as being located precisely in a state of some computer or a brain. Considering one particular implementation, like one particular computer running some algorithm, and then asking whether that thing is conscious, is perhaps not the right way to think about this. It seems to me that we need to consider consciousness the opposite way around.

If we start with some set of conscious states, then each element of that set has a subjective notion of its state, and that can contain information about being implemented by a computer or a brain. Also, on the question of continuity, where we ask whether we are the same persons as yesterday, we can address that by taking the set of all conscious states as fundamental. Every conscious experience, whether that's me typing this message or a T. rex 68 million years ago, is a different state of the same conscious entity.

The question then becomes whether there exists a conscious state corresponding to knowing that its brain is a computer.

Saibal


--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/43d8ead23a646563d241fb5eab6fe417%40zonnet.nl.
