On Tuesday, June 9, 2020 at 7:08:30 PM UTC+2, Jason wrote:
>
> For the present discussion/question, I want to ignore the testable 
> implications of computationalism on physical law, and instead focus on the 
> following idea:
>
> "How can we know if a robot is conscious?"
>
> Let's say there are two brains, one biological and one an exact 
> computational emulation, meaning exact functional equivalence. Then let's 
> say we can exactly control sensory input and perfectly monitor motor 
> control outputs between the two brains.
>
> Given that computationalism implies functional equivalence, identical 
> inputs yield identical internal behavior (nerve activations, etc.) and 
> identical outputs: muscle movements, facial expressions, and speech.
>
> If we stimulate nerves in the person's back to cause pain, and ask them 
> both to describe the pain, both will speak identical sentences. Both will 
> say it hurts when asked, and if asked to write a paragraph describing the 
> pain, will provide identical accounts.
>
> Does the definition of functional equivalence mean that any objective, 
> scientific third-person analysis or test is doomed to find no distinction 
> in behavior, and thus necessarily fails to disprove consciousness in the 
> functionally equivalent robot mind?
>
> Is computationalism as far as science can go on a theory of mind before it 
> reaches this testing roadblock?
>
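
On the testing roadblock itself: if "exact functional equivalence" is taken 
literally, the conclusion follows almost by definition, since any objective 
third-person test is itself just a function of observable behavior. A toy 
Python sketch of that point (purely illustrative; every name in it is 
hypothetical):

    # Toy illustration: a third-person test sees only observable behavior.
    # If two systems map identical inputs to identical outputs, any such
    # test must return identical verdicts for both.

    def third_person_test(system, stimuli):
        """Any objective test: a function of observable behavior alone."""
        return [system(s) for s in stimuli]

    # Two hypothetical agents with identical input -> output mappings.
    biological_brain = lambda s: f"It hurts: {s}"
    robot_emulation = lambda s: f"It hurts: {s}"

    stimuli = ["stimulate back nerves", "describe the pain in a paragraph"]

    # Identical behavior in, identical verdict out: no such test can
    # distinguish the two, let alone disprove consciousness in either.
    assert third_person_test(biological_brain, stimuli) == \
           third_person_test(robot_emulation, stimuli)

So yes, by construction, behavior alone cannot settle it; the roadblock is 
built into the definition.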

Every piece of writing is a theory of mind, both within Western science and 
beyond. 

What about the abilities to understand and use natural language, to come up 
with new avenues for scientific or creative inquiry, to experience qualia 
and report on them, to adapt to unexpected circumstances through the senses, 
and to formulate and solve problems in benevolent ways by contributing to 
the resilience of its community and environment? 

The trouble with this is that humans, even world leaders, fail those tests 
lol, but it's up to everybody, the AI and computer science folks in 
particular, to come up with the math and the data and see the mission 
through... and as amazing as developments in AI have been over the last 
couple of decades, I'm not certain we can pull it off, even if it would be 
pleasant to be proven wrong and watch some folks succeed. 

Even if folks do succeed, a context of militarized nation states and 
monopolistic corporations competing for resources in self-destructive, 
short-term ways... will not exactly help keep AI from being weaponized. A 
transnational politics, economics, corporate law, set of values/philosophies, 
ethics, culture, etc. that vanquishes poverty and the exploitation of 
people, natural resources, and life, while acting as a sustainable and 
benevolent steward of the possibilities of life... would seem to be a 
prerequisite for developing any amazing AI. 

The ideas are all out there, but progressives are politically ineffective 
on a global scale. Right-wing folks, finance guys, and large, irresponsible, 
monopolistic corporations are much more effective at organizing themselves 
globally and forcing their agendas down everybody's throats. So why wouldn't 
AI do the same? PGC


 

>
> Jason
>
