On Friday, June 12, 2020 at 8:22:25 PM UTC+2, Jason wrote:
>
> On Wed, Jun 10, 2020 at 5:55 PM PGC <[email protected]> wrote:
>
>> On Tuesday, June 9, 2020 at 7:08:30 PM UTC+2, Jason wrote:
>>>
>>> For the present discussion/question, I want to ignore the testable
>>> implications of computationalism on physical law, and instead focus on the
>>> following idea:
>>>
>>> "How can we know if a robot is conscious?"
>>>
>>> Let's say there are two brains, one biological and one an exact
>>> computational emulation, meaning exact functional equivalence. Then let's
>>> say we can exactly control sensory input and perfectly monitor motor
>>> control outputs between the two brains.
>>>
>>> Given that computationalism implies functional equivalence, then
>>> identical inputs yield identical internal behavior (nerve activations,
>>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>>> speech.
>>>
>>> If we stimulate nerves in the person's back to cause pain, and ask them
>>> both to describe the pain, both will speak identical sentences. Both will
>>> say it hurts when asked, and if asked to write a paragraph describing the
>>> pain, will provide identical accounts.
>>>
>>> Does the definition of functional equivalence mean that any scientific,
>>> objective, third-person analysis or test is doomed to fail to find any
>>> distinction in behaviors, and thus necessarily fails in its ability to
>>> disprove consciousness in the functionally equivalent robot mind?
>>>
>>> Is computationalism as far as science can go on a theory of mind before
>>> it reaches this testing roadblock?
>>
>> Every piece of writing is a theory of mind, both within western science
>> and beyond.
>>
>> What about the abilities to understand and use natural language, to come
>> up with new avenues for scientific or creative inquiry, to experience
>> qualia and report on them, to adapt to and deal with unexpected
>> circumstances through the senses, and to formulate and solve problems in
>> benevolent ways by contributing to the resilience of one's community and
>> environment?
>>
>> The trouble with this is that humans, even world leaders, fail those
>> tests, lol. But it's up to everybody, the AI and computer science folks
>> in particular, to come up with the math and data and complete their
>> mission... And as amazing as developments in AI have been over the last
>> couple of decades, I'm not certain we can pull it off, even if it would
>> be pleasant to be wrong and see some folks succeed.
>
> It's interesting you bring this up; I just wrote an article about the
> present capabilities of AI:
> https://alwaysasking.com/when-will-ai-take-over/
You're quite the optimist. In a geopolitical setting as chaotic and
disorganized as ours, it's plausible that we wouldn't be able to tell if it
happened. Strategically, with this many crazy apes, weapons, and ideologies,
with platonists in particular, the first step for a super-intelligent AI
would be to conceal its own existence; that way a lot of computational time
would be spared from having to read lists of apes making all kinds of
linguistic category errors... whining about whether abstractions are more
real than stuff, or whether stuff is what helps make abstractions possible,
or whether freezers are conscious, or whether worms should have healthcare,
or clinching the thought experiment that will just magically convince all
the people who we project to believe in some wrong stuff to believe in
abstractions...

My home-grown AI oracle says: Who cares? If believing in abstractions forces
the same colonial mindset of "who was the Columbus who discovered which
abstraction", with the names of the saints of abstractions, their
hierarchies, hagiographies, their gods, their bibles to which everybody has
to submit... it still counts as discourse that aims to control
interpretation. Control. And that's exactly what people with stuff have done
with words/weapons for thousands of years: some dude with the biggest
weapon, gun, ammunition, explanation, expertise, or ignorance measure wins
the control prize. Then they die, or the next dude kills them. The AI would
do right to weaponize that lust for control and pry it out of our hands with
offers we couldn't refuse. And our fellow human control freaks will keep
trying the same, eyeing wallets and data. People seem to enjoy the game of
robbing and getting robbed, perhaps because it's more motivating than the
TRUTH with big philosophical Hollywood lights.

>> Even if folks do succeed, a context of militarized nation states and
>> monopolistic corporations competing for resources in self-destructive,
>> short-term ways... will not exactly help towards NOT weaponizing AI. A
>> transnational politics, economics, corporate law, values/philosophies,
>> ethics, culture, etc. to vanquish poverty and the exploitation of people,
>> natural resources, and life, while being sustainable and benevolent
>> stewards of the possibilities of life... would seem to be a prerequisite
>> for developing some amazing AI.
>>
>> Ideas are all out there, but progressives are politically ineffective on
>> a global scale. The right-wing folks, finance guys, and large
>> irresponsible monopolistic corporations are much more effective at
>> organizing themselves globally and forcing agendas down everybody's
>> throats. So why wouldn't AI do the same? PGC
>
> AI will either be a blessing or a curse. I don't think it can be anything
> in the middle.

Would folks even be able to distinguish a blessing from a curse? I don't see
it. The blessing could be as insulting as having the list agree on stuff and
abstractions. For example, who cares who or where somebody is drinking
coffee after some alleged teleportation; the question is: do you prefer
coffee as an abstraction, or on the phenomenological table, with or without
alleged sugar? Answer: Y'all are a bunch of traitors towards the god of
coffee, and critics without aims or inspiration, believing in an AI that
would be sensible if it never came. lol

Do we even realize the price of a non-terminator AI with some minimal
specification, e.g. benevolence towards life? That it would have to be so
organized that all lifeforms get healthcare, real estate, universal income
in a hundred cryptocurrencies, bank accounts for all of them, a meaningful
job for psychological health, the pursuit of happiness, voting power, taxes,
and classical coffee without QM? All of them? Like the average unexceptional
cockroach having more stocks and a more elaborate sustainable investment
portfolio than us right now? While we're still stuck believing in things
like "nation states" and leaders, debating whether abstractions are more
real than stuff, at the mere collective discourse threshold of "Hey, some
people apparently don't want to be treated like trash and shot like
animals"?

It's a long road. Even in the most optimistic case, I doubt we could
collectively understand such an AI, if it could even exist. PGC

-- 
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/50a2d141-875a-4e51-86f1-1529de2183a4o%40googlegroups.com.

