The quantum woo theorem: Consciousness is mysterious. Quantum mechanics is mysterious. Therefore consciousness is quantum.
No, it isn't. The brain is not a quantum computer. We know this because it overwrites memory, and overwriting is not a unitary operation: you cannot reverse it to recover the prior state, while every quantum computation must be reversible. For the same reason, the neural networks used in LLMs cannot benefit from quantum computing.

We already have LLMs that pass the Turing test, meaning you cannot tell whether you are talking to a person or a machine. There is no definition or test of consciousness you can give me that doesn't make either both of them conscious or neither of them. I know this is a big concern for a lot of people, because evolution programmed us to fear dying. Does your soul transfer to an upload, or is it just a robot that convinces other people that it's you? If I present you with the robot, will you shoot yourself to complete the upload, or refuse a procedure that would make you immortal?

Anyone who has made actual progress toward AGI knows the question is meaningless. Why bother with consciousness when all you need is text prediction?

On Fri, Feb 21, 2025, 10:46 PM John Rose <johnr...@polyplexic.com> wrote:

> On Wednesday, February 19, 2025, at 11:26 PM, Matt Mahoney wrote:
>
> How do you study what you can't even define? Exactly what test are you
> using to distinguish a conscious human from a zombie LLM passing the
> Turing test by using nothing more than text prediction? Doesn't this
> prove there is no difference between having feelings and being
> programmed to act out feelings?
>
> Text is symbolic approximation in the classical sense. Your
> consciousness, your unique qualia, have a quantum basis, but to
> communicate your qualia you transmit classical text. Can a personalized
> LLM predict all strings that you could possibly ever output if you had
> near infinite text bandwidth? It can get close, but it's still an
> estimate of your qualia's complexity, which is an estimate of your
> qualia's quantum complexity, assuming the LLM is purely classical.
> If you have an LLM that operates off of a quantum field system, it
> might be able to get a closer estimate and perhaps somehow achieve an
> equivalent, IMO.
>
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/T72460285b911fa58-Mbe81426bb21eb5dffcc3c146>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T72460285b911fa58-M3b4296a7f636defe2c261f71
Delivery options: https://agi.topicbox.com/groups/agi/subscription
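P.S. The unitarity argument in my reply can be illustrated with a toy one-qubit model. This is a minimal sketch of my own (not anyone's actual code), assuming a qubit represented as a pair of amplitudes: a unitary gate like the Hadamard is exactly reversible, while a memory overwrite maps many inputs to one output, so no inverse can exist and it cannot be unitary.

```python
import math

def apply(matrix, state):
    """Multiply a 2x2 matrix by a 2-vector of qubit amplitudes."""
    return [matrix[0][0] * state[0] + matrix[0][1] * state[1],
            matrix[1][0] * state[0] + matrix[1][1] * state[1]]

h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]  # Hadamard gate: unitary, and its own inverse

psi = [0.6, 0.8]                 # an arbitrary normalized qubit state
back = apply(H, apply(H, psi))   # apply H, then apply H again
print([round(x, 10) for x in back])  # -> [0.6, 0.8]: state recovered exactly

def write_zero(state):
    """Classical memory overwrite: every input state is reset to |0>."""
    return [1.0, 0.0]

# Two different inputs produce the same output, so write_zero has no
# inverse and therefore cannot be implemented as a unitary operation.
print(write_zero([0.6, 0.8]) == write_zero([0.0, 1.0]))  # -> True
```

The point of the sketch: reversibility is the defining property of quantum evolution, and a destructive write is irreversible by construction.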