On Wednesday, February 19, 2025, at 11:26 PM, Matt Mahoney wrote:
> How do you study what you can't even define? Exactly what test are you using
> to distinguish a conscious human from a zombie LLM passing the Turing test by
> using nothing more than text prediction? Doesn't this prove there is no
> difference between having feelings and being programmed to act out feelings?
Text is a symbolic approximation in the classical sense. Your consciousness, your unique qualia, have a quantum basis, but to communicate your qualia you transmit classical text.

Could a personalized LLM predict every string you could possibly ever output, given near-infinite text bandwidth? It can get close, but that prediction is still an estimate of your qualia's complexity, which is itself only an estimate of your qualia's quantum complexity, assuming the LLM is purely classical. An LLM built on a quantum field system might get a closer estimate, and perhaps somehow achieve an equivalent, IMO.
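To make the "prediction as complexity estimation" point concrete: any probabilistic model of your text assigns it a code length of -log2 p(text) bits, which upper-bounds its Kolmogorov complexity per symbol. A model that predicts you better compresses you further, but the bound never collapses to the true complexity. Below is a minimal sketch of that idea in Python. The bigram model is a toy stand-in for a personalized LLM, and the sample strings are hypothetical, not anyone's actual output.

# A minimal sketch: a probabilistic model of your text yields a lossless
# code length of -log2 p(text) bits, an upper bound on its complexity.
# The bigram model and sample strings are hypothetical stand-ins for a
# personalized LLM and a person's writing.
import math
from collections import defaultdict

def train_bigram(corpus):
    """Count character bigrams; a toy stand-in for a personalized LLM."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, cur in zip(corpus, corpus[1:]):
        counts[prev][cur] += 1
    return counts

def bits_per_char(model, text, alphabet_size=256):
    """Code length of `text` under the model, in bits per character.

    Add-one smoothing gives every string nonzero probability, so the
    result is always finite: a valid (if loose) complexity upper bound.
    """
    total_bits = 0.0
    for prev, cur in zip(text, text[1:]):
        row = model[prev]
        denom = sum(row.values()) + alphabet_size
        p = (row[cur] + 1) / denom
        total_bits += -math.log2(p)
    return total_bits / max(len(text) - 1, 1)

if __name__ == "__main__":
    your_past_text = "the quick brown fox jumps over the lazy dog " * 50
    model = train_bigram(your_past_text)
    # Familiar text compresses well; novel text costs many more bits.
    print(bits_per_char(model, "the quick brown fox jumps again"))
    print(bits_per_char(model, "qualia qua quantum flux zyxw"))

The gap between the two printed numbers is the point: text the model has "seen" compresses toward a low bits-per-character figure, while novel text exposes how incomplete the model's estimate of you is. A stronger model narrows the gap; on the classical-versus-quantum-complexity claim above, nothing in this sketch can settle it either way.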