ChatGPT o1 <https://chatgpt.com/share/677d8440-83ac-8007-aa9f-5f7a09823331> and Gemini 2.0 Experimental Advanced <https://docs.google.com/document/d/1cu_LcVjA_jtSX8zhU_Dr3RMWn-tAFiIWXZvpKXLYH5w/edit?usp=sharing> each trying to play the part of a "truth-seeking ASI" answering questions about ASI "safety" under the constraint of Wolpert's theorem.
Not quite the Dunning-Kruger effect, since both enter the exercise "understanding" that they are merely pretending to be ASIs.