On Monday, October 21, 2024, at 10:06 AM, Matt Mahoney wrote:
> But LLMs are not taking our jobs because only a tiny fraction of the 10^17 
> bits of human knowledge stored in 10^10 human brains (10^9 bits per person, 
> assuming 99% is shared knowledge) is written down for LLMs to train on. LLMs 
> aren't taking your job because the knowledge they need is in your brain and 
> can only be extracted through years of speech and writing at 5 to 10 bits per 
> second. There are only about 10^13 bits of public data available to train the 
> largest LLMs. When people see that job automation is harder than we thought, 
> the AI bubble will pop and investment in risky, unproven technology like 
> Hyperon will dry up.
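For what it's worth, the arithmetic there checks out. Here's a quick
back-of-envelope version in Python, just a sketch using the figures quoted
above (I'm taking 10 bits/s, the upper end of the quoted extraction rate):

# All figures are from the quoted paragraph above.
brains            = 1e10   # human brains
bits_per_brain    = 1e9    # bits of knowledge per person
shared_fraction   = 0.99   # fraction assumed shared across people
public_train_bits = 1e13   # public data available to train the largest LLMs

# Unique (non-shared) human knowledge: ~10^17 bits
unique_bits = brains * bits_per_brain * (1 - shared_fraction)
print(f"unique human knowledge: {unique_bits:.0e} bits")

# Fraction of that knowledge written down for LLMs: ~10^-4
coverage = public_train_bits / unique_bits
print(f"fraction available as training data: {coverage:.0e}")

# Time to extract one person's knowledge at 10 bits/s: ~3 years
seconds_per_year = 3600 * 24 * 365
years_per_brain = bits_per_brain / 10 / seconds_per_year
print(f"years to extract one brain at 10 bits/s: {years_per_brain:.0f}")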
I'm just playing devil's advocate here, but is one workaround for this issue to 
develop an AI that is better than humans at one very specific task: researching 
and developing AI itself? At that point it's just a matter of time before the 
automated, AI-driven R&D search algorithm develops AGI. I can't estimate how 
long that would take (it could take decades), but at least the process would be 
on 'autopilot' from then on.
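To make the "search algorithm" idea a bit more concrete, here's a toy sketch of
what such an automated R&D loop could look like, reduced to random search over
model configurations. Everything here is hypothetical: evaluate() is a
stand-in for actually training and benchmarking a candidate system, and
"depth"/"width" are just placeholder knobs.

import random

def evaluate(config):
    # Hypothetical objective; a real loop would train the candidate and
    # score it on held-out tasks. This toy version just prefers a
    # particular depth/width combination.
    return -(config["depth"] - 24) ** 2 - (config["width"] - 4096) ** 2 / 1e4

def propose():
    # Propose a random candidate configuration.
    return {"depth": random.randint(4, 64),
            "width": random.choice([1024, 2048, 4096, 8192])}

best, best_score = None, float("-inf")
for step in range(1000):          # the "autopilot" loop
    candidate = propose()
    score = evaluate(candidate)
    if score > best_score:
        best, best_score = candidate, score
print(best, best_score)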
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3ced54aaba4f0969-M7f91757d0b9fbd42e1aed17e
Delivery options: https://agi.topicbox.com/groups/agi/subscription
