I have some questions about Hyperon and your paper on improving LLM
performance. Have you implemented, or could you implement, MOSES or an
LLM in AtomSpace/MeTTa? Do you have a GPU implementation? Do you have
any applications or benchmark results? How much hardware do you have?
How much training data have you collected?

I want any project I work on to succeed. My concerns are:

1. There won't be a hard takeoff, because human and machine
intelligence can't be compared on a single scale. There is no threshold
at which a machine that matches humans could suddenly do what we do,
only faster, and bootstrap superhuman intelligence. Computers started
surpassing humans at specific tasks in the 1950s and will continue to
improve for decades more before humans become irrelevant.

2. The Webmind/Novamente/OpenCog/Hyperon lineage hasn't produced
anything since 1998. I recall that the goal at one point was AGI by
2013. How much closer are you now?

3. Evolutionary algorithms like MOSES are inherently slow, because each
generation adds at most one bit of Kolmogorov complexity to the genome:
selection amounts to a single live-or-die decision per lineage. Our
genome holds about 10^9 bits after roughly 10^9 generations. Human
evolution only succeeded because of massive computing power that
doesn't yet exist outside the biosphere: about 10^48 DNA base copy
operations on 10^37 bits of stored DNA, powered by 90,000 TW of solar
power for 3 billion years. Transistors would use a million times more
energy per operation, and we are still far from energy-efficient
computing nanotechnology based on moving atoms instead of electrons.
Any ideas to speed this up?
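
To keep the arithmetic honest, here is a quick Python check. Every
input is a figure from this email (the Landauer limit is the one
standard constant I've added), so treat it as a sketch of the
estimates, not a measurement:

    # Rough check of the figures above. All inputs are this email's
    # estimates except the Landauer limit, k*T*ln(2) at 300 K.
    import math

    solar_power_w = 9e16               # 90,000 TW of solar input
    duration_s    = 3e9 * 3.156e7      # 3 billion years, in seconds
    copy_ops      = 1e48               # DNA base copy operations

    total_energy_j  = solar_power_w * duration_s    # ~8.5e33 J
    energy_per_op_j = total_energy_j / copy_ops     # ~8.5e-15 J per copy
    landauer_j      = 1.38e-23 * 300 * math.log(2)  # ~2.9e-21 J per bit

    print(f"energy per copy op: {energy_per_op_j:.1e} J")
    print(f"times Landauer:     {energy_per_op_j / landauer_j:.1e}")

That puts the biosphere's energy budget per copy operation a few
million times above the Landauer limit, and if silicon costs another
factor of a million per operation, matching that search electronically
looks hopeless on current hardware.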

4. Judging from the size of your team and your RFPs, you have 8 figures
to invest. The big tech companies are investing 12 figures. But I think
we are in an AI bubble right now. Investors are going to want a return
on their investment, namely the $100 trillion per year labor automation
problem. But LLMs are not taking our jobs, because only a tiny fraction
of the 10^17 bits of human knowledge stored in 10^10 human brains (10^9
bits per person, assuming 99% is shared knowledge) is written down for
LLMs to train on. LLMs aren't taking your job because the knowledge
they need is in your brain, and it can only be extracted through years
of speech and writing at 5 to 10 bits per second. There are only about
10^13 bits of public data available to train the largest LLMs. When
people see that job automation is harder than expected, the AI bubble
will pop and investment in risky, unproven technology like Hyperon will
dry up. AI isn't going away, just as the internet didn't go away after
the dotcom bust of 2000. But the hype will. ChatGPT is two years old
and still mostly a toy that helps kids write fan letters or cheat on
homework. In the real world, unemployment is down.
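
The same kind of sanity check on the data-bottleneck numbers; again,
every figure below is just this paragraph's estimate:

    # Arithmetic behind the data bottleneck above. All figures are the
    # estimates stated in this email, not measured values.
    bits_per_brain   = 1e9     # knowledge per person
    population       = 1e10    # human brains
    unique_fraction  = 0.01    # assuming 99% of knowledge is shared
    public_text_bits = 1e13    # public data for the largest LLMs
    output_bps       = 10      # upper end of 5-10 bits/s speech/writing

    unique_bits = bits_per_brain * population * unique_fraction  # 1e17
    coverage    = public_text_bits / unique_bits                 # 1e-4
    years_per_brain = bits_per_brain / output_bps / 3.156e7      # ~3.2

    print(f"unique human knowledge:  {unique_bits:.0e} bits")
    print(f"fraction written down:   {coverage:.0e}")
    print(f"years to dump one brain: {years_per_brain:.1f}")

On those estimates, public text covers about one part in ten thousand
of humanity's unique knowledge, and extracting even one brain's worth
at conversational rates takes years. That is why I expect labor
automation to pay off far more slowly than investors hope.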

On Fri, Oct 18, 2024, 11:45 AM Ben Goertzel <bengoert...@gmail.com> wrote:

> Hey!
> 
> SingularityNET is offering some grants to folks who want to do some
> AGI-oriented AI software development on specific projects that are
> part of our thrust to make an AGI using the OpenCog Hyperon
> architecture.
> 
> Please see here for the details
> 
> https://deepfunding.ai/all-rfps/
> 
> The projects mainly involve development in our new AGI-oriented
> language, MeTTa. See here
> 
> https://metta-lang.dev/
> 
> for information on the MeTTa language itself, and links here
> 
> https://hyperon.opencog.org/
> 
> https://arxiv.org/abs/2310.18318
> 
> for general info on the Hyperon approach to AGI
> 
> thanks
> Ben
> 
> --
> -- Ben Goertzel, PhD
> http://goertzel.org
> CEO, SingularityNET / True AGI / ASI Alliance
> Chair, AGI Society
> 
> "One must have chaos in one's heart to give birth to a dancing star"
> -- Friedrich Nietzsche
