Hi Matt,

I don't have time right now to enter into a full philosophical
discussion of all the points you raise. My view on these matters is
addressed in the materials you've already read, and I am well aware
you have a different perspective. I have seen over the years that
arguing with you on such things is pointless; you will retain the
same perspective no matter what I say, and the only way to convince
you my approach makes sense will be to actually produce the HLAGI.
All good... Matt's gonna Matt ;)

To answer a few of your concrete questions, though, for the sake of
others on this list:

-- There is a partial implementation of MOSES in Hyperon; it's not
quite done yet... a team in our Ethiopia office has been working on
it. There is also an initial prototype implementation of PLN
reasoning, implemented mostly by Nil Geisweiller...
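
For folks who haven't dug into MOSES: at its core it is an
evolutionary program-learning loop that maintains a population of
small candidate programs, scores them against data, and keeps
varying the best ones. Here is a deliberately minimal Python toy of
that loop (my illustration only; it omits MOSES's demes and
representation-building, and is not the actual Hyperon code):

import random

# Toy MOSES-flavored loop (illustration only): evolve the
# coefficients of a*x + b to fit a hidden target function.
def target(x):
    return 2 * x + 1

CASES = [(x, target(x)) for x in range(-5, 6)]

def random_program():
    return (random.randint(-3, 3), random.randint(-3, 3))

def score(prog):
    a, b = prog
    return -sum(abs((a * x + b) - y) for x, y in CASES)

def mutate(prog):
    a, b = prog
    if random.random() < 0.5:
        return (a + random.choice((-1, 1)), b)
    return (a, b + random.choice((-1, 1)))

population = [random_program() for _ in range(20)]
for generation in range(100):
    population.sort(key=score, reverse=True)
    if score(population[0]) == 0:
        break  # exact fit found
    elite = population[:5]  # keep the best candidates
    population = elite + [mutate(random.choice(elite))
                          for _ in range(15)]

print("best (a, b):", max(population, key=score))  # typically (2, 1)

The real MOSES does much subtler things with program representations,
but the shape of the loop is the same.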

-- Hardware-wise, SingularityNET has set aside around $60M USD for
building an AI server farm; we have ordered a bunch of the chips and
are waiting for delivery.... Our partners like Fetch.ai and CUDOS
have a bunch more; e.g., Fetch has put in at least $50M to build
their new Fetch Compute server farm. In addition we have tools like
Nunet which enable decentralized heterogeneous compute resources to
be pulled into a Hyperon network (effective for some AI algos, not
so much for others).

-- Of course this is way less than what Big Tech spends on
hardware, but we are exploring quite a different region of
algorithm/architecture space, and I believe our approach will be
much less wasteful of resources.

-- In terms of data, we have Common Crawl like everyone else, plus
data from our own robots (Hanson Robotics and Mind Children) and
lots of biology and finance data that we've curated... however, I
don't think that getting to HLAGI requires nearly as much data as
models like GPT-4 have ingested. These models operate too close to
the surface level and not enough via abstraction, so they need
ridiculous amounts of data... Doing more intelligent abstraction
greatly reduces data requirements...
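
To make the abstraction point concrete with a toy contrast (my own
illustration, nothing from our codebase): a surface-level memorizer
generalizes to nothing it hasn't seen, while a learner that induces
even a trivial symbolic rule covers the whole domain from three
examples.

# Hidden rule: y = 2x + 1. Three examples are all the data we get.
examples = [(1, 3), (2, 5), (4, 9)]

# Surface learner: memorize input/output pairs verbatim.
table = dict(examples)
print(table.get(10, "unknown"))  # -> "unknown"; no generalization

# Abstracting learner: induce slope and intercept from two
# examples, verify against the third, then apply everywhere.
(x0, y0), (x1, y1), (x2, y2) = examples
a = (y1 - y0) // (x1 - x0)
b = y0 - a * x0
assert a * x2 + b == y2  # the induced rule fits the third example
print(a * 10 + b)  # -> 21, correct far outside the training data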

-- Regarding GPUs, Hyperon more effectively leverages multicore CPU
parallelism than GPU parallelism at the moment. But if one runs
Hyperon together with DNNs, then one wants hardware that tightly
couples GPU and CPU. We have been in close discussions with certain
chip companies about leveraging their architectures, which involve
very high bandwidth between GPU and CPU sharing the same cache RAM.
There should be a pertinent press release shortly...
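
To give a rough flavor of why this kind of workload favors
multicore CPUs (again a toy Python sketch, not our actual code):
pattern matching over a sharded knowledge store is branchy,
pointer-chasing work that parallelizes naturally across cores but
maps poorly onto GPU SIMD lanes.

from multiprocessing import Pool

# Toy knowledge store: typed triples, split into shards so that
# each CPU core scans one shard for matches to a query pattern.
KB = ([("Inheritance", f"cat{i}", "animal") for i in range(1000)]
      + [("Similarity", f"cat{i}", f"dog{i}") for i in range(1000)])
SHARDS = [KB[i::4] for i in range(4)]

def match(shard):
    # Query pattern: (Inheritance, ?x, animal)
    return [t for t in shard
            if t[0] == "Inheritance" and t[2] == "animal"]

if __name__ == "__main__":
    with Pool(4) as pool:  # one worker per core
        hits = pool.map(match, SHARDS)
    print(sum(len(h) for h in hits))  # -> 1000 matches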

...

Anyway, I'm aware there's a variety of perspectives on this list,
which is as it should be. There are folks who think Big Tech is on
an inevitable path to create AGI with bigger and bigger DNNs, and
folks who think the only workable approaches to AGI will involve
analog computers, biological computers, quantum computers,
hypercomputers, etc.

For those who think Hyperon is a promising or at least intriguing
approach (a cognitive agent architecture with a back end comprising
a distributed/decentralized knowledge metagraph hosting various
interoperating inferential, evolutionary and neural algorithms...),
the grant opportunities I have linked may be of interest.
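
For anyone wondering what "knowledge metagraph" means concretely:
links can point at other links, not only at nodes, so annotations
and rules are themselves first-class knowledge items. A minimal toy
sketch (illustrative Python only, not the actual AtomSpace or MeTTa
API):

# Toy metagraph: an Atom is a node or a link, and a link's targets
# may themselves be links (which is what makes it a metagraph).
class Atom:
    def __init__(self, kind, name=None, targets=()):
        self.kind = kind
        self.name = name
        self.targets = tuple(targets)

    def __repr__(self):
        if self.name:
            return self.name
        inner = ", ".join(repr(t) for t in self.targets)
        return f"{self.kind}({inner})"

cat = Atom("Concept", "cat")
animal = Atom("Concept", "animal")
inh = Atom("Inheritance", targets=(cat, animal))
# A link about a link: annotate the inheritance relation itself.
belief = Atom("TruthValue", targets=(inh,))
print(belief)  # -> TruthValue(Inheritance(cat, animal))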

thx
ben

On Mon, Oct 21, 2024 at 10:06 AM Matt Mahoney <mattmahone...@gmail.com> wrote:
>
> I have some questions about Hyperon and your paper on how to improve LLM 
> performance. Have you or would you be able to implement MOSES or an LLM in 
> AtomSpace/MeTTa? Do you have a GPU implementation? Do you have any 
> applications or benchmark results? How much hardware do you have? How much 
> training data have you collected?
>
> I want any project I work on to succeed. My concerns are:
>
> 1. There won't be a hard takeoff because you can't compare human and machine 
> intelligence. There is no threshold where if humans can produce superhuman 
> intelligence, then so could it, but faster. Computers started surpassing 
> humans in the 1950's and will continue to improve for decades more before 
> humans become irrelevant.
>
> 2. Webmind/Novamente/OpenCog/Hyperon hasn't produced anything since 1998. I 
> recall the goal at one time was to produce AGI by 2013. How much closer are 
> you?
>
> 3. Evolutionary algorithms like MOSES are inherently slow because each 
> population doubling generation adds at most one bit of Kolmogorov complexity 
> (live or die) to the genome. Our genome is 10^9 bits after 10^9 generations. 
> Human evolution only succeeded because of massive computing power that 
> doesn't yet exist outside of the biosphere: 10^48 DNA base copy operations on 
> 10^37 bits, powered by 90,000 TW of solar power for 3 billion years. 
> Transistors would use a million times more energy, and we are still far from 
> developing energy efficient computing nanotechnology based on moving atoms 
> instead of electrons. Any ideas to speed this up?
>
> 4. It looks like from the size of your team and RFPs that you have 8 figures 
> to invest. The big tech companies are investing 12 figures. But I think right 
> now we are in an AI bubble. Investors are going to want a return on their 
> investment, namely the $100 trillion per year labor automation problem. But 
> LLMs are not taking our jobs because only a tiny fraction of the 10^17 bits 
> of human knowledge stored in 10^10 human brains (10^9 bits per person, 
> assuming 99% is shared knowledge) is written down for LLMs to train on. LLMs 
> aren't taking your job because the knowledge it needs is in your brain and 
> can only be extracted through years of speech and writing at 5 to 10 bits per 
> second. There is only about 10^13 bits of public data available to train the 
> largest LLMs. When people see that job automation is harder than we thought, 
> the AI bubble will pop and investment in risky, unproven technology like 
> Hyperon will dry up. AI isn't going away, just like the internet didn't go 
> away after the 2000 dotcom boom. But the hype will go. ChatGPT is 2 years old 
> and still mostly a toy to help kids write fan letters or cheat on homework. 
> In the real world, unemployment is down.
>
> On Fri, Oct 18, 2024, 11:45 AM Ben Goertzel <bengoert...@gmail.com> wrote:
>> 
>> Hey!
>> 
>> SingularityNET is offering some grants to folks who want to do
>> some AGI-oriented AI software development on specific projects
>> that are part of our thrust to make an AGI using the OpenCog
>> Hyperon architecture.
>> 
>> Please see here for the details
>> 
>> https://deepfunding.ai/all-rfps/
>> 
>> The projects mainly involve development in our new MeTTa AGI-oriented
>> language.   See here
>> 
>> https://metta-lang.dev/
>> 
>> for information on the MeTTa language itself, and links here
>> 
>> https://hyperon.opencog.org/
>> 
>> https://arxiv.org/abs/2310.18318
>> 
>> for general info on the Hyperon approach to AGI
>> 
>> thanks
>> Ben
>> 
>> --
>> -- Ben Goertzel, PhD
>> http://goertzel.org
>> CEO, SingularityNET / True AGI / ASI Alliance
>> Chair, AGI Society
>> 
>> "One must have chaos in one's heart to give birth to a dancing star"
>> -- Friedrich Nietzsche



-- 
-- Ben Goertzel, PhD
http://goertzel.org
CEO, SingularityNET / True AGI / ASI Alliance
Chair, AGI Society


"One must have chaos in one's heart to give birth to a dancing star"
-- Friedrich Nietzsche
