A short paper on a concept I'm developing called The Computronium Abyss:
https://github.com/dissipate/computronium_abyss
Roast it.
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T86ee7f7b146
I set up a Google Meet Hangout that is up 24/7. I'll be in there from time to
time.
AGI Group Hangout 24/7
Google Meet joining info
Video call link: https://meet.google.com/rxt-wypm-jxn
On Monday, October 21, 2024, at 10:06 AM, Matt Mahoney wrote:
> But LLMs are not taking our jobs because only a tiny fraction of the 10^17
> bits of human knowledge stored in 10^10 human brains (10^9 bits per person,
> assuming 99% is shared knowledge) is written down for LLMs to train on. LLMs
On Wednesday, October 16, 2024, at 2:49 PM, Matt Mahoney wrote:
> Your first equation looks like the Bekenstein bound of a black hole with mass
> M. It gives the entropy as A/4 nats (1 nat = 1/ln 2 ≈ 1.44 bits) where A is
> the area of the event horizon in Planck units. The Schwarzschild radius o
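The quoted formula can be sketched numerically. This is my own illustration, not part of the original mail: constants are rounded CODATA values, and one solar mass is an arbitrary choice for M.

```python
import math

# Bekenstein-Hawking entropy of a Schwarzschild black hole, per the formula
# quoted above: S = A/4 in Planck units (nats), with A the horizon area.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
l_p2 = hbar * G / c**3          # Planck length squared, m^2

M = 1.989e30                    # one solar mass (illustrative), kg
r_s = 2 * G * M / c**2          # Schwarzschild radius, m (~3 km)
A = 4 * math.pi * r_s**2        # event horizon area, m^2

S_nats = A / (4 * l_p2)         # entropy in nats
S_bits = S_nats / math.log(2)   # 1 nat = 1/ln 2 ≈ 1.44 bits
print(f"r_s = {r_s:.0f} m, S = {S_bits:.2e} bits")
```

For a solar-mass black hole this lands around 10^77 bits, dwarfing the 10^17-bit human-knowledge estimate discussed earlier in the thread.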
I'm not sure that it's an all-or-nothing situation in terms of economic output.
If you could create an AI system that could do the job of any human with an IQ
of 100 or less, that's half the population (though not necessarily half the
economic output) out of a job. If Ben's project could produce a
My apologies; I recognize that Matt's benchmark for AGI is the common one.
I had just been reading about other models of AGI in which the first AGI entity
is only about as intelligent as the average person, rather than better than
every human at every task. In this model, the AGI entity
BTW, I think maybe Matt is talking about Artificial Super Intelligence, which
would encompass all human abilities/knowledge, whereas I'm thinking about
Artificial General Intelligence, the intelligence/ability of an average human.
AGI as a concept is as valid applied to a caveman as it is t
On Tuesday, October 22, 2024, at 8:15 AM, stefan.reich.maker.of.eye wrote:
> Just make a program that solves the hardest problem of all - presto, you
> solved the hardest problem of all.
It might be the 'hardest problem of all', but if researching and developing AI
is a specialized task, it could
Pick the Bit and Competitive Computing Platform - Towards a New Benchmark for
AGI System Performance
2024-12-07 Version 0.1.0
Steven W. Kane
1. The Pick the Bit Game
*1.1 Game Overview*
Pick the Bit (PtB) is a turn-based, multi-agent (minimum of 2 agents but
theoretically an unlimited number of
On Monday, December 09, 2024, at 1:17 PM, James Bowery wrote:
> Is this inadequate to prevent the random agent strategy?
In addition to what you quoted, another point I forgot to add is that in
Matching Pennies you can strategically play at random to observe your opponent
without taking more
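A minimal simulation of that point: uniform random play in Matching Pennies has expected payoff 0 no matter what the opponent does, so rounds spent observing them cost nothing in expectation. The 0.8-biased opponent here is a hypothetical stand-in, and the payoff convention (+1 on a match, -1 otherwise) is my assumption:

```python
import random

random.seed(0)

def play(rounds=100_000, opp_bias=0.8):
    """Average payoff of a uniform-random player vs. a biased opponent."""
    total = 0
    for _ in range(rounds):
        mine = random.randint(0, 1)                      # uniform random pick
        theirs = 1 if random.random() < opp_bias else 0  # biased opponent
        total += 1 if mine == theirs else -1             # +1 match, -1 mismatch
    return total / rounds

print(round(play(), 3))  # hovers near 0.0 regardless of opp_bias
```

Because the random player matches any fixed opponent bit with probability exactly 1/2, the average payoff stays near zero however `opp_bias` is set — which is why random probing is "free" in this game.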
On Monday, December 09, 2024, at 4:21 PM, Bill Hibbard wrote:
> FYI I've had some papers relevant to this.
At AGI-08: https://www.ssec.wisc.edu/~billh/g/hibbard_agi.pdf
At AGI-11: https://www.ssec.wisc.edu/~billh/g/hibbard_agi11a.pdf
In JAGI as a co-author (Sam really wrote it, I just made a few
On Monday, November 25, 2024, at 11:43 AM, Matt Mahoney wrote:
> AI makes humans irrelevant. It makes it safe and preferable to live alone
> with your AI security, AI entertainment, AI friends and AI lovers, all your
> needs delivered to your door by self driving carts. No need to talk to humans
On Sunday, December 15, 2024, at 11:46 AM, YKY (Yan King Yin, 甄景贤) wrote:
> 2) I am planning to try it on the game of TicTacToe, which I'm an expert 😄.
It can play TicTacToe? Could it play Pick the Bit on commodity hardware in a
sandboxed environment?
https://agi.topicbox.com/groups/agi/T705ed5
On Sunday, December 15, 2024, at 8:26 PM, Quan Tesla wrote:
> Sounds like a complex game of chicken and egg. It's based on algorithms,
> but if those are hidden - as they must be - the idea would be to mostly
> reward the player who can figure out how the "opposite" algorithms work.
>
> Inter
On Monday, December 16, 2024, at 1:17 AM, YKY (Yan King Yin, 甄景贤) wrote:
> Hello, long time no see 😁
>
> There are 2 levels: we humans understand the objective of the game right
> from the start. A simple reinforcement-learning (RL) agent does not "know"
> what's going on, it just learns from