Re: [agi] Arthur Murray, i need your help.

2019-01-23 Thread Matt Mahoney
His book AI 4 U has 1 star on Amazon. The reviews are enlightening. But to be fair, AGI is a really hard problem. Nobody else is much closer to a solution. On Wed, Jan 23, 2019, 5:42 AM Mike Archbold I have the first edition "Artificial Intelligence" by Winston from the mid > 70s. He claims the

Re: [agi] AGI review

2019-01-25 Thread Matt Mahoney
Steve, have you tested your microscope design by building one? Then maybe people will be interested. Reverse engineering the brain is important even if it's not how we eventually build AGI. Figuring this out would be worth $75 trillion per year in work that machines could do instead of humans. I

Re: [agi] AI as a religion

2019-01-27 Thread Matt Mahoney
There are people who treat transhumanism as a religion or cult. They believe the singularity is coming in time for them to become immortal. I think they are drawn to it for reasons similar to other cults. They are accepted into a group of like-minded people and it answers their evolved fear of dying

[agi] The future of AGI

2019-01-31 Thread Matt Mahoney
omated method to develop software, keep in mind that the 6 x 10^9 bits of DNA that is you (equivalent to 300 million lines of code) required 10^50 copy and transcription operations on 10^37 bits of DNA to write over the last 3.5 billion years. Comments? -- -- Matt Mahoney, mattmahone...@gmail.c
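The lines-of-code equivalence in that post can be checked with quick arithmetic; the 20 bits per line figure below is an assumption chosen to back out the 300 million number, not something stated in the thread.

dna_bits = 6 * 10**9
bits_per_line = 20                 # assumed coding density, picked to match the post
print(dna_bits / bits_per_line)    # ~3e8 lines of code
print(dna_bits / 8 / 1e6)          # ~750 MB of raw genome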

Re: [agi] The future of AGI

2019-01-31 Thread Matt Mahoney
ical science of successful, real AGI will look > like. > > It'll have a big bespoke, dedicated chip foundry, a bespoke robot > fabrication facility and a dedicated test facility, with independent > oversight. It will take 10-15 years. It's like CERN and the Higgs Boson. >

Re: [agi] The future of AGI

2019-02-01 Thread Matt Mahoney
dy cares what forums and internet groupings call it > anymore. They just say screw it, then do it. > > A quality by any other name. > > Robert Benjamin > > -- > *From:* Matt Mahoney > *Sent:* Friday, 01 February 2019 6:05 AM > *To:* AGI >

Re: [agi] The future of AGI

2019-02-02 Thread Matt Mahoney
-- > On Feb 2, 2019, 12:25 AM, Linas Vepstas < linasveps...@gmail.com> wrote: > > > Thanks Matt, very nice post! We're on the same wavelength, it seems. -- > Linas > > On Thu, Jan 31, 2019 at 3:17 PM Matt Mahoney > wrote: > >> When I asked Linas V

Re: [agi] The future of AGI

2019-02-03 Thread Matt Mahoney
by moving atoms instead of electrons > > Sounds like you're alluding to metamaterials and metatronics correct? > There's been some early work on optical circuits for analog computing but > it's a long way off from general-purpose computing. > > On Thu, Jan 31, 2

Re: [agi] The future of AGI

2019-02-03 Thread Matt Mahoney
2-03 10:19:AM, Matt Mahoney wrote: > > > The problem is power consumption. Mechanical adding machines are older > > than vacuum tubes and would have very low power consumption if we > > could shrink them to molecular size. > > > > Copying bits in DNA, RNA, and prot

Re: [agi] Discussing hardware.

2019-02-04 Thread Matt Mahoney
I want to simulate the evolution of human intelligence from dirt. I need 10^50 DNA, RNA, and amino acid transcription operations and 10^37 bits to encode the DNA. I would prefer the simulation to run faster than 3 billion years. On Mon, Feb 4, 2019, 1:41 AM Alan Grimes I'm going to declare ALL th

Re: [agi] The future of AGI

2019-02-05 Thread Matt Mahoney
management. > > I think that $100 million could go a long way towards functional, > demonstrable proto AGI. It seems to me that DeepMind hasn’t made good use of > the $200 or $300 million spend so far – they lack a proper theory of > intelligence. I don’t know why Vicarious, the

Re: [agi] The future of AGI

2019-02-08 Thread Matt Mahoney
3. Word pairs have a Zipf distribution just like single words. (I suspect it is also true of word triples representing grammar rules. It suggests there are around 10^8 rules). I hope this work continues. It would be interesting if it advances the state of the art on my large text benchmark
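A quick way to eyeball the Zipf claim is to count word-pair frequencies in any large corpus and compare frequency against rank. This is a rough sketch, not the benchmark methodology; the enwik8 filename is just an example input.

from collections import Counter

tokens = open("enwik8", errors="ignore").read().split()   # any large text file
ranked = Counter(zip(tokens, tokens[1:])).most_common()
for rank in (1, 10, 100, 1000, 10000):
    if rank <= len(ranked):
        pair, freq = ranked[rank - 1]
        print(rank, freq, pair)    # Zipf: frequency roughly proportional to 1/rank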

Re: [agi] The future of AGI

2019-02-09 Thread Matt Mahoney
o explicitly ask for it. You don't have a robot that will clean your house because it wouldn't know whether a magazine on the floor belongs on the table or in the trash. In the time it takes you to tell it, you could have picked it up yourself. It doesn't matter how sm

Re: [agi] The future of AGI

2019-02-10 Thread Matt Mahoney
ht algorithm using > relatively modest resources may be able to make the leap to radical > recursive self-improvement > > The question is what small tweak/addition to the current Global Brain could > let it serve as the launching-pad for the next phase, the recursively > self-impro

Re: [agi] The future of AGI

2019-02-12 Thread Matt Mahoney
em will get cheaper. Once it is possible for anyone to buy cheap molecular scale 3-D printers, people are going to experiment and build these things, just like cheap computers enabled people to write viruses and worms. -- -- Matt Mahoney, mattmahone...@gmail.com

Re: [agi] The future of AGI

2019-02-13 Thread Matt Mahoney
upersede-human-level-intelligence/answer/Matt-Mahoney-2 The short answer is that you can't compare human and machine intelligence. The two most widely accepted measures, the Turing test and universal intelligence, give vastly different answers. The best a machine can ever do in a Turing

Re: [agi] The "Pizza with..." demo

2019-02-13 Thread Matt Mahoney
Doug Lenat (creator of Cyc) posed this problem. The police arrested the demonstrators because they feared violence. The police arrested the demonstrators because they advocated violence. What does "they" refer to? Lenat hoped to build a database of common sense rules, a "sea of assertions" to a

Re: [agi] Why are people afraid of robots? Past life memories.

2019-02-16 Thread Matt Mahoney
An upload is a robot that looks and acts like you. What did you think it was? On Sat, Feb 16, 2019, 6:15 AM Logan Streondj Stefan, but do you have the critical insight necessary to create AGI? > The critical insight being that robots are just host bodies, or > soul-actuated vehicles. > Until you

Re: [agi] New OpenAI Language Model

2019-02-16 Thread Matt Mahoney
The paper mentioned improving compression of enwik8 from 0.99 to 0.93 bits per character but gives no details or citation. enwik8 is from my large text benchmark and is the test file for the Hutter prize. The current record is actually 1.22 bits per character and I haven't received an entry from th
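For reference, bits per character on enwik8 (10^8 bytes) is just 8 times the compressed size divided by the original size. The compressed size below is an illustrative figure that lands near the 1.22 bpc Matt cites, not an actual benchmark entry.

def bits_per_char(compressed_bytes, original_bytes=10**8):
    # bpc = compressed bits / original characters
    return 8 * compressed_bytes / original_bytes

print(bits_per_char(15_250_000))   # 1.22 bpc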

Re: [agi] Two Questions about Mentiflex...

2019-02-16 Thread Matt Mahoney
Steve, how many rules are in Dr. Eliza? How long did it take you to develop Dr. Eliza? How many rules would it take to encode the knowledge in all the world's published scientific papers (75 million)? On Sat, Feb 16, 2019, 1:51 PM Steve Richfield Arthur, > > I have been one of your few suppor

Re: [agi] openAI's AI advances and PR stunt...

2019-02-22 Thread Matt Mahoney
Turing's proof of Goedel's incompleteness is easier to understand than the original 300 page paper. Suppose you have a procedure to prove for any program that it either halts or not. Then I can write a program that takes another program as input. If the input halts, then my program loops forever, o
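A minimal Python sketch of the diagonal construction described in that post. halts() stands for the assumed decision procedure; any implementation substituted for it is defeated by the program below.

def halts(program, data):
    # The assumed (impossible) decision procedure: True if program(data) halts.
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever halts() predicts program does on its own source.
    if halts(program, program):   # predicted to halt on itself...
        while True:               # ...so loop forever instead
            pass
    return                        # predicted to loop forever, so halt at once

# halts(paradox, paradox) can be neither True nor False,
# so no total, correct halts() procedure can exist.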

Re: [agi] open source AGI effort

2019-02-23 Thread Matt Mahoney
OpenCog is one open source effort. But real progress in AI like Google, Siri, Alexa etc. is not just software. It's hundreds of petabytes of data from the 4 billion people on the internet and the millions of CPUs needed to process it. It's not just something you could download and run. I realize i

Re: [agi] open source AGI effort

2019-02-24 Thread Matt Mahoney
sed on > natural general intelligence, not AGI and it has ZERO intellect. > > [image: AGI.JPG] > > The entire AGI project is foundered on this basic fact of the science. > > And all we ever get here is the endless echo chamber of "if only we can > program enough computers&

Re: [agi] open source AGI effort

2019-02-24 Thread Matt Mahoney
t back no matter how much you practice, > which is something a gorilla can do because its DNA is different. > > Gorillas can do that? > > On Sun, 24 Feb 2019 at 14:46, Matt Mahoney > wrote: > >> Colin, I think the source of our disagreement is that we have very >> dif

Re: [agi] The future of AGI

2019-02-25 Thread Matt Mahoney
k) iteratively by inducing a "block structure", which I think means merging leaf nodes into their siblings and parents. He doesn't give an algorithm, in keeping with the paper's lack of mathematical rigor or standard terminology. (He never mentions graphs, trees, vertices, edge

Re: [agi] AI breakthrough question

2019-02-27 Thread Matt Mahoney
The long awaited sequel to AI4U. :-/ If I had an idea that I thought would greatly accelerate AGI, maybe I would publish it so that when others developed it and proved me right, I would get credit and worldwide fame for thinking of it first. Or maybe others would just ignore it because ideas ar

Re: [agi] AI breakthrough question

2019-02-28 Thread Matt Mahoney
The character recognition demo doesn't work for me. It spins for a minute and outputs an empty graph. The paper reads like a patent application. But patenting something that nobody can build is not a good strategy. You are better off patenting something trivial and obvious and waiting for someone

Re: Re: [agi] New OpenAI Language Model

2019-03-02 Thread Matt Mahoney
ithm that appears to be > learning but actually remembers the input exactly? > > At 2019-02-17 00:32:06, "Matt Mahoney" wrote: > > The paper mentioned improving compression of enwik8 from 0.99 to 0.93 bits > per character but gives no details or citation. enwik8 is from m

Re: [agi] An Experiment

2019-03-05 Thread Matt Mahoney
have already been run, I already know what the results will be. -- -- Matt Mahoney, mattmahone...@gmail.com -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/Tbefabf50a1da4070-M8cea292ce32e600e1637a5cf Delivery op

Re: [agi] Yours truly, the world's brokest researcher, looks for a bit of credit

2019-03-07 Thread Matt Mahoney
Actually the "I ate pizza with {a fork|pepperoni|Bob}" example in your slides is mine. But you can credit Doug Lenat with "The police arrested the demonstrators because they {feared|advocated} violence". NLP is not AGI but it is an important component of it. It's a good place to start. But you rea

Re: [agi] Yours truly, the world's brokest researcher, looks for a bit of credit

2019-03-08 Thread Matt Mahoney
Language is essential to every job that we might use AGI for. There is no job that you could do without the ability to communicate with people. Even guide dogs and bomb sniffing dogs have to understand verbal commands. On Thu, Mar 7, 2019, 7:25 PM Robert Levy wrote: > It's very easy to show that

Re: [agi] Do Neural Networks Need To Think Like Humans?

2019-03-10 Thread Matt Mahoney
What this 2 minute video seems to show is that neural networks need to see more like humans to detect shapes rather than just textures. Our visual cortex detects lines, edges, and other regions of high contrast in several layers. But we see shapes by moving the fovea along a path connecting these f

Re: [agi] Fwd: [Comp-neuro] MIT challenge and workshop (July 19-20): The Algonauts project - explaining brain data with computational models

2019-04-13 Thread Matt Mahoney
Do you have any published papers? That is how you can claim credit for your work. On Fri, Apr 12, 2019, 11:18 PM wrote: > Since unsupervised neural networks are not being used, then ai will > operating in a highly controlled > bubble. > I have a model, that uses unsupervised NN, but every tim

Re: [agi] My AGI 2019 paper draft

2019-04-19 Thread Matt Mahoney
It would help to get your paper published if it had an experimental results section. How do you propose to test your system? How do you plan to compare the output with prior work on comparable systems? What will you measure? What benchmarks will you use (for example, image recognition, text predict

Re: [agi] My AGI 2019 paper draft

2019-04-30 Thread Matt Mahoney
and/or specifically… IMO this where the multi-agent > consciousness mechanics come in but I’ll shield some eyes on that one :) > > > > John > > > > *From:* Stefan Reich via AGI > *Sent:* Friday, April 19, 2019 4:21 PM > *To:* AGI > *Subject:* Re: [agi]

Re: [agi] My AGI 2019 paper draft

2019-04-30 Thread Matt Mahoney
al > networks. But it seems like the next step could be an embryonic > program. Personally I feel like I spent too long on my overall design, > and some things become clear only through experimentation. AGI is a > game of nuanced distinctions, as is reality. > > Mike Archbold >

Re: [agi] My AGI 2019 paper draft

2019-05-11 Thread Matt Mahoney
line. >> >> *Artificial General Intelligence List <https://agi.topicbox.com/latest>* > / AGI / see discussions <https://agi.topicbox.com/groups/agi> + > participants <https://agi.topicbox.com/groups/agi/members> + delivery > options <htt

Re: [agi] I want bigger financing now

2019-06-01 Thread Matt Mahoney
AGI has a 15 digit price tag. On Fri, May 31, 2019, 5:32 PM Stefan Reich via AGI wrote: > Something in the 5 digits range. Anyone got contacts? > > -- > Stefan Reich > BotCompany.de // Java-based operating systems > *Artificial General Intelligence List * > / AGI

Re: [agi] I want bigger financing now

2019-06-04 Thread Matt Mahoney
The obvious application of AGI is automating human labor. The ROI would be USD $1 quadrillion. Google, Facebook, Amazon, and Apple are making real progress, but that's because they collectively have $3 trillion to spend on it. Neural networks have made most of the progress in vision, language, and

Re: [agi] Pitrat's blog: “Singularity” is improperly used in AI

2019-06-09 Thread Matt Mahoney
In mathematics a singularity is when a value goes to infinity. The technological singularity is when knowledge, computing power, and intelligence all go to infinity. This cannot happen because of physics. The observable universe has finite computing and storage capacity. Some people use singularit

Re: [agi] The Turing test is a joke

2019-06-10 Thread Matt Mahoney
What is the point of a reverse Turing test? I cannot do a billion arithmetic calculations per second. I don't have a mental street map of the world. I can't recognize a billion faces. I can't defeat world champions at Chess, Go, or Jeopardy. So we have achieved AGI? On Sun, Jun 9, 2019, 9:17 PM St

Re: [agi] AI-generating algorithms as alternate path to general AI

2019-06-19 Thread Matt Mahoney
Not impressed. The paper lacks an experimental results section. The paper proposes learning how to learn AI algorithms. Since Legg and Hutter proved that there is no such thing as a simple, universal learning algorithm, something more than someone's idea is needed. Half of human knowledge is lear

Re: [agi] AI-generating algorithms as alternate path to general AI

2019-06-19 Thread Matt Mahoney
Wed, Jun 19, 2019, 1:14 PM martin biehl wrote: > Hi Matt, > > I am always intrigued by those numbers, do you have a paper on this or > another source? I may have missed it at some point in the past. Also, > didn't evolution evolve wheels? Why and where would you draw a line? &g

Re: [agi] Re: A mathematics of concpetual relations?

2019-06-20 Thread Matt Mahoney
I disagree. By what mechanism would neurons representing feet and meters connect, but not kilograms and liters? Neurons form connections by Hebb's rule. Neurons representing words form connections when they appear close together or in the same context. On Thu, Jun 20, 2019, 4:14 PM Jim Bromer wr
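A toy illustration of the mechanism described in that post, on the assumption that "firing together" means two words appearing within a small window of each other; the function name and window size are illustrative.

from collections import Counter

def hebbian_weights(tokens, window=5):
    weights = Counter()
    for i, a in enumerate(tokens):
        for b in tokens[i + 1:i + 1 + window]:
            if a != b:
                weights[tuple(sorted((a, b)))] += 1   # strengthen on each co-occurrence
    return weights

text = "six feet is about two meters and two meters is about six feet"
print(hebbian_weights(text.split()).most_common(3))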

Re: [agi] test

2019-06-23 Thread Matt Mahoney
Turing? On Sun, Jun 23, 2019, 10:47 AM Alan Grimes via AGI wrote: > test > > -- > Please report bounces from this address to a...@numentics.com > > Powers are not rights. > -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.c

Re: [agi] Re: A mathematics of concpetual relations?

2019-06-23 Thread Matt Mahoney
; >>> If natural neural networks are able to implement logical or symbolic >>> functions then they certainly have the potential to transmit richer data >>> that is able to encode a great many variations of data objects. So, >>> regardless of the details of

Re: [agi] test

2019-06-24 Thread Matt Mahoney
> > On Sun, Jun 23, 2019 at 11:14 PM Matt Mahoney > wrote: > >> Turing? > >> > >> On Sun, Jun 23, 2019, 10:47 AM Alan Grimes via AGI < > agi@agi.topicbox.com> wrote: > >>> test > >>> > >>> -- > >>>

Re: [agi] test

2019-06-26 Thread Matt Mahoney
work in progress… but I’m getting there slowly. >> >> https://sites.google.com/view/korrtecx/home >> >> >> > > > -- > Stefan Reich > BotCompany.de // Java-based operating systems > Artificial General Intelligence List / AGI / see discussions + participan

Re: [agi] ARGH!!!

2019-06-26 Thread Matt Mahoney
A computer is any device which can simulate a universal Turing machine up to some memory bound. Real computers are finite state machines. But with sufficient time and memory they can perform any halting computation because all halting computations use a finite amount of tape. My brain is a compute

Re: [agi] ARGH!!!

2019-06-28 Thread Matt Mahoney
ious left turn in AGI > science should come from a place like this, and a social media community of > this kind, where stakeholders abound. It would be very cool to be able to > tell any potential reviewers to join the forum to read the archives covering > the creation of the wo

Re: [agi] ARGH!!!

2019-06-28 Thread Matt Mahoney
Colin, In 1950, Turing claimed that computers could be programmed for intelligence. He carefully defined intelligence to mean that when a person communicated with it via a text only channel, that person could not distinguish it from another person. He also carefully defined what he meant by a comp

Re: [agi] ARGH!!!

2019-06-30 Thread Matt Mahoney
Colin, in your quest to create an artificial consciousness, can you explain: 1. How do you test a human, animal, robot, or program to tell if it is conscious or not? 2. What aspect of human behavior is possible in a machine only if it is conscious? 3. What aspect of consciousness, if any, depend

Re: [agi] ARGH!!!

2019-07-01 Thread Matt Mahoney
Colin, yes you answered my questions about consciousness. To summarize, by consciousness you mean qualia, that which makes you different than a philosophical zombie. Since a zombie is by definition behaviorally identical to a human, there is no test for consciousness and no capability that depends

Re: [agi] ARGH!!!

2019-07-02 Thread Matt Mahoney
same test that people have argued proves that lobsters feel pain when you boil them. http://mattmahoney.net/autobliss.txt Am I missing something? On Tue, Jul 2, 2019, 7:41 AM Colin Hales wrote: > > Hi Matt, > > On Tue, Jul 2, 2019 at 1:05 PM Matt Mahoney > wrote: > >> Colin, y

Re: [agi] ARGH!!!

2019-07-02 Thread Matt Mahoney
So if computation is not behind intelligence (based on 65 years of AGI failure) and you have no idea what is, then what is the basis of your chip design, and what do you hope to accomplish with it? On Tue, Jul 2, 2019, 10:00 AM Colin Hales wrote: > > > On Tue, Jul 2, 2019 at 10:5

Re: [agi] ARGH!!!

2019-07-05 Thread Matt Mahoney
Colin, the normal scientific method as I understand it is to propose a hypothesis that makes testable predictions and then test them. If the prediction is correct then it increases your confidence in other predictions made by the same theory. Do you agree? You proposed a theory that consciousness

Re: [agi] The Hardware problem

2019-07-12 Thread Matt Mahoney
Google, Alexa, and Siri are the closest thing to AI we have. They only fail the Turing test because they are too smart and too helpful. The reason nobody on this list has anything that advanced is because nobody here has hundreds of billions to spend on it. On Fri, Jul 12, 2019, 9:03 AM Stefan Rei

Re: [agi] The Hardware problem

2019-07-13 Thread Matt Mahoney
't been released yet AFAIK so there have been no independent evaluations. Nobody else here has anything close. > On Fri, Jul 12, 2019, 17:27 Matt Mahoney wrote: > >> Google, Alexa, and Siri are the closest thing to AI we have. They only >> fail the Turing test because they a

Re: [agi] While you were working on AGI...

2019-07-14 Thread Matt Mahoney
On Sat, Jul 13, 2019, 6:43 PM Basile Starynkevitch wrote: > But you forgot the difference between AI & AGI. > AGI is lots of narrow AI working together. It's not the simple, universal breakthrough you would like to have. It's the one we have to have because Legg proved that powerful predictors ar

Re: [agi] scholar references in Russian on AGI & symbolic AI systems

2019-07-16 Thread Matt Mahoney
I don't speak Russian but I have been following data compression research (which is a machine learning/AI problem) on encode.ru (in English). Most of the leading researchers in this field in the 1990s were based in Russia and many still are but I'm not aware of newer work published in Russian. On

Re: [agi] While you were working on AGI...

2019-07-16 Thread Matt Mahoney
On Mon, Jul 15, 2019, 10:13 AM wrote: > > https://towardsdatascience.com/no-you-cant-get-from-narrow-ai-to-agi-eedc70e36e50 > > > > https://medium.com/intuitionmachine/from-narrow-to-general-ai-e21b568155b9 > I agree with you that narrow AI won't evolve into AGI. That's why I proposed building lo

Re: [agi] scholar references in Russian on AGI & symbolic AI systems

2019-07-18 Thread Matt Mahoney
I agree we need less philosophy and speculation on which approaches to AGI should work, and more experiments to back up untested ideas. Obviously I haven't solved AGI, but you can find my work, mostly in data compression, at http://mattmahoney.net/dc/ My main result that is relevant to AGI is the

Re: [agi] scholar references in Russian on AGI & symbolic AI systems

2019-07-18 Thread Matt Mahoney
On Thu, Jul 18, 2019, 9:40 PM Costi Dumitrescu wrote: > Write input text - remove spaces in the input text - compress - send - > decompress - AI - output text including spaces. > In 2000 I found that you could find most of the word boundaries in text without spaces simply by finding the high ent
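A rough sketch of that idea, on the reading that word boundaries fall where the entropy of the next character given a short preceding context is high. The order-3 context and toy string are assumptions; in practice the statistics would come from a large training corpus rather than the string being segmented.

from collections import Counter, defaultdict
import math

def boundary_scores(text, order=3):
    counts = defaultdict(Counter)
    for i in range(order, len(text)):
        counts[text[i - order:i]][text[i]] += 1      # next-char counts per context
    scores = []
    for i in range(order, len(text)):
        dist = counts[text[i - order:i]]
        total = sum(dist.values())
        h = -sum(c / total * math.log2(c / total) for c in dist.values())
        scores.append((i, round(h, 2)))              # high entropy suggests a boundary before text[i]
    return scores

text = "itiseasytoreadshorttextwithoutanyspaces"
print(sorted(boundary_scores(text), key=lambda s: -s[1])[:5])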

Re: [agi] Reflections on OpenAI + Microsoft

2019-07-28 Thread Matt Mahoney
It's about time Microsoft recognized that AI might be important. They are always years behind other innovators. Behind Apple on GUIs, behind AOL on the internet, behind Yahoo on web mail, behind Google on search, behind Amazon on cloud computing. They completely missed out on social networks and di

Re: [agi] ZPAQ

2019-07-29 Thread Matt Mahoney
The ZPAQ Linux packages are an older version. The latest is here. http://www.mattmahoney.net/dc/zpaq.html There are 5 compression levels you can try. The newer versions focused on fast incremental backup functionality, dedupe, speed, and rollback capability, but the compression ratio is still good

Re: [agi] ZPAQ

2019-07-29 Thread Matt Mahoney
they want to use it. On Mon, Jul 29, 2019, 12:31 PM Costi Dumitrescu wrote: > Sell it to PDF software makers if it's any good with images and scans > > > On 29.07.2019 19:15, Matt Mahoney wrote: > > The ZPAQ Linux packages are an older version. The latest is here. >

Re: [agi] ZPAQ

2019-07-29 Thread Matt Mahoney
s the file with no compression. On Mon, Jul 29, 2019, 2:49 PM Costi Dumitrescu wrote: > New software scanning to PDF with the smartphone camera > > What is the ratio to a JPEG? > > > On 29.07.2019 21:35, Matt Mahoney wrote: > > PDF compresses text with deflate (zip) and s

Re: [agi] Controlled AI

2019-07-31 Thread Matt Mahoney
What is your vision of AGI that it would need to be regulated? What kind of regulations would it need? What bad things could happen if it was unregulated? On Wed, Jul 31, 2019, 5:28 AM Stefan Reich via AGI wrote: > What do you think about the possibility of AI being "regulated" at all? > > On Tu

Re: [agi] My paper in AGI-19

2019-07-31 Thread Matt Mahoney
Not understanding the math is the reader's problem. The math is necessary to describe the theory and the experiments and shouldn't be omitted. The paper describes 3 phases of training a reinforcement learning neural network. The first phase is experimenting with random actions. The next two phases choos

Re: [agi] My paper in AGI-19

2019-07-31 Thread Matt Mahoney
On Thu, 1 Aug 2019 at 01:40, Stefan Reich < >>> stefan.reich.maker.of@googlemail.com> wrote: >>> >>>> Wow that's a smart answer >>>> >>>> On Wed, 31 Jul 2019 at 23:30, Manuel Korfmann >>>> wrote: >>>>

Re: [agi] My paper in AGI-19

2019-07-31 Thread Matt Mahoney
le to solve everything. This > requires further implementations, modifications, time, teamwork, financial > support, etc. > > On Thu, Aug 1, 2019 at 1:34 AM Matt Mahoney > wrote: > >> Not understanding the math is the reader's problem. It is necessary to >> descr

Re: [agi] AGI Python library

2019-08-01 Thread Matt Mahoney
You let it out of the box?!?!? WE'RE DOOMED!!! On Thu, Aug 1, 2019, 7:10 AM Danko Nikolic wrote: > Hi everyone, > > I just tried the new agi library for Python. This is so exciting! But it > does not work really well for me. It is not responding any more. Where am I > making the mistake? Please

Re: [agi] My paper in AGI-19

2019-08-01 Thread Matt Mahoney
> There is no universal problem solver. So for the purpose of building a > real AGI, how many problems should our model be able to solve? How big is > our problem space? > > > On Thu, Aug 1, 2019, 8:22 AM Matt Mahoney wrote: > >> The human brain cannot solve every problem.

Re: [agi] Controlled AI

2019-08-02 Thread Matt Mahoney
Let's say that AGI regulations are proposed with input from AGI experts. That would be us. Should we ban killer robots? Countries that could actually build them (like the USA) oppose the treaty. Should we regulate the collection, storage, and transfer of the intimate personal knowledge that big c

Re: [agi] My paper in AGI-19

2019-08-03 Thread Matt Mahoney
y of Trades wrote: > Matt do another paper and find a way to refer work that goes without > publishing papers and books, such as the Senator's. > > > On 02.08.2019 05:53, Matt Mahoney wrote: > > The obvious application of AGI is automating $80 trillion per year > >

Re: [agi] AJI

2019-08-04 Thread Matt Mahoney
The zdnet article is critical, but I think we should pay attention to what China is doing in AI. Power optimization is a big problem. Neuromorphic computing essentially reduces a synapse operation from 32 bits to 1 bit, which reduces power by 1000 (assuming O(n^2) multiplication hardware) but you
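The factor of 1000 appears to follow from multiplier cost growing roughly with the square of operand width, which is the stated O(n^2) assumption; a quick check of that arithmetic:

print((32 / 1) ** 2)   # 1024, i.e. roughly the 1000x power reduction claimed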

Re: [agi] My paper in AGI-19

2019-08-04 Thread Matt Mahoney
I don't like government censorship, but I am all in favor of list administrators booting people who have nothing better to contribute than posting childish insults. On Sun, Aug 4, 2019, 3:28 AM Basile Starynkevitch wrote: > > On 8/4/19 9:25 AM, rounce...@hotmail.com wrote: > > Mohammud is a monk

Re: [agi] How about a Markov chain to convert neural networks to Markov chains, then every step of action could be understood by a human or?

2019-08-05 Thread Matt Mahoney
You realize that a Markov chain equivalent to a human brain sized neural network would have 2^600,000,000,000,000 states? On Mon, Aug 5, 2019, 8:13 AM Stefan Reich via AGI wrote: > Uh... > > Wut? > > :-) > > On Mon, 5 Aug 2019 at 08:15, Manuel Korfmann wrote: > >> https://twitter.com/LemonAndro
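The state count can be reproduced by assuming roughly 6 x 10^14 synapses at about one bit each, so an equivalent Markov chain needs one state per weight configuration; both figures are assumptions consistent with the number quoted.

synapse_bits = 6 * 10**14
print(f"states = 2^{synapse_bits:,}")                              # 2^600,000,000,000,000
print(f"which has about {int(synapse_bits * 0.30103):,} decimal digits")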

Re: [agi] Narrow AGI

2019-08-05 Thread Matt Mahoney
Narrow AI doesn't grow into AGI. AGI is lots of narrow AI specialists put together. Nobody in an organization can do what the organization does. Every member either knows one specific task well, or can refer you to someone who does. Kind of like the organization of the structures of your brain. On

Re: [agi] Narrow AGI

2019-08-09 Thread Matt Mahoney
rate the pain of the thought on a >> scale from 0-100. They are given 3 options and can provide a 4th free form >> one >> end >> end >> PAINDOTIQ.new.train >> >> ``` >> >> https://gist.github.com/LemonAndroid/7a5f2f521d0e0aa2f8ec8dcce28dc90

Re: [agi] Narrow AGI

2019-08-09 Thread Matt Mahoney
Suppose you have a simple learner that can predict any computable sequence of symbols with some probability at least as good as random guessing. Then I can create a simple sequence that your predictor will get wrong 100% of the time. My program runs a copy of your program and outputs something diff
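A minimal sketch of that adversarial construction. learner_predict is a hypothetical name standing for any deterministic next-bit predictor; the generated sequence makes it wrong on every bit.

def adversary_next(history, learner_predict):
    return 1 - learner_predict(history)        # output the opposite of the prediction

def adversarial_sequence(learner_predict, n=20):
    seq = []
    for _ in range(n):
        seq.append(adversary_next(seq, learner_predict))
    return seq

naive = lambda h: h[-1] if h else 0            # example predictor: repeat the last bit
print(adversarial_sequence(naive))             # naive guesses wrong on all 20 bits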

Re: [agi] Narrow AGI

2019-08-09 Thread Matt Mahoney
for quantum computers... but our > physical universe is a quantum system... > > -- Ben > > On Sat, Aug 10, 2019 at 9:08 AM Matt Mahoney > wrote: > > > > Suppose you have a simple learner that can predict any computable > sequence of symbols with some probability at le

Re: [agi] Narrow AGI

2019-08-10 Thread Matt Mahoney
ul predictors exist, but they are necessarily highly complex. https://arxiv.org/abs/cs/0606070 -- -- Matt Mahoney, mattmahone...@gmail.com -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-M1dfe2f79

Re: [agi] Narrow AGI

2019-08-10 Thread Matt Mahoney
Sorry, it's math. You can have AGI. You just can't have any shortcuts. -- -- Matt Mahoney, mattmahone...@gmail.com -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-Md85271c00281

Re: [agi] ConscioIntelligent Thinkings

2019-08-23 Thread Matt Mahoney
Sorry, that's just nonsense. Consciousness is what thinking feels like. That feeling evolved so that you would fear dying, just like every other animal. AGI is a very hard engineering problem. It will take a lot of computing power, software, and data collection to reproduce human behavior. If you

Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-23 Thread Matt Mahoney
How do you test whether a rat is conscious or not? On Fri, Aug 23, 2019, 9:24 PM wrote: > "Consciousness has to do with observing temporal patterns." > The term pattern is ... obscure I'm afraid I try to avoid it but... > > It's more than observe, I would say occupy representation. A pattern is

Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-23 Thread Matt Mahoney
So the hard problem of consciousness is solved. Rats have a thalamus which controls whether they are in a conscious state or asleep. John, is that what you meant by consciousness? On Fri, Aug 23, 2019, 9:13 PM wrote: > Consciousness has to do with observing temporal patterns. Intelligence is >

Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-26 Thread Matt Mahoney
On Mon, Aug 26, 2019, 8:05 AM Stefan Reich via AGI wrote: Is all this discussion leading anywhere? No. Consciousness is an irrelevant distraction to anyone doing serious work in AGI. -- Artificial General Intelligence List: AGI Permalink: https://agi.to

Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-27 Thread Matt Mahoney
On Tue, Aug 27, 2019, 9:30 AM Stefan Reich via AGI wrote: Please point me to the code being written as a result of this talk then :-) http://mattmahoney.net/autobliss.txt Actually I wrote it in response to this conversation in 2007 because consciousness is a topic that just won't die. The progr

Re: [agi] Re: ConscioIntelligent Thinkings

2019-08-29 Thread Matt Mahoney
On Thu, Aug 29, 2019, 7:39 AM wrote: > On Thursday, August 29, 2019, at 1:49 AM, WriterOfMinds wrote: > > Like I said when I first posted on this thread, phenomenal consciousness > is neither necessary nor sufficient for an intelligent system. > > > This is the premise that you are misguided by.

Re: [agi] You can help train desktop image segmentation

2019-08-29 Thread Matt Mahoney
I doubt segmentation will help with image recognition. You lose context. You recognize people not just by their faces but by when and where you see them, who they are with, and what they say. It is easier to recognize a car on a road than a car or a road on a white background. We tried word segmen

Re: [agi] Re: ConscioIntelligent Thinkings

2019-09-02 Thread Matt Mahoney
If we are living in a simulation then of course anything is possible. It isn't a law that nothing is faster than light. It's an observation. Here are at least 4 possibilities, listed in decreasing order of complexity, and therefore increasing likelihood if Occam's Razor holds outside the simulation

Re: [agi] Simulation

2019-09-21 Thread Matt Mahoney
ength description. I can't give you an example of one of those either. On Fri, Sep 20, 2019, 5:39 PM TimTyler wrote: > On 2019-09-02 16:39:PM, Matt Mahoney wrote: > > Here are at least 4 possibilities, listed in decreasing order of > > complexity, and therefore increasing likel

Re: [agi] can someone tell me what before means without saying before in it?

2019-09-28 Thread Matt Mahoney
Before means not after. After means at a later time. Later means not earlier. Earlier means before. Ultimately every word in the dictionary is defined using other words in the same dictionary. That is what words mean. A language model is a probability distribution over word sequences. Two words ha

Re: [agi] The Job market.

2019-09-29 Thread Matt Mahoney
I was going to suggest he post his resume here. Not that anyone will hire him now after seeing this tirade, but he might get some good advice. On Sun, Sep 29, 2019, 11:30 AM WriterOfMinds wrote: > In the coming months, the world will pay quite dearly for fucking over > my life. > > THEY WILL PAY

Re: [agi] Genetic evolution of logic rules experiment

2019-09-30 Thread Matt Mahoney
Boolean logic is a subset of neural networks. A single neuron can implement any logic gate. Assume the output is clamped between 0 and 1. A and B = A + B - 1. A or B = A + B. Not A = -A + 1. But first order logic is not so simple. We also know from 35 years of experience (beginning with Cyc) that
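The gates above written out as a worked example, with the clamp made explicit; the weights and biases are read straight off the formulas in the post.

def neuron(weights, bias, *inputs):
    # Single neuron with output clamped to [0, 1].
    return min(1, max(0, sum(w * x for w, x in zip(weights, inputs)) + bias))

AND = lambda a, b: neuron((1, 1), -1, a, b)   # A + B - 1, clamped
OR  = lambda a, b: neuron((1, 1),  0, a, b)   # A + B, clamped
NOT = lambda a:    neuron((-1,),   1, a)      # -A + 1

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))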

Re: [agi] The Job market.

2019-09-30 Thread Matt Mahoney
On Mon, Sep 30, 2019, 7:31 AM wrote: > I wanted to code my great work and have a humble job that did not > distract me. > But all jobs are taken. > That's not how it works. I had this great idea in 1999 for testing language models using text compression. So I did lots of experiments and publis

Re: [agi] Genetic evolution of logic rules experiment

2019-10-01 Thread Matt Mahoney
On Tue, Oct 1, 2019, 9:21 AM YKY (Yan King Yin, 甄景贤) < generic.intellige...@gmail.com> wrote: > > From the data of this model, it would be *inferred* that "John is > probably unhappy / heart-broken". It is this inference mechanism that is > very mysterious to us. > Human reproductive behavior is

Re: [agi] The Job market.

2019-10-04 Thread Matt Mahoney
On Fri, Oct 4, 2019, 6:54 AM John Rose wrote: > On Wednesday, October 02, 2019, at 11:24 AM, James Bowery wrote: > > Wolfram! Well! Perhaps you should take this up with Hector Zenil > : > > > Interesting: https://arxiv.org/abs/1608.05972 > Zenil, like W

Re: [agi] The Job market.

2019-10-05 Thread Matt Mahoney
On Sat, Oct 5, 2019, 8:00 AM John Rose wrote: > On Friday, October 04, 2019, at 12:42 PM, Matt Mahoney wrote: > > Evolution is arguably simple, but it required 10^48 DNA copy operations on > 10^37 bits to create human intelligence > > > Simple programs that create appare
