Re: [agi] Re: Safety Via Wolpert-Constrained ML

2025-01-14 Thread John Rose
On Tuesday, January 14, 2025, at 6:12 PM, Matt Mahoney wrote:
> Wolpert's theorem is fundamental to the alignment problem. Either you can predict what AI will do, or it can predict you, but not both. We use prediction accuracy to measure intelligence. If you want to build an AI that is sma…
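The mutual-prediction limit quoted above can be illustrated with a toy diagonalization sketch. This is not Wolpert's formal setup (which concerns inference devices over a physical system's state space); the `contrarian` agent and `predictor` here are hypothetical names for illustration only. The point is that any agent allowed to consult a prediction of its own behavior can simply act against it, so no predictor can be correct about such an agent:

```python
def contrarian(predict_me):
    """An agent that reads a prediction of its own output
    and then outputs the opposite bit."""
    return 1 - predict_me()

# A predictor claiming the contrarian will output 1.
predictor = lambda: 1

# The contrarian consults the prediction and defects from it,
# so the prediction is falsified regardless of what it was.
actual = contrarian(predictor)
assert actual != predictor()
```

Swapping `lambda: 1` for `lambda: 0` falsifies that prediction instead; no choice of `predictor` survives, which is the diagonal flavor of the "you can predict it, or it can predict you, but not both" claim.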

Re: [agi] Re: Safety Via Wolpert-Constrained ML

2025-01-14 Thread Matt Mahoney
On Mon, Jan 13, 2025, 10:22 PM John Rose wrote:
> On Monday, December 23, 2024, at 11:17 AM, James Bowery wrote:
>> *We may be talking at cross purposes here. I am referring to the theorem that no intelligence can model itsel…*
>
> I keep thinking there is a way around Wolpert's theorem, agen…