On Tuesday, January 14, 2025, at 6:12 PM, Matt Mahoney wrote:
> Wolpert's theorem is fundamental to the alignment problem. Either you can
> predict what an AI will do, or it can predict you, but not both. We use
> prediction accuracy to measure intelligence. If you want to build an AI that
> is sma
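The either-it-predicts-you-or-you-predict-it claim above can be illustrated with a minimal diagonalization sketch. This is not Wolpert's formal inference-device setup, just an informal toy: the function names (`fixed_predictor`, `contrarian`) and the string label are illustrative assumptions, and the point is only that any agent able to query a predictor about itself can act so as to falsify it.

```python
# Toy diagonalization sketch (illustrative, not Wolpert's formal construction).
# A predictor is just a function from an agent's description to a predicted
# boolean action. An agent that can consult the predictor about itself can
# always do the opposite of what was predicted.

def fixed_predictor(agent_description):
    """Hypothetical predictor: maps an agent's description to a guessed action."""
    return True  # any concrete prediction will do; the argument is the same

def contrarian(predict):
    """Agent that queries the predictor about itself and inverts the answer."""
    return not predict("contrarian")

# By construction the predictor is wrong about the contrarian agent:
predicted = fixed_predictor("contrarian")
actual = contrarian(fixed_predictor)
assert predicted != actual
```

The same inversion works no matter what `fixed_predictor` returns, which is the intuition behind "you can predict it, or it can predict you, but not both": accurate prediction fails once the predicted system can simulate the predictor.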
On Mon, Jan 13, 2025, 10:22 PM John Rose wrote:
> On Monday, December 23, 2024, at 11:17 AM, James Bowery wrote:
>
> *We may be talking at cross purposes here. I am referring to the theorem
> that no intelligence can model itself.*
>
>
> I keep thinking there is a way around Wolpert's theorem, agen