On self-modifying AI, see also

https://intelligence.org/files/TilingAgentsDraft.pdf

We model self-modification in AI by introducing “tiling” agents whose 
decision systems will approve the construction of highly similar agents, 
creating a repeating pattern (including similarity of the offspring’s 
goals). Constructing a formalism in the most straightforward way produces a 
Gödelian difficulty, the “Löbian obstacle”. By technical methods we 
demonstrate the possibility of avoiding this obstacle, but the underlying 
puzzles of rational coherence are thus only partially addressed. We extend 
the formalism to partially unknown deterministic environments, and show a 
very crude extension to probabilistic environments and expected utility; 
but the problem of finding a fundamental decision criterion for 
self-modifying probabilistic agents remains open.
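
Roughly (my gloss, not the abstract's wording): the Löbian obstacle falls
out of Löb's theorem. Writing □P for "P is provable in the agent's theory T":

    Löb's theorem: if T ⊢ (□P → P), then T ⊢ P
    (formalized version: T ⊢ □(□P → P) → □P)

So T can prove "provability of P implies P" only for statements P it already
proves outright. A parent agent that will only build a successor whose
T-proofs of safety it can trust needs exactly such a soundness schema for
arbitrary P, and Löb forbids it; hence the naive construction fails to tile.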

@philipthrift



On Thursday, July 11, 2019 at 11:28:44 PM UTC-5, Terren Suydam wrote:
>
> Sure, but that's not the "FOOM" scenario, in which an AI modifies its own 
> source code, gets smarter, and with the increase in intelligence, is able 
> to make yet more modifications to its own source code, and so on, until its 
> intelligence far outstrips the capabilities it had before the recursive 
> self-improvement began. It's hypothesized that such a process could take an 
> astonishingly short amount of time, thus "FOOM". See 
> https://wiki.lesswrong.com/wiki/AI_takeoff#Hard_takeoff for more.
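>
> (One toy way to see where "astonishingly short" comes from, offered only as
> an illustration, not something from that page: if self-improvement makes
> intelligence grow faster the more of it there is, say dI/dt = k*I^a with
> a > 1, the solution doesn't merely grow exponentially, it diverges at a
> finite time t* = I0^(1-a) / (k*(a-1)). Whether real systems look anything
> like that is of course the whole debate.)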
>
> My point was that a mind's inherent inability to understand itself 
> completely makes the FOOM scenario less likely. An AI would be forced to 
> model its own cognitive apparatus in a necessarily incomplete way. It might 
> still manage to improve itself using these incomplete models, but 
> there would always be some uncertainty.
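>
> To make that concrete, here's a minimal sketch (all names hypothetical) of
> the conservative policy an imperfect self-model forces on such an agent:
> only adopt a rewrite when the estimated gain beats the modeling error by a
> margin.
>
>     def should_adopt(current_score, candidate_score, model_error, margin=2.0):
>         # candidate_score comes from an approximate self-model, so it
>         # carries error; demand the gain exceed that error by a safety margin.
>         return candidate_score - current_score > margin * model_error
>
>     # A predicted +0.5 gain with +/-0.4 model error is rejected:
>     should_adopt(10.0, 10.5, model_error=0.4)  # -> False
>
> The larger the self-model error, the fewer candidate improvements clear the
> bar, which is one way "always some uncertainty" damps a FOOM.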
>
> Another more minor objection is that the FOOM scenario also selects for 
> AIs that become massively competent at self-improvement, but it's not clear 
> whether this selected-for intelligence is merely a narrow competence, or 
> translates generally to other domains of interest.
>
>
> On Thu, Jul 11, 2019 at 2:56 PM 'Brent Meeker' via Everything List <
> [email protected]> wrote:
>
>> Advances in intelligence can come just from gaining more factual knowledge, 
>> knowing more mathematics, using faster algorithms, etc.  None of that is 
>> barred by not being able to model oneself.
>>
>> Brent
>>
>> On 7/11/2019 11:41 AM, Terren Suydam wrote:
>> > Similarly, one can never completely understand one's own mind, for it 
>> > would take a bigger mind than one has to do so. This, I believe, is 
>> > the best argument against the runaway-intelligence scenarios in which 
>> > sufficiently advanced AIs recursively improve their own code to 
>> > achieve ever increasing advances in intelligence.
>> >
>> > Terren
>>
>>
>>
