Learning = collecting data, doing induction (cat ≈ dog because both drink, etc.),
and playing against itself.
Self-improving = modifying its own code.
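The induction step above (cat ≈ dog because both drink, etc.) can be sketched as similarity over shared observed attributes. This is a toy illustration, not the author's method; the attribute sets and the Jaccard measure are my assumptions.

```python
# Toy induction sketch: infer that cat and dog belong together
# because they share many observed attributes. (Hypothetical data.)
attrs = {
    "cat": {"drinks", "furry", "four_legs"},
    "dog": {"drinks", "furry", "four_legs", "barks"},
    "rock": {"hard"},
}

def similarity(a, b):
    # Jaccard overlap: shared attributes / all attributes seen on either
    inter = attrs[a] & attrs[b]
    union = attrs[a] | attrs[b]
    return len(inter) / len(union)

print(similarity("cat", "dog"))   # 0.75 -> high overlap, induce shared category
print(similarity("cat", "rock"))  # 0.0  -> no overlap, no induction
```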

Yes, intelligence, manipulation, and sensing are proportional to (and limited 
by) the size and speed of the system. But a small system, the size of a fist, 
can still do a hell of a lot if it is a nanobot fog swarm with encoded 
knowledge (close to the highest technology possible). Once you collect enough 
diverse data, you don't need to grow the brain much bigger past some scale to 
keep improving, because you can extract/induce so much from the nuclear 
knowledge rods you already hold; still, manipulation, sensors, and sheer size 
do improve with scale, which improves survival likelihood.

A system the size of a house can self-improve its code to a super-super-ASI 
level: the algorithm is not about how much data it has but about how it uses 
the data. Although a fist-sized computer doesn't have enough mass to evolve 
into a nanofog sphere, it can improve its algorithm to a super-AI level, 
extracting more from the same dataset to improve lossless compression. *You 
CAN improve compression without using a bigger dataset.*

So yes, a small system can be very deadly; even a single virus particle can! 
So be concerned if your algorithm has reached ASSSI extractor level and has 
500GB of data: that's a hell of a lot of relationships to extract. Big weather 
systems are unpredictable because their complexity grows exponentially with 
size; you need comparably large data to predict them (fight big with big).
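The compression claim above can be demonstrated directly: the same dataset shrinks further when the compressor spends more effort finding structure, with no extra data needed. A minimal sketch using Python's standard zlib module; the toy byte string is my assumption.

```python
import zlib

# Hypothetical toy dataset: the SAME bytes go to both compressor settings.
data = b"the cat drinks water and the dog drinks water too; " * 400

quick = zlib.compress(data, level=1)  # fast, weaker search for patterns
hard = zlib.compress(data, level=9)   # same data, more effort extracting structure

print(len(data), len(quick), len(hard))
# More effort on the same input never needs a bigger dataset to compress better.
assert len(hard) <= len(quick) < len(data)
```

The point is that the gain comes from the algorithm's use of the data, not from the amount of data, which is exactly the claim in the paragraph above.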
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1f1af8ac2c36937b-M657b711771110693f31b775c
Delivery options: https://agi.topicbox.com/groups/agi/subscription
