Does this have a mathematical formula in it lol?

AI safety depends on:
1) Whether an evil actor makes it (developed in US military labs... great... its 
goal is shooting foreheads, wait until it ends up in your country by mistake!). 
Ban the military.
2) Whether a good actor makes it but fails to initialize it properly. The best 
way to ensure safety is not to argue with it, but to initialize it correctly. 
After that, like any child, you still need to raise your kid until he can take 
care of you.

To initialize it properly, you give it good goals. It can achieve them in 
various ways: maybe I can get mom that gift if I rob a bank... but it should 
reason that it could die, and others could die from guns or shock. Any solution 
it comes up with should have a clear cause-effect chain, and it should decide 
whether the effect is dangerous. It may think: blow up the bridge, then cars 
fall; people are in the cars; they die. Through tasks like 
entailment/translation/summarization, it has to check whether the main goal or 
related goals are threatened (lives, money, food, video game stash, fetish 
doll, all items safe? Check.). It should know that diamonds, tall buildings, 
etc. are higher value. This is the only way to teach it what to do and what not 
to do; there are too many sentences that could play out!
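To make the idea concrete, here is a minimal sketch of that cause-effect check. 
The causal links and the protected-goal list are toy assumptions I made up for 
illustration (a real system would derive them via entailment models, not a 
hand-written table); it just walks the effect chain of a proposed action and 
vetoes the plan if anything it touches threatens a protected item like lives.

```python
# Toy cause -> effects table (an assumption for this sketch; a real agent
# would infer these links, e.g. via textual entailment).
CAUSES = {
    "blow up bridge": ["cars fall"],
    "cars fall": ["people in cars die"],
    "rob a bank": ["guns fired"],
    "guns fired": ["people die"],
}

# Effects that threaten a protected goal (lives, money, food, ...).
PROTECTED = {
    "people in cars die": "lives",
    "people die": "lives",
}

def downstream_effects(action):
    """Walk the cause-effect chain: everything the action may lead to."""
    seen, frontier = set(), [action]
    while frontier:
        cause = frontier.pop()
        for effect in CAUSES.get(cause, []):
            if effect not in seen:
                seen.add(effect)
                frontier.append(effect)
    return seen

def plan_is_safe(action):
    """Reject any plan whose downstream effects touch a protected goal."""
    return not any(e in PROTECTED for e in downstream_effects(action))
```

So `plan_is_safe("blow up bridge")` comes back False (cars fall, people die), 
while a harmless plan with no dangerous effects passes the check.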
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb23b29b1f508a00f-Mfaa7118511a1617b1c303b28
Delivery options: https://agi.topicbox.com/groups/agi/subscription