I just watched a lengthy panel discussion about applying AI in the criminal
justice system, e.g. whether to release people who have been arrested.
While this is high impact, the problems they encounter are typical of
nearly all AI applications.

Whether deciding whom to arrest, whom to release, what stock to invest in,
whose face it is, etc., there seem to be some really basic criteria for
applying the "I" in AI. These systems do NOT meet the basic criterion of
being able to show that they are not discriminating, colluding, stealing,
etc. This sets their users up to lose lawsuits, and imperils the public.

ANY system that can't explain its output is a STATISTICAL system that is
NOT AI.

We could probably develop and adopt a one-paragraph standard that works for
everyone here - and in the process put us on the map.

Anyone here interested?

Steve

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc031666044462b42-M51c2bf814f2f2050dde4236f
Delivery options: https://agi.topicbox.com/groups/agi/subscription
