I liked your very interesting article on AI, Mike. I especially admired the
way you expressed your ideas.

Naturally, your piece generated thoughts, and one memory in particular
popped up. I seem to recall an episode of *Star Trek: The Next Generation*
in which a holodeck creation of Professor James Moriarty
goes rogue and seizes control of the *Enterprise*
in his quest to live in reality, outside the holographic environment. I
need to go back and watch the episode. It was scary.

David Sharpe

On Tue, May 7, 2024 at 6:13 AM Mike Godwin <mnemo...@gmail.com> wrote:

> I thought you all might be interested in my little contribution to a
> colloquy in Reason magazine's 2024 issue. You can find the whole thing
> here:
>
> https://web.archive.org/web/20240506084929/https://reason.com/2024/05/05/ai-is-like/
>
> AI Is Like the Dawn of Modern Medicine
>
> By Mike Godwin
>
> When I think about the emergence of "artificial intelligence," I keep
> coming back to the beginnings of modern medicine.
>
> Today's professionalized practice of medicine was roughly born in the
> earliest decades of the 19th century—a time when the production of more
> scientific studies of medicine and disease was beginning to accelerate (and
> propagate, thanks to the printing press). Doctors and their patients took
> these advances to be harbingers of hope. But it's no accident this
> acceleration kicked in right about the same time that Mary Wollstonecraft
> Shelley (née Godwin, no relation) penned her first draft of *Frankenstein;
> or, The Modern Prometheus*—planting the first seed of modern
> science-fictional horror.
>
> Shelley knew what Luigi Galvani and Giovanni Aldini believed they knew,
> which is that there was some kind of parallel (or maybe connection!)
> between electric current and muscular contraction. She also knew that many
> would-be physicians and scientists learned their anatomy from dissecting
> human corpses, often acquired in sketchy ways.
>
> She also likely knew that some would-be doctors had even fewer moral
> scruples and fewer ideals than her creation Victor Frankenstein. Anyone who
> studied the early 19th-century marketplace for medical services could see
> there were as many quacktitioners and snake-oil salesmen as there were
> serious health professionals. It was definitely a "free market"—it lacked
> regulation—but a market largely untouched by James Surowiecki's "wisdom of
> crowds."
>
> Even the most principled physicians knew they often were competing with
> charlatans who did more harm than good, and that patients rarely had the
> knowledge base to judge between good doctors and bad ones. As medical
> science advanced in the 19th century, physicians also called for medical
> students at universities to study chemistry and physics as well as
> physiology.
>
> In addition, the physicians' professional societies, both in Europe and in
> the United States, began to promulgate the first modern medical-ethics
> codes—not grounded in half-remembered quotes from Hippocrates, but
> rigorously worked out by modern doctors who knew that their mastery of
> medicine would always be a moving target. That's why medical ethics were
> constructed to provide fixed reference points, even as medical knowledge
> and practice continued to evolve. This ethical framework was rooted in four
> principles: "autonomy" (respecting patients' rights, including
> self-determination and privacy, and requiring patients' informed consent to
> treatment), "beneficence" (leaving the patient healthier if at all
> possible), "non-maleficence" ("doing no harm"), and "justice" (treating
> every patient fairly and equitably).
>
> These days, most of us have some sense of medical ethics, but we're not
> there yet with so-called "artificial intelligence"—we don't even have a
> marketplace sorted between high-quality AI work products and statistically
> driven confabulation or "hallucination" of seemingly (but not actually)
> reliable content. Generative AI with access to the internet also seems to
> pose other risks that range from privacy invasions to copyright
> infringements.
>
> What we need right now is a consensus about what ethical AI practice looks
> like. "First do no harm" is a good place to start, along with values such
> as autonomy, human privacy, and equity. A society informed by a
> layman-friendly AI code of ethics, and with an earned reputation for
> ethical AI practice, can then decide whether—and how—to regulate.
>
> *Mike Godwin is a technology policy lawyer in Washington, D.C.*
>
> --
> Be vigitant, I beseech you!
> ---
> You received this message because you are subscribed to the Google Groups
> "Shakespeare at Winedale Email List" group.
>
_______________________________________________
Winedale-l mailing list -- winedale-l@lists.wikimedia.org