"" there’s no way to make the administrative and bureaucratic systems of
apartheid and violence more humane for the people subjugated by that system.
I think we must forcefully put on the table the possibility that we will
destroy systems ... ""
- - -

How can administrative and bureaucratic systems be made humane?
By destroying them? Or by “humanizing their languages”: humanizing
legalese and bureaucratese … humanizing the politics that produces them,
humanizing the human. Humanizing languages means humanizing procedures
and the analysis of those procedures. Something similar lay at the origin
of AIR (Analisi dell’Impatto della Regolamentazione, ex-ante regulatory
impact analysis) and VIR (Verifica dell’Impatto della Regolamentazione,
ex-post regulatory impact review), which quickly became stereotyped
formulas passively stapled onto legislative texts.

We have been trying for ages. Digital technology would be an opportunity
to rethink all of this from scratch; the readers of this list have taught
me the enormous difficulty of the task.

On Fri, 28 Jun 2024 at 12:00, <nexa-requ...@server-nexa.polito.it>
wrote:

> Date: Thu, 27 Jun 2024 14:48:09 +0000
> From: Daniela Tafani <daniela.taf...@unipi.it>
> To: "nexa@server-nexa.polito.it" <nexa@server-nexa.polito.it>
> Subject: [nexa] Destroy AI
>
> Ali Alkhatib, 24 June 2024
> Destroy AI
> I’ve been struggling to articulate this idea, and maybe the problem is
> that it’s actually kind of simple: once you put the thought on paper,
> there’s really no good reason to unpack a whole case for it.
> I’m gravitating away from the discourse of measuring and fixing unfair
> algorithmic systems, or making them more transparent, or accountable.
> Instead, I’m finding myself fixated on articulating the moral case for
> sabotaging, circumventing, and destroying “AI”, machine learning systems,
> and their surrounding political projects as valid responses to harm.
> In other words, I want us to internalize and develop a more rigorous
> appreciation of those who fuck up AI and its supporting systems.
> With hegemonic algorithmic systems (namely large language models and
> similar machine learning systems), and the overwhelming power of capital
> pushing these technologies on us, I’ve come to feel like human-centered
> design (HCD) and the overarching project of HCI has reached a state of
> abject failure. Maybe it’s been there for a while, but I think the field’s
> inability to rise forcefully to the ascent of large language models and the
> pervasive use of chatbots as panaceas to every conceivable problem is
> uncharitably illustrative of its current state.
> CHI and FAccT have failed to meaningfully respond to the deskilling of
> creative labor; or to the environmental or humanitarian crises these
> systems cause or exacerbate around the world; or even to the co-opting of
> our spaces by grifters and con artists making up “probabilities of doom”.
> Indeed, in some ways, these spaces have avoided the difficult conversations
> and welcomed the nonsense in to try to avoid the anguish of facing a
> genocide in which we are collectively implicated, or the disillusionment of
> confronting our own roles as agents of the corporate states that invade,
> surveil, displace, and kill people.
> I’m no longer interested in encouraging the design of more human-centered
> versions of these murderous technologies, or in informing the more humane
> administration of complex algorithmic systems that separate families, bomb
> schools and neighborhoods, that force people out of their homes or onto the
> streets, or that deny medical care at the moment people need it most. These
> systems exist to facilitate violence, and HCI researchers who have
> committed their careers to curl back that violence at the margins have
> considerably more of something in them than I have. I hope it’s patience
> and determination, and not self-interested greed.
> Regardless, there’s no way to make the administrative and bureaucratic
> systems of apartheid and violence more humane for the people subjugated by
> that system.
>
> I think we must forcefully put on the table the possibility that we will
> destroy systems that fail to make a compelling affirmative case for their
> existence. That threat must be credible. We should actively undermine and
> sabotage systems, and recognize that labor as a moral project that we
> engage in, the way luddites sabotaged machinery that tore people apart.
>
> This isn’t really a post or a conversation for the HCD or HCI community.
> I’ve grown weary (and wary, I suppose) of the design community because they
> ultimately seem committed to … designing systems - an ideological project
> antithetical to this one.
>
> If you think of yourself as a member of that community most people call
> “design”, I would ask you to pose a few challenging questions to yourself.
> Start with these:
>
> Do you work with systems, or people? Which would you follow, if the two
> paths diverge? What if they’re in conflict and you can only follow or
> defend one? If you see a system dismantling a human being’s life, do you
> think that the system must be fixed, or that the system must be destroyed?
>
> I wanted to end with a few positive notes: I like projects like Glaze at
> the University of Chicago, and while I’ve been trying to write a piece
> about this I came across a manifesto on Mastodon that I thought was very
> cool and uncannily relevant. I’ve also seen some really rad indigenous art,
> and I know that there are other works that have explored the general space
> of destroying, sabotaging, and poisoning datasets and whatnot.
>
> I’d be surprised if nobody has ever thought to put this constellation of
> ideas together in the same blog post - please hit me up if you’ve seen
> additional thoughts like this. I’d love to hear from other people who are
> thinking about stuff like this.
>
> If you’ve been thinking along these lines, or if you’re one of the people
> I’ve linked to, then please take this as encouragement and an invitation to
> be in dialog with the other stuff I’ve written about and pointed to,
> because I think resistance is necessary, and mustn’t be captured by the
> design-brained people.
>
> https://ali-alkhatib.com/blog/fuck-up-ai
