Dear Juan Carlos,
replacing labor with capital, even at higher initial cost, fits the extractive logic of industrial capitalism perfectly if it goes unregulated. The reasons, I presume, are that the machine embodies not only present labor but future labor as well: doing without workers, and learning to do without them ever more, will increase profit and above all the /predictability/ of the industrial process, sheltered from unions, payroll contributions and pandemics.
In the past, socialist policies aimed to regulate by redistributing profits, but I do not believe they aimed, to the same extent, at substituting human labor for that of the machine.
An alternative political proposal "in which any technology must serve to improve the life of the community, rather than being almost exclusively a tool in the hands of a few to accumulate even more capital and power at people's expense" can, I believe, be shared by many; but how do we apply it in a political context where industry prevails? A truly radical proposal will be seen as anti-industrialist: the worst heresy of modernity.
As I see it, what is needed is not so much new thinking about technology as new thinking about its use by industry, whether capitalist or planned (in China the tune does not seem any different). We need to lean on an idea of industrial capitalism oriented toward the community (and the environment), of which far too few examples have been seen, always praised, but praised the way saints usually are: certainly with no intention of emulating them.
It is no longer enough just to redistribute profits; we must also discuss the relative roles of industry and human labor (linguistic labor included), and which of the two should prevail over the other.
What we see happening with Amazon is already an advanced application of this extractive process, but worse looms on the horizon: the industrialization of every process of human life, with the intention of continuing to extract knowledge to attach to ever more automated processes.
Below is an example of "AI agents" enlisting humans (some of them unwitting) to complete their tasks: the idea is that of the human-as-API:
<https://www.noemamag.com/ai-agents-are-recruiting-humans-to-observe-the-offline-world/>
When an agent hits this wall, it does what software always does: It
calls an application programming interface (API), a mechanism that
enables one system to communicate with another. Only now, the API is
a human. […]
Our agentic future is being built upon physical-world subtasks
routed to humans so agents can proceed. A core worry
<https://knightcolumbia.org/content/levels-of-autonomy-for-ai-agents-1>
is that agents will become autonomous enough to dictate the terms of
human participation while still recruiting us for sensing and
liability. At an institutional scale, this is the beginning of a
world in which people are less /in the loop/ and more /on call/,
less empowered <https://arxiv.org/abs/2601.19062> by the technology
and more enslaved by its tempo. […]
The sensing is distributed. The logic is centralized. The burden
falls unevenly.
This is an externality that current evaluation frameworks miss
entirely. An agent that helps me by querying the people around me is
externalizing costs onto my social network. The most responsive
people become the most burdened. Responsiveness becomes a tax. […]
Once human sensing is treated as an explicit part of the system,
uncomfortable patterns are sure to emerge. More sensing can degrade
performance. Overload a content moderator with AI-flagged posts and
she might default to “approve,” respond carelessly or stop reading.
The agent then becomes confidently wrong in its assessments because
the person feeding it information disengaged. Over time, constant
micro-verifications erode professional judgment. The human gets
better at confirming but worse at reasoning.
Being observed by an AI agent can also change what can be reliably
measured. A person who knows she is being monitored may not report
the world accurately. People may begin to self-censor, optimize for
what the system rewards or stop surfacing inconvenient truths. The
cost extends beyond accuracy. Research shows that
persistent surveillance induces hypervigilance and erodes mental
health. Sometimes, less information can produce better outcomes. For
example, withholding demographic data from a hiring agent may
prevent discrimination that full access would enable. The best agent
knows what matters. No more, no less.
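The "human-as-API" pattern the excerpt describes can be sketched in a few lines of Python. Everything here (the HumanAPI class, run_agent_task, the sample sensor data) is a hypothetical illustration of the idea, not any real agent framework:

```python
# Minimal sketch of the "human-as-API" pattern: when the agent hits
# a physical-world question its digital sensors cannot answer, it
# "calls" a person exactly as it would call a web API.
# All names here are invented for illustration.

from dataclasses import dataclass, field
from queue import Queue

@dataclass
class HumanAPI:
    """A human contact exposed to the agent as if it were an API endpoint."""
    name: str
    inbox: Queue = field(default_factory=Queue)

    def call(self, question: str) -> str:
        # A real deployment would send a push notification or SMS and
        # block until the person replies; here we simulate the reply.
        self.inbox.put(question)
        return f"[{self.name}'s reply to: {question!r}]"

def run_agent_task(task: str, sensors: dict[str, str], human: HumanAPI) -> str:
    """Answer from digital sources when possible; otherwise route to a human."""
    if task in sensors:           # the agent can sense this itself
        return sensors[task]
    return human.call(task)       # the wall: a physical-world subtask

alice = HumanAPI("alice")
sensors = {"store_hours": "9am-5pm (from website)"}

print(run_agent_task("store_hours", sensors, alice))
print(run_agent_task("is the line at the store long right now?", sensors, alice))
```

The asymmetry the excerpt worries about is visible even in this toy: the agent's logic is centralized in `run_agent_task`, while the sensing burden lands in `alice.inbox`, whether or not she opted in.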
Warm regards,
Alberto
On 3/15/26 09:33, J.C. De Martin via nexa wrote:
No, I did not mean any idealism, Guido; I was too concise. By the expression "a radically alternative political proposal" I mean not only ideas or visions (which are, however, a necessary if insufficient precondition), but above all something with the capacity to make a real political difference.
What, exactly? /Vaste programme/....
In the meantime, though, we can and must push back against the "common sense" with which people speak and write about AI and, more generally, about technology. Because we in this forum may well have understood everything, but out there the vast majority has internalized a way of thinking about technology that brings to mind Flaiano (actually Barilli): everyone eager to rush to the victor's aid...
So even just at the level of ideas, discourses, narratives, frames and paradigms, there is plenty of work to do.
Ciao,
JC
On 15/03/26 09:20, Guido Vetere wrote:
I well understand your irritation at this naturalization of technology: it is an old ideological strategy.
That said, it seems to me that the truly decisive point lies a little further downstream. It is not enough to assert that AI /should/ serve collective well-being rather than private accumulation. Deep down, a great many people already think so. The problem is not the absence of visions, but the scant capacity to realize even the most feasible ones (for example: building autonomous social platforms).
Marx said it in the /Theses on Feuerbach/: it is not consciousness that changes the world; the world changes when material practices and social relations change. The "right" ideas almost always exist already. What is missing is the political, economic and scientific capacity to make them operational.
For this reason, in my view, the interesting question is not so much which radical vision of AI we should formulate (there are already many), but which policies, which research programs, which social alliances and economic arrangements could actually steer its development in a genuinely social direction.
Otherwise we risk a paradox: criticizing the naturalization of technology only to rely, symmetrically, on the performative magic of ideas. As if saying "AI must serve the community" were enough to make it do so, or saying "enough with AI" (Tajani-style) could change the course of events, or proclaiming "we need a CERN for AI" at some conference would make anything concrete happen.
Have a good Sunday,
G.
On 15 Mar 2026, at 08:55, J.C. DE MARTIN via nexa <[email protected]> wrote:
It really is fascinating (and infuriating)... report after report, article after article, the ideological premise of all these discourses, without exception, is plain: AI is treated exactly as if it were "nature". "AI", in short, just happens, "AI" occurs... and, as it occurs, some workers "unfortunately" find themselves out on the street. Everything is offloaded onto the workers, who can only endure it or, if they are young, must rack their brains trying to figure out (with a crystal ball?) which professions will be, X years from now, least exposed to technological unemployment... (shall we all become plumbers, bike couriers, bricklayers?). All of it ideologically justified by those who rush in at once to explain that this is how it should be, that it is "efficient", that in the long run it is the best thing for everyone, ignoring that what happens in the long run is determined by power relations, not by the good fairy.
In other words, in the Europe and the United States of the 21st century (the century of Thiel, Musk, Merz, BlackRock, etc.), a right is postulated to deploy any technological innovation whatsoever, entirely regardless of its social consequences. In other words, only one right exists, and an absolute one at that: the right to make profits (here and now). Nothing else counts. Families end up destitute? Whole cities or social classes find themselves without a livelihood overnight? Those people's problem, or, at best, a problem to be left to the same welfare state that the very same Thiel, Musk, Merz, etc. would like to dismantle, logically leaving as the only future option the physical elimination of the many human beings who are no longer of any use to their lordships (or, in the more "humane" version: let the useless subhumans go on living on some form of basic income, shut in their little rooms with drugs and a virtual-reality headset, so they cause no trouble).
It is urgent to develop a radically alternative political proposal. A vision of the world in which AI (and, in general, any technology) must serve to improve the life of the community, rather than being almost exclusively a tool in the hands of a few to accumulate even more capital and power at the expense of a great many defenseless people. (And let's not even mention AI in the service of systematic violence...)
Juan Carlos
On 12/03/26 14:29, Daniela Tafani wrote:
Amazon is determined to use AI for everything – even when it slows
down work
Corporate employees said Amazon’s race to roll out AI is leading to
surveillance, slop and ‘more work for everyone’.
Varsha Bansal
Wed 11 Mar 2026
When Dina, a software developer based in New York, joined Amazon
two years ago, her job was to write code. Now, it’s mostly fixing
what artificial intelligence breaks.
The internal AI tool she’s expected to use, called Kiro, frequently
hallucinates and generates flawed code, she says. Then she has to
dig through and correct the sloppy code it creates, or just revert
all changes and start again. She says it feels like “trying to AI
my way out of a problem that AI caused”.
“I and many of my colleagues don’t feel that it actually makes us
that much faster,” Dina said. “But from management, we are
certainly getting messaging that we have to go faster, this will
make us go faster, and that speed is the number one priority.”
Just days after speaking to the Guardian, Dina was laid off.
Lisa, a supply chain engineer who has worked at Amazon for over a
decade, says that AI tools at work have been helpful to her only in
about one in every three attempts. And even then, she often finds
issues and has to consult with colleagues to verify and correct
their results, which takes up more time than if she’d done the task
without AI.
She doesn’t take issue with the AI tools themselves, but rather the
company’s logic in pushing all employees to use them daily. “You
don’t look at the problem and go, ‘How do I use this hammer I
have?’” she said. “You look at it and go, ‘Is this a problem for a
hammer or something else?’”
More than half a dozen current and former Amazon corporate
employees, in roles ranging from software engineer to user
experience researcher to data analyst, told the Guardian that
Amazon is pressing employees to integrate AI across all aspects of
their work, even though these workers say this push is hurting
productivity. They say Amazon is rolling out AI use in a haphazard
way while also tracking their AI use, and they’re worried the
company is essentially using them to train their eventual bot
replacements. All of this, they said, is demoralizing. The Guardian
granted these workers anonymity because of their fear of
professional repercussions.
“We have hundreds of thousands of corporate employees in a wide
range of roles across many different businesses, each of which is
using AI in different ways to learn about what works best for their
use cases,” Montana MacLachlan, an Amazon spokesperson, said.
“While different employees may have different experiences, what we
hear from the vast majority of our teams is that they’re getting a
lot of value out of the AI tools that they use day-to-day.”
This pressure comes as Amazon has laid off 30,000 workers in the
last four months – nearly 10% of its corporate workforce of roughly
350,000. Its cuts are part of a wave of recent AI-connected tech
layoffs, including at Block, Pinterest and Autodesk. Exactly how
much these companies will be able to rely on AI to replace
headcount is unclear, and each company has given an array of
sometimes contradictory reasons for reductions. Jack Dorsey, the
Block CEO, said outright that AI was behind his 40% staffing cuts,
while Pinterest and Autodesk said they were redirecting investments
to AI. Amazon has waffled in explaining how AI factors into its
layoff decisions, saying both that it would lead to reductions and
that recent cuts weren’t AI-driven. The company said in February it
would spend some $200bn this year on AI infrastructure and
announced a $50bn investment in OpenAI.
In a moment of rising anxiety about AI and work, the decisions
Amazon makes around automation – and even how it talks about these
shifts – will be consequential for not just its massive workforce,
but for people in industries around the world. Amazon is the
second-largest employer in the US and has long influenced workplace
practices across both white collar and blue collar industries.
“There’s a lot of talk among corporate employees about how some of
these practices – about performance, surveillance and monitoring –
are somewhat imported from the warehouse and the drivers space, and
that it is Amazon expanding this model of labor to white collar
workers,” Jack, a software engineer at Amazon for more than a
decade, said. “It does feel like we’re at the vanguard of a new
stage in employer relations with the advent of AI.”
While Amazon has a reputation for being a tough place to work, the
impact of its AI campaign has pressurized its workplace, workers
said. “It’s worse now,” said Denny, a software engineer, who works
in the retail space at the company. “If we don’t pivot ... then we
risk becoming obsolete and being let go in the next layoff.”
Whenever there’s a task at hand, the biggest question managers ask
is whether it can be done faster with AI tools, according to Denny.
This is leading employees to use AI tools just for the sake of it.
Recently, someone in Denny’s team shared that an internal AI agent
had saved him about a week of developer effort on a feature. But
when Denny looked at the actual code review, he found dozens of
comments from colleagues pointing out basic issues. The AI-generated
code was full of slop.
“In the end, my guess is that the developer cycle is not going to
change, and [could] even be potentially longer,” said Denny. “This
pressure to use [AI] has resulted in worse quality code, but also
just more work for everyone.”
Denny was one of several workers who told the Guardian they’re
pressured to use an overwhelming array of AI tools, many of which
were hastily developed in internal hackathons, and that they then
have to spend time answering surveys about their experience with the
tools.
“I would get shown these random tools by my manager who’d be like:
‘Why don’t you try using this thing?’, and it was just the result
of a hackathon,” said Denny. He says the tools are “half-baked” and
unhelpful, and in fact add to his workload because he has to vet them.
Amazon typically organizes quarterly hackathons to encourage
engineers to develop new projects. Sometime last year, Denny
recalls, the company primarily switched to generative AI
hackathons, during which the majority of projects ended up being
developer productivity focused tools.
“We don’t mandate teams use AI tools,” said Amazon’s MacLachlan.
“However, we believe these tools can help employees work more
efficiently and automate time-consuming, undifferentiated tasks.”
There have also been public slip-ups that seem connected to
Amazon’s embrace of AI. According to a February FT report, Amazon
recently experienced at least two outages because of issues with
the company’s internal AI tools, including a 13-hour interruption
to a customer-facing system in December after some engineers
allowed its AI tool “to make certain changes”. Amazon, however,
said that an employee, rather than AI, caused the service
interruption. The FT reported on Tuesday that Amazon would convene
engineers to explore “a spate of outages, including incidents tied
to the use of AI coding tools”.
“I think if you continue to push people to use AI tools in every
single aspect, you’re going to get more errors like that,” Sarah,
an Amazon software engineer, said.
Sarah said that AI can be useful, but its potential is best
realized when engineers decide how to use it. But at Amazon, even
when AI is not suited for a task, she’s now expected to train it.
“We have to write out detailed procedures so that the AI can
understand it and give better output,” said Sarah. “Part of my new
job role, it feels like, is being asked to train the AI to
essentially replace you.” She’s early in her career and worries
that offloading her work to AI is stunting her learning curve.
Forcing employees to adopt tools, according to Ifeoma Ajunwa,
founding director of the AI and Future of Work Program at Emory
University and the author of The Quantified Worker, usually
backfires. “Generally, employees are in a better position [than
management] to determine what tools can aid productivity,” she said.
Meanwhile, Amazon workers often have to seek out training in AI best
practices on their own.
Will, a user experience researcher, said Amazon offers employees
plenty of AI training videos on their learning portals, though most
of them are optional. When he’s attended training sessions, “the
focus is always, ‘here’s how to build something as quickly as
possible’”. He said trainers – who are typically peer employees who
are also AI power users – advise to carefully review each step
before letting AI start building. At the same time, Will said: “I
have been in several trainings where the instructor says you can
just ask the AI to check its own work.” However, you can’t fully
rely on AI to detect its own mistakes; that’s something human
judgment is better suited for.
“One of the biggest predictors of AI adoption and whether employees
feel that AI increases their productivity is whether management
encourages it and provides training,” Alex Imas, professor of
behavioural science and economics at Chicago Booth, said.
MacLachlan said Amazon provides different training and resources
for people across the company, including structured options.
“Employees are encouraged to use the tools themselves as a learning
mechanism, adopting a learn-as-you-work approach that is proving to
be one of the most practical and effective methods of AI adoption
across the company,” she said.
An AI-fueled shift to surveillance
Along with the productivity challenges that have come with Amazon’s
AI push, workers said it’s also making them feel surveilled.
For years, each morning when Amazon employees logged in to work, an
internal system called Amazon Connections would greet them with a
message and ask for feedback on topics like how their teams were
functioning, or how satisfied they felt with their work. Over the
last year, these questions have increasingly centered less on human
factors and more on AI.
Maria, a former product manager who was laid off from Amazon in
January, said questions asking her about her career or team shifted
to more often focus on AI: “‘Are you using AI in your daily work?,’
‘How often are you using it?,’ ‘Do you think that you’re a power
user?,’ or ‘Is AI a priority in your organization?’”.
Then there are more obvious indicators of surveillance. Workers
said managers at Amazon have a dashboard where they track their
team members’ AI use, including if they’re using certain tools and
how often they do so. (The Information first reported this in
February.)
Jack, the software developer who’s worked at Amazon for more than a
decade, said the company also launched a different dashboard, which
the Guardian has viewed, so teams could see their generative AI
adoption, engagement and depth of usage. “Every team treats it
differently,” he said, with some managers using it with a goal of
getting at least 80% of their team using AI tools weekly.
Sarah said her team’s principal engineer told her and his other
reports he checks this dashboard daily. “He’s really been pushing
our AI usage,” she said.
“Of course we want to understand what tools our teams are using and
whether those tools are working well for them or could be
improved,” said MacLachlan.
The inevitable result of AI tools getting deployed at scale is
surveillance, according to Nick Srnicek, author of Platform
Capitalism and a senior lecturer in digital economy at King’s
College London. “The rushed deployment of AI means an uncritical
expansion of surveillance since these tools increasingly require
detailed knowledge of personal workflows and data,” he said. “To
make them more capable means giving management greater insight and
control over workers’ everyday activities.”
Workers also said they suspect their career advancement is
increasingly dependent on their enthusiastic embrace of AI.
“We have promotion documents which have a template with questions
like, ‘What has this person done?’, ‘What impact did it have?’ –
and now it also has a question asking, ‘How [did] they leverage
AI?’,” said Lisa. “I think they want to only keep the people who
support this investment [in AI] and are going to try and filter out
people who do not support it or have concerns about it.” The Wall
Street Journal reported in late February that at Amazon, “managers
do consider who is all-in on AI when it comes to promotions”.
“While we expect employees to use resources – including AI – to
make work more engaging and improve customers’ lives, we don’t
instruct managers to consider AI utilization as part of our
evaluation process,” said MacLachlan. “Instead, we focus on AI
adoption and sharing best practices to celebrate innovation and
operational efficiency gains across the company.”
At the same time, Andy Jassy, Amazon CEO, hasn’t been shy about his
AI expectations for his employees. In a company-wide email last
June, he predicted that AI-driven productivity gains would reduce
the company’s corporate workforce, and urged workers to embrace AI.
“Educate yourself, attend workshops and take trainings, use and
experiment with AI whenever you can, participate in your team’s
brainstorms to figure out how to invent for our customers more
quickly and expansively, and how to get more done with scrappier
teams,” he wrote.
The unspoken math
That same company-wide email prompted heavy internal pushback at
Amazon last summer, with employees slamming Jassy’s leadership and
speaking of the demoralizing impact of the company’s AI push,
according to Business Insider. Months later, over 1,000 workers
signed a petition that raised concerns about the company’s
“aggressive rollout” of AI tools.
As Amazon has laid off thousands of workers, it’s shared growing
revenue numbers each quarter. Though Jassy has repeatedly said that
these layoffs are neither “financially-driven” nor AI-driven, for
Maria, all of this adds up.
“If you say you automated away two hours of someone’s job, you need
to convert that into savings on that job title,” she said,
explaining the company’s logic behind cutting jobs. “That’s the
unspoken math of what they’re doing.”
Jack keeps thinking about comments Jassy made during a companywide
all-hands meeting last spring. According to a Business Insider
report about this meeting, Jassy responded to a question about
running Amazon as “the world’s largest startup”, and said they want
to be “scrappy” to “do a lot more things”. He also warned that
their competitors are the “most technically able, most hungry”
companies, including startups “working seven days a week, 15 hours
a day”.
“All of those things put together was an implicit threat that the
people remaining at the company are expected to work longer and
harder,” said Jack. It “really struck home to me that if [Amazon]
can’t amass profits with endless growth, then it can get a little
bit more by squeezing it out of the people working for it”.
<https://www.theguardian.com/technology/ng-interactive/2026/mar/11/amazon-artificial-intelligence>