Building an infrastructure includes training users to use it, to find it
useful, to need it.
A whole apparatus of free courses will accustom users, from an early age, to
using the so-called “generative AI” built into every program:
we don’t need Orwell to predict the effects of text generators suggesting, at
every moment of the day, what to buy, what to write, what to think,
just as we are writing or thinking.
PedagoGPT<https://codeactsineducation.wordpress.com/2023/06/10/pedagogpt/>
June 10, 2023, by Ben
Williamson<https://codeactsineducation.wordpress.com/author/benwilliamson2013/>
New educational courses are being produced to train people to use generative AI 
and make it familiar across industries and everyday life.

Artificial intelligence might be at the pinnacle of the tech hype
cycle<https://www.gartner.co.uk/en/articles/what-is-new-in-artificial-intelligence-from-the-2022-gartner-hype-cycle>
right now, but maintaining its momentum requires turning short-term novelty
into long-term patterns and habits of use. A recent proliferation of
educational courses to train users in generative AI, like image and text 
generators, is intended to do just that. The pedagogical apparatus being built 
to support AI will extend its applications into a vast array of industries, 
sectors, and everyday practices.

The range of generative AI developments over the past year has been 
astonishing. Products like ChatGPT and Stable Diffusion, which only last year
appeared technically impressive but socially, politically and ethically
problematic<https://rse.org.uk/resources/resource/blog/the-socio-ethical-challenges-of-generative-ai/>,
are fast being translated into a kind of infrastructure for everyday life and
work. AI is
being integrated into search 
engines<https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAI%E2%80%99s-GPT-4>
 and all manner of other 
platforms<https://technative.io/the-api-key-rush-how-generative-ai-is-revolutionising-enterprises/>,
 seemingly as a substrate for how people access and create knowledge, 
information and culture.

Depending on your perspective, AI is either going to be enormously disruptive 
and 
transformative<https://www.accenture.com/us-en/insights/technology/generative-ai-summary>
 for businesses, or enormously 
damaging<https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html>
 for societies. The tech entrepreneurs responsible for building AI are 
themselves calling for long-term regulatory efforts to forestall the 
“existential 
risks”<https://www.salon.com/2023/06/11/ai-and-the-of-human-extinction-what-are-the-tech-bros-worried-about-its-not-you-and-me/>
 of AI, while others argue that AI and other algorithmic systems are already 
causing considerable harms and 
dangers<https://www.thenation.com/article/economy/artificial-intelligence-silicon-valley/>
 that could be addressed in the immediate 
present<https://thesociologicalreview.org/magazine/june-2023/artificial-intelligence/predicted-benefits-proven-harms/>.

Governments and political 
leaders<https://www.ft.com/content/6cc8a0f3-f2fc-4470-a881-8763451b47ea>, 
seemingly wooed by AI leaders like OpenAI’s Sam Altman, have begun calling for 
regulatory action to ensure “AI safety” while paving the way for the benefits 
that AI is supposed to promise. Safeguarding AI from too much regulation in 
order to enable innovation, while speculating about the far-out problems it 
could usher in rather than addressing contemporary problems, has become the 
preferred industry and government approach.

Besides efforts to shape and choreograph regulation, however, the other problem 
for AI leaders and their advocates is to maintain the rapid growth of AI beyond 
its recent ascent up the curve of the hype cycle. And doing that requires
maintaining and growing the user base for AI.

While millions have played with the likes of image generators DALL-E and 
Midjourney or text generators such as ChatGPT over recent months, prolonged 
uptake and use into the future may be less certain. The business aim of AI
companies, after all, isn’t to provide “toys” for people to play with, but to
embed their AI tools, or more accurately AI 
infrastructure<https://www.wired.co.uk/article/digital-infrastructure-politics-government>,
 into a huge variety of industries, organizations, and everyday activities. 
Achieving that means educating users, from enterprise leaders, IT managers and
software developers to public sector workers, NGOs, journalists, lawyers and
healthcare staff, to families, children, and many more.

This is where a new apparatus of pedagogical interventions to train users in AI 
has appeared. Let’s call it the PedagoGPT complex. The PedagoGPT complex 
consists of online courses that are intended to train everyone from beginners 
to advanced machine learning engineers in how to use and deploy generative AI. 
Mundane as that may seem, if we understand AI as becoming an active
infrastructural presence in an array of everyday technologies and practices,
then new AI courses can be understood as accustoming and socializing 
populations into such infrastructures. As colleagues and I have previously 
suggested, tech companies create training schemes to operate as “habituation 
programs”<https://www.researchgate.net/publication/361934307_Amazon_and_the_New_Global_Connective_Architectures_of_Education_Governance>
 that tether business practices and personal desires to the affordances of 
their infrastructures. An infrastructure isn’t just the technology; it depends 
for its enactment on willing and habitual users.

The PedagoGPT complex

Surveying the PedagoGPT complex reveals a large number of players. A simple 
search for “generative AI” on ClassCentral, which is like a MoneySuperMarket 
service for online courses, returns more than 8,600 
courses<https://www.classcentral.com/search?q=generative%20ai>. Many of them
are available on online learning platforms like Coursera, Udemy, Udacity or 
FutureLearn, or are posted on YouTube as basic tutorials by independent 
entrepreneurial individuals. While many of these courses are only tagged 
“generative AI” to generate hits, the search surfaces something of the extent 
to which AI has become a key focus for training courses at a massive scale and 
scope.

But the PedagoGPT complex is dominated by major industry players. One is
Andrew Ng, co-founder of the online learning platform Coursera, who in 2017
established DeepLearning.AI<https://www.deeplearning.ai/about/> “to fill a
need for world-class AI education.” Ng, a key ally and advocate of OpenAI,
launched a series of generative AI short
courses<https://www.deeplearning.ai/short-courses/> on DeepLearning.AI in
spring 2023. The courses were developed in partnership with OpenAI, are
co-taught by OpenAI staff, and focus on topics like “Building Systems With
The ChatGPT API” and “ChatGPT Prompt Engineering for Developers”, where users
learn how to use a large language model (LLM) to build “new and powerful
applications” and “effectively utilize LLMs”. The short courses, then, are
explicitly intended to advance OpenAI infrastructure into business
environments by training up new AI engineers and accustoming them to ChatGPT.
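
To make concrete what “building with the ChatGPT API” involves, below is a
minimal sketch of the kind of API call such courses walk through, using the
OpenAI Python SDK as it existed in mid-2023 (pre-1.0 versions); the model
name, prompt and key are illustrative placeholders, not drawn from the course
materials themselves.

    # A basic ChatGPT API call of the sort taught in these short courses;
    # assumes the pre-1.0 OpenAI Python SDK (mid-2023).
    import openai

    openai.api_key = "YOUR_API_KEY"  # illustrative placeholder

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize this review: ..."},
        ],
        temperature=0,  # low temperature for more predictable output
    )

    print(response["choices"][0]["message"]["content"])

The craft such courses brand as “prompt engineering” largely concerns how the
“system” and “user” messages above are composed.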

Amazon has a relatively long history of running training programs to habituate 
workers to the AWS 
infrastructure<https://codeactsineducation.wordpress.com/2022/07/12/how-amazon-operates-in-education/>
 through its AWS Educate program. It too has begun offering training guidance 
and 
advice<https://aws.amazon.com/blogs/enterprise-strategy/how-technology-leaders-can-prepare-for-generative-ai/>
 for enterprise leaders on generative AI, partially through its “Machine 
Learning University” scheme. This is in support of the major 
announcements<https://aws.amazon.com/blogs/machine-learning/announcing-new-tools-for-building-with-generative-ai-on-aws/>
 made by AWS about its LLMs in April 2023.

Similarly, Microsoft has set up an online 
course<https://learn.microsoft.com/en-us/training/modules/explore-azure-openai/>
 called “Introduction to Azure OpenAI Service” to support the integration of 
OpenAI’s services in its enterprise platforms, as part of a pathway of courses 
called “Microsoft Azure AI Fundamentals: Get started with artificial 
intelligence”. The Azure OpenAI course consists of short blocks of text that 
can be completed in just a few minutes, including a “module” on “OpenAI’s 
access and responsible AI policies” that takes 3 minutes to complete.

Google, meanwhile, has launched a 
series<https://www.cloudskillsboost.google/course_templates/536> of “Generative 
AI Fundamentals” courses, including “Introduction to Generative AI”, 
“Introduction to Large Language Models” and “Introduction to Responsible AI”,
which can be completed for free online. These courses require no
prerequisites and are targeted at the general public. Google claims that “By 
passing the final quiz, you’ll demonstrate your understanding of foundational 
concepts in generative AI” and earn a digital skills badge.

Other courses to introduce AI are also being taken into schools. Writing in the 
New York 
Times<https://www.nytimes.com/2023/06/08/business/ai-literacy-schools-amazon-alexa.html>,
 Natasha Singer reported on an “Amazon-sponsored lesson in artificial 
intelligence” taking place in public schools and an “MIT initiative on 
‘responsible AI’ whose donors include Amazon, Google and Microsoft”. Singer 
noted in particular that the Amazon course introduced schoolchildren to 
building Alexa applications, while Amazon had just received a huge
multimillion-dollar fine for illegally collecting and storing masses of child
data from
Alexa devices. Nonetheless, “the one-hour Amazon-led workshop did not touch on 
the company’s data practices”.

Public platform pedagogies

The PedagoGPT complex of generative AI courses appears to be growing fast. It 
represents an expanding educational enterprise to habituate users, from
schoolchildren to SMEs, big businesses and civil society organizations, to
the promises of AI. PedagoGPT seeks to tether personal desires to AI providers, 
and to train organizations to integrate AI infrastructure into their products 
and working practices.

Emerging AI courses are a form of public pedagogy that plays out on online 
learning platforms like DeepLearning.AI, MOOCs or corporate training spaces. 
They are explicitly user-friendly, but their intention is often to make the 
user friendly. Making the user 
friendly<https://www.tandfonline.com/doi/abs/10.1080/17508487.2020.1727544>, as 
Radhika Gorur and Joyeeta Dey have put it, means that platforms impose 
particular desires on a range of distributed actors, in the hope that those
actors respond amenably to their overtures. PedagoGPT programmes aspire to make
users friendly to generative AI at mass scale, to see their business aims, 
knowledge work, or cultural experiences seem inextricable from what generative 
AI can offer. These public pedagogies are intended to synchronize users’ 
desires with AI.

In these ways, PedagoGPT training programmes might also help concentrate the 
power of tech 
businesses<https://ainowinstitute.org/wp-content/uploads/2023/04/AI-Now-2023-Landscape-Report-FINAL.pdf>
like OpenAI, AWS, Google, Microsoft and others. An informal range of friendly
ambassadors for generative AI helps extend the messaging and the branding
through YouTube tutorials and the like. Increasing numbers of leaders, 
developers, and other workers will be familiarized with AI and habituated to 
everyday usage in ways that are aimed at establishing big tech operations in 
everyday routines, just as enterprise software, digital platforms and web 
applications have been routinized already. Educating users is a strategy for 
increasing the network effects and hyperscalability of AI-based platforms too, 
all intended to secure long-term infusions of investor finance and growing
valuations.

More broadly, PedagoGPT synchronizes neatly with governmental enthusiasm for 
STEM (science, technology, engineering and maths) as a primary educational aim
and aspiration. It is little wonder that global leaders simultaneously extol
the potential of both AI and STEM: they are both central to economic and 
geopolitical ambitions. Steve Rolf and Seth Schindler have recently written on 
the ways state aims are now tied to digital 
platforms<https://journals.sagepub.com/doi/10.1177/0308518X221146545> in a 
synthesis they call “state platform capitalism”. AI and STEM are integral to 
state platform capitalism and the geopolitical contests and rivalries it 
signifies. What has been termed “AI 
Nationalism”<https://www.ianhogarth.com/blog/2018/6/13/ai-nationalism> relies 
on a steady STEM pipeline of users who can operationalize AI. While users are 
made friendly to AI, then, AI isn’t always being put to friendly use.

Finally, PedagoGPT raises the question of how users are introduced to ethics
and responsibility. One might argue, for example, that an
emphasis on STEM-based approaches further marginalizes the social sciences, 
arts and humanities in approaches to 
AI<https://www.adalovelaceinstitute.org/blog/role-arts-humanities-thinking-artificial-intelligence-ai/>,
 when these are the very subject areas where such debates are most advanced. 
Existing PedagoGPT courses reference ethics and responsibility, but often in 
limited form – for example, Microsoft’s Azure OpenAI Service course defers to 
Transparency Reports. Likewise, recent calls for “AI 
literacy”<https://www.edweek.org/technology/ai-literacy-explained/2023/05>, 
often promoted by industry and computer science figures, tend to prioritize 
skills for using generative AI. Here, ethics are addressed as personal 
responsibilities, based on an assumption that AI will be beneficial with the 
right 
“guardrails”<https://www.unesco.org/en/articles/unesco-supports-g7-leaders-calling-ai-guardrails>
 for use and application.

It’s possible to speculate, perhaps, that ethics in public and enterprise AI 
training might become synonymous with so-called “AI 
Safety”<https://www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried/>,
 a particular approach informed by tech industry enthusiasm for 
“longtermist”<https://www.radicalphilosophy.com/commentary/the-toxic-ideology-of-longtermism>
 thinking and its concerns over “existential risk” rather than the immediate 
impacts of AI. Yet generative AI itself poses a wide range of well-reported
immediate
problems<https://iapp.org/news/a/epic-publishes-report-on-generative-ai-harms/>,
 from the generation of false information and reproduction of biases and 
discrimination, to the automation of occupational tasks, to the environmental 
impacts of data processing. These contemporary problems with AI itself may
remain outside the purview of ethics in PedagoGPT programs, while ideas of AI
literacy framed by industry-based risk and safety concerns could be scripted 
into PedagoGPT curricula.

It seems likely that PedagoGPT courses will continue proliferating in the 
months and years to come. Who takes such courses, what content they contain, 
and how they affect everyday practices of living and working with AI remain 
open questions. But as AI becomes increasingly infrastructural, these courses 
will play a key role in habituating users, making them friendly to AI, and 
synchronizing everyday routines and desires with giant commercial business
aspirations.

https://codeactsineducation.wordpress.com/2023/06/10/pedagogpt/