Hello Daniela, from a cybernetic standpoint, the Irish proposal seeks to reduce the social effectiveness of some of Big Tech's actuators without touching the other components of these cybernetic agents (controller, sensors, and the remaining actuators).
By comparison, applying the GDPR to Big Tech would blind their sensors. In itself the proposal is not without common sense (in a democracy one would ask why it is not already law, but plutocratic colonies work differently); to assess its potential effectiveness, however, one must consider the cybernetic system in which such a rule would operate as a whole.

The rule gives citizens the option of giving up the free, zero-mile doses of dopamine/serotonin to which they have been habituated, freeing themselves from the influence that GAFAM exert on them and on their country through one specific channel ("algorithmic" recommendations). But citizens are cybernetic agents with resources infinitely inferior to GAFAM's: short-range sensors (nose, eyes, ears...) and actuators (arms, legs...), and a control centre that lacks the knowledge needed even to conceive of, let alone counter, this kind of statistical social control. On the other side stand tens of thousands of computer scientists, psychologists, sociologists and even renowned philosophers in Big Tech's pay, commanding billions of sensors, exabytes of extremely detailed personal data, and computing power unprecedented in human history. And citizens face these cybernetic agents individually.

The Irish proposal, then, would achieve its stated goals no more than the GDPR has achieved them so far. Hoping that individuals under close surveillance 24 hours a day, 7 days a week, cannot be nudged into clicking the opt-in button is like hoping that an addict will stop using the substances they depend on simply because they are asked to make one extra click. Banning the corporate use of recommender systems would make sense. Making them opt-in is, I fear, only a mild palliative for a dying democracy.
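The cybernetic framing above can be made concrete with a toy model (purely illustrative; every name and channel here is an assumption for the sketch, not a description of any real system). It shows why switching off one actuator by default differs from blinding the sensors: the controller keeps its data and its other channels of influence.

```python
from dataclasses import dataclass, field

@dataclass
class Platform:
    """Toy cybernetic agent: sensors feed a profile; actuators exert influence."""
    sensors_active: bool = True  # GDPR-style enforcement would set this to False
    actuators: dict = field(default_factory=lambda: {
        "recommendations": True,  # the Irish proposal switches off only this one
        "ads": True,
        "notifications": True,
    })
    profile: dict = field(default_factory=dict)

    def sense(self, user_id: str, signal: str) -> None:
        """Collect behavioural data -- only possible while sensors are not blinded."""
        if self.sensors_active:
            self.profile.setdefault(user_id, []).append(signal)

    def influence_channels(self) -> list[str]:
        """Channels still available to the controller."""
        return [name for name, on in self.actuators.items() if on]

# The Irish proposal: one actuator is off by default...
p = Platform()
p.actuators["recommendations"] = False
print(p.influence_channels())  # ads and notifications remain

# ...but the sensors keep feeding the controller, which can still steer
# the user toward clicking opt-in through the remaining channels.
p.sense("alice", "clicked outrage post")
print(p.profile)
```

Under this (admittedly crude) model, the asymmetry in the text is visible in code: disabling `recommendations` leaves the profiling loop and the other actuators untouched, whereas `sensors_active = False` would starve the controller of data entirely.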
Thinking ill of others is a sin, but in light of this analysis I am not surprised that a country dependent on Big Tech money has put forward this solution. Whoever prompted it will have done the maths, and will have judged it a good evasive tactic with respect to effective regulation.

Giacomo, hoping to receive convincing objections... ones that reassure him about the future.

On 31 December 2023 at 18:05:06 UTC, Daniela Tafani <daniela.taf...@unipi.it> wrote:

> The EU should support Ireland’s bold move to regulate Big Tech
> by Zephyr Teachout and Roger McNamee
>
> Dublin, Ireland, was stunned a month ago by riots that transformed its downtown into chaos, the worst rioting in decades, stemming from far-right online rumors about an attack on children.
>
> The riots, like Jan. 6, appear to be a direct outgrowth of the amplification ecosystem supported by social media networks such as TikTok, Google’s YouTube and Meta’s Instagram, which likely keep their European headquarters in Dublin for tax reasons.
>
> Ireland, long ridiculed for bowing to Big Tech, has now come out with a powerful proposal to address the problems of algorithmic amplification. Ireland set up Coimisiún na Meán, a new enforcer, this year to set rules for digital platforms.
>
> It has proposed a simple, easily enforceable rule that could change the game: All recommender systems based on intimately profiling people should be turned off by default.
>
> In practice, that means that the big platforms cannot automatically run algorithms that use information about a person’s political views, sex life, health or ethnicity. A person will be able to switch an algorithm on, but those toxic algorithms will no longer be on by default. Users will still have access to algorithmic amplification, but they will have to opt in to get it.
>
> Today, algorithms feed each user different information.
> They derive their power from the trove of personal data that platforms acquire about users, data that enables the identification and exploitation of emotional weak points, all to maximize engagement.
>
> Platforms do not acknowledge responsibility for downstream consequences. Some users respond best to cat videos, others to hate speech, disinformation and conspiracy theories. For many, the response to harmful content is involuntary, driven by fight or flight. Either way, users spend more time on the platform, which allows the company to make more money by showing them ads.
>
> Artificially amplifying outrage may be lucrative, but it carries a terrible cost. Recommender systems enable [such content] to migrate from the fringe to the mainstream. An investigation revealed that Meta’s algorithms were key contributors to the murderous hate that cost thousands of people their lives in Myanmar’s 2017 Rohingya genocide.
>
> Frances Haugen, a whistleblower, revealed in 2021 that Meta had known the danger of its algorithm for years. As long ago as 2016, Meta’s internal research had reported that “64 percent of all extremist group joins are due to our recommendation tools.”
>
> It continued: “Our recommendation systems grow the problem.”
>
> When 37,000 people allowed researchers to monitor their YouTube experience, nearly all the nasty videos they had encountered were pushed into their feed by YouTube’s algorithm. An experiment using simulated users found that Facebook, Instagram and X, the platform formerly known as Twitter, recommended antisemitic and conspiracy content to test users as young as 14 years old.
>
> Earlier this year, the surgeon general spoke about the danger of algorithms that promote suicide and self-harm. A recent experiment by Amnesty International proves the point: TikTok’s algorithm recommended videos encouraging suicide to a test user posing as a 13-year-old only an hour after the account was created.
> Recommender systems didn’t really take hold until 2010, and we’ve tried trusting the Big Tech platforms, but 13 years into the experiment, with plenty of data showing harm, we cannot trust technology companies to regulate themselves in the public interest.
>
> We do not want the government to be in the business of sorting through what is and is not harmful if amplified. The brilliance of the Ireland model is that it offers a way forward: rules that are content-neutral, giving users control of one critical aspect of their online experience.
>
> After years of being a tax haven for Big Tech, Ireland is now offering the world a groundbreaking rule to protect democracy, public health and public safety. In under nine months, Coimisiún na Meán has gone from initial launch to tackling the machine at the heart of the disinformation crisis.
>
> The rule is necessary because current European Union regulations aren’t working and the new Digital Services Act is not designed to tackle the core problem. Under the EU’s General Data Protection Regulation, tech firms are already supposed to get a person’s “explicit” (two-step) consent to process inferences about their political views, sexuality, religion, ethnicity or health. Several complaints that the big firms have failed to seek or receive this consent have not been resolved years after being brought. But for over five years, Big Tech’s primary General Data Protection Regulation authority, which is in Ireland, failed to notice or act.
>
> Europe often trumpets its regulatory leadership in the world. But the so-called “Brussels Effect” of other countries heeding its rules began to dissipate when Europe failed to enforce its most famous law: the General Data Protection Regulation. The European Commission is understandably focused on the Digital Services Act, which goes into effect next month, but EU policymakers should welcome the new Irish rules.
> Coimisiún na Meán’s bold move would ultimately make the Digital Services Act far more successful. Europe and the Irish government are stepping up at last to regulate harmful technology products. Social media may become social again.
>
> https://thehill.com/opinion/technology/4380369-the-eu-should-support-irelands-bold-move-to-regulate-big-tech/
>
> _______________________________________________
> nexa mailing list
> nexa@server-nexa.polito.it
> https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa