Good morning,

the epochal news will not have escaped anyone subscribed to this
list, but you never know (and in any case it deserves to be properly
archived).

The subject of this email alone (the title of Meta's press release
and Zuckerberg's opening line) is A BIG DEAL™ and could suffice;
still, I suggest taking the time to read /carefully/ what Meta has
DECLARED, and to listen /carefully/ (or read the subtitles) to what
Zuckerberg has DECLARED.

In 5+5 minutes you get a summary of the last 20 years of censorship
via social networks; it is part confession, part whistleblowing (we
were forced to) and part humble apology.

https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/

«More Speech and Fewer Mistakes»
January 7, 2025
Joel Kaplan, Chief Global Affairs Officer

--8<---------------cut here---------------start------------->8---

[Mark Zuckerberg video message]

Takeaways
─────────

• Starting in the US, we are ending our third party fact-checking
  program and moving to a Community Notes model.
• We will allow more speech by lifting restrictions on some topics
  that are part of mainstream discourse and focusing our enforcement
  on illegal and high-severity violations.
• We will take a more personalized approach to political content, so
  that people who want to see more of it in their feeds can.

Meta's platforms are built to be places where people can express
themselves freely. That can be messy. On platforms where billions of
people can have a voice, all the good, bad and ugly is on display. But
that's free expression.

In his 2019 speech at Georgetown University, Mark Zuckerberg argued
that free expression has been the driving force behind progress in
American society and around the world and that inhibiting speech,
however well-intentioned the reasons for doing so, often reinforces
existing institutions and power structures instead of empowering
people. He said: “Some people believe giving more people a voice is
driving division rather than bringing us together. More people across
the spectrum believe that achieving the political outcomes they think
matter is more important than every person having a voice. I think
that's dangerous.”

In recent years we've developed increasingly complex systems to manage
content across our platforms, partly in response to societal and
political pressure to moderate content. This approach has gone too
far. As well-intentioned as many of these efforts have been, they have
expanded over time to the point where we are making too many mistakes,
frustrating our users and too often getting in the way of the free
expression we set out to enable. Too much harmless content gets
censored, too many people find themselves wrongly locked up in
“Facebook jail,” and we are often too slow to respond when they do. 

We want to fix that and return to that fundamental commitment to free
expression. Today, we're making some changes to stay true to that
ideal.

Ending Third Party Fact Checking Program, Moving to Community Notes
────────────────────────────────────────────────────────────────────

When we launched our independent fact checking program in 2016, we
were very clear that we didn't want to be the arbiters of truth. We
made what we thought was the best and most reasonable choice at the
time, which was to hand that responsibility over to independent fact
checking organizations. The intention of the program was to have these
independent experts give people more information about the things they
see online, particularly viral hoaxes, so they were able to judge for
themselves what they saw and read.

That's not the way things played out, especially in the United
States. Experts, like everyone else, have their own biases and
perspectives. This showed up in the choices some made about what to
fact check and how. Over time we ended up with too much content being
fact checked that people would understand to be legitimate political
speech and debate. Our system then attached real consequences in the
form of intrusive labels and reduced distribution. A program intended
to inform too often became a tool to censor.   

We are now changing this approach. We will end the current third party
fact checking program in the United States and instead begin moving to
a Community Notes program. We've seen this approach work on X – where
they empower their community to decide when posts are potentially
misleading and need more context, and people across a diverse range of
perspectives decide what sort of context is helpful for other users to
see. We think this could be a better way of achieving our original
intention of providing people with information about what they're
seeing – and one that's less prone to bias.

• Once the program is up and running, Meta won't write Community Notes
  or decide which ones show up. They are written and rated by
  contributing users. 
• Just like they do on X, Community Notes will require agreement
  between people with a range of perspectives to help prevent biased
  ratings (see the sketch after this list).
• We intend to be transparent about how different viewpoints inform
  the Notes displayed in our apps, and are working on the right way to
  share this information.
• People can sign up today ([Facebook], [Instagram], [Threads]) for
  the opportunity to be among the first contributors to this program
  as it becomes available.
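
Neither this post nor Meta's announcement spells out a scoring model,
but X has open-sourced the "bridging" approach the bullets above
allude to: each rating is decomposed into a rater intercept, a note
intercept and a product of latent viewpoint factors, and a note
counts as helpful only when its intercept stays high after viewpoint
alignment is factored out. A minimal sketch of that idea in Python,
with made-up ratings, a single latent dimension and an illustrative
cutoff:

  # Illustrative bridging-style scorer, loosely following the matrix
  # factorization X has open-sourced for Community Notes; the data,
  # the single latent dimension and the 0.4 cutoff are all made up.
  import numpy as np

  rng = np.random.default_rng(0)

  # ratings[u, n]: 1.0 = helpful, 0.0 = not helpful, NaN = not rated.
  ratings = np.array([
      [1.0, 1.0, np.nan],   # rater A
      [1.0, 0.0, 1.0],      # rater B
      [np.nan, 1.0, 0.0],   # rater C
  ])
  n_users, n_notes = ratings.shape
  mask = ~np.isnan(ratings)

  mu = 0.0                                   # global intercept
  user_b = np.zeros(n_users)                 # rater intercepts
  note_b = np.zeros(n_notes)                 # note intercepts (the score)
  user_f = rng.normal(0, 0.1, (n_users, 1))  # rater viewpoint factor
  note_f = rng.normal(0, 0.1, (n_notes, 1))  # note viewpoint factor

  lr, reg = 0.05, 0.03
  for _ in range(2000):                      # plain gradient descent
      pred = mu + user_b[:, None] + note_b[None, :] + user_f @ note_f.T
      err = np.where(mask, ratings - pred, 0.0)
      mu += lr * err.sum() / mask.sum()
      user_b += lr * (err.sum(axis=1) - reg * user_b)
      note_b += lr * (err.sum(axis=0) - reg * note_b)
      user_f += lr * (err @ note_f - reg * user_f)
      note_f += lr * (err.T @ user_f - reg * note_f)

  # A note is surfaced only if its intercept, i.e. the helpfulness
  # left after viewpoint alignment is factored out, clears the cutoff.
  for n in range(n_notes):
      verdict = "show" if note_b[n] > 0.4 else "needs broader agreement"
      print(f"note {n}: intercept {note_b[n]:+.2f} -> {verdict}")

In this toy model, a note praised by only one side has its score
absorbed into the viewpoint factors, while a note rated helpful
across the divide keeps a high intercept, which is precisely the
cross-perspective agreement described above.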

We plan to phase in Community Notes in the US first over the next
couple of months, and will continue to improve it over the course of
the year. As we make the transition, we will get rid of our
fact-checking control, stop demoting fact checked content and, instead
of overlaying full screen interstitial warnings you have to click
through before you can even see the post, we will use a much less
obtrusive label indicating that there is additional information for
those who want to see it.  

Allowing More Speech
─────────────────────

Over time, we have developed complex systems to manage content on our
platforms, which are increasingly complicated for us to enforce. As a
result, we have been over-enforcing our rules, limiting legitimate
political debate, censoring too much trivial content, and subjecting
too many people to frustrating enforcement actions.

For example, in December 2024, we removed millions of pieces of
content every day. While these actions account for less than 1% of
content produced every day, we think one to two out of every 10 of
these actions may have been mistakes (i.e., the content may not have
actually violated our policies). This does not account for actions we
take to tackle large-scale adversarial spam attacks. We plan to expand
our transparency reporting to share numbers on our mistakes on a
regular basis so that people can track our progress. As part of that
we'll also include more details on the mistakes we make when enforcing
our spam policies.
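
To put the quoted rates on a concrete scale: the post says "millions"
of daily removals with a 10-20% possible-mistake rate, but discloses
no exact volume. A back-of-envelope calculation, where the 3,000,000
figure is a placeholder and not a disclosed number:

  # Scale of the stated mistake rate. The daily removal volume is
  # NOT disclosed; 3_000_000 stands in for "millions of pieces of
  # content every day".
  daily_removals = 3_000_000
  mistake_low, mistake_high = 0.10, 0.20  # "one to two out of every 10"

  print(f"{int(daily_removals * mistake_low):,} to "
        f"{int(daily_removals * mistake_high):,} possible mistakes/day")
  # -> 300,000 to 600,000 possible mistakes/day

Even at the low end, that is hundreds of thousands of possibly
wrongful enforcement actions per day, which is why the promised
transparency reporting on mistakes matters.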

We want to undo the mission creep that has made our rules too
restrictive and too prone to over-enforcement. We're getting rid of a
number of restrictions on topics like immigration, gender identity and
gender that are the subject of frequent political discourse and
debate. It's not right that things can be said on TV or the floor of
Congress, but not on our platforms. These policy changes may take a
few weeks to be fully implemented. 

We're also going to change how we enforce our policies to reduce the
kind of mistakes that account for the vast majority of the censorship
on our platforms. Up until now, we have been using automated systems
to scan for all policy violations, but this has resulted in too many
mistakes and too much content being censored that shouldn't have
been. So, we're going to continue to focus these systems on tackling
illegal and high-severity violations, like terrorism, child sexual
exploitation, drugs, fraud and scams. For less severe policy
violations, we're going to rely on someone reporting an issue before
we take any action. We also demote too much content that our systems
predict might violate our standards. We are in the process of getting
rid of most of these demotions and, for the rest, requiring greater
confidence that the content actually violates our policies. And we're
going to tune our systems
to require a much higher degree of confidence before a piece of
content is taken down. As part of these changes, we will be moving the
trust and safety teams that write our content policies and review
content out of California to Texas and other US locations.
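
Read as a decision procedure, the paragraph above amounts to a
two-tier policy: proactive automated scanning only for illegal and
high-severity categories, a raised confidence bar for takedowns and
for the demotions that remain, and user reports as the trigger for
everything else. Meta publishes neither code nor thresholds; the
severity list below comes from the post, while the cutoff values and
the function itself are invented for illustration:

  # Sketch of the two-tier enforcement described above. The severity
  # set is from the post; the thresholds and the function's shape are
  # invented, not Meta's configuration.
  HIGH_SEVERITY = {"terrorism", "child sexual exploitation", "drugs",
                   "fraud", "scams"}
  TAKEDOWN_CONF = 0.97   # "much higher degree of confidence" (invented)
  DEMOTION_CONF = 0.90   # bar for the remaining demotions (invented)

  def enforce(category: str, confidence: float, user_reported: bool) -> str:
      if category in HIGH_SEVERITY:
          # Proactive scanning continues only for these violations.
          if confidence >= TAKEDOWN_CONF:
              return "take down"
          if confidence >= DEMOTION_CONF:
              return "demote"
          return "no action"
      # Less severe policies: act only once someone reports the content.
      if user_reported and confidence >= TAKEDOWN_CONF:
          return "queue for human review"
      return "no action"

  print(enforce("fraud", 0.99, user_reported=False))    # take down
  print(enforce("rude joke", 0.99, user_reported=False))  # no action
  print(enforce("rude joke", 0.99, user_reported=True))   # queue for human review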

People are often given the chance to appeal our enforcement decisions
and ask us to take another look, but the process can be frustratingly
slow and doesn't always get to the right outcome. We've added extra
staff to this work, and in more cases we now require multiple
reviewers to reach a determination before something is taken
down. We are working on ways to make recovering accounts more
straightforward and testing facial recognition technology, and we've
started using AI large language models (LLMs) to provide a second
opinion on some content before we take enforcement actions.
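
One way to picture the two safeguards just described, a quorum of
human reviewers plus an LLM second opinion, is as a composed gate.
The quorum size and the llm_second_opinion() stub below are
illustrative assumptions, not Meta's actual pipeline:

  # Illustrative gate combining a reviewer quorum with an LLM second
  # opinion; quorum size and the stubbed model call are assumptions.
  from collections import Counter

  def llm_second_opinion(content: str) -> str:
      """Stub; a real system would ask a language model to re-judge."""
      return "violates"

  def final_decision(content: str, votes: list[str], quorum: int = 2) -> str:
      if Counter(votes)["violates"] < quorum:
          return "keep up"                    # reviewers did not agree
      if llm_second_opinion(content) != "violates":
          return "escalate for another look"  # second opinion dissents
      return "take down"

  print(final_decision("some post", ["violates", "violates", "ok"]))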

A Personalized Approach to Political Content
─────────────────────────────────────────────

Since 2021, we've made changes to reduce the amount of civic content
people see – posts about elections, politics or social issues – based
on the feedback our users gave us that they wanted to see less of this
content. But this was a pretty blunt approach. We are going to start
phasing this back into Facebook, Instagram and Threads with a more
personalized approach so that people who want to see more political
content in their feeds can.

We're continually testing how we deliver personalized experiences and
have recently conducted testing around civic content. As a result,
we're going to start treating civic content from people and Pages you
follow on Facebook more like any other content in your feed, and we
will start ranking and showing you that content based on explicit
signals (for example, liking a piece of content) and implicit signals
(like viewing posts) that help us predict what's meaningful to
people. We are also going to recommend more political content based on
these personalized signals and are expanding the options people have
to control how much of this content they see.
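
As a sketch of what ranking on "explicit signals" and "implicit
signals" with a per-user political-content control could mean: a
score blending predicted likes and predicted views, scaled by a
user-chosen civic-content preference. Every weight, field and knob
below is invented; the post does not specify a model:

  # Toy ranking score mixing explicit signals (predicted likes) and
  # implicit signals (predicted views) with a user-set civic-content
  # preference. All weights and names are invented for illustration.
  from dataclasses import dataclass

  @dataclass
  class Post:
      is_civic: bool
      p_like: float   # explicit-signal prediction, 0..1
      p_view: float   # implicit-signal prediction, 0..1

  def rank_score(post: Post, civic_pref: float) -> float:
      """civic_pref: user control, e.g. 0.2 = less, 1.0 = default, 2.0 = more."""
      base = 3.0 * post.p_like + 1.0 * post.p_view
      return base * (civic_pref if post.is_civic else 1.0)

  feed = [Post(True, 0.6, 0.9), Post(False, 0.5, 0.8)]
  feed.sort(key=lambda p: rank_score(p, civic_pref=1.5), reverse=True)
  print([p.is_civic for p in feed])   # civic post first for this user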

These changes are an attempt to return to the commitment to free
expression that Mark Zuckerberg set out in his Georgetown speech. That
means being vigilant about the impact our policies and systems are
having on people's ability to make their voices heard, and having the
humility to change our approach when we know we're getting things
wrong.


[Mark Zuckerberg video message]
<https://about.fb.com/wp-content/uploads/2025/01/V2.Single-Take-CS25_MZ_JanAnnouncement_v09_16x9.mp4>

[Facebook] <https://www.facebook.com/help/contact/1914298425761977>

[Instagram] <https://help.instagram.com/contact/1223551615403090>

[Threads] <https://help.instagram.com/contact/1638078013752611>

--8<---------------cut here---------------end--------------->8---

...mumble mumble.

And now let's all Pile On™ billionaire Mark Zuckerberg too, who did
not waste a single minute before jumping onto the winner's bandwagon,
turning himself into a bad copy of Elon Musk rather than lose his
privileges...

Regards, 380°


P.S.: of everything they declared, the thing I personally find most
deliciously curious is the relocation of the "trust and safety teams"
from California to Texas... my goodness, he is /copying/ Musk in
absolutely everything, not just the "community notes" :-O

-- 
380° (Giovanni Biscuolo public alter ego)

«We, incompetent as we are,
 have no standing to suggest anything»

Disinformation flourishes because many people care deeply about injustice
but very few check the facts.  Ask me about <https://stallmansupport.org>.
