Twitter Has a New Owner. Here’s What He Should Do.
DEEPLINKS BLOG
BY JILLIAN C. YORK, GENNIE GEBHART, JASON KELLEY, AND DAVID GREENE
APRIL 25, 2022

https://www.eff.org/deeplinks/2022/04/twitter-has-new-owner-heres-what-he-should-do

Elon Musk’s purchase of Twitter highlights the risks to human rights and 
personal safety when any single person has complete control over policies 
affecting almost 400 million users. And in this case, that person has 
repeatedly demonstrated that they do not understand the realities of platform 
policy at scale. 

The core reality is this: Twitter and other social networks play an 
increasingly important role in social and political discourse, and have an 
increasingly important corollary responsibility to ensure that their 
decision-making is both transparent and accountable. If he wants to help 
Twitter meet that responsibility, Musk should keep the following in mind: 

Free Speech Is Not A Slogan

Musk has been particularly critical of Twitter’s content moderation policies. 
He’s correct that there are problems with content moderation at scale. These 
problems aren’t specific to Twitter, though Twitter faces some particular 
challenges. It has long struggled to deal with bots and troubling tweets by 
major figures that can easily go viral in just a few minutes, allowing mis- or 
disinformation to rapidly spread. At the same time, like other platforms, 
Twitter’s community standards restrict legally protected speech in a way that 
disproportionately affects frequently silenced speakers. And also like other 
platforms, Twitter routinely removes content that does not violate its 
standards, including sexual expression, counterspeech, and certain political 
speech.

Better content moderation is sorely needed: less automation, more expert input 
into policies, and more transparency and accountability overall. Unfortunately, 
current popular discourse surrounding content moderation is frustratingly 
binary, with commentators either calling for more moderation (or regulation) 
or, as in Musk’s case, far less.

To that end, EFF collaborated with organizations from around the world to 
create the Santa Clara Principles, which lay out a framework for how companies 
should operate with respect to transparency and accountability in content 
moderation decisions. Twitter publicly supported the first version of the Santa 
Clara Principles in its 2019 transparency report. While Twitter has yet to 
successfully implement the Principles in full, that declaration was an 
encouraging sign of its intent to move toward them: operating on a transparent 
set of standards, publicly sharing details around both policy-related removals 
and government demands, making content moderation decisions clear to users, and giving 
them the opportunity to appeal. We call on Twitter’s management to renew the 
company’s commitment to the Santa Clara Principles.

Anonymous and Pseudonymous Accounts Are Critical for Users

Pseudonymity—the maintenance of an account on Twitter or any other platform by 
an identity other than the user’s legal name—is an important element of free 
expression. Based on some of his recent statements, we are concerned that Musk 
does not fully appreciate the human rights value of pseudonymous speech. 

Pseudonymity and anonymity are essential to protecting users who may have 
opinions, identities, or interests that do not align with those in power. For 
example, policies that require real names on Facebook have been used to push 
out Native Americans; people using traditional Irish, Indonesian, and Scottish 
names; Catholic clergy; transgender people; drag queens; and sex workers. 
Political dissidents may be in grave danger if those in power are able to 
discover their true identities. 

Furthermore, there’s little evidence that requiring people to post using their 
“real” names creates a more civil environment—and plenty of evidence that doing 
so can have disastrous consequences for some of the platform’s most vulnerable 
users. 

Musk has recently been critical of anonymous users on the platform, and 
suggested that Twitter should “authenticate all real humans.” Separately, he’s 
talked about changing the verification process by which accounts get blue 
checkmarks next to their names to indicate they are “verified.” Botnets and 
trolls have long presented a problem for Twitter, but requiring users to submit 
identification to prove that they’re “real” goes against the company’s ethos. 

There is no easy way to require verification without wreaking havoc on some 
users, and on free speech. Any free speech advocate (as Musk appears to view 
himself) willing to require users to submit ID to access a platform is likely 
unaware of the crucial importance of pseudonymity and anonymity. Governments in 
particular may be able to force Twitter and other services to disclose the true 
identities of users, and many legal systems around the world allow them to do 
so without sufficient respect for human rights.

Better User Privacy, Safety, and Control Are Essential

When you send a direct message on Twitter, there are three parties who can read 
that message: you, the user you sent it to, and Twitter itself. Twitter direct 
messages (or DMs) contain some of the most sensitive user data on the platform. 
Because they are not end-to-end encrypted, Twitter itself has access to them. 
That means Twitter can hand them over in response to law enforcement requests, 
they can be leaked, and internal access can be abused by malicious hackers and 
Twitter employees themselves (as has happened in the past). Fears that a new 
owner of the platform would be able to read those messages are not unfounded.

Twitter could make direct messages safer for users by protecting them with 
end-to-end encryption and should do so. It doesn’t matter who sits on the board 
or owns the most shares—no one should be able to read your DMs except you and 
the intended recipient. Encrypting direct messages would go a long way toward 
improving safety and security for users, and has the benefit of minimizing the 
reasonable fear that whoever happens to work at, sit on the board of, or own 
shares in Twitter can spy on user messages. 
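
To make the end-to-end model concrete, here is a minimal sketch in Python 
using the open-source PyNaCl library. It illustrates the general technique 
only; it is not a description of anything Twitter has built, and a production 
messenger would also need key distribution, identity verification, and 
forward secrecy (as in the Signal protocol).

```python
from nacl.public import PrivateKey, Box

# Each party generates a keypair; private keys never leave their devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
alice_box = Box(alice_key, bob_key.public_key)
ciphertext = alice_box.encrypt(b"meet me at noon")

# The platform relays only this ciphertext. Without a private key, the
# server operator (or anyone who compromises it) sees only random bytes.
# Bob decrypts with his private key and Alice's public key.
bob_box = Box(bob_key, alice_key.public_key)
assert bob_box.decrypt(ciphertext) == b"meet me at noon"
```

Under this model, a change in ownership changes nothing about who can read 
messages: the server never holds the keys.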

Another important way to improve safety on the platform is to give users, and 
the third-party developers who build tools for them, more control over their 
experience. Recently, the 
platform has experimented with this, making it easier to find tools like 
BlockParty that allow users to work together to decide what they see on the 
site. Making these tools even easier to find, and giving developers more power 
to interact with the platform to create more tools that let users filter, 
block, and choose what they see (and what they don’t see), would greatly 
improve safety for all users. If the platform were to pivot to a different 
method of content moderation, it would become even more important for users to 
have better tools to modify their own feeds and to block or filter content 
more accurately.
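
As a rough illustration of the kind of user-controlled filtering such tools 
enable, here is a hypothetical sketch in Python. The Tweet record, the 
filter_timeline function, and the sample lists are invented for this example 
and do not reflect any real tool’s API.

```python
from dataclasses import dataclass

# Hypothetical tweet record; a real tool would consume the platform's API.
@dataclass
class Tweet:
    author: str
    text: str

def filter_timeline(timeline, blocked_authors, muted_keywords):
    """Return only the tweets a user has chosen to see.

    blocked_authors and muted_keywords are user-maintained lists;
    community tools can let users share and combine such lists.
    """
    visible = []
    for tweet in timeline:
        if tweet.author in blocked_authors:
            continue
        if any(word.lower() in tweet.text.lower() for word in muted_keywords):
            continue
        visible.append(tweet)
    return visible

timeline = [Tweet("abuser123", "a harassing reply"), Tweet("friend", "hello!")]
print(filter_timeline(timeline, {"abuser123"}, ["harassing"]))
# [Tweet(author='friend', text='hello!')]
```

The point is where the decision lives: the user, not the platform, maintains 
the lists and chooses what to see.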

There are more ambitious changes that would improve the experience on Twitter, 
and beyond: Twitter’s own Project Blue Sky put forward a plan for an 
interoperable, federated, standardized platform. Supporting interoperability 
would be a terrific move for whoever controls Twitter. It would help move power 
from corporate boardrooms to the users that they serve. If users have more 
control, it matters less who’s running the ship, and that’s good for everyone.
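
The Blue Sky project was still designing its protocol when this was written, 
so as an illustration of what federation means in practice, here is a sketch 
that resolves a handle on an existing ActivityPub-based network (such as 
Mastodon) using the WebFinger standard (RFC 7033). It shows the general idea 
of an open, standardized protocol, not any design Twitter or Blue Sky has 
announced.

```python
import json
import urllib.parse
import urllib.request

def discover_actor(handle: str) -> str:
    """Resolve a fediverse handle like 'user@example.social' to the URL
    of its ActivityPub actor document, via WebFinger (RFC 7033)."""
    user, host = handle.split("@")
    query = urllib.parse.urlencode({"resource": f"acct:{handle}"})
    url = f"https://{host}/.well-known/webfinger?{query}"
    with urllib.request.urlopen(url) as resp:
        doc = json.load(resp)
    # The 'self' link points at the machine-readable actor document.
    for link in doc.get("links", []):
        if link.get("rel") == "self":
            return link["href"]
    raise LookupError(f"no actor document advertised for {handle}")

# Any server that speaks the same open standard can discover and follow
# this account, no matter which company (if any) operates its home server.
# print(discover_actor("user@example.social"))
```

Because the discovery step is a published standard rather than one company’s 
API, no single owner can decide who gets to interoperate.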
