
Musk’s ‘Twitter Files’ offer a glimpse of the raw, complicated and thankless task of moderation

Twitter’s new owner, Elon Musk, is feverishly promoting his “Twitter Files”: selected internal communications from the company, laboriously tweeted out by sympathetic amanuenses. But Musk’s obvious conviction that he has released some partisan kraken is mistaken — far from conspiracy or systemic abuse, the files are a valuable peek behind the curtain of moderation at scale, hinting at the Sisyphean labors undertaken by every social media platform.

For a decade companies like Twitter, YouTube, and Facebook have performed an elaborate dance to keep the details of their moderation processes equally out of reach of bad actors, regulators, and the press.

To reveal too much would be to expose the processes to abuse by spammers and scammers (who indeed take advantage of every leaked or published detail), while to reveal too little leads to damaging reports and rumors as they lose control over the narrative. Meanwhile they must be ready to justify and document their methods or risk censure and fines from government bodies.

The result is that while everyone knows a little about how exactly these companies inspect, filter, and arrange the content posted on their platforms, it’s just enough to be sure that what we’re seeing is only the tip of the iceberg.

Sometimes there are exposés of the methods we suspected — by-the-hour contractors clicking through violent and sexual imagery, an abhorrent but apparently necessary industry. Sometimes the companies overplay their hands, as with repeated claims of how AI is revolutionizing moderation, followed by reports that AI systems for this purpose are inscrutable and unreliable.

What almost never happens — generally companies don’t do this unless they’re forced to — is that the actual tools and processes of content moderation at scale are exposed with no filter. And that’s what Musk has done, perhaps to his own peril, but surely to the great interest of anyone who ever wondered what moderators actually do, say, and click as they make decisions that may affect millions.

Pay no attention to the honest, complex conversation behind the curtain

The email chains, Slack conversations, and screenshots (or rather shots of screens) released over the last week provide a glimpse of this important and poorly understood process. What we see is a bit of the raw material, which is not the partisan illuminati some expected — though it is clear, by its highly selective presentation, that this is what we are meant to perceive.

Far from it: the people involved are by turns cautious and confident, practical and philosophical, outspoken and accommodating, showing that the choice to limit or ban is not made arbitrarily but according to an evolving consensus of opposing viewpoints.

Leading up to the choice to temporarily restrict the Hunter Biden laptop story — probably at this point the most contentious moderation decision of the last few years, behind banning Trump — there is neither the partisanship nor conspiracy insinuated by the bombshell packaging of the documents.

Instead we find serious, thoughtful people attempting to reconcile conflicting and inadequate definitions and policies: What constitutes “hacked” materials? How confident are we in this or that assessment? What is a proportionate response? How should we communicate it, to whom, and when? What are the consequences if we do, if we don’t limit? What precedents do we set or break?

The answers to these questions are not at all obvious, and are the kind of thing usually hashed out over months of research and discussion, or even in court (legal precedents affect legal language and repercussions). And these decisions needed to be made fast, before the situation got out of control one way or the other. Dissent from within and without (from a U.S. Representative no less — ironically, doxxed in the thread along with Jack Dorsey in violation of the selfsame policy) was considered and honestly integrated.

“This is an emerging situation where the facts remain unclear,” said former Trust and Safety chief Yoel Roth. “We’re erring on the side of including a warning and preventing this content from being amplified.”

Some question the decision. Some question the facts as they have been presented. Others say it’s not supported by their reading of the policy. One says they need to make the ad-hoc basis and extent of the action very clear since it will obviously be scrutinized as a partisan one. Deputy General Counsel Jim Baker calls for more information but says caution is warranted. There’s no clear precedent; the facts are at this point absent or unverified; some of the material is plainly non-consensual nude imagery.

“I believe Twitter itself should curtail what it recommends or puts in trending news, and your policy against QAnon groups is all good,” concedes Rep. Ro Khanna, while also arguing the action in question is a step too far. “It’s a hard balance.”

Neither the public nor the press has been privy to these conversations, and the truth is we’re as curious, and largely as in the dark, as our readers. It would be incorrect to call the published materials a complete or even accurate representation of the whole process (they are blatantly, if ineffectively, picked and chosen to fit a narrative), but even such as they are, we are more informed than we were before.

Tools of the trade

Even more directly revealing was the next thread, which carried screenshots of the actual moderation tooling used by Twitter employees. While the thread disingenuously attempts to equate the use of these tools with shadow banning, the screenshots do not show nefarious activity, nor need they in order to be interesting.

[Screenshot: Twitter’s internal moderation tooling. Image Credits: Twitter]

On the contrary, what is shown is compelling for the very reason that it is so prosaic, so blandly systematic. Here are the various techniques all social media companies have explained over and over that they use, but whereas before they were couched in PR’s cheery diplomatic cant, now they are presented without comment: “Trends Blacklist,” “High Profile,” “DO NOT TAKE ACTION” and the rest.
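To make the flavor of such tooling concrete, here is a minimal, purely hypothetical sketch of how per-account visibility flags of this kind might be represented. The field names are loosely modeled on the labels visible in the screenshots and are assumptions for illustration, not Twitter's actual schema.

```python
from dataclasses import dataclass, field

# Purely illustrative: field names are assumptions loosely modeled on labels
# visible in the published screenshots, not Twitter's actual data model.
@dataclass
class AccountModerationFlags:
    user_id: int
    trends_blacklist: bool = False      # keep the account's tweets out of Trends
    high_profile: bool = False          # route any enforcement to senior review
    do_not_take_action: bool = False    # freeze automated enforcement pending review
    notes: list[str] = field(default_factory=list)  # reviewer audit trail

# Example: a high-profile account temporarily excluded from Trends
flagged = AccountModerationFlags(
    user_id=123456,
    trends_blacklist=True,
    high_profile=True,
    notes=["Escalated to policy team for manual review"],
)
```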

Meanwhile Yoel Roth explains that the actions and policies need to be better aligned, that more research is required, that plans are underway to improve:

The hypothesis underlying much of what we’ve implemented is that if exposure to, e.g., misinformation directly causes harm, we should use remediations that reduce exposure, and limiting the spread/virality of content is a good way to do that… we’re going to need to make a more robust case to get this into our repertoire of policy remediations – especially for other policy domains.
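Roth's hypothesis, reducing harm by reducing exposure rather than removing content outright, corresponds to a familiar downranking pattern. Below is a hedged, simplified sketch of what limiting spread and virality could look like as a scoring step; the flag names and multipliers are illustrative assumptions, not Twitter's actual remediations or values.

```python
# Illustrative only: flag names and multipliers are assumptions for the sake
# of example, not Twitter's actual policy remediations or values.
VISIBILITY_MULTIPLIERS = {
    "warning_label": 0.5,    # shown less often while facts remain unclear
    "do_not_amplify": 0.1,   # visible to followers, but kept out of recommendations
}

def adjusted_rank_score(base_score: float, flags: set[str]) -> float:
    """Apply exposure-reducing remediations to a candidate tweet's ranking score."""
    score = base_score
    for flag in flags:
        score *= VISIBILITY_MULTIPLIERS.get(flag, 1.0)
    return score

# The tweet is not removed; it simply competes with a lower score, which
# limits how far it spreads.
print(adjusted_rank_score(0.8, {"warning_label"}))  # -> 0.4
```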

Again the content belies the context it is presented in: these are hardly the deliberations of a secret liberal cabal lashing out at its ideological enemies with a ban hammer. It’s an enterprise-grade dashboard like you might see for lead tracking, logistics, or accounts, being discussed and iterated upon by sober-minded persons working within practical limitations and aiming to satisfy multiple stakeholders.

As it should be: Twitter has, like its fellow social media platforms, been working for years to make the process of moderation efficient and systematic enough to function at scale. Not just so the platform isn’t overrun with bots and spam, but in order to comply with legal frameworks like FTC orders and the GDPR. (The “extensive, unfiltered access” outsiders were given to the pictured tool may well constitute a breach of those very frameworks; the relevant authorities told TechCrunch they are “engaging” with Twitter on the matter.)

A handful of employees making arbitrary decisions with no rubric or oversight is no way to moderate effectively or meet such legal requirements; neither (as the resignation of several members of Twitter’s Trust & Safety Council today testifies) is automation. You need a large network of people cooperating and working according to a standardized system, with clear boundaries and escalation procedures. And that’s certainly what seems to be shown by the screenshots Musk has caused to be published.

What isn’t shown by the documents is any kind of systematic bias, which Musk’s stand-ins insinuate but don’t quite manage to substantiate. But whether or not it fits into the narrative they wish it to, what is being published is of interest to anyone who thinks these companies ought to be more forthcoming about their policies. That’s a win for transparency, even if Musk’s opaque approach accomplishes it more or less by accident.

