Hey, Social Networks: Real Transparency Means Explaining All Content Removals
When Twitter released its most recent transparency report in late February, users got their first glimpse into the content the company removes under its terms of service after receiving formal legal demands. This is a step forward: these numbers finally tell us something about how often Twitter deletes tweets for breaking its own rules. But they are only one piece of the puzzle.
As the ACLU’s new guide for tech companies points out, real transparency means statistics on all content removals, not just those resulting from formal legal requests such as copyright demands.
Like most social networking platforms, Twitter has a set of rules (the “Twitter Rules”) that dictate what can and cannot be said, including prohibitions on things like graphic content and hate speech. Over the years, Twitter’s list of rules has grown. A lot. Each day, Twitter and other social networks are flooded with user reports and other third-party demands to remove content for violating these rules. Some of these demands are legitimate; many are not.
Twitter has long been an industry leader in pushing transparency forward, including about national security requests. These reports allow companies to show how they fight to protect the privacy and free expression of their users. But Twitter’s report omits important information about the content it takes down for violating its terms of service in response to informal demands from users, governments, and other third parties. Twitter is not alone in this respect: Facebook and other social networks are also silent on these figures in their transparency reports.
But Twitter and its social media peers should do more. For Twitter, a transparency report covering all content removal demands, not just “legal demands,” would help show whether the company is making progress toward its stated goal of protecting free speech while keeping users safe. Similar transparency from other platforms would enable a comparison of their differing approaches to community policies and moderation.
Additional transparency from Twitter and the other major social networks could also lead to improvements in the companies’ content moderation processes. We already know that the reporting systems used by the major social networks are vulnerable to abuse aimed at silencing users and prone to mistakes by overworked decision-makers who lack the right information or context. We have seen the accidental removal of everything from photos of breastfeeding mothers to an ACLU blog post about censorship. And the social networks could do far more to provide adequate due process for users seeking to appeal these removals.
Platforms like Twitter wield powerful influence over user speech and over whether it is amplified or suppressed. Comprehensive reporting and accountability for all content removals can help catch mistakes and inform the conversation about how to meaningfully address serious issues like abuse or extremism online while protecting free speech.
The time has come for the social networks to move this conversation forward by releasing useful information about all content removals. Users – and the ACLU – will be watching for the new reports.
Matt Cagle is a Technology & Civil Liberties Attorney with the ACLU of Northern California.