Nearly 200 verified accounts of journalists and news organizations faced 361 legal demands from governments or law enforcement agencies to remove content, Twitter said in a blog post Wednesday. The social network said this is a 26% increase over the previous reporting period.
The information comes from Twitter’s transparency report for the second half of 2020, which details everything from government requests for user data to the social network’s own actions against accounts that violate its rules.
“Over the past year, we’ve experienced and continue to navigate severe global challenges, including the coronavirus pandemic,” the company said in its blog post. “We’ve also seen concerted attempts by governments to limit access to the Internet generally and to Twitter specifically.”
In total, Twitter said it received 38,524 legal demands to remove content posted by 131,933 accounts between July and December 2020. The company removed content in response to 29% of these demands.
The company also said it complied fully or in part with 4,367 requests from governments or law enforcement for user information, about 30% of the requests it received between July and December 2020. India was the largest source of government information requests, followed by the United States.
A new metric
Twitter removed 3.8 million tweets during the second half of 2020 for violating the social network’s rules. The social network also added a new metric to its transparency report, impressions, in order to capture “the number of views a violative Tweet received prior to removal.”
Of the millions of tweets that Twitter removed, 77% received fewer than 100 impressions prior to being removed, the company said, adding that 17% received between 100 and 1,000 impressions and 6% received more than 1,000 impressions.
“Our goal is to improve these numbers over time, taking enforcement action on violative content before it’s even viewed,” the company said in its blog post.
The company also noted that it continues to “step up the level of proactive enforcement,” using technology like machine learning to surface 65% of abusive content for human review, instead of relying on it being reported by users.
This contributed to notable increases in the number of accounts it took action against — either suspending the accounts or removing content — in several categories during the second half of 2020, including: a 142% increase for abuse and harassment; a 6% increase for violations of its child sexual exploitation policy; a 194% increase for non-consensual nudity; a 175% increase in civic integrity policy enforcements; a 77% increase for hateful conduct policy violations; and a 192% increase for promoting suicide and self-harm.