X, formerly Twitter, lifted bans on 6103 Australian users, including 194 banned for hateful content in the six months to May 2023.
A month after Elon Musk’s October 2022 takeover, he announced “a general amnesty to suspended accounts, provided that they have not broken the law or engaged in egregious spam”.
The nation’s digital safety watchdog, the eSafety Commissioner, has today, for the first time, revealed the number of pardoned users her office “understands relates to Australia.”
“Twitter did not place reinstated accounts under additional scrutiny,” the eSafety Commissioner’s report [pdf] also said.
During the six months, X suspended 387,056 Australian users, 1196 of whom had violated its hateful conduct policy.
The data comes from X’s response to legal notices eSafety issued in June 2023 demanding that the company answer questions about how it complies with regulations to protect users from hateful content.
After two extensions and demands that X resubmit previous “responses that were incorrect, significantly incomplete or irrelevant,” Commissioner Julie Inman Grant has finally obtained the information she requested, but still found the delayed response to be “non-compliant” [pdf].
The report also found that the time X took to respond to reports of hateful content in tweets increased by 20 percent over the six months, and the time taken to respond to reports of direct messages that violated the company’s hateful conduct policy increased by 75 percent.
The increased response times coincided with X gutting global and local teams dedicated to trust and safety, which the report also provided statistics on.
X’s total number of engineers focused on trust and safety issues dropped from 279 to 55 between October 27 2022 (a day after Musk’s acquisition) and May 31 2023.
Global content moderators, including full-time employees and contractors, went from 2720 (107 full-time and 2613 contractors) to 2356 in May 2023 (27 full-time and 2305 contractors).
During the same period, public policy staff dropped from 68 to 15 globally, and from three to zero in Australia.
However, the report suggested that X did not accept the framing of the commissioner’s questions, which required the company to categorise staff with broad responsibilities into dedicated functions.
“X Corp. stated it had no full-time staff that are specifically and singularly dedicated to hateful conduct issues globally, and no specific team for this policy,” the commissioner said.
“It said that instead, a broader cross-functional team has this in scope and collaborates on a set of policies that are considered to be related to toxicity more broadly.”
Between December 2022 and May 2023, X removed roughly 917,300 tweets for violating its hateful conduct policy; about 17,000 were posted by Twitter Blue subscribers and about 900,000 by regular users.
eSafety also asked X what proportion of the tweets removed for breaching its hateful conduct policy over the period were posted by Twitter Blue accounts compared with other users.
Twitter Blue subscribers accounted for 1.87 percent of the tweets removed for hateful conduct, while regular users accounted for the remaining 98.13 percent.
Other reasons a tweet can be removed, according to the ‘X Rules’, include violations such as promoting suicide or publishing other people’s private information.
An X spokesperson said in the report that the company was going beyond a “binary take-down/leave-up enforcement approach”, echoing X’s statement in a report it submitted last year about its compliance with codes managed by an Australian platforms association.
The X spokesperson said in today’s report that X’s “freedom of speech, not reach” policy involved “where appropriate, restricting the reach of posts that meet the threshold for enforcement according to our terms of service by making the content less discoverable.”
“We continue to prohibit posts that target specific individuals with hate, abuse and violence, but adopt a more proportionate remediation for posts or content that does not target specific individuals by restricting the reach of such content,” the spokesperson said.