eSafety grills Twitter, Google, TikTok, Discord and Twitch

About resourcing, algorithms and detection of abuse material.

Australia’s top content moderator has given Twitter, Google, TikTok, Twitch and Discord 35 days to outline how they’re detecting child abuse material and stopping their algorithms from amplifying it.

The platforms were yesterday served with legal notices “requiring them to answer tough questions” about how they would meet the requirements of the Basic Online Safety Expectations (BOSE).

eSafety Commissioner Julie Inman Grant said the questions vary across the relevant sectors and the specific tech giants within them.

Grant wants to know what hash matching tools, classifiers and other AI systems social media and messaging providers use to detect harmful content.

She has also questioned search engine providers about how they reduce the accessibility and discoverability of harmful content, and asked Twitter how it can enforce compliance measures when it has culled its Australian workforce.

The companies face fines of $687,500 per day if they do not “comply with these notices from the eSafety Commissioner” by March 30, according to communications minister Michelle Rowland.

The quick turnaround comes after the watchdog rejected eight industry associations’ proposed codes for suppressing child abuse material, terrorist content and extreme violence earlier this month.

She asked associations like the Digital Industry Group Inc (DIGI) to redraft their codes with stronger commitments to blocking the illegal content, and said she aimed to register the codes in March.

Once the codes are registered, eSafety will be able to enforce civil penalties for breaches. The transparency notices, however, were issued under the BOSE regulatory scheme, where the expectations themselves are not enforceable: companies can be fined for failing to respond to a notice, but not for falling short of the expectations.

Content algorithms & illegal content 

Grant said the questions issued to the five providers under BOSE covered “the role their algorithms might play in amplifying seriously harmful content.”

This is a step up from the transparency notices she sent in August last year: back then, Apple, Meta (including its WhatsApp operation), Microsoft (including Skype), Snap and Omegle were asked only about detection technologies and responses to harmful content reports.

Grant elaborated on her expectations for ensuring algorithms don't amplify harmful content in an early February letter to DIGI, the industry association whose members include Google, TikTok and Twitter.

The letter told DIGI it was “unclear” how its proposed draft codes would “ensure ongoing investments to support algorithmic optimisation.”

It called for stronger commitments “to improve ranking algorithms following the review or testing envisaged, and/or expenditure in research and development in technology to reduce the accessibility or discoverability of class 1A [child abuse] material.”

The draft codes submitted by associations including DIGI and the Communications Alliance were published on 22 February 2023.

Detecting abuse content 

Grant said the questions would determine if Twitter, Google, TikTok, Twitch and Discord “use widely available technology, like PhotoDNA, to detect and remove this material.”

PhotoDNA is one of many hash matching tools for identifying confirmed child abuse images. It converts an image into a unique digital signature, which is compared against the signatures of known images to find copies, even when an image has been resized or recompressed.
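PhotoDNA's own algorithm is proprietary, but the signature-and-compare idea can be illustrated with a much simpler perceptual hash. The Python sketch below uses a "difference hash" (dHash) as a stand-in for PhotoDNA's far more robust signature; it assumes the Pillow imaging library, and the `known_hashes` set and matching threshold are hypothetical placeholders, not details drawn from eSafety or any platform.

```python
# Illustrative sketch only: dHash, a simple perceptual hash, standing in
# for PhotoDNA's proprietary signature scheme.
from PIL import Image  # assumes Pillow is installed

def dhash(path: str, hash_size: int = 8) -> int:
    """Compute a 64-bit difference hash for the image at `path`."""
    # Greyscale, then shrink to (hash_size + 1) x hash_size pixels so each
    # row yields hash_size left/right brightness comparisons.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the differing bits between two signatures (Python 3.10+)."""
    return (a ^ b).bit_count()

# Hypothetical database of signatures of confirmed images; a real system
# would match against millions of verified entries.
known_hashes = {0x9F172486E71F8481}  # placeholder value

def is_known(path: str, threshold: int = 10) -> bool:
    """Flag an image whose signature sits within `threshold` bits of a known one."""
    signature = dhash(path)
    return any(hamming(signature, known) <= threshold for known in known_hashes)
```

Because matching uses Hamming distance rather than exact equality, resized or recompressed copies of an image still fall within the threshold; production tools apply the same principle with signatures engineered to survive heavier alteration.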

“What we discovered from our first round of notices sent last August to companies… is that many are not taking relatively simple steps to protect children,” Grant said.

Grant told senate estimates last week it was “startling” to see the “variation across the industry” in the use of detection technologies, and that companies owning multiple platforms had rolled out effective solutions on some services but not others.

Although Microsoft developed PhotoDNA, the company has not deployed it across OneDrive, Skype and Hotmail.

eSafety's report on those responses also found considerable variation in platforms’ use of technologies to detect confirmed videos, new images and live streaming.

A key premise of eSafety's argument is a strong positive correlation: the more of these forms of child abuse content a platform has deployed technology to detect, the more reports it makes to anti-child-exploitation bodies.

WhatsApp, for instance, which has deployed technology to detect confirmed images as well as both confirmed and new videos, made 1.37 million content referrals to the US National Center for Missing and Exploited Children (NCMEC) in 2021.

iMessage, on the other hand, cannot identify any of these forms of content and made only 160 referrals to NCMEC during the same time frame.

Musk's Australian staff cuts

Grant also singled out Twitter, saying “the very people whose job it is to protect children” were culled when the company finished axing its Australian workforce in January.

“Elon Musk tweeted that addressing child exploitation was ‘Priority #1’, but we have not seen detail on how Twitter is delivering on that commitment,” Grant, who was herself Twitter’s Australian and South East Asian public policy director until 2016, said today. 

The watchdog told a parliamentary inquiry on Monday that Twitter’s first responders to harmful content detections in Australia, the staff who both designed and enforced Twitter’s compliance with the Basic Online Safety Expectations, were recently axed.

“One of the core elements of the basic online safety expectations is a broad user safety component,” eSafety acting chief operating officer Toby Dagg said at the inquiry into law enforcement capabilities in relation to child exploitation. 

“We would say that adequately staffing and resourcing trust and safety personnel constitutes an obvious component of that particular element of the basic online safety expectations,” he added.
