iTnews

eSafety and foreign regulators to sync content bans and oversight schemes

By Jeremy Nadel
May 27 2024 6:52AM

Australia, the UK, France, Korea and other states release joint plans.

Eight nations are syncing their content restriction, user surveillance, corporate disclosure and other oversight powers aimed at mitigating online harms.


Australian, UK, French, Korean, South African, Fijian, Irish and Slovakian internet regulators want to make their risk assessments, investigations, research and enforcement actions more streamlined and collaborative. 

“By mapping the similarities and differences in our regulatory remits, the network [of regulators] has identified opportunities in multiple areas to pursue coherence between our respective regimes,” the Global Online Safety Regulators Network said on Friday. 

The signatories’ joint statement [pdf] focused on the technologies and legal instruments they plan to standardise and help each other deploy, with few details on the content types or online behaviours considered “cross-border harm”.

The Australian eSafety commissioner Julie Inman Grant said that “as regulators, we face similar challenges: we’re national entities mandated to regulate a complex set of global harms involving companies principally domiciled offshore.”

Syncing content moderation

The coalition of regulators compared their similar powers to restrict what users in their jurisdictions see, compel platforms to disclose their internal safety processes and proactively scan user-generated content.

Coordinating their overlapping powers could overcome jurisdictional limitations to mitigate harmful content, the regulators said. 

“Where there are instances of systemic non-compliance across jurisdictions, the network might consider working more closely on investigations and enforcement action.”

All of the regulators except France and the UK can "issue content removal and blocking notices".

A “blocking notice", or geofence, restricts material from being viewed within a specific region; X successfully argued that geofencing satisfied eSafety’s order to restrict footage of the Wakeley church stabbing from its Australian users.

A “content removal” order applies across an entire platform, which eSafety argued was necessary because content hosted on X’s US servers could still be viewed through virtual private networks (VPNs).
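The difference between the two order types can be sketched in a few lines. This is an illustrative model only; the class, field and function names are hypothetical and do not reflect X's or eSafety's actual systems.

```python
# Illustrative sketch: a regional "blocking notice" (geofence) versus a
# global "content removal" order. All names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    removed_globally: bool = False                      # "content removal" order
    blocked_regions: set = field(default_factory=set)   # "blocking notice"

def is_visible(post: Post, viewer_country: str) -> bool:
    """A geofence only hides the post from viewers whose connection appears
    to originate in a blocked region, so a VPN exiting in another country
    sidesteps it. A global removal does not depend on the viewer's location."""
    if post.removed_globally:
        return False
    return viewer_country not in post.blocked_regions

post = Post("clip-123")
post.blocked_regions.add("AU")          # blocking notice for Australia

print(is_visible(post, "AU"))   # False: geofenced for Australian users
print(is_visible(post, "US"))   # True: still reachable via a US VPN exit

post.removed_globally = True    # platform-wide removal
print(is_visible(post, "US"))   # False everywhere
```

The sketch shows why eSafety argued a geofence alone was insufficient: the visibility check depends on where the viewer appears to be, which a VPN changes.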

Google, Meta, TikTok and other foreign tech giants comply with several thousand global takedowns every year that go unchallenged.

Coordinating geofences won't resolve VPN loophole

If the eight nations were to geofence content each time one regulator issued a takedown notice, it would only reduce the content’s discoverability, which the blocking architecture available within a single jurisdiction already achieves.

The absence of US-based authorities like the Federal Communications Commission (FCC) leaves the regulatory network unable to overcome the jurisdictional barriers to compelling companies like X to comply with the disclosure or content removal notices issued to them.

And even if the FCC were to join, it could not synchronise regulation with the network’s other members without overturning Section 230 of the Communications Decency Act and other laws limiting platforms’ liability for third-party content.

Beyond the difficulty of coordinating bans on content that US institutions would be more likely to treat as free speech, US membership would also add little to the collective strength of enforcement actions against content that is less controversial to regulate.

The US passed legislation earlier this month [pdf] bringing its reporting obligations, and the penalties for failing to meet them, in line with the standards of the other nations in the network, further reducing the incentive to pursue US membership.

Leveraging platforms with service restrictions?

All of the regulators except Fiji and South Africa can issue “service blocking or restriction orders".

Last year, when Turkey said it would shut down X if it kept anti-Erdoğan tweets and accounts online, Musk complied.

Former Twitter CEO Jack Dorsey said that he similarly caved on blocking accounts criticising the Indian government when faced with a shutdown threat, a claim Prime Minister Narendra Modi denies.

Although threatening X with the number of users it stands to lose from an ISP-level block has proved more effective than financial penalties in the past, and a network-wide shutdown of X would have a major impact, eSafety would still have to make a very strong case to the Federal Court.

Defined and elusive harms 

Inman Grant said that the approach to “global collaboration” was aimed at “promoting a degree of alignment in objectives and outcomes” rather than “identical legal and regulatory frameworks.”

However, the approach is also consistent with regulators' domestic strategies. 

The regulators have two goals which relate to two categories of content that they regulate. 

Firstly, the regulators are seeking to set precedents and build shared capabilities to remove the loosely defined harmful material that falls within their remit to deem illegal.

Since the regulators gained stronger powers between 2022 and 2023, only X and smaller sites have continued to keep such content online.

The regulators precisely define only the most egregious harm, one that no platform makes freedom of speech arguments to keep online: “child sexual exploitation and abuse material,” which is addressed in the only other position statement [pdf] released since the network launched in 2022.

Platforms agree to remove the content but disagree with regulators about how.

The UK’s plans to automatically detect, block and report such content, through scanning on devices or on intermediary government-owned servers, have been met with threats from platforms to cease operating there.

Apple's submission to eSafety on the proposal said it “opens the door for bulk surveillance”. 

“Such capabilities, history shows, will inevitably expand to other content types (such as images, videos, text, or audio) and content categories.”

Meta has refused to share automated detections made by its device-based AI but will blur the images as a concession.
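In outline, the detection schemes described above work by matching uploads against a list of hashes of known material. A minimal sketch, with the caveat that real systems use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding, whereas the exact cryptographic hash used here only matches byte-identical files:

```python
# Simplified sketch of hash-list content matching, the general technique
# behind device-side detection. The hash list and function names are
# hypothetical; production systems use perceptual, not exact, hashes.

import hashlib

# Hypothetical list of hashes of known illegal material, distributed to devices.
KNOWN_HASHES = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def scan_upload(data: bytes) -> bool:
    """Return True if the content's hash matches the known-harm list."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

print(scan_upload(b"known-bad-image-bytes"))  # True: flagged before upload
print(scan_upload(b"holiday-photo-bytes"))    # False: passes through
```

Apple's "bulk surveillance" concern maps onto this design directly: whoever controls the hash list controls what the device flags, and nothing in the matching mechanism itself restricts the list to any one content category.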

Copyright © iTnews.com.au. All rights reserved.
Tags: content moderation, esafety, esafety commissioner, julie inman grant, software, x
