The federal government has laid out the minimum safety expectations that ‘big tech’ companies will need to adhere to under its controversial online safety laws to minimise abusive or harmful content online.
The Department of Infrastructure, Transport, Regional Development and Communications on Sunday opened a consultation on the basic online safety expectations (BOSE) following passage of the Online Safety Act 2021 in June.
The draft determination sets out the government’s demands for providers that offer a social media service, “relevant electronic service” or “designated internet service”, including the nine principle-based “core expectations” set out in the Act.
But in addition to the core expectations aimed at reducing abusive conduct and harmful content, the determination outlines “additional expectations” and the “reasonable steps” providers might take to meet the core expectations.
Reasonable steps highlighted by Communications Minister Paul Fletcher include “actions against emerging risks such as ‘volumetric attacks’, where ‘digital lynch mobs’ seek to overwhelm a victim with abuse”.
With many of the expectations to be developed through consultation with eSafety commissioner Julie Inman Grant, the government is using the determination to provide “flexibility” for service providers.
“The expectations do not prescribe how these expectations will be met. Indeed, they have been crafted in a way that allows flexibility in the method of achieving these expectations,” the consultation paper reads.
Under the core expectation that providers take reasonable steps to ensure a service is safe, the determination suggests providers could introduce processes to “detect, moderate, report and remove… material or activity… that is or may be unlawful or harmful”.
For encrypted services, the BOSE asks that the provider “take reasonable steps to develop and implement processes to detect and address material or activity on the service that is or may be unlawful or harmful”.
It comes just days after Apple revealed plans for on-device machine learning capable of identifying sensitive content in its end-to-end encrypted Messages app, aimed at preventing the spread of child abuse material.
The determination also suggests that providers take steps to prevent anonymous accounts from being “used to deal with material, or for activity, that is or may be unlawful or harmful”, which could involve requiring “verification of identity or ownership of accounts”.
Providers will also be expected to take reasonable steps to work with each other to promote safe use of their services, including to “detect high volume, cross-platform attacks (also known as volumetric or ‘pile-on’ attacks)”.
As per the Act, providers will be expected to take steps to minimise material that promotes, incites, instructs in or depicts abhorrent violent conduct on a service, as well as cyber-bullying material, cyber-abuse material and non-consensual intimate images of a person.
In taking steps to stop children accessing ‘class two’ material, such as films or games intended for persons over the age of 18, the determination suggests that reasonable steps could include implementing age assurance mechanisms or conducting child safety risk assessments.
Providers will need to ensure a service has “clear and readily identifiable mechanisms that enable end users to report, and make complaints about” conduct and material covered by the Act, and to keep records of those complaints for five years.
If asked by the eSafety commissioner, providers will have 30 days to provide a statement setting out the number of complaints made to the provider, over a specified period of more than six months, about breaches of the service’s terms.
Providers will also be expected to provide a statement to the commissioner about how long it took to remove content if issued with a removal notice.
Submissions to the consultation close on Friday, October 15.