
Federal government proposes AI guardrails to stop content theft

The federal government has proposed new legislation that would stop Artificial Intelligence (AI) models from using content created by artists without their permission.

The 10 mandatory guardrails would apply in high-risk settings and specifically require that data used to train AI systems be legally obtained, that it not contain illegal or harmful material, and that data sources be disclosed.

The Australian Writers’ Guild (AWG), which supported the proposals, said an AI framework must ensure consent, credit and compensation for creatives, including specific provisions around First Nations content so that cultural material is not misappropriated or used without consent.

“These guardrails are a positive step towards establishing a regulatory framework that supports the existing intellectual property and economic rights of Australian writers and creatives,” AWG and AWGACS Group CEO Claire Pullen said.

“We are pleased that the paper clearly states that it is the responsibility of an organisation to ensure that any data they use to train an AI system is legally obtained and that all data sources must be disclosed. We know it is not currently the case, and that our members’ work is being used to train generative AI and large language models (LLMs) without their consent. This won’t change until regulation is mandated.”

The work of thousands of writers has previously been used to train generative AI systems without permission. Writers in the games sector have also raised concerns that AI could expose audiences to harmful or offensive content.

The new government AI guardrails were also supported by music industry bodies APRA AMCOS and the National Aboriginal and Torres Strait Islander Music Office (NATSIMO). APRA AMCOS CEO Dean Ormston said the guardrails were critical to protect the rights and livelihoods of music creators.

“The introduction of mandatory transparency requirements would be a significant victory for our industry and has the potential to bring Australia into line with European Union and other international jurisdictions that value the economic, social and cultural importance of their arts, creative industries and communities.”

In March, the European Parliament adopted the Artificial Intelligence Act, which will phase in testing, transparency and accountability requirements for high-risk and general-purpose AI systems.

Ormston said that requiring AI systems to disclose data sources would allow artists, creators and rightsholders to negotiate licensing agreements and ensure their intellectual property is used only with appropriate consent, credit and remuneration.

A recent APRA AMCOS study forecast that Australian and New Zealand artists, songwriters and composers would suffer a 23 per cent (or $519 million) drop in revenue and income by 2028 if AI technologies continue to operate without proper regulation and licensing.

NATSIMO director Leah Flanagan said the APRA AMCOS report into AI found that 89 per cent of the Aboriginal and Torres Strait Islander songwriters and composers surveyed believed that AI could lead to cultural appropriation.

“While government efforts have begun to address the unauthorised use of ICIP in arts and crafts, particularly in mass-produced items, this work must focus quickly to include all areas of Aboriginal and Torres Strait Islander cultural creation. We encourage government to look to systems of cultural permission within Aboriginal and Torres Strait Islander communities as a blueprint for the use of Indigenous cultural intellectual property within AI.”

Screen Australia also released its own set of key principles this month to guide the agency’s approach to AI. The principles include similar provisions to protect how screen creatives’ personal information and intellectual property are used in training data, prompts and any AI-generated outputs.

Key aspects of the federal government’s proposed guardrails

  1. Establishing clear accountability processes, governance, and strategies for regulatory compliance.
  2. Implementing risk management processes to identify and mitigate risks.
  3. Protecting AI systems and data quality through governance measures.
  4. Testing AI models and systems before deployment, and monitoring them on an ongoing basis.
  5. Enabling meaningful human oversight and intervention in AI systems.
  6. Informing end-users about AI-enabled decisions, interactions, and AI-generated content.
  7. Establishing processes for people impacted by AI systems to challenge outcomes.
  8. Ensuring transparency across the AI supply chain to effectively address risks.
  9. Maintaining records to allow third-party compliance assessments.
  10. Conducting conformity assessments to demonstrate compliance with the guardrails.

Source: Minter Ellison