Regu Report
Monday, October 6, 2025
Researchers unveil a way to stop AI from learning from your online content

by Clara Hensley
1 September 2025
in Technology

Australian researchers say they have devised a way to stop unauthorised artificial intelligence systems from learning from online images, offering a formal, measurable check on what models can extract from photos, artwork and other visual content.

The technique, developed by CSIRO with the Cyber Security Cooperative Research Centre and the University of Chicago, subtly alters images so they appear unchanged to people but become uninformative to AI models. The team says it places a hard limit on what a model can learn from protected content and provides a mathematical guarantee that this holds even if an attacker adapts their approach or tries to retrain a system.
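CSIRO's certified construction is set out in the paper and released code; the sketch below is only a rough, uncertified illustration of the broader "unlearnable examples" idea it builds on. It adds small, class-correlated random noise, bounded by an L-infinity budget so the image looks unchanged to a person, so that a model trained on the data tends to latch onto the noise shortcut instead of the real content. The function name `protect_images` and the `eps` budget are illustrative, not from the paper.

```python
import numpy as np

def protect_images(images, labels, eps=8 / 255, seed=0):
    """Illustrative 'unlearnable examples' sketch (NOT CSIRO's certified method).

    Adds a fixed random noise pattern per class, bounded by +/- eps, so a
    model can fit the label from the noise rather than the image content.
    Images are float arrays in [0, 1], shape (N, H, W, C).
    """
    rng = np.random.default_rng(seed)
    # One fixed noise pattern per class: a learnable "shortcut" feature.
    patterns = {c: rng.uniform(-eps, eps, size=images.shape[1:])
                for c in np.unique(labels)}
    noise = np.stack([patterns[c] for c in labels])
    # Clip back to valid pixel range; perturbation stays imperceptibly small.
    return np.clip(images + noise, 0.0, 1.0)
```

In published systems of this kind the noise is typically optimised against a surrogate model (error-minimising perturbations) rather than drawn at random, and CSIRO's contribution is a version whose effect on learnability is mathematically certified rather than empirical.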

“Existing methods rely on trial and error or assumptions about how AI models behave,” Dr Derui Wang, a CSIRO scientist, said. “Our approach is different; we can mathematically guarantee that unauthorised machine learning models can’t learn from the content beyond a certain threshold. That’s a powerful safeguard for social media users, content creators, and organisations.”
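One way to read the quoted guarantee, written loosely (the paper's exact formalisation may differ, and the threshold symbol below is illustrative):

```latex
% For protected data \tilde{D}, the claim bounds what ANY unauthorised
% training procedure \mathcal{A} can extract from it:
\mathrm{Perf}\bigl(\mathcal{A}(\tilde{D})\bigr) \le \tau
\quad \text{for all training algorithms } \mathcal{A}
% i.e. however a model is trained, retrained, or adapted on \tilde{D},
% its performance on the protected content stays below the threshold \tau.
```

The key distinction from earlier empirical defences is the universal quantifier: the bound is claimed to hold for any training strategy, not just the ones tested.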

The researchers argue the approach could help curb deepfakes by preventing models from learning facial features from social media images, and shield sensitive datasets such as satellite imagery or cyber threat intelligence from being ingested into training pipelines. Dr Wang said the method can be deployed automatically and at scale.

“A social media platform or website could embed this protective layer into every image uploaded,” he said. “This could curb the rise of deepfakes, reduce intellectual property theft, and help users retain control over their content.”

While the current work focuses on images, the team plans to extend it to text, music and video. So far the results have been validated only in controlled laboratory tests, not in real-world deployments. Code has been released on GitHub for academic use, and the group is seeking collaborators across AI safety and ethics, defence, cybersecurity and academia.

The paper, titled Provably Unlearnable Data Examples, was presented at the 2025 Network and Distributed System Security Symposium (NDSS), where it received the Distinguished Paper Award.

University of Chicago researchers have previously released tools such as Glaze and Nightshade, which aim to prevent AI models from training on artists’ work by poisoning training data. Those systems demonstrated empirical effectiveness; the new CSIRO-led work seeks to offer formal guarantees about what models can and cannot learn.

Independent experts are likely to scrutinise how the guarantees translate in the wild, where data is scraped at scale and models evolve rapidly. Real-world effectiveness will hinge on how broadly platforms and publishers adopt the protections and how they stand up against future training and attack strategies.

The announcement comes amid heightened concern over deepfakes and online harms, and as governments, including in Australia, weigh rules on AI training data and provenance measures such as watermarking. CSIRO’s approach aims to give creators and organisations a technical control that can complement policy and legal safeguards.

Interested partners can contact the team via [email protected].

Tags: Artificial Intelligence, CSIRO, Cyber Security Cooperative Research Centre, Dr Derui Wang, Social Media, University of Chicago
Clara Hensley

Clara Hensley is a graduate journalist reporting on science, environment and technology. She is dedicated to exploring how innovation and sustainability are reshaping the world.

© 2025 Regu Report.
