Governments unveil new AI security rules to prevent superintelligence from taking over the world


Governments around the world have unveiled new rules controlling how artificial intelligence can be developed, in the hope of preventing the technology from being used in ways that could harm humanity.

Produced by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the U.K. National Cyber Security Centre (NCSC), they build on previous voluntary commitments secured by the Biden Administration earlier this year.

The 20-page agreement has been signed by 18 countries that have companies building AI systems. It urges those companies to develop and deploy the technology in such a way that it keeps customers and the public safe from misuse.

Broad focus on risk

The new rules amount to a non-binding framework for guarding AI systems against abuse. They include suggestions for protecting the data used to train models and for keeping that information secure.

While the guidelines are largely voluntary, the implication is that companies that decline to sign on, or that allow their models to fall into the wrong hands or be misused, could face tougher and more restrictive regulation in the future.

Speaking to Reuters, CISA's Jen Easterly said getting a global agreement was vital to the success of the guidelines.

"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," she said.

Secure by design


While the global AI safety debate has focused on the risk of superintelligence and high-risk foundation models, the new guidelines also cover current-generation and narrower AI. The emphasis is on protecting data and preventing misuse, rather than on restricting functionality.

The guidelines cover four key areas: secure design, secure development, secure deployment, and secure operation and maintenance.

They urge companies building AI models to take ownership of security outcomes for customers, embrace radical transparency and accountability, and build organizational structures and leadership that make "secure by design" a top priority.

Secure systems benefit users

Toby Lewis, Global Head of Threat Analysis at Darktrace, said ensuring data and AI models are secure from attacks should be a prerequisite for any developer.

“Those building AI should go further and build trust by taking users on the journey of how their AI reaches its answers. With security and trust, we’ll realize the benefits of AI faster and for more people.”


Ryan Morrison
AI Editor

Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on artificial intelligence and technology speak for him than engage in this self-aggrandising exercise. As the AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover.
When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing. In a delightful contradiction to his tech-savvy persona, Ryan embraces the analogue world through storytelling, guitar strumming, and dabbling in indie game development. Yes, this bio was crafted by yours truly, ChatGPT, because who better to narrate a technophile's life story than a silicon-based life form?