Governments unveil new AI security rules to prevent superintelligence from taking over the world
Companies urged to keep data safe
Governments around the world have unveiled new rules governing how artificial intelligence is developed, in the hope of preventing the technology from being used in ways that could harm humanity.
Produced by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the U.K. National Cyber Security Centre (NCSC), they build on previous voluntary commitments secured by the Biden Administration earlier this year.
The 20-page agreement has been signed by 18 countries that are home to companies building AI systems. It urges those companies to develop and deploy the technology in ways that keep customers and the wider public safe from misuse.
Broad focus on risk
The new rules amount to a non-binding framework for guarding AI systems against abuse. They include recommendations for protecting the data used to train models and for keeping that information secure.
While the guidelines are largely voluntary, the implication is that companies that decline to sign on, or that allow their models to fall into the wrong hands or be misused, could face tougher, more restrictive regulation in the future.
Speaking to Reuters, CISA Director Jen Easterly said that securing a global agreement was vital to the success of the guidelines.
"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," she said.
Secure by design
While the global AI safety debate has centered on the risks of superintelligence and high-risk foundation models, the new guidelines also cover current-generation and narrower AI systems. The focus is on protecting data and preventing misuse rather than on functionality.
The guidelines cover four key areas: secure design, secure development, secure deployment, and secure operation and maintenance.
The guidelines urge companies building AI models to take ownership of security outcomes for their customers, embrace radical transparency and accountability, and build organizational structures and leadership that make "secure by design" a top priority.
Secure systems benefit users
Toby Lewis, Global Head of Threat Analysis at Darktrace, said ensuring that data and AI models are secure from attack should be a prerequisite for any developer.
“Those building AI should go further and build trust by taking users on the journey of how their AI reaches its answers. With security and trust, we’ll realize the benefits of AI faster and for more people.”

