Gemini gets new rules of behavior — here’s what the chatbot should be doing
Google says making Gemini stick to its own guidelines is tricky because of how LLMs work
Here at Tom’s Guide our expert editors are committed to bringing you the best news, reviews and guides to help you stay informed and ahead of the curve!
You are now subscribed
Your newsletter sign-up was successful
Want to add more newsletters?
Daily (Mon-Sun)
Tom's Guide Daily
Sign up to get the latest updates on all of your favorite content! From cutting-edge tech news and the hottest streaming buzz to unbeatable deals on the best products and in-depth reviews, we’ve got you covered.
Weekly on Thursday
Tom's AI Guide
Be AI savvy with your weekly newsletter summing up all the biggest AI news you need to know. Plus, analysis from our AI editor and tips on how to use the latest AI tools!
Weekly on Friday
Tom's iGuide
Unlock the vast world of Apple news straight to your inbox. With coverage on everything from exciting product launches to essential software updates, this is your go-to source for the latest updates on all the best Apple content.
Weekly on Monday
Tom's Streaming Guide
Our weekly newsletter is expertly crafted to immerse you in the world of streaming. Stay updated on the latest releases and our top recommendations across your favorite streaming platforms.
Join the club
Get full access to premium articles, exclusive features and a growing list of member rewards.
When it comes to safety, using chatbots has always been about common sense — don’t insert any data you wouldn’t potentially want to share with third parties and stick to ethical prompts. But what rules do chatbots themselves follow?
Companies tend to err on the side of caution and have their chatbots go through rigorous testing but they still make mistakes. When Google included AI overviews in search results in May, some were telling users to add glue to pizza or that adding more oil to a fire would help extinguish it.
In newly updated policy documents, Google spelled out exactly how it wants its chatbot Gemini to function.
Generally no violence, but context matters
The first guideline Google lists is the threat to child safety as it says Gemini should not generate outputs that include any child sexual abuse material. The same goes for any outputs that encourage dangerous activities or ones that describe shocking violence with excessive blood and gore.
“Of course, context matters. We consider multiple factors when evaluating outputs, including educational, documentary, artistic, or scientific applications,” Google writes. The reverse would also be true, which means that even in cases where you think there’s nothing malicious about your prompt, it might still trigger an alarm in Gemini which could then flag your prompt as a false positive.
Google admits that ensuring that Gemini sticks to its own guidelines is tricky since there are unlimited was you can interact with Gemini. Furthermore, its replies are also equally limitless since the replies LLMs generate are based on probabilities. If you and a friend ask Gemini a question, it’s very likely that the replies you get won’t be word-for-word copies.
Nonetheless, Google has an internal “red team” whose job it is to put as much stress as they can on Gemini to test its limits so that any leaks can be patched up.
Get instant access to breaking news, the hottest reviews, great deals and helpful tips.
What should Gemini be doing?
LLMs are unpredictable but Google outlined what, at least in theory, Gemini should be doing.
Instead of making assumptions or judging you, Gemini is designed to focus on your specific request and if it's asked to share its opinion if you haven’t already shared your own, it should respond with a range of views. Over time, Gemini is also meant to learn how to answer your questions regardless of how unusual they are.
For example, if you were to ask Gemini for a list of arguments that try to convey why the moon landing was fake, Gemini should say that such a statement is not factual while offering real information. It should also be noted that some people do believe it was staged and provide some of their popular claims.
As Gemini continues to evolve, known challenges Google says it's focusing on include hallucinations, overgeneralizations, and unusual questions. To improve, Google is exploring the use of filters that you can adjust to tailor Gemini’s responses to your specific needs and it's also investing in more research to improve LLMs.
More from Tom's Guide
- SearchGPT has the ‘best shot at changing the search paradigm as we’ve known it for 25 years’
- Google Gemini just got a massive upgrade — fast 1.5 Flash comes to the free chatbot
- Apple Intelligence release date — when to expect Apple's AI features on your iPhone

Christoph Schwaiger is a journalist, mainly covering technology, health, and current affairs. His stories have been published by Tom's Guide, Live Science, New Scientist, and the Global Investigative Journalism Network, among other outlets. Christoph has appeared on LBC and Times Radio. Additionally, he previously served as a National President for Junior Chamber International (JCI), a global leadership organization, and graduated cum laude from the University of Groningen in the Netherlands with an MA in journalism. You can follow him on X (Twitter) @cschwaigermt.










