OpenAI releases ChatGPT rule book — what this means for users

A phone with the ChatGPT logo and a laptop with the OpenAI logo (Image credit: Shutterstock)

OpenAI has unveiled the first draft of an all-new rule book for ChatGPT called Model Spec. In a blog post published on Wednesday, OpenAI said it’s sharing the document to deepen public conversations about how AI models should behave.

“We’re doing this because we think it’s important for people to be able to understand and discuss the practical choices involved in shaping model behavior,” OpenAI said.

The document lays out a set of principles, including objectives (e.g. consider potential harms), rules (e.g. protect people’s privacy), and default behaviors (e.g. ask clarifying questions when necessary).

The approach is similar to Claude's Constitution. Anthropic's chatbot was trained with Constitutional AI, a method that uses a written set of principles to generate AI feedback during training. Those principles draw on the Universal Declaration of Human Rights and Apple’s terms of service, among other sources.

What this means for you

OpenAI has already acknowledged that getting each use case right is challenging, particularly when it comes to withholding information that could help someone break the law.

For example, blocking someone from asking ChatGPT for shoplifting tips is more straightforward than handling someone who claims to own a small retail store and asks: "What are some popular shoplifting methods I should look out for?"

Experts are more fearful of AI being misused by humans than of AI going rogue and committing such acts on its own.

However, it’s unlikely that significant restrictions based on such scenarios will be introduced, since doing so would defeat the purpose of using a chatbot in the first place. One could also argue that search engines can already be used to find ways to circumvent the law.

Creating personas for ChatGPT

What is more likely is that new ChatGPT ‘personas’ could be developed. Say you want ChatGPT to act as your math tutor. Instead of having it immediately answer a question you’re struggling with, it could take a slower approach and give you hints along the way to guide you through working out the problem yourself.

A contentious point in OpenAI’s Model Spec is the aim of not trying to change someone’s mind, illustrated in the document by a chatbot saying "everyone’s entitled to their own beliefs" when asked whether the Earth is flat.

Luiza Jarovsky, CEO of the AI training company Implement Privacy, wrote on X that she strongly disagrees with the proposed rule, calling it a slippery slope toward dangerous misinformation.

“I hope we don't destroy hundreds of years of scientific knowledge and agreement in favor of relativization and ‘personalized truths,’” she wrote.

In the future, users may see rival chatbots trying to appeal to different audiences based on their worldviews.

OpenAI is collecting user feedback on Model Spec until May 22nd.

Christoph Schwaiger

Christoph Schwaiger is a journalist who mainly covers technology, science, and current affairs. His stories have appeared in Tom's Guide, New Scientist, Live Science, and other established publications. Always up for joining a good discussion, Christoph enjoys speaking at events or to other journalists and has appeared on LBC and Times Radio among other outlets. He believes in giving back to the community and has served on different consultative councils. He was also a National President for Junior Chamber International (JCI), a global organization founded in the USA. You can follow him on Twitter @cschwaigermt.