
OpenAI has been actively banning users suspected of malicious activity

(Image: ChatGPT logo on a smartphone resting on a laptop keyboard, lit with a dark purple light. Credit: SOPA Images / Contributor via Getty Images)

OpenAI has removed numerous user accounts globally after suspecting its artificial intelligence tool, ChatGPT, was being used for malicious purposes, according to a new report.

Scammers have been using AI to enhance their attacks, OpenAI notes in a new report outlining the trends and techniques that malicious actors are employing, including case studies of attacks the company has thwarted. With more than 400 million weekly active users, ChatGPT is freely accessible worldwide.

OpenAI's outlook on scams and fraudulent uses of ChatGPT

(Image: ChatGPT logo on a smartphone screen being held outside. Credit: Shutterstock)

While OpenAI has been on the front foot in stopping these malicious uses of ChatGPT, the company has also reiterated that it will not tolerate misuse of its technology.

"OpenAI's policies strictly prohibit use of output from our tools for fraud or scams. Through our investigation into deceptive employment schemes, we identified and banned dozens of accounts," it wrote.

Through sharing insights with industry peers such as Meta, the company hopes to enhance "our collective ability to detect, prevent, and respond to such threats while advancing our shared safety".

Lucy Scotting
Staff Writer

Lucy Scotting is a digital content writer for Tom’s Guide in Australia, primarily covering NBN and internet-related news. Lucy started her career writing for HR and staffing industry publications, with articles covering emerging tech, business and finance. In her spare time, Lucy can be found watching sci-fi movies, working on her dystopian fiction novel or hanging out with her dog, Fletcher.
