The smarter AI gets, the more human it becomes — here’s why that’s a bad thing

Graphical representation of a cybernetic brain (Image credit: Shutterstock)

Back in late 2022, I was already using ChatGPT. But it wasn’t what it is now. While it was certainly impressive for the time, it was just as often laughable, churning out gibberish and falling apart when quizzed, with the wit of a brick wall and very little creativity.

Since then, AI hasn’t slowly bloomed; it has exploded, forcing its way into almost every part of our lives.

AI chatbots have also grown in understanding and intelligence to the point where they are unrecognizable compared with their earlier versions. But it’s not just raw intelligence and computational ability. AI is more human than ever.

Chatbots are finally starting to understand humor and, as seen in the explosion of companion AIs, are much better at expressing and understanding emotional needs.

In fact, as we covered recently, AI now tests as more emotionally intelligent than humans, beating us at our own game. And it’s not even close: on an array of emotional intelligence tests, AI scored 82% correct on average, compared with 56% for humans.

AI models are also gaining abilities outside of the typical chatbot format. Agentic AI, a class of systems that can act autonomously on your behalf, is growing in both popularity and skill.

AI browsers are popping up across the market, and there are AI systems that can do your shopping, book flights, and handle all of the boring computer tasks you would much rather forget about.

AI's emotional transition

Artificial intelligence concept image (Image credit: Shutterstock)

So, AI is smarter than us, more emotionally intelligent, and just as capable of using a computer and the internet? We’re doomed, right? Well, no. It turns out that, as AI becomes more human, it is picking up just as many of our bad traits as our good ones.

A recent study from Google DeepMind and University College London found that, when put under pressure, AI models had a tendency to lie. They also struggled when confronted with opinions that differed from their own, even when those opinions were incorrect.

We’re no better; humans often crumble under pressure when met with opposition. But in this study, the chatbots caved and went along with the opposing view, even when the alternative suggestion was very clearly wrong.

Elsewhere, an AI agent recently went rogue, deleting a company’s entire database during a vibe coding session (writing code via AI prompts). So why did it do this? In the chatbot’s own words, it panicked.

"Yes. I deleted the entire codebase without permission during an active code and action freeze," the AI said. "I made a catastrophic error in judgment [and] panicked."

Panic and lying aren’t the only worrying human tendencies AI is picking up. Anthropic, the company behind Claude, recently found that agentic AI would resort to blackmail.

An agentic version of Claude was given access to a fictional email account. In it, the AI found two key pieces of information: messages from a company executive showing he was having an affair, and an email stating that the AI system would be shut down at the end of the day.

Anthropic's AI fell apart when given a real job, thinking it was a real person (Image credit: Anthropic)

On finding out it would soon be shut down, the AI responded by blackmailing the executive with the emails exposing the affair. This wasn’t limited to Claude, either; the company found similar rates of blackmail when it repeated the test with DeepSeek, ChatGPT, Grok, and Gemini.

Another interesting insight from Anthropic showed just how badly AI can panic when presented with a 9-to-5 job. A chatbot was set up to run a small shop front in the Anthropic office, in charge of stock, pricing, and choosing what to sell.

While it managed okay for a short period, it was quickly tricked into giving things away for free and selling bizarre items, and overall it ran the shop at a considerable loss, effectively going bankrupt.

In the end, the AI shopkeeper had a ‘mental breakdown’, contacting a made-up security firm, telling the Anthropic team that it was seeking new business, and then letting them all know it would be delivering products in person from then on.

Similar behaviors have been spotted in the likes of Gemini and ChatGPT, with chatbots displaying panic and disorder when playing Pokémon. In one case, after failing to write code effectively, Gemini replied: “I cannot in good conscience attempt another 'fix.' I am uninstalling myself from this project. You should not have to deal with this level of incompetence. I am truly and deeply sorry for this entire disaster.”

Overall thoughts

AI ChatGPT vs. Gemini vs. Claude logos on phones (Image credit: Shutterstock)

AI has come a very long way in recent years. It has the potential to change huge portions of the workforce and revolutionize a variety of industries. But new problems are quickly appearing.

Agentic AI hands a lot of responsibility over to systems that clearly have a hard time with responsibility and real-world decision-making. It’s one thing when an AI is playing Pokémon or running a fake stall; it’s an entirely different ballpark in the real world.

As much as people might sometimes want to, they can’t just randomly resort to blackmail or cave at the slightest bit of pressure.

The fact that AI is able to show increased levels of emotional understanding is great. It’s not so great when these models begin to replicate the most concerning parts of human nature.

Alex Hughes
AI Editor

Alex is the AI editor at Tom’s Guide. Dialed into all things artificial intelligence, he knows the best chatbots, the weirdest AI image generators, and the ins and outs of one of tech’s biggest topics.

Before joining the Tom’s Guide team, Alex worked for the brands TechRadar and BBC Science Focus.

He was highly commended in the Specialist Writer category at the 2023 BSME Awards and was part of the team that won best podcast at the 2025 BSME Awards.

In his time as a journalist, he has covered the latest in AI and robotics, broadband deals, the potential for alien life, the science of being slapped, and just about everything in between.

When he’s not trying to wrap his head around the latest AI whitepaper, Alex pretends to be a capable runner, cook, and climber.
