‘Almost a sociopathic lack of concern’: 5 biggest revelations from The New Yorker’s deep dive into Sam Altman
A new investigation based on 100+ interviews and internal documents raises questions about leadership at OpenAI
A sweeping new investigation from The New Yorker is putting one of the most powerful figures in AI under intense scrutiny — and raising difficult questions in the process.
Built on more than 100 interviews, along with internal memos, Slack messages and private notes, the report paints a complicated and often contradictory portrait of Sam Altman, the mastermind behind ChatGPT.
There’s no single “smoking gun.” But taken together, the reporting describes a pattern of concerns raised by some insiders — while others strongly defend Altman’s leadership and impact.
Here are five of the most notable revelations from the report — and why they matter.
1. Internal documents raised concerns about how safety efforts were communicated
One of the most significant elements of the report centers on internal materials attributed to Ilya Sutskever, OpenAI’s former chief scientist.
According to the report, Sutskever compiled memos based on Slack messages and HR documents that raised concerns about whether OpenAI’s board was receiving a complete and accurate picture of internal operations — including how safety efforts were being represented. One memo reportedly opened with a blunt assessment: “Lying,” reflecting the severity of those internal red flags.
Those concerns sit at the heart of a broader tension inside OpenAI — how to balance rapid AI development with long-term safety. The concerns described in internal materials cited by the report are not all that different from the reasons the QuitGPT movement took shape: after OpenAI made a deal with the Pentagon, users questioned both the safety and the ethics of the company’s direction.
2. A former OpenAI leader documented similar concerns independently
The report also points to private notes kept over several years by Dario Amodei, a former OpenAI executive who later founded Anthropic.
According to the article, those notes reflected concerns about leadership, trust and decision-making within the company. The report presents those concerns as separate from — but broadly aligned with — the internal materials attributed to Ilya Sutskever.
While not evidence of wrongdoing, the overlap suggests that questions about leadership were not limited to a single source.
3. The investigation links current tensions to earlier career conflicts
Looking beyond OpenAI, the report revisits Sam Altman’s earlier career at Loopt and later at Y Combinator.
At Loopt, the article describes tensions with colleagues, with some sources raising questions about Altman’s communication style and transparency.
During his time at Y Combinator, the report references disagreements with partners and differing views on his leadership approach.
Rather than presenting these accounts as definitive judgments, the article uses them as context for the leadership concerns being raised today.
4. The report highlights just how polarizing Altman has become
The investigation presents sharply divided perspectives on Sam Altman. Some colleagues and collaborators credit him with helping drive OpenAI’s rapid growth and global influence, describing him as highly effective and capable of operating at the speed required in a competitive AI landscape.
Others are more critical. The report includes concerns from former insiders about transparency, communication and decision-making, along with one source who described what they saw as “almost a sociopathic lack of concern” in Altman’s leadership style.
At the same time, The New Yorker report suggests that the same traits — speed, intensity and decisiveness — are interpreted very differently depending on perspective, highlighting just how polarizing Altman has become.
5. Some insiders warn about long-term reputational risk
The report also includes a comment from an unnamed Microsoft executive, who is quoted as saying there is a “small but real chance” Sam Altman could ultimately be remembered alongside figures involved in major corporate scandals.
The remark is presented as one perspective among many — part of a broader set of views that range from sharp criticism to continued support.
Throughout the article, that tension is evident: while some insiders raise serious concerns, others continue to work closely with Altman and remain aligned with OpenAI’s direction, reflecting ongoing confidence in the company’s leadership and trajectory.
Supporters point to the company’s rapid progress and global impact as evidence of effective leadership during a pivotal moment in technology.
Why this deep dive matters
For those of us who use AI tools every day, this story goes beyond one executive. OpenAI sits at the center of a technological shift that is already reshaping how people work, learn and make decisions. And Sam Altman is one of the people guiding that shift.
The investigation doesn’t offer a definitive conclusion, but it does raise a question that’s becoming increasingly relevant:
How much trust should we place in the people building the systems shaping our future?
This question isn't just academic. The book "If Anyone Builds It, Everyone Will Die" argues that if we don't solve the problem of trust and control perfectly the first time, the consequences for humanity are irreversible. That framing casts the OpenAI leadership struggle not just as business, but as a matter of global safety.
Bottom line
There’s no clear-cut verdict in this investigation. Instead, it presents a detailed and sometimes conflicting portrait of one of the most influential leaders in tech today.
For every insider raising concerns, there’s another pointing to results. And in a moment where AI is advancing faster than ever, that tension might just be the most important takeaway here.
Amanda Caswell is one of today’s leading voices in AI and technology. A celebrated contributor to various news outlets, her sharp insights and relatable storytelling have earned her a loyal readership. Amanda’s work has been recognized with prestigious honors, including outstanding contribution to media.
Known for her ability to bring clarity to even the most complex topics, Amanda seamlessly blends innovation and creativity, inspiring readers to embrace the power of AI and emerging technologies. As a certified prompt engineer, she continues to push the boundaries of how humans and AI can work together.
Beyond her journalism career, Amanda is a long-distance runner and mom of three. She lives in New Jersey.