I tested ChatGPT vs Perplexity for research — here’s the one that won
After running three research prompts through ChatGPT and Perplexity, one chatbot clearly came out ahead for research tasks
I’m the kind of person who falls down research rabbit holes.
A random topic pops up on my timeline, comes up in conversation, or I overhear something on the subway — and suddenly I need to know everything about it. It could be politics, why kids don’t play with toys like they did in the ’80s and ’90s, or even the mysteries of the Bermuda Triangle.
For years, Perplexity has been my go-to AI for this kind of deep-dive research. Its ability to pull in up-to-date information, cite credible sources and analyze uploaded documents makes it incredibly useful when you want answers fast.
But with ChatGPT’s newest model now available, I wanted to see if my favorite research assistant finally had real competition.
So I ran both chatbots through the same test — three research prompts designed to see which AI is actually better when you want to learn something new.
The results were more surprising than I expected.
1. Topic brainstorm
Prompt: “Give me 10 research topics on tech journalism, why they matter in 2026, one challenge per topic, and one key research question”
ChatGPT did just what I asked. One of the topics it surfaced was “The Rise of AI Search and Its Impact on Tech Media Traffic.” On why that subject matters, it wrote: “AI answer engines like ChatGPT, Perplexity AI and Google Gemini are reducing traditional search clicks, dramatically affecting media traffic.”
The main challenge tied to that topic was “Tech publications losing referral traffic from search engines,” and the research question was “How do AI-generated search summaries affect audience discovery of tech journalism?” ChatGPT also offered to turn any of its 10 topics into a full research proposal or thesis outline if I wanted to draft an academic study or college paper.
Perplexity gave me the research results I was looking for. It produced a comparable topic to ChatGPT with the mention of “AI-generated news and answer engines.”
Its explanation of why that matters reads: “In 2026, AI chatbots and answer engines are becoming primary gateways to news, reshaping traffic flows, trust, and how people encounter journalism at all.”
The main challenge that the subject presents is “Newsrooms have little transparency or control over how their work is surfaced, summarized, or altered by these systems.”
To close it off, the research question given to me on this topic is: “How do AI answer engines change audience trust, source attribution, and traffic patterns for tech news compared with traditional search and direct visits?”
Perplexity also provided me with a list of five follow-up questions and extra topics, such as “How can journalists ethically use AI tools in reporting?” and “Strategies to combat deepfakes in tech reporting.”
Winner: Perplexity
Perplexity not only gave more extensive responses for each of the 10 research topics, but it also included links to credible sources, including research reports from the Reuters Institute at the University of Oxford.
2. Subtopic outline
Prompt: “As a college student, create a list of subtopics for a research paper on the best AI chatbots, including the research question, focus areas like access and affordability, and prioritize quantitative data from the last 30 years”
ChatGPT acknowledged my role as a curious college student and presented me with an outline that emphasizes “data-driven analysis and measurable indicators.” The themes it featured included the title of the actual research paper, the central research question, 10 subtopics and examples of data tables to mention in the research paper.
ChatGPT offered additional assistance via three options, which involved a full research paper outline, 10 academic sources I could cite in the research paper, and a thesis statement and hypothesis to use.
Perplexity noted my status as a college student and kicked off its extensive response with the main research question I’m targeting. Afterward, it presented 10 subtopics broken down into a main focus, quantitative data worth seeking out, and an all-encompassing question focused on that subtopic.
To wrap up, Perplexity gave me a framework that listed the research paper’s working titles and four core chapters. The follow-up questions and options it presented included “What metrics best evaluate chatbot affordability for students?” and “Design a survey for user satisfaction with chatbot access.”
Winner: Perplexity
Perplexity put more effort into outlining my research paper project. I also appreciated its easy-to-use framework for narrowing all that information down quickly. And, as expected, Perplexity surfaced 10 source links (something ChatGPT didn’t do) and offered more useful follow-up actions.
3. Question refinement
Prompt: “Turn this idea, the most underrated Disney animated movies, into five research questions tied to 2026 trends, each with a testable hypothesis and potential data sources”
ChatGPT came to me with five topics that were split among the following bullet points: research topic, hypothesis, four potential data sources, and a 2026 trend connection. To combine all my research questions into a single thesis worth pursuing, the chatbot gave me this concept to consider: “Streaming platforms and social media are reshaping cultural perceptions of Disney animated films, allowing previously overlooked titles to gain renewed recognition.”
ChatGPT finished everything off by offering me three extra actions to pursue, with one of them being “a list of the 15 most underrated Disney animated movies (based on data).”
Perplexity also arrived with five topics, which all targeted a central research question, a testable hypothesis, and four potential sources to consider using for this research project. The chatbot pointed out how I could combine the five topics into a single research paper or go deeper into each topic with its own subtopic (an example being regression on box office vs long‑tail popularity).
And to no one’s surprise, Perplexity came forward with five follow-ups, such as “examples of underrated Disney animated movies from pre-2020.”
Winner: Perplexity
Perplexity prevails once again, as its response was more detailed and wide-ranging than ChatGPT’s. I was impressed by how it broke each of the five principal questions into three sections, offered a solid collection of follow-ups worth considering, and supplied 10 sources to review when putting together my research paper.
Overall Winner: Perplexity
Perplexity remains the superior chatbot for deep research. It consistently delivers more comprehensive information, backed by enough valid data sources to support everything it generates.
ChatGPT’s approach to research isn’t too shabby, but Perplexity comes out ahead thanks to the extra work it puts in to make its information more reputable.
Elton Jones entered the world of AI tools in 2025 and, since then, has learned more about their applications across research, image/audio generation, creative writing, and more. Thanks to these tests, he has acquired the know-how to see which tools are the best in key areas and how they can improve their users’ daily habits.
Elton is also a longtime tech writer with a penchant for producing pieces about video games, mobile devices, headsets, and now AI. Since 2011, he has applied his knowledge of those topics to compose in-depth articles for the likes of The Christian Post, Complex, TechRadar, Heavy, ONE37pm, and more.
With a newfound appreciation for all things AI, Elton hopes to make the most complicated topics in that area understandable for the uninformed and those in the know.