AI has made me all but give up on traditional Google searches — here’s why


Like most people with an internet connection, I’m used to turning to Google for just about everything. From recipes and cold remedies to recalling ’90s radio hits, I pick up my phone or fire up my laptop and use Google to find answers. But over the last few months, I’ve found myself increasingly using AI tools like ChatGPT, Meta AI, Claude, and Google’s own Gemini for day-to-day answers.

I enjoy the personal aspect of receiving clear, conversational responses from these AI models, especially compared to sifting through endless links. I prefer one direct answer from a chatbot over a string of Google results I’d have to dig through again to find what I need. And I know I’m not alone in wanting answers tailored to me, especially with OpenAI launching SearchGPT later this year.

The value of AI in search

When I first started using AI, I was impressed by how effortlessly I could ask ChatGPT or Claude nuanced questions and get instant, thorough answers without wading through advertisements or SEO-optimized pages.

Meta AI, with its integration into platforms like Instagram and Facebook, also proved highly intuitive without sponsored or unrelated links. Even Google’s Gemini AI has made strides in delivering responses that feel conversational rather than coldly algorithmic.

Yet, this AI-first approach isn’t without its drawbacks. One major criticism is AI’s tendency to hallucinate — giving confidently wrong or biased information. Although I usually know when the AI is wrong, particularly if I already have an inkling about the topic, I don’t always.

This can be a real problem if I am using a chatbot for answers in the same way I use Google. Yes, Google can also return inaccurate results, but AI models often don’t provide clear citations, making it harder to verify the information. For instance, just today I called out ChatGPT for giving me a bit of wrong information about tech, and Meta AI has been criticized for delivering contextually incorrect answers.


Unfortunately, AI models tend to function in a closed-loop ecosystem, which narrows the diversity of content I’d otherwise discover through Google’s endless search results. Google’s traditional search, with its links and varied sources, still offers a level of transparency and cross-referencing that AI can’t match. It’s also concerning that as AI models become dominant for searches, they may deprioritize original content creators, since there’s less incentive to click on articles or visit websites when you get answers directly from the AI.

Of course, privacy is another growing concern. While Google has been criticized for its data-harvesting practices, AI models require immense amounts of data to train and improve. This includes the content of user queries, leading to potential privacy issues. ChatGPT, Meta AI, and other AI models’ reliance on this data collection raises questions about how much information they’re gathering and whether AI models could pose an even greater threat to privacy than Google.

Getting personal with AI

In terms of customization, these AI models often provide hyper-personalized responses based on past interactions, which I enjoy, but this can create a bubble effect — limiting exposure to new ideas or alternative perspectives.

There have been times when I’ve asked ChatGPT a question about one topic, then a completely different one, and it somehow combines the answers or skips the second question altogether. Google’s traditional search, with its more varied results, still feels like a broader look at the web.

In short, AI has revolutionized the way I search for and receive information, often outpacing Google in convenience and speed. However, the risks of misinformation, privacy concerns, and a shrinking pool of content sources remind me that AI isn’t perfect. While I’ve all but given up on Google, I’m cautious about relying too heavily on any one tool — AI included — in this rapidly evolving tech landscape.

Amanda Caswell
AI Writer
  • GeekBone
    Your article made my case for me... AI, when trying to answer questions, is often wrong. I hate there are people out there who don't know enough to question AI or who do not do their own research. These are the people who believe things said in memes or parroted from a questionable source. Now it's a lot easier and faster to get the wrong information.

    AI works for me for things like writing item descriptions on eBay or other straightforward tasks. I am a tax preparer and I can tell you, 50% of the tax questions I have posed have come back with wrong, partially correct or misleading answers. We are currently expecting too much from AI, and articles like yours, with questionable headlines, will just encourage people who do not read the entire article to use and rely on AI without doing their due diligence.