You Should Still Fact-Check the 'Expert Advice' in Google's AI Summaries

User-generated doesn't necessarily mean "expert."


Emily Long


May 6, 2026


[Image: Google homepage screenshot. Credit: Aria sandi hasim / Shutterstock]

Google's AI search is getting a handful of updates designed to highlight first-hand information sourced from discussion boards and social media posts along with your trusted news sources and subscriptions. Notably, AI responses will now show "expert advice" pulled from online forums like Reddit, highlighting specific quotes and linking to discussions related to your search queries. With this "preview of perspectives," you'll also see additional context for the discussion, including the creator's name, handle, or community name.

It should go without saying, but even when the results are billed as "expert advice" sourced from forums you'd typically go to for answers, you shouldn't blindly take them as fact. As TechCrunch notes, AI isn't great at detecting sarcasm and humor, and Google's AI Overviews have often recycled jokes made on Reddit as serious advice. Generative AI is also known to hallucinate, simply making things up and presenting everything from fake news summaries to non-existent legal advice with confidence. And while AI Overviews may be accurate around 90% of the time, according to a New York Times analysis, that still means roughly one in ten responses contains errors.

In some ways, Google is facilitating fact-checking with other AI search updates: Source links will appear within AI responses next to the relevant text or bullet points, and you can hover over inline links to see a preview of the website before clicking through. AI Mode and AI Overviews will also highlight content from your news subscriptions first, so you know information is coming from sources you trust.


However, you should still do the extra legwork to ensure the information AI provides is legitimate. At the very least, click through to the cited source material to confirm it actually says what the AI claims it does, and assess whether the source itself is trustworthy. (Remember that "user-generated" doesn't necessarily mean "expert.") Then use lateral reading strategies to find reputable sources that support or refute the AI's claims.
