AI images are now being abused to fake evidence for vehicle insurance fraud
AI-edited vehicle photos are becoming a new insurance fraud tool, with Admiral linking a rise in cases to manipulated crash images, duplicate filings, and fabricated claim materials that can raise costs across the system.
Insurers are seeing fake crash photos and altered claim materials appear in filings, and that is pushing fraud checks into a tougher new phase
AI-generated car crash
Paulo Vargas / Digital Trends
AI-generated car damage is turning into a real insurance fraud issue, with Admiral linking a sharp rise in cases during 2025 to manipulated images and fabricated supporting materials. The problem is no longer limited to suspicious paperwork. Photos of damaged vehicles can now be edited to make a loss look worse or to help support a duplicate filing.
According to a BBC report, one filing used an AI-edited number plate on a damaged Land Rover, while a similar image with a different plate appeared in a second case.
Another image made rear-end damage look more severe than it was. Admiral said those submissions were caught by its fraud team and denied before any payout was made.
Admiral also said fraud rose 71% in 2025 from the previous year, and tied part of that increase to easier access to AI tools that can alter images and create documents that never existed. That gives this trend a clear consumer angle, because the cost of fraud does not stay with the fraudster alone.
How the fake evidence works
Instead of relying only on forged forms or invented stories, scammers can now submit a convincing image as supposed proof. In the examples provided, AI was used to change vehicle photos in ways that could help exaggerate damage or recycle the same incident into another filing.
That changes the burden on claims teams. They are no longer just checking paperwork and timelines; they are also testing whether the image itself can be trusted. Admiral said its fraud tools are improving, and the wider industry is sharing tactics as this type of abuse becomes harder to ignore.
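One way claims teams can catch a recycled photo, such as the Land Rover image that reappeared with a different number plate, is perceptual hashing: two submissions of the same photo with a small localized edit produce nearly identical hashes, while genuinely different photos do not. The sketch below is purely illustrative and assumes images have already been reduced to an 8x8 grayscale grid; the threshold and grid size are made-up values, not any insurer's actual pipeline.

```python
# Illustrative "average hash" (aHash) sketch for flagging near-duplicate
# claim photos. Assumes images are pre-scaled to an 8x8 grayscale grid
# (list of lists of 0-255 values); all parameters here are hypothetical.

def average_hash(pixels):
    """Hash a grayscale grid: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count the bit positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def likely_same_photo(pixels_a, pixels_b, threshold=5):
    """A small Hamming distance suggests the same underlying image,
    lightly edited -- for example, a swapped number plate."""
    return hamming(average_hash(pixels_a), average_hash(pixels_b)) <= threshold


# A photo with one small edit hashes almost identically to the original,
# while an unrelated image lands far away in Hamming distance.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
edited = [row[:] for row in original]
edited[0][0] = 255  # small localized change, like an altered plate region
unrelated = [[255 - (r * 8 + c) * 4 for c in range(8)] for r in range(8)]

print(likely_same_photo(original, edited))     # lightly edited copy flagged
print(likely_same_photo(original, unrelated))  # different image passes
```

Real-world systems use more robust perceptual hashes and full-resolution forensics, but the principle is the same: an edit that fools the eye still leaves most of the image's statistical fingerprint intact.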
Why premiums are part of this
Fraud adds costs across the system, and insurers say those costs can feed into higher premiums more broadly.
That’s what makes AI image fraud more than a niche crime story. Even drivers with legitimate claims could feel the effects through higher prices and more scrutiny during the review process.
Some cases involve opportunistic attempts to inflate a real loss, while others involve fake documents and other made-up materials built to support a false claim from the start. AI makes both paths easier to scale.
What happens next
The immediate response is better detection, but the stakes for customers are also clear.
Admiral said invented or exaggerated proof can lead to a denied claim, a canceled policy, and in more serious cases, criminal prosecution. As AI-made vehicle evidence spreads, closer inspection of crash photos is likely to become a normal part of claims screening.
While Google has taken steps to watermark its AI-generated images, the practice is not industry-wide.
