Turns out, if you ask an AI to play an expert, it gets less reliable

Telling an AI to "act like an expert" sounds like a great idea, but a new study suggests it can actually hurt its accuracy.

You’ve probably seen the tip floating around: tell an AI to act like an expert in a field, and you’ll get better answers. It’s popular advice, and it sometimes works. However, a new study suggests that AI personas may not be as effective as we thought.

Researchers from the University of California tested 12 different personas across six language models. The personas ranged from math and coding experts to creative writers and safety monitors. The goal was to find out how well AI performs when it is instructed to act as an expert.

The results were mixed. Adopting a persona made the AI sound more professional and follow instructions more closely, but it also made the AI worse at recalling facts. According to the study, a persona shifts the model into an instruction-following mode rather than a knowledge-retrieval mode, and that tradeoff costs you accuracy.

What’s the solution?

To fix this problem, the researchers developed PRISM, which stands for Persona Routing via Intent-based Self-Modeling. Instead of always using a persona or never using one, PRISM teaches the model to decide, for each query, which mode works best.

When you ask a question, PRISM generates two answers: one from the model's default mode and one from its persona. It then compares the two and delivers whichever answer performs better for that specific query.

Image: Asking AI to act like a persona, results (source: arXiv)

The expert answer isn’t discarded even when the default answer wins. Instead, its reasoning style is saved in a lightweight component called a LoRA adapter, which the model can draw from later when needed. The approach sounds simple, yet it’s effective.
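The routing idea described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual implementation: `PrismRouter`, the judge function, and the cache standing in for LoRA adapters are all names invented here for clarity.

```python
# Hypothetical sketch of PRISM-style routing. The real system compares
# answers with a learned self-model and stores reasoning styles in LoRA
# adapters; here a pluggable judge and a plain dict stand in for both.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class PrismRouter:
    model: Callable[[str], str]            # model(prompt) -> answer
    judge: Callable[[str, str], float]     # judge(query, answer) -> quality score
    persona: str                           # e.g. "a math expert"
    style_cache: Dict[str, str] = field(default_factory=dict)  # stand-in for LoRA adapters

    def answer(self, query: str) -> str:
        # Generate both candidates: default mode and persona mode.
        default_ans = self.model(query)
        persona_ans = self.model(f"You are {self.persona}. {query}")

        # Deliver whichever scores better for this specific query.
        if self.judge(query, persona_ans) > self.judge(query, default_ans):
            return persona_ans

        # Default wins, but keep the expert reasoning around for later reuse.
        self.style_cache[query] = persona_ans
        return default_ans
```

With a real LLM behind `model` and a preference model behind `judge`, the same per-query routing logic applies; the dict would be replaced by adapter weights the model can load on demand.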

How did PRISM perform?

PRISM raised the models’ overall scores by one to two points on MT-Bench, a benchmark that measures how well an AI follows instructions and stays helpful. For writing and safety tasks, personas helped. For raw knowledge questions, skipping the persona proved to be the better option.

The researchers plan to test PRISM with more personas and refine its ability to provide better answers. It’s early days, but this could change how we prompt AI for good.