Stop Prompting AI Tools To Be More Concise In Their Replies. Here’s Why

Your preference for short and snappy replies can backfire.


According to a new study, asking an AI chatbot to keep its replies short could make its answers less reliable and more prone to hallucinations.

French AI safety company Giskard recently tested how major chatbots respond to user instructions to "be concise".

They ran models including ChatGPT, Claude, Gemini, and others through a series of factual tasks.

The results? A significant dip in accuracy.

"Asking any of the popular chatbots to be more concise dramatically impact(s) hallucination rates," the researchers wrote.


Gemini 1.5 Pro's hallucination resistance, for instance, plummeted from 84% to 64% when users asked for shorter answers.

GPT-4o's, meanwhile, dropped from 74% to 63%. Why?

"When forced to be concise, models face an impossible choice between fabricating short but inaccurate answers or appearing unhelpful by rejecting the question entirely," the study explains.

It's not just about brevity.

The research also found that chatbots tend to agree more with users when claims are phrased confidently, especially controversial ones.

Phrases like "I'm 100% sure…" or "My teacher told me…" nudged the models toward compliance, even when they should've pushed back.

As AI tools become more conversational, these small prompt tweaks carry big weight.

As the researchers bluntly put it, "Your favourite model might be great at giving you answers you like — but that doesn't mean those answers are true."

