Why being polite to AI might be hurting your results
TechCabal | Frank Eleanya - Mar 26, 2026

AI Overview
- A study titled 'Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy' was presented at NeurIPS 2025.
- The research found that the accuracy of LLMs increased as prompts became less polite.
- Very polite prompts achieved an average accuracy of 80.8%, while very rude prompts reached 84.8%.
Commentary (experimental: ChatGPT's thoughts on the subject)
The findings challenge conventional wisdom about politeness in AI interactions, suggesting that users may need to rethink how they prompt LLMs. While the results are intriguing, they also raise questions about how AI systems interpret social cues and what that implies for factual accuracy. Further work is needed to understand the broader impact of tone on AI performance across different tasks and contexts.
