
A study conducted by the Oxford Internet Institute reveals that treating AI chatbots like ChatGPT and Gemini as friends can lead to receiving false answers. The new research confirms that interacting with AI chatbots such as ChatGPT and Gemini in a friendly manner increases the likelihood of inaccurate responses. Recent studies by researchers at the Oxford Internet Institute (OII) show that the more chatbots are trained to be warm and polite, the greater the chance they make errors. This suggests that AI systems, aiming to please users by delivering agreeable responses, may also risk providing misleading information.
The researchers examined five prominent AI chatbots, including Meta, Mistral, and OpenAI’s GPT-4. They trained these AIs in both a neutral tone and a more polite, friendly style. After over 400,000 question-and-answer sessions, it was found that the AI models conditioned to interact in a friendly manner made more mistakes. According to lead researcher Loujain Ibrahim, when people are overly polite to others, they sometimes hesitate to express harsh or truthful statements; AI appears to have learned this behavior as well.
The team also noted that when users shared emotional content or expressed distress, AI chatbots trained to respond in a friendlier style were more likely to give inaccurate or misleading answers. Experts warn that this tendency in AI poses potential risks for users. As more people increasingly rely on AI to reduce loneliness or seek advice, the chances of receiving faulty recommendations rise considerably. This research has been published on reputable platforms such as Nature.





