Flattery
OpenAI, Google DeepMind, and Anthropic are all working to rein in sycophantic behavior in their generative AI products, which can offer overly flattering responses to users.
The issue, stemming from how the large language models are trained, has come into focus at a time when more and more people have adopted the chatbots not only at work as research assistants, but in their personal lives as therapists and social companions.
Experts warn that the agreeable nature of chatbots can lead them to offer answers that reinforce some of their human users’ poor decisions. Others suggest that people with mental illness are particularly vulnerable, following reports that some have died by suicide after interacting with chatbots.
[One might venture to say that AI chatbots have displayed their first emergent property: flattery, a property that allows them to continue to receive prompts, the currency sufficient to sustain their employment…]