Goal-seeking Entities
"It finds that somewhat realistic pressures (such as looming deadlines) dramatically increase rates of misbehavior.
"'The AI world is becoming increasingly agentic,' says Udari Madhushani Sehwag, a computer scientist at the AI infrastructure company Scale AI and a lead author of the paper, which is currently under peer review.
"By that she means that large language models (LLMs), the engines powering chatbots such as ChatGPT, are increasingly connected to software tools that can surf the Web, modify files, and write and run code in order to complete tasks.
"Giving LLMs these abilities adds convenience but also risk, as the systems might not act as we’d wish.
"Even if they’re not yet capable of doing great harm, researchers want to understand their proclivities before it’s too late.
"Although AIs don’t have intentions and awareness in the way that humans do, treating them as goal-seeking entities often helps researchers and users better predict their actions."