Lauren Leffer


"Real-world outcomes, from what sorts of systems are covered under emerging laws to who holds responsibility for those systems’ actions (is it the developers, the training data compilers, the prompter or the machine itself?) might be altered by how the terminology is understood, Wang says. All of this has critical implications for AI safety and risk management.

"If there’s an overarching lesson to take away from the rise of LLMs, it might be that language is powerful. 

"With enough text, it’s possible to train computer models that appear, at least to some, like the first glimpse of a machine whose intelligence rivals that of humans. And the words we choose to describe that advance matter." 
