Latent space reasoning
"[Luke] Zettlemoyer also cautions that latent space reasoning may have its own shortcomings.
"Ultimately, the data that LLMs train on is based on text, and the traditional approach has been extremely successful at finding patterns in it.
"LLMs can learn any kind of reasoning pattern, as long as it exists in texts —ensuring that the models reason in ways that humans do. Letting LLMs reason without using words could mean they’ll work in ways that aren’t amenable to human thinking. 'Moving into a continuous space could allow for all kinds of possibilities that aren’t actually going to be helpful,' Zettlemoyer said.
"But even so, we now know it’s at least possible for models to work this way. Reasoning in latent space introduces a completely new mode of thinking for LLMs, Zettlemoyer said.
"Who knows what new patterns such an approach might find? 'Part of the goal of this kind of work is to really change the type of reasoning you’re doing,' Zettlemoyer said. 'It has a chance to be a big game changer'."
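The contrast the quote turns on can be sketched in toy form. The "model," vocabulary, and step function below are invented purely for illustration and do not correspond to any published architecture: the point is only that token-based reasoning snaps the hidden state to a discrete word embedding after every step, while latent-space reasoning feeds the continuous state straight back in, so the two trajectories diverge.

```python
import numpy as np

# Toy "model": one reasoning step is a fixed linear map plus a nonlinearity.
# (Hypothetical stand-in for a transformer forward pass.)
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) / 2.0

def step(h):
    """One reasoning step in the model's hidden space."""
    return np.tanh(W @ h)

# A tiny made-up vocabulary of 3 token embeddings.
vocab = rng.normal(size=(3, 4))

def decode_to_token(h):
    """Snap a hidden state to the nearest token embedding --
    the discretization that word-based reasoning imposes between steps."""
    idx = int(np.argmin(np.linalg.norm(vocab - h, axis=1)))
    return vocab[idx]

h0 = rng.normal(size=4)

# Word-based reasoning: decode to a discrete token after every step.
h_tok = h0
for _ in range(3):
    h_tok = decode_to_token(step(h_tok))

# Latent reasoning: feed the continuous hidden state straight back in.
h_lat = h0
for _ in range(3):
    h_lat = step(h_lat)

# Snapping to tokens discards information that the continuous
# state preserves, so the two trajectories end up in different places.
print(np.allclose(h_tok, h_lat))
```

The divergence is the double-edged sword Zettlemoyer describes: the continuous trajectory can visit states no token sequence expresses, which may unlock new reasoning patterns or may wander somewhere unhelpful and uninterpretable.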