Emergence

Even if emergence in today’s LLMs can be explained away as an artifact of different measuring tools, it’s likely that won’t be the case for tomorrow’s larger, more complex LLMs.

“When we grow LLMs to the next level, inevitably they will borrow knowledge from other tasks and other models,” said Xia “Ben” Hu, a computer scientist at Rice University.

This evolving view of emergence isn’t just an abstract question for researchers to consider. For Tamkin, it speaks directly to ongoing efforts to predict how LLMs will behave. “These technologies are so broad and so applicable,” he said. “I would hope that the community uses this as a jumping-off point as a continued emphasis on how important it is to build a science of prediction for these things. How do we not get surprised by the next generation of models?”
