Julien Garran
"'We don’t know exactly when LLMs might hit diminishing returns hard, because we don’t have a measure of the statistical complexity of language. To find out whether we have hit a wall we have to watch the LLM developers. If they release a model that cost 10x more, likely using 20x more compute than the previous one, and it’s not much better than what’s out there, then we’ve hit a wall,' he says.
"And that’s what has happened: ChatGPT-3 cost $50 million, ChatGPT-4 cost $500 million and ChatGPT-5, costing $5 billion, was delayed and when released wasn’t noticeably better than the last version.
"It’s also easy for competitors to catch up."