Scanning the brains of LLMs


[Sandra] Wachter says that LLMs aren’t categorized as high-risk and might escape this legal requirement for explainability except in certain specific use cases.

But this shouldn’t let the makers of LLMs entirely off the hook, says [David] Bau, who takes umbrage at how some companies, such as OpenAI — the firm behind ChatGPT — maintain secrecy around their largest models. OpenAI told Nature it does so for safety reasons, presumably to help prevent bad actors from using details about how the model works to their advantage.

Companies including OpenAI and Anthropic are notable contributors to the field of XAI. In 2023, for example, OpenAI released a study that used GPT-4, one of its most recent AI models, to generate explanations of the behaviour of individual neurons in an earlier model, GPT-2.
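The basic idea behind that study can be sketched in outline: record which input tokens make a particular neuron in the smaller model fire strongly, then hand that evidence to a stronger model and ask it to describe, in plain language, what the neuron seems to respond to. Below is a minimal illustration of the first half of that loop, not OpenAI’s actual pipeline; it assumes the Hugging Face transformers library, picks an arbitrary layer and neuron, and simply prints the prompt a larger “explainer” model would answer.

```python
# Minimal sketch of the neuron-explanation idea, not OpenAI's actual pipeline:
# record how strongly one GPT-2 MLP neuron fires on sample text, then build a
# prompt asking a stronger model to describe what the neuron responds to.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

LAYER, NEURON = 5, 131  # arbitrary choices, for illustration only

activations = []
def hook(module, inputs, output):
    # output: (batch, seq_len, 3072) MLP hidden activations for this block
    activations.append(output[0, :, NEURON].detach())

# Hook the MLP expansion layer of the chosen transformer block
handle = model.h[LAYER].mlp.c_fc.register_forward_hook(hook)

text = "The cat sat on the mat while rain fell over the harbour."
with torch.no_grad():
    model(**tokenizer(text, return_tensors="pt"))
handle.remove()

# Pair each token with the neuron's activation and keep the strongest ones
tokens = tokenizer.tokenize(text)
scored = sorted(zip(tokens, activations[0].tolist()), key=lambda p: -p[1])[:5]

prompt = (
    "These tokens most strongly activate one neuron in a language model:\n"
    + "\n".join(f"{tok!r}: {act:.2f}" for tok, act in scored)
    + "\nIn one sentence, what concept might this neuron detect?"
)
print(prompt)  # in the study, a larger model (GPT-4) answers a prompt like this
```

In the real work this loop runs over many text snippets per neuron, and the generated explanations are then scored by checking how well they predict the neuron’s activations on held-out text.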

But a lot more research remains to be done to unpack how chatbots work, and some researchers think that the companies that release LLMs should ensure that happens. “Somebody needs to be responsible for either doing the science, or enabling the science,” Bau says, “so that it’s not just a big ball of lack of responsibility.”

